
University of Alberta researchers are part of two Canadian research teams chosen for significant new funding from the Government of Canada and JDRF Canada to develop new stem cell-based therapies for treating Type 1 diabetes.

The projects will each receive $1.5 million from the Canadian Institutes of Health Research Institute of Nutrition, Metabolism and Diabetes (CIHR-INMD), along with matching dollars from JDRF Canada, for total funding of $3 million per project over five years. The investment marks the 100th anniversary of the Canadian discovery of insulin, a significant turning point in the work to defeat diabetes.

While notable progress has been made over the last century, new therapies and a cure for diabetes remain elusive. The funding is intended to help accelerate the development of new treatments.

Building a better cell for transplantation

Greg Korbutt and Andrew Pepper with the U of A’s Department of Surgery are co-principal investigators on a project led by Cristina Maria Nostro out of the University of Toronto. Since islet-cell transplantation was established through the Edmonton Protocol in 2000 as a therapy for patients with Type 1 diabetes, hundreds of patients around the world have benefited from the treatment. However, donor scarcity, poor islet survival after transplant and the need for patients to take immune-suppressing drugs have limited who can be offered the therapy. The team’s project aims to expand islet transplantation by developing stem cell-derived insulin-producing cells that won’t trigger an immune response.

“They would be hypoimmunogenic, which is a fancy way of saying ghost-like cells,” said Pepper, an associate professor of surgery and member of the Alberta Diabetes Institute (ADI). “The immune system won’t see them, in theory. So if we transplant these cells, hopefully we won’t need anti-rejection drugs.”

The team also hopes to develop and test a method of transplantation for the stem cells that would be safer for the patient and more easily monitored. Traditionally, patients receiving islet transplantation through the Edmonton Protocol would have islet cells delivered directly to their liver. In their project, the team will instead deliver stem cells to a site directly underneath the skin.

“We don’t like to rely on one chance or one therapy. So let’s say these hypoimmunogenic cells are not really as ghost-like as we think; we want to be able to deliver immunosuppressive drugs just to the graft itself, just to the transplant site,” said Pepper.

“We hope that combination approach to the hypoimmunogenic cells and our localized drug delivery will really decrease any risks or toxicity, and hopefully improve the transplant function of these stem cell-derived islets.”

If successful in its first two goals, the team will move to a third stage of the project, driving the therapy closer to the clinic by developing the cell line into a clinical-grade formulation acceptable for human transplantation. That stage would also lead to a clinical trial.

The process would rely on Korbutt’s expertise as director of the U of A’s Alberta Cell Therapy Manufacturing facility—and would be a step up from current islet transplantations that rely on donor islets.

“When we look clinically at our patients that are getting human islets from organ donors, everybody’s getting a different cell prep with completely different viability and function. So it’s really hard to look at it and go, ‘Well, how do you change things when everybody’s getting different cells?’” said Korbutt. “If we can develop a universal stem-cell line, all would be getting the same cells. Then you can compare transplantation efficacy in all these patients.”

“So if we could have a universal cell that doesn’t require immunosuppression, we can really expand the amount of patients around the world who could get transplants because we’re not limited by supply. Right now it’s a very small percentage that are eligible,” noted Pepper.

“This isn’t just theory. There’s actually a practical path to the clinic using Good Manufacturing Practice-grade cells should the data warrant that direction,” he added.

Developing a mature insulin-producing stem cell

Patrick MacDonald, professor of pharmacology and ADI member, is a co-principal investigator on a study led out of the University of British Columbia to develop a more mature insulin-producing stem cell for use in transplantation.

“Although we can make insulin-producing cells from stem cells, the cells that are produced currently are what you might call immature,” said MacDonald. “We can push the cells all the way up to something that resembles insulin-producing cells in infants. We want to improve the functionality of these cells so they can secrete as much insulin as the mature ones.”

According to MacDonald, when immature cells are transplanted into animals, they naturally mature on their own. Though it’s tempting to assume the same thing may happen in humans, it’s still unknown whether that’s actually the case. Transplanting immature stem cells into humans would also introduce ways the cell development could go awry, potentially causing long-term harm.

Getting a better understanding of how the stem cells mature will provide valuable hints that could pave the way for future therapies for patients with Type 1 diabetes, said MacDonald. He also believes the knowledge will prove valuable for researchers working on therapies for Type 2 diabetes.

“Genetic signals that control the development and maturation of islet cells might be important for the genetic risk of Type 2 diabetes. The knowledge that we generate from these kinds of studies can be more widely applicable than I think people often appreciate.”

MacDonald’s role on the team will be helping determine what signals are needed to advance stem cells further along the maturation pathway. His team has focused on studying human islet cells taken from organ donors and building an atlas of gene expression and function in those cells. In this project, they will contrast the progression of stem cell-derived insulin-producing cells with primary islet (beta) cells.

“This could give us some clues into how we might push things forward,” said MacDonald.

MacDonald said he has seen remarkable progress in diabetes research since he first entered the field more than two decades ago, but acknowledged much more work is needed. And though it’s unclear when the next big breakthrough will happen, he is confident that important steps are being taken.

“I want people to know that there are a lot of very committed and invested scientists across the country and across the world, working to better understand Type 1 diabetes, with an aim to push forward better treatments, and someday, to cure the disease,” said MacDonald.

“There are many different aspects that need to be addressed. The work’s not done. We’ve got a lot ahead of us, but we will continue to push forward.”

Source: University of Alberta





These Microsoft Azure tools can help you unlock the secrets lurking in your business data

How to develop business insights from big data using Microsoft’s Azure Synapse and Azure Data Lakes technologies. (Image: Microsoft)

Data lakes are an important part of a modern data analysis environment. Instead of importing all your different data sources into one data warehouse, with the complex task of building import pipelines for relational, non-relational and other data, and of trying to normalise all that data against your choice of keys, you wrap all your data in a single storage environment. On top of that storage pool, you can start to use a new generation of query tools to explore and analyse that data, working with what could be petabytes of data in real time. 


Using data this way makes it easier to work with rapidly changing data, getting insights quickly and building reporting environments that can flag up issues as they arise. By wrapping data in one environment, you can take advantage of common access control mechanisms, applying role-based authentication and authorisation, ensuring that the right person gets access to the right data, without leaking it to the outside world. 

Working at scale with Azure Data Lake 

Using tools like Azure Active Directory with Azure Data Lake, you can significantly reduce the risk of a breach, as Azure Active Directory taps into the Microsoft Security Graph to identify common attack patterns quickly.

Once your data is in an Azure Data Lake store, you can run your choice of analytics tooling over it, using tools like Azure Databricks, HDInsight with its open-source analytics stack, or Azure Synapse Analytics. Working in the cloud makes sense here, as you can use large-scale Azure VM instances to build in-memory models, and scalable storage to build elastic storage pools for your data lake contents.

Microsoft recently released a second generation of Data Lake Storage, building on Azure Blobs to add disaster recovery and tiered storage to help you manage and optimise your storage costs. Azure Data Lake Storage is designed for gigabit-scale data throughput, and a hierarchical namespace makes working with data easier, using directories to organise it. And because you’re still using a data lake that holds many different types of data, there’s no need for expensive and slow ETL-based transformations.
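To make the hierarchical namespace idea concrete, here is a minimal sketch using the azure-storage-file-datalake Python SDK to create a directory and upload a file into a Gen 2 store; the storage account, file system and file names are illustrative placeholders rather than anything from the article.

```python
# Sketch: organising data in an ADLS Gen 2 hierarchical namespace.
# Assumes the azure-storage-file-datalake and azure-identity packages;
# the account, file system and paths below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<storage-account>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)

# A file system is the Gen 2 equivalent of a blob container.
fs = service.get_file_system_client("raw")

# Directories behave like real folders, so raw data can be partitioned by date.
directory = fs.get_directory_client("sales/2021/01")
directory.create_directory()

# Upload a local CSV file as-is: no ETL transformation is needed up front.
with open("orders.csv", "rb") as data:
    directory.get_file_client("orders.csv").upload_data(data, overwrite=True)
```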

Analysing data in Azure Synapse 

Normally you need separate analytics tooling for different types of data. If you’re building tooling to work with your own data lake, you’re often bringing together data-warehousing applications alongside big data tools, resulting in complex and often convoluted query pipelines that can be hard to document and debug. Any change in the underlying data model can be catastrophic, thanks to fragile custom analysis environments. 

Azure now offers an alternative, hybrid analytical environment in the shape of Azure Synapse Analytics, which brings together big data tooling and relational queries in a single environment by mixing SQL with Apache Spark and providing direct connections to Azure data services and to the Power Platform. It’s a combination that allows you to work at global scale while still supporting end-user visualisations and reports, and at the same time provides a platform for machine-learning techniques that add predictive analytics.

At its heart, Synapse removes the usual barriers between standard SQL queries and big data platforms, using common metadata to work with both its own SQL dialect and Apache Spark on the same data sets, whether relational tables or other stores, including CSV and JSON. It has its own tooling for moving data into and out of data lakes, with a web-based development environment for building and exploring analytical models that go straight from data to visualisations.
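As a rough sketch of what that looks like from a Synapse Spark notebook, the PySpark snippet below reads CSV and JSON files straight from the lake and joins them with an ordinary SQL query; the paths, container and column names are assumptions for illustration, and spark is the session object a notebook provides.

```python
# Sketch: querying CSV and JSON files in the data lake with plain SQL
# from a Synapse Spark notebook. Paths and column names are placeholders.
orders = (spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("abfss://raw@<storage-account>.dfs.core.windows.net/sales/2021/*.csv"))

customers = spark.read.json(
    "abfss://raw@<storage-account>.dfs.core.windows.net/customers/*.json")

# Register both sources as temporary views so they can be joined in SQL.
orders.createOrReplaceTempView("orders")
customers.createOrReplaceTempView("customers")

top_customers = spark.sql("""
    SELECT c.name, SUM(o.amount) AS total_spend
    FROM orders o
    JOIN customers c ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY total_spend DESC
    LIMIT 10
""")
top_customers.show()
```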

Synapse creates a data lake as part of its setup, by default using a second-generation blob-based instance. This hosts your data containers in a hierarchical virtual file system. Once the data lake and associated Synapse workspace are in place, you can use the Azure Portal to open the Synapse Studio web-based development environment.

Writing a PySpark query in a Spark notebook in Azure Synapse Studio. (Image: Microsoft)

Building analytical queries in Synapse Studio 

Synapse Studio is the heart of Azure Synapse Analytics, where data engineers can build and test models before deploying them in production. SQL pools manage connections to your data, using either serverless or dedicated connections. While developing models, it’s best to use the built-in serverless pool; once you’re ready to go live you can provision a dedicated pool of SQL resources that can be scaled up and down as needed. However, it’s important to remember that you’re paying for those resources even if they’re not in use. You can also set up serverless pools for Apache Spark, helping keep costs to a minimum for hybrid queries. There is some overhead when launching serverless instances, but for building reports as a batch process, that shouldn’t be an issue. 
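For a sense of how the built-in serverless pool can be used from outside Synapse Studio, here is a hedged sketch that sends an OPENROWSET query to the serverless SQL endpoint over pyodbc; the workspace name, credentials, driver and lake path are all placeholders, and your connection details will differ.

```python
# Sketch: querying lake files through the built-in serverless SQL pool.
# Assumes the pyodbc package and an installed SQL Server ODBC driver;
# the endpoint, credentials and file path are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<workspace>-ondemand.sql.azuresynapse.net;"
    "DATABASE=master;UID=<user>;PWD=<password>"
)

# OPENROWSET lets the serverless pool read files straight from the lake,
# so there is nothing to load or provision before exploring the data.
query = """
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://<storage-account>.dfs.core.windows.net/raw/sales/2021/*.csv',
    FORMAT = 'CSV',
    PARSER_VERSION = '2.0',
    HEADER_ROW = TRUE
) AS rows
"""

for row in conn.execute(query):
    print(row)
```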

Azure Synapse is fast: building a two-million-row table takes just seconds. You can quickly work with any tabular data using familiar SQL queries, using the Studio UI to display results as charts where necessary. That same data can be loaded from your SQL store into Spark, without writing any ETL code for data conversion. All you need to do is create a new Spark notebook, then create the database and import it from your SQL pool. Data from Spark can be passed back to the SQL pool, allowing you to use Spark to manipulate data sets for further analysis. You can also run SQL queries on Spark datasets directly, simplifying what could otherwise be complex programming to unify results from different platforms.
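The round trip between a dedicated SQL pool and Spark can be sketched roughly as below, assuming a Synapse Spark 3 runtime where the dedicated SQL pool connector exposes its synapsesql read and write methods from Python; the pool, schema and table names are placeholders, and the exact connector API varies between runtime versions.

```python
# Sketch: moving a table between a dedicated SQL pool and Spark without
# hand-written ETL. The synapsesql methods come from the Synapse dedicated
# SQL pool connector bundled with the Synapse Spark runtime (assumed here);
# database, schema and table names are placeholders.

# Load a table from the dedicated SQL pool into a Spark DataFrame.
orders = spark.read.synapsesql("SalesPool.dbo.Orders")

# Manipulate the data set in Spark - here, a simple monthly aggregation.
monthly = orders.groupBy("order_month").sum("amount")

# SQL works directly on Spark datasets once they are registered as views.
monthly.createOrReplaceTempView("monthly_totals")
spark.sql("SELECT * FROM monthly_totals ORDER BY order_month").show()

# Write the result back to the SQL pool for reporting.
monthly.write.mode("overwrite").synapsesql("SalesPool.dbo.MonthlyTotals")
```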


One useful feature of Azure Data Lakes using Gen 2 storage is the ability to link to other storage accounts, allowing you to quickly work with other data sources without having to import them into your data lake store. In Azure Synapse Studio, your queries are stored in notebooks, and these notebooks can be added to pipelines to automate analysis. You can set triggers to run an analysis at set intervals, driving Power BI-based dashboards and reports.

There’s a lot to explore with Synapse Studio, and getting the most from it requires plenty of data-engineering experience. It’s not a tool for beginners or for end users: you need to be experienced in both SQL-based data-warehousing techniques and in tools like Apache Spark. However, it’s the combination of those tools and the ability to publish results in desktop analytical tools like Power BI that makes it most useful.

The cost of at-scale data lake analysis means it will never be for everyone, but using a single environment to create and share analyses should go a long way towards unlocking the utility of business data.



Moonshots for the Treatment of Aging: Less Incrementalism, More Ambition


There is far too much incrementalism in the present research and development of therapies to treat aging. Much of the field is engaged in mimicking calorie restriction or repurposing existing drugs that were found to increase mouse life span by a few percentage points. This will not meaningfully change the shape of human life, but nonetheless costs just as much as efforts to achieve far more.

If billions of dollars and the efforts of thousands of researchers are to be devoted to initiatives to treat aging, then why not pursue the ambitious goal of rejuvenation and adding decades to healthy life spans? It is just as plausible.


There are just as many starting points and plausible research programs aimed at outright rejuvenation via repair of molecular damage, such as those listed in the SENS approach to aging, as there are aimed at achieving only small benefits in an aged metabolism. The heavy focus on incremental, low yield programs of research and development in the present community is frustrating, and that frustration is felt by many.

As the global population ages, there is increased interest in living longer and improving one’s quality of life in later years. However, studying aging – the decline in body function – is expensive and time-consuming. And despite research success in making model organisms live longer, there still aren’t any feasible solutions for delaying aging in humans. With space travel, scientists and engineers couldn’t know at the outset what it would take to get to the moon; they had to extrapolate from theory and shorter-range tests. Perhaps with aging we need a similar moonshot philosophy. Like the moon once was, we seem a long way away from provable therapies to increase human healthspan or lifespan. This review therefore focuses on radical proposals. We hope it might stimulate discussion on what we might consider doing significantly differently from ongoing aging research.

A less than encouraging sign for many of the lifespan experiments done in preclinical models, namely in mammals such as mice, is that they have modest effect sizes, often reaching statistical significance in only one sex, and often only under specific dietary or housing conditions. Even inhibiting one of the most potent and well-validated aging pathways, the mechanistic target of rapamycin (mTOR) pathway, has arguably modest effects on lifespan – a 12-24% increase in mice. This is all to ask: if the mTOR inhibitor rapamycin is one of the potential best-case scenarios and might be predicted to have a modest effect, if any (and possibly a detrimental one), in people, should it continue to receive so much focus from the aging community? Note that the problems in the aging field with small and inconsistent effects for the leading strategies aren’t specific to rapamycin.

Treating individual aging-related diseases has encountered roadblocks that should also call into question whether we are on the optimal path for treating human aging. Alzheimer’s is a particularly well-funded and well-researched aging-related topic where there are still huge gaps in our understanding and a lack of good treatment options. There has been considerable focus on amyloid beta and tau, but targeting those molecules hasn’t done much for Alzheimer’s so far, leaving many searching for answers. The point is that when we collectively spend a long time on something that isn’t working well, such as manipulating a single gene or biological process, it should seem natural to consider conceptually different approaches.

Link: https://doi.org/10.3233/NHA-190064

Source: Fight Aging!





Signal Back Up: Users May See Some Errors, Company Says Will Be Fixed in Next Update


Signal said it had restored its services a day after the application faced technical difficulties as it dealt with a flood of new users after rival messaging app WhatsApp announced a controversial change in privacy terms.

Signal has seen a rise in downloads following a change in WhatsApp’s privacy terms that requires WhatsApp users to share their data with both Facebook and Instagram.

Signal users might see errors in some chats as a side effect of the outage, but these will be resolved in the next update of the app, the company said in a tweet.

The errors do not affect the security of chats, the company added.

The non-profit Signal Foundation, based in Silicon Valley, currently oversees the app. It was launched in February 2018, with Brian Acton, who co-founded WhatsApp before selling it to Facebook, providing initial funding of $50 million (roughly Rs. 365 crores).

Signal faced a global outage that began on January 15. Although users could open the app and send messages, nothing was actually delivered.

Signal later sent Gadgets 360 a message with the following statement from its COO Aruna Harder: “We have been adding new servers and extra capacity at a record pace every single day this week, but today exceeded even our most optimistic projections. Millions upon millions of new users are sending a message that privacy matters, and we are working hard to restore service for them as quickly as possible.”

© Thomson Reuters 2021




