Informatics useful in advancing personalized medicine: Washington University

By Jenni Spinner


(enot-poloskun/iStock via Getty Images Plus)
A representative from the school explains how tools like artificial intelligence and natural language processing can help elevate personalized medicine.

Thanks to its potential benefit for patients, personalized medicine is a growing area of interest for researchers and drug developers. At the Washington University School of Medicine, researchers are using artificial intelligence (AI)-based natural language processing (NLP) tools to extract and normalize data from patient notes to help complement structured data from electronic health records (EHRs).
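
The article doesn't describe WashU's NLP pipeline in detail, but the general idea of pulling clinically meaningful concepts out of free-text notes and mapping them to standard codes, so they can sit alongside structured EHR data, can be sketched in a few lines. The mini-vocabulary, example note, and normalize_note helper below are illustrative assumptions, not the university's tooling.

```python
import re

# Hypothetical mini-vocabulary mapping free-text mentions to standard codes;
# real pipelines use full terminologies such as SNOMED CT or UMLS.
VOCAB = {
    r"alzheimer'?s( disease)?": ("26929004", "Alzheimer's disease"),
    r"type 2 diabetes|t2dm": ("44054006", "Type 2 diabetes mellitus"),
    r"breast cancer": ("254837009", "Malignant tumor of breast"),
}

def normalize_note(note_text: str):
    """Return the coded concepts mentioned in a free-text clinical note."""
    hits = []
    for pattern, (code, term) in VOCAB.items():
        if re.search(pattern, note_text, flags=re.IGNORECASE):
            hits.append({"code": code, "term": term})
    return hits

if __name__ == "__main__":
    note = "72F with early-onset Alzheimers, followed in the memory care clinic."
    print(normalize_note(note))
    # [{'code': '26929004', 'term': "Alzheimer's disease"}]
```

Real systems also handle negation, abbreviations, and context, which is where the AI-based NLP tools mentioned above come in; this sketch only shows the extract-and-normalize step.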

Philip Payne (associate dean for health informatics and data science at Washington University) spoke with Outsourcing-Pharma about the personalized medicine efforts at the school, the progress the team has made, and reasons why WashU researchers are putting AI tools like NLP to work.

OSP: Could you please share a little about the School of Medicine—who you are, what you do, any noteworthy projects or accomplishments you’d like to brag about, and what sets you apart from other institutions?

PP: Washington University School of Medicine in St. Louis is a leader in medical research, teaching, and patient care, consistently ranking among the top medical schools in the nation by US News and World Report. The school also is one of the top recipients of funding from the National Institutes of Health (NIH) for research, with nearly $488m [USD] received in 2020.

We’re known for the breadth and depth of our research programs, with particular strengths in genetics and genomics, Alzheimer’s disease, neuroscience, microbiome, infectious diseases, cancer, imaging science, and informatics. Our basic and translational research programs inform the treatment of patients. Our 1,700 physicians make up the medical staff of Barnes-Jewish and St. Louis Children’s hospitals.  

Information about the Institute for Informatics and its vision is available on the institute's website. The institute, in a collaboration with MDClone, has pioneered the use of synthetic data in clinical research. Such data mimics real patient populations but doesn't carry the risk of disclosing protected health information. The institute is making synthetic datasets more widely available to university researchers, with the goal of speeding up research that could save lives.
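
MDClone's methodology isn't detailed in the article; as a rough illustration of what synthetic data that "mimics real patient populations" means, the sketch below draws new records that match a cohort's summary statistics rather than copying any individual patient. The cohort, its two columns, and the synthesize helper are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "real" cohort: patient age and a binary diagnosis flag.
real_age = rng.normal(70, 8, size=500)
real_dx = rng.choice([0, 1], size=500, p=[0.7, 0.3])

def synthesize(n):
    """Draw synthetic records that match the cohort's marginal statistics
    without reusing any individual patient's values."""
    age = rng.normal(real_age.mean(), real_age.std(), size=n)
    dx = rng.choice([0, 1], size=n, p=np.bincount(real_dx) / len(real_dx))
    return np.column_stack([age, dx])

synthetic_cohort = synthesize(1000)
print(synthetic_cohort[:3])
```

This toy version only reproduces marginal distributions; production tools also have to preserve relationships between variables and add formal privacy safeguards before the data is safe to share.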

In other research of note, Washington University School of Medicine is leading an international clinical trial, funded by the NIH and others, to evaluate whether investigational therapies can prevent or delay the onset of Alzheimer’s in people genetically predisposed to develop the disease at a young age. Results of this study may inform the treatment of people with more common forms of Alzheimer’s that develop later in life.

In other work, our researchers have also demonstrated the influence of the gut microbiome in obesity and childhood malnutrition. We’re currently conducting clinical trials in Bangladesh to evaluate whether microbiome-directed therapeutic foods can effectively and durably treat childhood malnutrition and reduce the incidence of wasting and cognitive problems that often accompany malnutrition in children.

Our cancer researchers also were the first in the world to sequence the cancer genome of a patient and identify the genetic mutations that contributed to the patient’s disease. They also pioneered genomic sequencing for cancer, which is informing more personalized approaches to treatment.

OSP: Specifically, what types of research have your teams tackled in the field of personalized medicine?

(Philip Payne, associate dean for health informatics and data science, Washington University)

PP: We have a variety of projects underway in the Institute for Informatics that contribute to our precision medicine strategy, and they span a broad variety of diseases that most people are familiar with, such as cancer, cardiovascular disease, and neurodegenerative disease, as well as other common conditions. We also have projects focused on rare diseases that occur less frequently but are no less important when we think about how we can have better, more precise approaches to diagnosis and treatment planning.

One of our focuses is Alzheimer's disease. We know with Alzheimer's disease that there are a variety of presentations, and while the biology may be somewhat similar across those presentations, some patients will see very rapid neurodegeneration and deterioration of their cognitive function, while for other patients it's a longer, more gradual process.

While we don't have curative strategies for any of those scenarios, there are measures we can take to improve quality of life and also to support caregivers and family members as they navigate this disease. But that means we need to understand what the likely outcome is for a given patient.

One of the things that we've been doing is looking at a broad variety of data sources from patients enrolled in clinical trials in Washington University’s memory care clinic. This includes data that is captured in the EHR but also a variety of cognitive evaluation instruments and patient-generated data. And we've been using machine learning methods in order to identify patterns in that data that will allow us to predict which patients are going to have a rapid decline and which are more likely to have a slower, more gradual decline.
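
The article doesn't specify which machine learning methods the team uses; the sketch below shows the general pattern under that caveat: train a classifier on tabular features drawn from the EHR and cognitive instruments, then score new patients for their likelihood of rapid decline. The feature names, labels, and data are synthetic stand-ins, not the memory care clinic's actual variables.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 800

# Hypothetical features: age, baseline cognitive score, clinic visits in the past year.
X = np.column_stack([
    rng.normal(72, 7, n),
    rng.normal(24, 3, n),
    rng.poisson(4, n),
])
# Hypothetical label: 1 = rapid decline, 0 = slower, more gradual decline.
y = (X[:, 1] + rng.normal(0, 2, n) < 23).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```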

We've seen great success with those preliminary models, and now we're working with our clinical collaborators to validate them. And importantly, we're doing that with data that's available today, data that is captured in the clinic. It doesn't require us to do anything different at the point of care; rather, it's a different way of looking at all the data we already collect at the point of care, so we can improve our ability to make these prognostic assessments for a patient.

We talk about this a lot in the Institute for Informatics as really being an effort to understand patient trajectory. Not all precision medicine is about finding a new treatment; some aspects of precision medicine are simply about better understanding the trajectory a patient is on, so we can make smarter choices throughout the duration of that entire trajectory.

OSP: Why is personalized medicine an area of research worth pursuing?

PP: Washington University School of Medicine has a strategic focus, in both research and clinical practice, on advancing precision medicine. That means we need to better understand, at the biomolecular, clinical, and population levels, the features of our patients that contribute to wellness as well as to disease, and how patients respond to therapy, so that we can use that increased understanding to make better decisions at the individual patient level that optimize the quality, safety, and outcomes of care.

We work with a variety of collaborators who help support this research, including traditional funding sources such as the National Institutes of Health. But in equal measure, we also have collaborations with organizations such as Centene and others that are investing in precision medicine research in order to improve the health and wellness of patient communities.

OSP: Why are the conditions you decided to hone in on (Alzheimer’s, breast cancer, diabetes, and obesity) particularly worthy of attention?

PP: All are common, debilitating, and often deadly diseases that affect millions of people worldwide, at all levels of income. In addition, they are diseases where we see substantial variation in their presentation, treatment, and outcomes, and as such, are well-positioned to benefit from precision medicine approaches informed by the phenotyping of such patients and the subsequent use of such data to support both research and tailored, evidence-based practice.

OSP: Could you tell us a bit about the facilities, technologies, and experts you’ll be turning to in your work? 

PP: Our deep phenotyping efforts leverage the faculty, staff, and trainee expertise in our Institute for Informatics, working in close collaboration with clinical subject matter experts from throughout the Washington University School of Medicine. These interdisciplinary teams leverage a variety of technologies, including state-of-the-art scientific computing facilities, cloud-based tools for data harmonization, linkage, and warehousing, and analytical methods such as data mining and natural language processing.

OSP: Do you have any specific short- and long-term goals you’d like to share? 

PP: Our primary objectives are to establish a number of large cohorts of deeply phenotyped patients (consented as part of research studies) in priority disease areas. Applying advanced analytical methods to such data will inform future basic science, clinical, and translational studies that are more targeted and resource-efficient and, ultimately, will result in new evidence that can be applied at the point of care and beyond.

OSP: Anything you’d like to add we didn’t touch upon above?

PP: Our work with COVID-19:

We have developed predictive models and deployed them for use both locally and at the population level to help us better respond to the pandemic. These models have spanned a spectrum, from identifying critically ill patients who might benefit from palliative care consults to better understanding the trajectory of our patients in the ICU and anticipating who might experience respiratory failure and therefore need early intervention in the form of more advanced respiratory therapies.
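
The article doesn't disclose how these models are built. As a purely illustrative sketch of the simplest shape such a tool can take, here is a small rule-based early-warning score over a few vital signs; the thresholds and weights are invented for illustration and are neither clinical guidance nor the WashU models.

```python
def respiratory_risk_score(resp_rate, spo2, on_supplemental_o2):
    """Toy early-warning score; thresholds are illustrative only."""
    score = 0
    if resp_rate >= 25:
        score += 3
    elif resp_rate >= 21:
        score += 2
    if spo2 < 92:
        score += 3
    elif spo2 < 94:
        score += 2
    if on_supplemental_o2:
        score += 2
    return score

# A patient breathing at 26/min with SpO2 of 91% on oxygen scores 8; a hypothetical
# workflow might route scores above a set cutoff for early respiratory-therapy review.
print(respiratory_risk_score(26, 91, True))
```

The models described in the interview are more sophisticated than a fixed score, but the deployment question is the same: turn routinely collected data into a number that triggers a defined clinical action.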

Most recently, we've been looking at how we can predict likely outcomes when a patient is placed on ECMO [extracorporeal membrane oxygenation]. We're now seeing younger, sicker patients with COVID, and one of the questions is whether we should put them on ECMO earlier. ECMO is often a therapy of last resort, which means patients are already very ill when they're placed on it, and that reduces the therapeutic benefit to them. The question is, could we identify those patients earlier and perhaps intervene earlier to maximize outcomes and reduce the likelihood of complications?

In addition to that, we've done similar work at the population level, trying to anticipate hot spots of COVID infections based on prior activity in the region, such as testing or other patient-reported data. So really, across the board, COVID-19 has been a driver for us to think about how we can use prediction to better organize our response to the pandemic.

It has also been a catalyst for moving some of these algorithms into the clinical environment, or into the public health environment, more quickly than we normally would have. This has both benefits and challenges. The benefit is that we're getting real-world experience; the challenge is that we're not always getting the opportunity to evaluate them at the level of rigor that we might if it were not a crisis situation.

This is not to say that we're deploying unsafe algorithms; we're constantly monitoring these algorithms. It's actually a whole discipline of informatics that we refer to as “algorithmovigilance,” which is basically constant monitoring of the algorithm performance to ensure that it is doing what it is anticipated to do and that the results are accurate. But without a doubt, prediction has been a major part of how we responded to the pandemic.
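
The article names algorithmovigilance but doesn't describe the monitoring machinery. A minimal sketch of one common check, assuming a deployed classifier whose validation AUC is known: compare recent performance against that baseline and flag the model for review when it drifts. The tolerance, baseline value, and data below are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def check_performance(y_true, y_scores, baseline_auc, tolerance=0.05):
    """Flag a deployed model for review if its recent AUC falls more than
    `tolerance` below the AUC measured at validation time."""
    current_auc = roc_auc_score(y_true, y_scores)
    needs_review = current_auc < baseline_auc - tolerance
    return current_auc, needs_review

# Hypothetical weekly batch of predictions paired with observed outcomes.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 200)
y_scores = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, 200), 0, 1)

auc, needs_review = check_performance(y_true, y_scores, baseline_auc=0.85)
print(f"current AUC={auc:.2f}, needs review={needs_review}")
```

Real monitoring programs also watch calibration, subgroup performance, and shifts in the input data, not just a single discrimination metric.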
