Duke University, HumanFirst partner to assess digital clinical measuring tools

By Jenni Spinner


(AzmanJaka/iStock via Getty Images Plus)
The two entities will work together to evaluate digital sensors and other tools used in clinical research, checking the items for accuracy and equity.

Digital trial tech firm HumanFirst and the Duke Clinical Research Institute (DCRI) at the Duke University School of Medicine are working together to evaluate digital sensors used for measurement in clinical trials. The collaboration will use DCRI’s trial design expertise and HumanFirst’s Atlas platform to perform fit-for-purpose technical and clinical evaluations.

To learn more about the collaboration and how it might benefit sites and sponsors, Outsourcing-Pharma connected with two key figures in the partnership:

  • Andy Coravos, CEO of HumanFirst
  • Eric Perakslis, DCRI chief science and digital officer

OSP: Could you please tell us a bit about HumanFirst—who you are, what you do, key capabilities/specialties, and what sets you apart from the field?

A: HumanFirst enables safe, effective, and equitable healthcare operations at home. Previously known as Elektra Labs, HumanFirst is a technology company leading the development of digital platforms to evaluate and enable the deployment of connected technologies in decentralized trials and virtual care.

Twenty-two of the 25 largest pharmaceutical companies in the world have used the HumanFirst platform to evaluate and enable the deployment of health technology products. HumanFirst offers command center and infrastructure solutions that help ensure sponsors' telehealth operations are as reliable and trustworthy as those within hospitals or labs.

HumanFirst launched its Atlas workflow management tool for remote monitoring in 2019, with support from grants from the National Science Foundation (NSF) and the Harvard Business School Rock Center for Entrepreneurship. Atlas is powered by a curated, rich database of evidence (peer-reviewed papers, clinicaltrials.gov studies, and regulatory decisions such as 510(k) and De Novo) spanning 300+ medical conditions across 25+ therapeutic areas.

HumanFirst’s Applied Sciences team and Atlas algorithms have reviewed each piece of evidence across 3,000+ physiological and behavioral digital measures, classified into 250+ categories, and the database is continuously updated with the industry’s highest-quality and most recent evidence.

In September 2020, HumanFirst worked with leaders from the Digital Medicine Society, Genentech, a member of the Roche Group, Koneksa, Myokardia, Sage Bionetworks, and Scripps Research to spearhead the creation of ‘The Digital Measures Playbook’ of best practices for capturing patient signals and decreasing risk when deploying, managing, and monitoring connected products in remote settings.

OSP: Could you please share an overview of how your business has changed since COVID’s arrival—what new tech or collaborations (besides the Duke one) have you tackled, and what lessons have you learned about DCTs and virtual care?

A: COVID had an enormous impact on our industry: tens of thousands of trials were stopped or interrupted, forcing clinical researchers to turn to a range of digital solutions to address the pandemic. The pandemic accelerated a process that was already under way in the industry: even before the COVID-19 public health emergency, digital products were being adopted in research studies at roughly a 34% compound annual growth rate (CAGR), and that number continues to increase.

As the industry’s attention shifted towards collecting digital measures using connected sensors, a lot of questions around evidence and validity also arose. In response, experts across the industry developed a number of open-source tools and frameworks, such as the V3 Framework and the EVIDENCE Publication Checklist, to define standards for what types of research are needed and define what high-quality research looks like.

A gap remained, however, as many organizations lacked direct experience in the testing, verification, validation, and deployment of these new digital toolsets. It’s vital to evaluate sensors for accuracy and to determine how measurement errors may impact research conclusions and healthcare decision-making.

The collaboration between DCRI and HumanFirst fills this gap and will leverage HumanFirst’s Atlas platform and DCRI’s deep expertise in trial design to create and conduct innovative and scientifically rigorous protocols to evaluate sensors and other digital measures on behalf of sponsor partners. DCRI’s new Digital Measures Evaluation Center will perform fit-for-purpose technical and clinical evaluations that are designed to align with sponsor needs.

OSP: How did you come to collaborate with DCRI—have you collaborated before?

A: The HumanFirst team has collaborated with a number of academic research centers, ranging from the Harvard-MIT Center for Regulatory Sciences to Duke and UC Berkeley (for our NSF I-Corps grant). Perakslis, who joined the Duke Clinical Research Institute as chief science and digital officer, has been a long-standing advisor and supporter of HumanFirst’s development of the Atlas platform.

OSP: Please share some of the reasons why you’re partnering to assess sensors and other digital tools used in clinical studies.

A: Prior to the COVID-19 pandemic, digital products were being adopted at ~34% CAGR in research studies, which has further accelerated over the past year and a half. With this pivotal shift toward collecting digital measures using connected sensors, it is vital to evaluate their accuracy and determine how measurement errors may impact research conclusions and healthcare decision-making.

As Perakslis put it: “With thousands of digital measures available today, some still highly experimental, drug, diagnostic and medical device developers need the confidence to know that sensors used in trials are accurate and fit-for-purpose.”

OSP: Are the accuracy and reliability of wearables and other such devices a concern?

A: Digital products offer the opportunity to advance healthcare in many ways:

  • A more holistic view of a person’s lived experience through remote monitoring with connected sensors
  • Reduced dependence on trial sites, enabling broader trial access and more rapid enrollment
  • Improved experiences for trial participants
  • Richer, more longitudinal data to differentiate assets and support market access

However, with those opportunities comes the need to ensure that home-based operations are as reliable and trustworthy as those within the hospital or the research site. If these products aren’t properly validated, concerns include:

  • Poor sensor selection leads to non-adherence and missing data
  • Insufficiently validated sensors and measures fail to persuade regulators and payors
  • Unexpected firmware updates lead to unusable data
  • Concerns around equity, such as reported racial bias in wearable sensors
(Image: HumanFirst)

OSP: Could you please share whatever detail you’re able about how you’ll be evaluating the accuracy/reliability/etc of the sensors?

A: HumanFirst is different from an ‘Underwriters Laboratory’ in that it does not run original tests and evaluations; instead, we categorize and highlight papers, trials, and regulatory decisions that have already occurred. Nor does HumanFirst assign scores to sensors.

Asking “what’s the best heart rate monitor?” is like asking “what’s the best food?” Fit-for-purpose technologies depend on how the product is intended to be used, and our software helps sponsors and providers determine the evidence base for measures and technologies in given populations.

We collaborated with industry experts to review different evaluation frameworks in npj Digital Medicine, exploring V3, U&U, data rights, security, and economic feasibility. This open-access work is also incorporated into The Playbook. Our workflow management software in Atlas incorporates these parameters, which can be customized and adapted to the unique needs of a particular study or organization.

OSP: Anything to add?

A: It is important to note that these new services will not only benefit biopharma and biotech companies looking to safely and effectively use these technologies. Since the announcement of the new services, many sensor, device, and algorithm makers have reached out looking for guidance and assistance in testing and validating their ideas and creations. We are hopeful that this will raise the bar for quality and utility as these novel tools enter the regulatory approval and commercialization processes.
