Companies are right to embrace artificial intelligence - not doing so is 'no longer an option' says eClinical Solutions
It is a wide-ranging tool that lets people integrate information, analyze data and use the resulting insights to improve their decision-making.
OSP was lucky enough to speak to Raj Indupuri of eClinical Solutions to find out his stance on AI and the impact it is having on the industry as a whole.
OSP: So, why do you think companies are right to embrace AI applications?
We are seeing rapid advancements with generative AI, and in the maturity of AI in general, especially over the last few months, where AI is now pushing into human realms. AI is prevalent in all our lives and is now synonymous with technology. AI offers game-changing opportunities to augment our intelligence for any role and industry, making us more productive and accelerating outcomes. Within clinical development today, there is increased complexity coupled with pressure to reduce cycle times and increase efficiency. AI-driven technology is essential to automate interactions, personalize solutions and deliver new insights to achieve those objectives and create early impact. Every technology provider in the life science industry now offers a breadth and depth of AI capabilities, so for companies, not embracing AI is no longer an option – they need to adopt it to achieve the desired outcomes or be left behind the competition.
OSP: How do they deal with different types of data, such as structured and unstructured types – and, I suppose, semi-structured too?
Today, every life science company is on a digital transformation journey to modernize their infrastructure to be able to store, process and manage any type of data so it can be leveraged for advanced technologies like AI and machine learning to solve more complex problems. The sophistication of AI and its applications is most powerful when it comes to unstructured or semi-structured data; study protocols are a good example of where AI could be used to tap into unstructured data. NLP (Natural Language Processing) could be used to synthesize and extract useful information, whether from protocols or scientific literature, that could then be used for better design or insights when doing research. This was not easy to do a few years ago, and it's now possible because of technology and modern data infrastructure.
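As a toy illustration of the idea (not any vendor's actual tooling), the sketch below pulls a few structured fields out of free-text protocol language. It uses simple regular-expression rules rather than a real NLP model; the protocol snippet and the extracted field names are entirely hypothetical:

```python
import re

# Hypothetical protocol snippet, for illustration only.
protocol_text = """
Inclusion criteria: adults aged 18-65 with type 2 diabetes.
Primary endpoint: change in HbA1c at 24 weeks.
Sample size: 240 participants across 12 sites.
"""

def extract_protocol_facts(text: str) -> dict:
    """Pull a few structured fields out of free-text protocol language."""
    patterns = {
        "age_range": r"aged (\d+)-(\d+)",
        "endpoint": r"Primary endpoint: (.+?)\.",
        "sample_size": r"Sample size: (\d+)",
    }
    facts = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, text)
        if match:
            groups = match.groups()
            facts[name] = groups if len(groups) > 1 else groups[0]
    return facts

facts = extract_protocol_facts(protocol_text)
print(facts)
```

A production system would replace the regexes with a trained language model, but the output shape – free text in, structured fields out – is the same.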
OSP: What could prevent organizations from moving from pilot to production? Are there any hurdles some simply can't jump?
The key here is trust and proper change management. If end users and stakeholders don't trust these models, it will be difficult to productionalize them. This is where interpretability, explainability and the role of the human-in-the-loop are critical to building the transparency that earns trust. AI interpretability provides insight into the internal workings of AI models, such as the factors, features, or patterns leading to the predictions. It explains the decision-making processes and outputs of these models. Knowledge of the decision-making process helps end users understand how AI models arrive at their conclusions and helps them validate results, uncover biases or errors, and properly use the predictions to make informed decisions. So, it's very important when deploying a machine learning model that you provide that explainability to end users so that they can interpret results correctly and understand why the model is making a given prediction or recommendation. The second element is proper change management, with due consideration for upskilling people to use these digital and AI-augmented capabilities and ensuring the business processes are updated to integrate these capabilities for day-to-day operations and decision-making. Overcoming these challenges can help move AI-driven solutions from pilot to production and ensure adoption.
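To make the explainability point concrete, here is a minimal, hypothetical sketch: a toy linear scoring model whose per-feature contributions can be surfaced to end users alongside each prediction. All feature names, weights, and values are invented for illustration; real deployments would apply the same idea to a trained model:

```python
# Hypothetical hand-set weights for a toy linear risk score.
FEATURE_WEIGHTS = {
    "missed_visits": 0.8,          # more missed visits -> higher risk
    "query_rate": 0.5,             # more data queries -> higher risk
    "site_experience_years": -0.3, # experienced sites -> lower risk
}
BIAS = -1.0

def predict_with_explanation(features: dict) -> tuple:
    """Return a score plus each feature's contribution to it."""
    contributions = {
        name: FEATURE_WEIGHTS[name] * value
        for name, value in features.items()
    }
    score = BIAS + sum(contributions.values())
    # Rank by absolute contribution so reviewers see what drove the score.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

score, explanation = predict_with_explanation(
    {"missed_visits": 3, "query_rate": 2, "site_experience_years": 5}
)
```

Because every prediction decomposes into named, signed contributions, an end user can validate the result ("missed visits drove this flag") or spot a bias ("why is this feature dominating?") instead of taking the score on faith.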
OSP: What are the top challenges companies and organizations should address when considering AI initiatives?
When considering AI initiatives, there are quite a few questions executives need to ask:
- Are we solving the right problem? By solving the problem, what impact can we make?
- Do we have sufficient and high-quality data to solve the problem? AI systems function by being trained on a set of data that reflects the real-world process and how actions or decisions are made, so it's crucial to ensure the right quality and volume of data is available.
- Do we have the right infrastructure and processing capabilities to build and deploy models?
- Do we have the right AI talent with the necessary knowledge and skills?
- Are we ready to integrate AI into our existing systems? Employees must be trained to use AI-driven tools to augment actions and decisions, troubleshoot simple problems, and recognize when the AI algorithm is underperforming. We must not discount that there is significant change in people adapting their processes to utilize AI.
- How do we ensure we don’t overestimate the AI systems? AI explainability is crucial for a successful transition into machine learning. Breaking down algorithms and training users on the decision-making process of AI provides transparency and helps prevent faulty operation.
- What are the cost requirements, and is this the right investment? Developing, implementing, and integrating AI into your training strategy is not going to be cheap.
It’s important to identify the right use case to start, ensure you have the right data and talent, and take small steps before applying it at a large scale when considering AI initiatives.
OSP: What about governance and security – how is consistency managed?
Security is very important for the entire machine learning pipeline and when it comes to AI. With the advancements in machine learning models and their availability as open-source technologies, it is commonplace to start with foundational models and fine-tune them for specific tasks or use cases. It's very important to understand the exposure to cybersecurity threats and take proper measures to ensure the MLOps pipeline is not vulnerable to these threats and security is not compromised when bringing external open-source libraries or models into your environment. There are several cybersecurity technologies that specialize in MLSecOps, which has become critical to any AI infrastructure and MLOps pipeline. These technologies can scan for threats, cybertheft, malicious content, data poisoning, and adversarial attacks that could cost you more than your competitive advantage.
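One simple measure in this direction is verifying external artifacts before loading them. The sketch below is a stdlib-only illustration, not a full MLSecOps pipeline: it checks a model file against a pinned SHA-256 checksum and refuses to load anything that doesn't match. The stand-in "model file" is created locally for the demo; a real pipeline would pin the publisher's published digest:

```python
import hashlib
import os
import tempfile

def sha256_of(path: str) -> str:
    """Stream the file so large model artifacts don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, expected_digest: str) -> bool:
    """Only load an external model file whose checksum matches the pinned value."""
    return sha256_of(path) == expected_digest

# Demo with a stand-in "model file" (hypothetical contents).
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as f:
    f.write(b"pretend-model-weights")
    artifact_path = f.name

pinned_digest = sha256_of(artifact_path)  # in practice, from the publisher
ok = verify_artifact(artifact_path, pinned_digest)
os.unlink(artifact_path)
```

Checksum pinning only guards against tampered or corrupted downloads; the scanning technologies mentioned above address deeper threats like data poisoning and adversarial attacks.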
Governance is key to success and higher-performing models. How do you manage all these datasets, and who can access what? How do you ensure and communicate interpretability and explainability, to be sure that these models are, in fact, providing the intended results? What is the governance around testing and verifying that the outputs of these models are high-performing? These are the questions you need to answer and establish processes around. Many organizations have ground truth labeling teams. When the AI model makes a prediction or flags something as an outlier, a person labels it: "Yes, this is accurate," or "No, it's not." This ground truthing is an important piece of governance: reviewer confirmations and rejections are captured and fed back into the machine learning model, confirming and improving its performance via a feedback loop.
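A ground-truth labeling workflow like the one described can be sketched as a small log of reviewer verdicts. This is a minimal, hypothetical illustration (record fields and example IDs are invented), showing how confirmations measure performance and rejections become the retraining batch:

```python
class GroundTruthLog:
    """Capture reviewer verdicts on model predictions so they can be
    fed back as labeled data for retraining (a minimal sketch)."""

    def __init__(self):
        self.records = []

    def review(self, example_id: str, prediction: str, accepted: bool):
        """Record a reviewer's confirm/reject verdict on one prediction."""
        self.records.append(
            {"id": example_id, "prediction": prediction, "accepted": accepted}
        )

    def accuracy(self) -> float:
        """Share of predictions reviewers confirmed."""
        if not self.records:
            return 0.0
        return sum(r["accepted"] for r in self.records) / len(self.records)

    def retraining_batch(self) -> list:
        """Rejected predictions are the most valuable corrections to retrain on."""
        return [r for r in self.records if not r["accepted"]]

log = GroundTruthLog()
log.review("rec-001", "outlier", accepted=True)
log.review("rec-002", "outlier", accepted=False)
log.review("rec-003", "normal", accepted=True)
```

The `accuracy` figure gives governance teams an ongoing performance measure, while `retraining_batch` closes the feedback loop the answer describes.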
For governance and security, the critical need is incorporating both aspects into end-to-end machine learning data pipelines: build, productionalize, and deliver to end users. Managing consistency is about having the team alignment and foundational infrastructure in place so that these considerations become best practices and repeatable processes for the data science and IT teams across all AI projects, rather than being reinvented each time. You need to embed governance and security throughout your technology and your processes.