

2023 Annual Meeting Report: Artificial Intelligence in Toxicology: A Hope-Filled Future or Cautionary Tale?

By David Faulkner posted 04-27-2023 10:22


The burgeoning field of artificial intelligence (AI) has grown rapidly in the last few years, transitioning from the stuff of science fiction to freely available tools and websites that have captured the attention of academics and amateurs alike. As the proliferation of AI tools raises questions about the future of several industries, we, as toxicologists, must ask, “How could AI be used for risk assessment?” The Symposium Session “AI Buzz or Bliss: Case Studies for Successful Applications of Artificial Intelligence in Predictive Toxicology” during the 2023 SOT Annual Meeting and ToxExpo aimed to tackle that question through talks by researchers from academia, government, and industry.

Session Chair Falgun Shah opened with a short presentation on the nature of AI and the state of the field from a toxicological perspective. He acknowledged skepticism about whether AI can be used in toxicological risk assessment but noted that computational tools have grown rapidly in sophistication and capability over the last few decades and that the pace of their evolution has only increased in recent years. As Dr. Shah explained, AI is merely the next step in that evolution, and the session was intended to showcase, through successful case studies, the impact AI has already had in computational toxicology and to explore its future potential.

The main themes of the session were how AI could be used to prioritize pharmaceuticals and environmental chemicals, to identify potential off-targets and deconvolute mechanisms of toxicity, and to detect hazards in toxicologic pathology.

Thomas Hartung kicked off the presentations with a survey of the rapid expansion of AI research over the last three years and made a case for using AI (among other technologies) to replace animal models for certain toxicological endpoints. He compared reproducibility rates in some of the most-used animal-based tests (around 80%) with the predictive accuracy of an AI model trained on toxicological data for 190,000 chemicals (87%), reminding the audience that even gold-standard assays have issues with accuracy and reproducibility. If AI can perform comparably or better, he argued, it is a prudent replacement that could save countless animals and billions of dollars. Dr. Hartung acknowledged that some tests cannot yet be replaced with computational methods but suggested that organoids and advanced in vitro assays could fill those gaps.

Nicole Kleinstreuer followed with a review of the US federal government’s efforts to boost the use and quality of computational tools and of how AI might figure into those tools. Computational tools are a key part of the federal push to move beyond animal testing toward alternative methods, and Dr. Kleinstreuer provided many examples of open-source tools developed by the Interagency Coordinating Committee on the Validation of Alternative Methods (part of the National Institute of Environmental Health Sciences), often in collaboration with international consortia or the private sector. One such tool, OPERA, a computational predictor of environmental fate, physicochemical, and ADME properties, has been available for years, and its creators continue to add toxicity endpoints, including acute inhalation and systemic toxicity. She also presented international collaborative modeling projects like CERAPP, CoMPARA, and CATMoS, all designed to streamline regulatory and safety assessments. All of these tools are free, which will hopefully expand their use among the regulators and the regulated of the world.

Having provided an overview of AI and computational tools, the session then turned to a series of case studies on the application of these tools. Hao Zhu described work with publicly available high-throughput screening (HTS) data to predict drug-induced liver injury (DILI), identifying structural alerts for DILI and using machine-learning models to fill data gaps; he also presented a method for integrating dose-dependent data into the modeling process. He offered thoughtful commentary on the importance of toxicological expertise when developing and interpreting models, irrespective of the size or quality of the training set, a salient point for those who worry about AI taking away risk assessment jobs.
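
For readers curious what such a workflow might look like in practice, the following is a minimal, hypothetical sketch (in Python, using scikit-learn) of training a classifier on chemical fingerprints to predict DILI and then using it to fill data gaps for unlabeled compounds. The fingerprints and labels here are simulated stand-ins, not the actual HTS or DILI data from Dr. Zhu’s work.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Simulated 1024-bit structural fingerprints for 500 compounds with
    # known DILI labels (1 = DILI-positive). Real work would derive these
    # from chemical structures and curated HTS/DILI datasets.
    X_labeled = rng.integers(0, 2, size=(500, 1024))
    y_labeled = rng.integers(0, 2, size=500)

    # Compounds lacking DILI annotations: the "data gap" to be filled
    X_unlabeled = rng.integers(0, 2, size=(100, 1024))

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    print("Cross-validated accuracy:",
          cross_val_score(model, X_labeled, y_labeled, cv=5).mean())

    # Fill the gap: estimate DILI probabilities for unlabeled compounds
    model.fit(X_labeled, y_labeled)
    dili_risk = model.predict_proba(X_unlabeled)[:, 1]
    print("Predicted DILI risk, first 5 compounds:", dili_risk[:5].round(2))

With real data, the random fingerprints would be replaced by descriptors computed from chemical structures, and the predicted probabilities could be used to prioritize compounds for follow-up testing rather than as standalone safety calls, consistent with Dr. Zhu’s point about keeping toxicological expertise in the loop.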

Dr. Zhu’s talk was followed by Doug Selinger, who presented several case studies from Plex Research that used AI to identify mechanisms for off-target effects in drug development. Though Dr. Selinger could not disclose all the details of the cases due to non-disclosure agreements, the data he did share made a compelling case for the utility of the knowledge graph-based search tool that Plex sells.

The next speaker, Seda Arat, also presented a commercial tool, iScreen, that uses machine learning to accelerate compound screening for pharmaceutical development. The presentation demonstrated the importance of mechanistic information in progressing drug development and showed how iScreen could generate predictions for genotoxicity.

One of the more novel case studies, presented by Antong Chen, featured an automated developmental toxicity assessment that used AI to scan micro-CT images of rabbit fetal skeletons. Although the model was highly specialized, one can imagine the potential for other AI tools to automate the analysis and interpretation of developmental or physiologically based assays.

The session closed with Alexander Amberg of Sanofi, who walked through several case studies using an in silico model built by combining several open-source in vivo toxicity databases with Sanofi’s internal databases. The case studies focused primarily on GI tract risk assessments for drug toxicity and capably demonstrated the value of publicly available toxicity data, particularly when combined with more specialized proprietary data.

Altogether, the session was in equal measure hopeful and cautious about the future of AI in toxicological risk assessment. While there are certainly uses for machine learning and automated data analysis, the final stages of interpretation will likely remain in human hands for some time to come. That said, AI has exciting potential to lower the cost and time of conducting risk assessments, which would allow more risk assessments and, likely, safer products.

This blog reports on the Symposium Session titled “AI Buzz or Bliss: Case Studies for Successful Applications of Artificial Intelligence in Predictive Toxicology” that was held during the 2023 SOT Annual Meeting and ToxExpo. An on-demand recording of this session is available for meeting registrants on the SOT Online Planner and SOT Event App.

This blog was prepared by an SOT Reporter and represents the views of the author. SOT Reporters are SOT members who volunteer to write about sessions and events in which they participate during the SOT Annual Meeting and ToxExpo. SOT does not propose or endorse any position by posting this article. If you are interested in participating in the SOT Reporter program in the future, please email SOT Headquarters.


#Communique:ScienceNews
#2023AnnualMeeting
#SOTReporter
#Communique:AnnualMeeting