The Impact of Open-Source AI Models in Anesthesiology

The field of Anesthesiology is on the cusp of a significant transformation as Large Language Models (LLMs) are integrated into clinical workflows. Our latest study from the Stanford AIM Lab, “Open-Source Large Language Models in Anesthesiology and Perioperative Medicine: ASA-PS Evaluation,” conducted by Dara Rouholiman, Alex J Goodell, Ethan Fung, Janak T Chandrasoma, and Larry F Chu, supports this shift. Set to be presented at the World Congress of Anesthesia on March 4th in Singapore, our research examines the capabilities of open-source LLMs in ASA Physical Status (ASA-PS) classification and perioperative risk assessment, areas traditionally dominated by proprietary models like GPT-4.

Our research suggests that open-source LLMs like Mixtral-8x7B can advance classification tasks in Anesthesiology, offering a blend of performance, explainability, and privacy to researchers.

Dara Rouholiman, Machine Learning Engineer, Stanford AIM Lab

What We Did:

Our investigation used 20 hypothetical clinical vignettes, assessed through zero-shot classification over 25 runs on models including GPT-4 Turbo, Llama-2-70B, and Mixtral-8x7B. We compared model performance using F1 score and accuracy metrics, with ANOVA and Tukey’s HSD tests providing statistical rigor.
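
For readers curious how such an evaluation can be wired together, here is a minimal sketch in Python. It is not the study’s actual pipeline: the `classify_asa_ps` helper, prompt wording, and data loading are placeholders, and only the metric and statistics calls (scikit-learn, SciPy, statsmodels) reflect the analyses named above.

```python
# Minimal sketch of a zero-shot evaluation loop with ANOVA + Tukey's HSD.
# `classify_asa_ps` is a hypothetical placeholder for the actual model call.
import numpy as np
from sklearn.metrics import f1_score
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

MODELS = ["gpt-4-turbo", "llama-2-70b", "mixtral-8x7b"]
N_RUNS = 25  # repeated runs per model, as in the study

def classify_asa_ps(model_name: str, vignette: str) -> int:
    """Placeholder: send a zero-shot ASA-PS prompt to `model_name`
    and parse the predicted class (1-6) from its reply."""
    raise NotImplementedError("wire this to your model API or local runtime")

def evaluate(model_name, vignettes, true_labels):
    """Macro-F1 for each of N_RUNS repeated runs over all vignettes."""
    scores = []
    for _ in range(N_RUNS):
        preds = [classify_asa_ps(model_name, v) for v in vignettes]
        scores.append(f1_score(true_labels, preds, average="macro"))
    return scores

# vignettes, true_labels = load_vignettes()  # 20 hypothetical cases
# per_model = {m: evaluate(m, vignettes, true_labels) for m in MODELS}
# f_stat, p_value = f_oneway(*per_model.values())   # one-way ANOVA across models
# scores = np.concatenate(list(per_model.values()))
# groups = np.repeat(MODELS, N_RUNS)
# print(pairwise_tukeyhsd(scores, groups))          # pairwise Tukey's HSD
```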

Our Key Findings:

  • Open-source LLMs, particularly Mixtral-8x7B, demonstrated a capability to classify ASA scores on par with human anesthesiologists, echoing the performance of more complex, proprietary models but with significantly fewer parameters.
  • Our results underscore the ASA-PS as a valuable tool for evaluating LLMs’ clinical reasoning, highlighting the potential of these models to revolutionize anesthesia with customized applications and enhanced privacy.

Why We Think It Matters:

Our findings underscore the significant potential of open-source Large Language Models (LLMs) like Mixtral in transforming anesthesia and perioperative medicine. Here’s why our results are pivotal:

  1. Effectiveness of Open-Source Models: Demonstrating that open-source models such as Mixtral-8x7B can perform clinical reasoning tasks on par with proprietary models validates the feasibility of using these more accessible technologies in clinical settings. This finding broadens the horizon for healthcare institutions of all sizes, enabling them to leverage AI without the constraints of proprietary models.
  2. Advantages of Open-Source LLMs: The study highlights the unique affordances of open-source LLMs, including superior explainability and the ability to run models locally. These aspects are critical for clinical applications where understanding the reasoning behind AI decisions is essential for trust and transparency.
  3. Data Privacy and Local Processing: By running locally, open-source models like Mixtral-8x7B offer a significant advantage in terms of data privacy. This is particularly relevant in healthcare, where patient data sensitivity and confidentiality are paramount. Our findings suggest that institutions can harness the power of AI while maintaining control over their data, avoiding the privacy concerns associated with processing data on external servers (a minimal sketch of local inference follows this list).
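
To illustrate the local-processing point in item 3, below is a minimal sketch of running Mixtral-8x7B-Instruct on local hardware with the Hugging Face transformers library. The prompt and vignette are illustrative placeholders rather than the study’s actual prompt, and loading the full model requires substantial GPU memory; the point is simply that the vignette text never leaves the local machine.

```python
# Minimal sketch of local inference with an open-source model.
# The prompt and vignette below are illustrative, not the study's actual prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Mixtral-8x7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # spread the weights across available local GPUs
)

vignette = "68-year-old with controlled hypertension scheduled for elective surgery."
messages = [{
    "role": "user",
    "content": f"Assign an ASA-PS class (1-6) for this patient and explain "
               f"your reasoning:\n{vignette}",
}]

# The chat template formats the message for the instruct model; all tokens
# stay on local hardware throughout.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```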

In essence, our research not only suggests the efficacy of open-source LLMs in critical clinical tasks but also illuminates the path forward for their adoption in anesthesiology, emphasizing the importance of accessibility, transparency, and data privacy.

Our Team

Dara Rouholiman
Alex Goodell, MD
Ethan Fung
Janak Chandrasoma, MD
Larry Chu, MD