People | Locations | Statistics |
---|---|---|
Naji, M. | | |
Motta, Antonella | | |
Aletan, Dirar | | |
Mohamed, Tarek | | |
Ertürk, Emre | | |
Taccardi, Nicola | | |
Kononenko, Denys | | |
Petrov, R. H. | Madrid | |
Alshaaer, Mazen | Brussels | |
Bih, L. | | |
Casati, R. | | |
Muller, Hermance | | |
Kočí, Jan | Prague | |
Šuljagić, Marija | | |
Kalteremidou, Kalliopi-Artemi | Brussels | |
Azam, Siraj | | |
Ospanova, Alyiya | | |
Blanpain, Bart | | |
Ali, M. A. | | |
Popa, V. | | |
Rančić, M. | | |
Ollier, Nadège | | |
Azevedo, Nuno Monteiro | | |
Demers, Vincent | | |
Landes, Michael | | |
Rignanese, Gian-Marco | | |
Cooperation score: 37%
Topics
Publications (6/6 displayed)
- 2024: MEDITRON: Open Medical Foundation Models Adapted for Clinical Practice
- 2023: Microstructural and mechanical investigation on fiber laser welding of S500MC steel
- 2023: Contrasting the mechanical and metallurgical properties of laser welded and gas tungsten arc welded S500MC steel
- 2023: Contrasting the Mechanical and Metallurgical Properties of Laser Welded and Gas Tungsten Arc Welded S500MC Steel
- 2021: Effect of Thermal Debinding Conditions on the Sintered Density of Low-Pressure Powder Injection Molded Iron Parts
- 2021: A numerical investigation of friction stir welding parameters in joining dissimilar aluminium alloys using finite element method
Places of action
Organizations | Location | People |
---|---|---|
MEDITRON: Open Medical Foundation Models Adapted for Clinical Practice
Abstract
Large language and multimodal models (LLMs and LMMs) will transform access to medical knowledge and clinical decision support. However, the current leading systems fall short of this promise, as they are either limited in scale, which restricts their capabilities, closed-source, which limits the extensions and scrutiny that can be applied to them, or not sufficiently adapted to clinical settings, which inhibits their practical use. In this work, we democratize large-scale medical AI systems by developing MEDITRON: a suite of open-source LLMs and LMMs with 7B and 70B parameters adapted to the medical domain. MEDITRON extends pretraining on a comprehensively curated medical corpus that includes biomedical literature and internationally recognized clinical practice guidelines. Evaluations using standard medical reasoning benchmarks show significant improvements over all current open-access models and several state-of-the-art commercial LLMs that are orders of magnitude larger, more expensive to host, and closed-source. Enhanced with visual processing capabilities, our MEDITRON-V model also outperforms all open-access models and much larger closed-source models on multimodal reasoning tasks for various biomedical imaging modalities. Beyond traditional benchmarks, we also create a novel and physician-driven adversarial question dataset grounded in real-world clinical settings, and a comprehensive 17-metric evaluation rubric to assess alignment and contextualization to real-world clinical practice. Applying this framework to MEDITRON-70B's responses, sixteen independent physicians found a high level of alignment across all metrics, including medical accuracy, safety, fairness, communication, and interpretation. The MEDITRON suite is a significant step forward in closing the technological gap between closed- and open-source medical foundation models. By releasing our methodologies, models, and real-world clinical practice benchmarks, we aim to drive the open-source development of more capable, representative, accessible, and transparent medical AI assistants.