Materials Map

Discover the materials research landscape. Find experts, partners, networks.

  • About
  • Privacy Policy
  • Legal Notice
  • Contact

The Materials Map is an open tool for improving networking and interdisciplinary exchange within materials research. It enables cross-database searches for cooperation and network partners and supports discovery of the research landscape.

The dashboard provides detailed information about a selected scientist, such as their publications. It can be filtered and visualizes relationships to co-authors in several diagrams. A link to contact information is also provided.


Materials Map under construction

The Materials Map is still under development. In its current state, it is based on only a single data source and is therefore incomplete and contains duplicates. We are working on incorporating additional open data sources, such as ORCID, to improve the quality and timeliness of our data. We will update the Materials Map as soon as possible and kindly ask for your patience.

To Graph: 1,080 topics available

To Map: 977 locations available

693,932 People

Showing results for the 693,932 people selected by your search filters.

Page 1 of 27,758
People · Locations · Statistics
  • Naji, M.: 2 · 13 · 3 · 2025
  • Motta, Antonella: 8 · 52 · 159 · 2025
  • Aletan, Dirar: 1 · 1 · 0 · 2025
  • Mohamed, Tarek: 1 · 7 · 2 · 2025
  • Ertürk, Emre: 2 · 3 · 0 · 2025
  • Taccardi, Nicola: 9 · 81 · 75 · 2025
  • Kononenko, Denys: 1 · 8 · 2 · 2025
  • Petrov, R. H. (Madrid): 46 · 125 · 1k · 2025
  • Alshaaer, Mazen (Brussels): 17 · 31 · 172 · 2025
  • Bih, L.: 15 · 44 · 145 · 2025
  • Casati, R.: 31 · 86 · 661 · 2025
  • Muller, Hermance: 1 · 11 · 0 · 2025
  • Kočí, Jan (Prague): 28 · 34 · 209 · 2025
  • Šuljagić, Marija: 10 · 33 · 43 · 2025
  • Kalteremidou, Kalliopi-Artemi (Brussels): 14 · 22 · 158 · 2025
  • Azam, Siraj: 1 · 3 · 2 · 2025
  • Ospanova, Alyiya: 1 · 6 · 0 · 2025
  • Blanpain, Bart: 568 · 653 · 13k · 2025
  • Ali, M. A.: 7 · 75 · 187 · 2025
  • Popa, V.: 5 · 12 · 45 · 2025
  • Rančić, M.: 2 · 13 · 0 · 2025
  • Ollier, Nadège: 28 · 75 · 239 · 2025
  • Azevedo, Nuno Monteiro: 4 · 8 · 25 · 2025
  • Landes, Michael: 1 · 9 · 2 · 2025
  • Rignanese, Gian-Marco: 15 · 98 · 805 · 2025

Luengo-Oroz, Miguel (Google): 2 · 25 · 15

Cooperation score: 37%

Topics

Publications (2/2 displayed)

  • 2023: Assessment of Artificial Intelligence Language Models and Information Retrieval Strategies for QA in Hematology (3 citations)
  • 2022: A Smartphone-Based Platform Assisted by Artificial Intelligence for Reading and Reporting Rapid Diagnostic Tests: Evaluation Study in SARS-CoV-2 Lateral Flow Immunoassays (12 citations)

Places of action

Shared publications with co-authors:
  • Gómez-Álvarez, Miguel: 1 / 1 shared
  • Soto, Marta Hidalgo: 1 / 1 shared
  • Cervera, Maria R.: 1 / 1 shared
  • Bermejo-Peláez, David: 1 / 1 shared
  • Darias, Oscar: 1 / 1 shared
  • García-Villena, Jaime: 1 / 1 shared
  • Mendoza-Martínez, Ana: 1 / 1 shared
  • Clausell, Adriana Oñós: 1 / 1 shared
  • Cuesta, Celina Benavente: 1 / 1 shared
  • Martínez-López, Joaquin: 1 / 1 shared
  • Montoro, Julia: 1 / 1 shared
  • Marcos Mencia, Daniel: 1 / 1 shared
  • Canton, Rafael: 1 / 4 shared
  • Lin, Lin: 1 / 3 shared
  • Vladimirov, Alexander: 1 / 1 shared
  • Pérez-Panizo, Nuria: 1 / 1 shared
  • Mousa, Adriana: 1 / 1 shared
  • Hernandez, Beatriz Romero: 1 / 1 shared
  • Rodriguez-Dominguez, Mario: 1 / 1 shared
  • Dacal, Elena: 1 / 1 shared
  • Sanchez, Daniel Cuadrado: 1 / 1 shared
  • Bermejo-Pelaez, David: 1 / 1 shared
  • Alamo, Elisa: 1 / 1 shared
  • Galan, Juan Carlos: 1 / 1 shared
  • Mateos-Nozal, Jesus: 1 / 1 shared
Publication period: 2022, 2023


article

Assessment of Artificial Intelligence Language Models and Information Retrieval Strategies for QA in Hematology

  • Gómez-Álvarez, Miguel
  • Soto, Marta Hidalgo
  • Cervera, Maria R.
  • Bermejo-Peláez, David
  • Darias, Oscar
  • García-Villena, Jaime
  • Mendoza-Martínez, Ana
  • Clausell, Adriana Oñós
  • Cuesta, Celina Benavente
  • Martínez-López, Joaquin
  • Luengo-Oroz, Miguel
  • Montoro, Julia
Abstract

Introduction

Large language models (LLMs) have gained popularity due to their natural language generation and interpretation capabilities. Integrating these models into medicine enables multiple tasks, such as summarizing medical histories, synthesizing literature, and suggesting diagnoses. Models like ChatGPT, GPT-4, and Med-PaLM2 (Singhal et al., 2023) have demonstrated their proficiency by achieving high scores on medical tests such as the United States Medical Licensing Examination (USMLE) (Kung et al., 2023). However, LLMs may sometimes be inaccurate, providing unverified and erroneous information.

In this study, we investigate the potential uses of LLMs in hematology, assessing their knowledge through hematology questions from the USMLE. Additionally, we propose augmenting LLMs with retrieval capabilities over medical guidelines in order to eliminate incorrect information. By extracting relevant information from specified medical documents, this approach holds the potential to streamline decision-making processes.

Methods

For comparative purposes, all experiments were conducted using both the GPT-3.5-turbo and GPT-4 models.

In a first step, we evaluated the general knowledge and performance of LLMs in the field of hematology by testing them on a collected dataset of 127 question-answer pairs from the hematology section of the USMLE, covering various aspects of the field.

In a second step, we evaluated the proposed information retrieval framework using a set of 120 multiple-choice questions focused on the 4th revision of the World Health Organization classification of myeloid neoplasms and acute leukemia guidelines (hereafter WHO 2017). By testing the framework on this domain-specific dataset, we aimed to assess its ability to extract specific clinical context and relevant information from complex clinical guidelines. Each question from the WHO 2017 dataset was evaluated using two techniques. First, the questions were assessed with a zero-shot approach (the question, together with the answer options, is posed directly to the model) to gauge the LLM's ability to respond from its own knowledge. Second, we employed our proposed information retrieval approach, enabling the system to search the external document (the WHO 2017 guideline) for extracts relevant and similar to each question. The system then answered based on the retrieved contexts, facilitating more accurate and contextually informed responses. To achieve this, we created an embedding space containing the document's content and conducted a cosine-similarity search between a given question and all content extracts from the document. The top three extracts, ranked by similarity to the question, were used as context for the LLM.

Results

On the 127 USMLE hematology questions, GPT-3.5 in zero-shot mode achieved 63% accuracy, while GPT-4 reached 82%. On the WHO 2017 dataset, the zero-shot approach achieved 51% accuracy for GPT-3.5 and 71% for GPT-4. Incorporating information retrieval, i.e. retrieving the three most relevant extracts from the guidelines, substantially improved performance, with GPT-3.5 achieving 86% accuracy and GPT-4 reaching 97%.

Conclusions

LLMs have great potential, and current models showcase substantial knowledge of hematology. However, ensuring consistency and safety in their responses is critical for reliable application in medical settings (Thirunavukarasu et al., 2023). To address this, we demonstrated the benefits of information retrieval for question answering in hematology, significantly improving response reliability and accuracy by enabling LLMs to deliver more informed and contextually appropriate answers. The concept was validated using the WHO 2017 guideline and can easily be adapted to answer questions based on any set of hematology-related documents. Leveraging LLMs has the potential to significantly enhance the efficiency and effectiveness of clinical, educational, and research work in hematology.
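The retrieval step the abstract describes (embed the guideline extracts, run a cosine-similarity search against the question, and pass the top three matches to the model as context) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `embed` function is a toy hashed bag-of-words stand-in for a real embedding model, and the guideline extracts are invented examples, not text from WHO 2017.

```python
import zlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a real embedding model: hashed bag-of-words, L2-normalized."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[zlib.crc32(token.strip(".,;:?()").encode()) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def top_k_extracts(question: str, extracts: list[str], k: int = 3) -> list[str]:
    """Rank extracts by cosine similarity to the question; return the k best."""
    q = embed(question)
    sims = [float(np.dot(q, embed(e))) for e in extracts]  # unit vectors, so dot = cosine
    ranked = sorted(range(len(extracts)), key=lambda i: sims[i], reverse=True)
    return [extracts[i] for i in ranked[:k]]

# Hypothetical guideline extracts (illustrative only).
guideline_extracts = [
    "Acute myeloid leukemia with t(8;21) is a distinct entity.",
    "Chronic myeloid leukemia is defined by the BCR-ABL1 fusion gene.",
    "Essential thrombocythemia shows sustained thrombocytosis.",
    "Myelodysplastic syndromes show dysplasia and cytopenias.",
]
question = "Which fusion gene defines chronic myeloid leukemia?"
context = top_k_extracts(question, guideline_extracts)
# `context` would then be prepended to the prompt sent to the LLM.
```

In the study's setup, grounding the answer in the retrieved context is what lifted accuracy on the WHO 2017 questions from 51% to 86% (GPT-3.5) and from 71% to 97% (GPT-4).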

Topics
  • impedance spectroscopy
  • experiment