Materials Map

Discover the materials research landscape. Find experts, partners, networks.

The Materials Map is an open tool for improving networking and interdisciplinary exchange within materials research. It enables cross-database searches for cooperation and network partners and makes the research landscape easier to explore.

The dashboard provides detailed information about the selected scientist, such as their publications. It can be filtered and visualises the relationships to co-authors in different diagrams. In addition, a link is provided for finding contact information.

Materials Map under construction

The Materials Map is still under development. At present it is based on a single data source and is therefore incomplete and contains duplicates. We are working on incorporating additional open data sources, such as ORCID, to improve the quality and timeliness of our data. We will update the Materials Map as soon as possible and kindly ask for your patience.

To Graph

1,080 Topics available

To Map

977 Locations available

693,932 People

Showing results for the 693,932 people matching your search filters.

People · Locations · Statistics

| Name | Location | Publications | Topics | Citations | Latest year |
| --- | --- | --- | --- | --- | --- |
| Naji, M. | | 2 | 13 | 3 | 2025 |
| Motta, Antonella | | 8 | 52 | 159 | 2025 |
| Aletan, Dirar | | 1 | 1 | 0 | 2025 |
| Mohamed, Tarek | | 1 | 7 | 2 | 2025 |
| Ertürk, Emre | | 2 | 3 | 0 | 2025 |
| Taccardi, Nicola | | 9 | 81 | 75 | 2025 |
| Kononenko, Denys | | 1 | 8 | 2 | 2025 |
| Petrov, R. H. | Madrid | 46 | 125 | 1k | 2025 |
| Alshaaer, Mazen | Brussels | 17 | 31 | 172 | 2025 |
| Bih, L. | | 15 | 44 | 145 | 2025 |
| Casati, R. | | 31 | 86 | 661 | 2025 |
| Muller, Hermance | | 1 | 11 | 0 | 2025 |
| Kočí, Jan | Prague | 28 | 34 | 209 | 2025 |
| Šuljagić, Marija | | 10 | 33 | 43 | 2025 |
| Kalteremidou, Kalliopi-Artemi | Brussels | 14 | 22 | 158 | 2025 |
| Azam, Siraj | | 1 | 3 | 2 | 2025 |
| Ospanova, Alyiya | | 1 | 6 | 0 | 2025 |
| Blanpain, Bart | | 568 | 653 | 13k | 2025 |
| Ali, M. A. | | 7 | 75 | 187 | 2025 |
| Popa, V. | | 5 | 12 | 45 | 2025 |
| Rančić, M. | | 2 | 13 | 0 | 2025 |
| Ollier, Nadège | | 28 | 75 | 239 | 2025 |
| Azevedo, Nuno Monteiro | | 4 | 8 | 25 | 2025 |
| Landes, Michael | | 1 | 9 | 2 | 2025 |
| Rignanese, Gian-Marco | | 15 | 98 | 805 | 2025 |

Hughes, Ciara

  • Publications: 1
  • Topics: 4
  • Citations: 24

Cooperation score: 37%

Topics

Publications (1/1 displayed)

  • 2021 — Artificial Intelligence for diagnosis of fractures on plain radiographs: a scoping review of current literature (24 citations)

Places of action

Chart of shared publications

  • Bond, Raymond — 1/2 shared
  • Mcconnell, Jonathan — 1/1 shared
  • Rainey, Clare — 1/1 shared
  • Fadden, Sonyia Mc — 1/1 shared

Chart of publication period

  • 2021

Co-Authors (by relevance)

  • Bond, Raymond
  • Mcconnell, Jonathan
  • Rainey, Clare
  • Fadden, Sonyia Mc

Article

Artificial Intelligence for diagnosis of fractures on plain radiographs: a scoping review of current literature

  • Bond, Raymond
  • Mcconnell, Jonathan
  • Rainey, Clare
  • Fadden, Sonyia Mc
  • Hughes, Ciara
Abstract

Aim

To complete a scoping review of the literature investigating the performance of artificial intelligence (AI) systems currently in development for their ability to detect fractures on plain radiographic images.

Methods

A systematic approach was adopted to identify papers for inclusion in this scoping review, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. After application of the inclusion and exclusion criteria, sixteen studies were included in the final review.

Results

With the exception of one study, all studies report that AI models demonstrated an ability to perform fracture identification tasks on plain skeletal radiographs. The metrics used to report performance vary across the reviewed studies and include area under the receiver operating characteristic curve (AUC), sensitivity and specificity, positive predictive value, negative predictive value, precision, recall, F1 score, and accuracy. Reported AUC values ranged from 0.78 (weakest) to 0.99 (best performing system).

Conclusion

The review found great variation in the AI model architectures, training and testing methodology, and the metrics used to report the performance of the networks. Standardising the reporting metrics and methods would permit comparison of proposed models and training methods, which may accelerate the testing of AI systems in the clinical setting. Prevalence-agnostic metrics should be used to reflect the true performance of such systems. Many studies lacked any explainability for the algorithmic decision making of the AI models, and there was a lack of interrogation into the potential reasons for misclassification errors. This type of ‘failure analysis’ would have provided insight into the biases and the aetiology of AI misclassifications.

Topics
  • inclusion