Materials Map

Discover the materials research landscape. Find experts, partners, networks.

  • About
  • Privacy Policy
  • Legal Notice
  • Contact

The Materials Map is an open tool for improving networking and interdisciplinary exchange within materials research. It enables cross-database search for cooperation and network partners and exploration of the research landscape.

The dashboard provides detailed information about the selected scientist, e.g. their publications. It can be filtered and shows relationships to co-authors in different diagrams. In addition, a link is provided for finding contact information.


Materials Map under construction

The Materials Map is still under development. In its current state it is based on only a single data source and is therefore incomplete and contains duplicates. We are working on incorporating additional open data sources such as ORCID to improve the quality and timeliness of our data. We will update the Materials Map as soon as possible and kindly ask for your patience.

To Graph

1,080 Topics available

To Map

977 Locations available

693,932 People

Showing results for 693,932 people selected by your search filters.

Page 1 of 27,758
People | Locations | Statistics
Naji, M.
  • 2
  • 13
  • 3
  • 2025
Motta, Antonella
  • 8
  • 52
  • 159
  • 2025
Aletan, Dirar
  • 1
  • 1
  • 0
  • 2025
Mohamed, Tarek
  • 1
  • 7
  • 2
  • 2025
Ertürk, Emre
  • 2
  • 3
  • 0
  • 2025
Taccardi, Nicola
  • 9
  • 81
  • 75
  • 2025
Kononenko, Denys
  • 1
  • 8
  • 2
  • 2025
Petrov, R. H. (Madrid)
  • 46
  • 125
  • 1k
  • 2025
Alshaaer, Mazen (Brussels)
  • 17
  • 31
  • 172
  • 2025
Bih, L.
  • 15
  • 44
  • 145
  • 2025
Casati, R.
  • 31
  • 86
  • 661
  • 2025
Muller, Hermance
  • 1
  • 11
  • 0
  • 2025
Kočí, Jan (Prague)
  • 28
  • 34
  • 209
  • 2025
Šuljagić, Marija
  • 10
  • 33
  • 43
  • 2025
Kalteremidou, Kalliopi-Artemi (Brussels)
  • 14
  • 22
  • 158
  • 2025
Azam, Siraj
  • 1
  • 3
  • 2
  • 2025
Ospanova, Alyiya
  • 1
  • 6
  • 0
  • 2025
Blanpain, Bart
  • 568
  • 653
  • 13k
  • 2025
Ali, M. A.
  • 7
  • 75
  • 187
  • 2025
Popa, V.
  • 5
  • 12
  • 45
  • 2025
Rančić, M.
  • 2
  • 13
  • 0
  • 2025
Ollier, Nadège
  • 28
  • 75
  • 239
  • 2025
Azevedo, Nuno Monteiro
  • 4
  • 8
  • 25
  • 2025
Landes, Michael
  • 1
  • 9
  • 2
  • 2025
Rignanese, Gian-Marco
  • 15
  • 98
  • 805
  • 2025

Kitchen, Neil

  • Google
  • 1
  • 9
  • 12

Cooperation score: 37%

Topics

Publications (1/1 displayed)

  • 2022: Automated Koos Classification of Vestibular Schwannoma (12 citations)

Places of action

Chart of shared publications
Paddick, Ian
1 / 1 shared
Ourselin, Sebastien
1 / 10 shared
Vercauteren, Tom
1 / 4 shared
Shapey, Jonathan
1 / 2 shared
Kujawa, Aaron
1 / 1 shared
Okasha, Mohamed
1 / 2 shared
Connor, Steve
1 / 1 shared
Oviedova, Anna
1 / 1 shared
Grishchuk, Diana
1 / 1 shared
Chart of publication period
2022

Co-Authors (by relevance)

  • Paddick, Ian
  • Ourselin, Sebastien
  • Vercauteren, Tom
  • Shapey, Jonathan
  • Kujawa, Aaron
  • Okasha, Mohamed
  • Connor, Steve
  • Oviedova, Anna
  • Grishchuk, Diana
Organizations | Locations | People

article

Automated Koos Classification of Vestibular Schwannoma

  • Paddick, Ian
  • Ourselin, Sebastien
  • Kitchen, Neil
  • Vercauteren, Tom
  • Shapey, Jonathan
  • Kujawa, Aaron
  • Okasha, Mohamed
  • Connor, Steve
  • Oviedova, Anna
  • Grishchuk, Diana
Abstract

Objective: The Koos grading scale is a frequently used classification system for vestibular schwannoma (VS) that accounts for extrameatal tumor dimension and compression of the brain stem. We propose an artificial intelligence (AI) pipeline to fully automate the segmentation and Koos classification of VS from MRI to improve clinical workflow and facilitate patient management.

Methods: We propose a method for Koos classification that relies not only on the available images but also on automatically generated segmentations. Artificial neural networks were trained and tested on manual tumor segmentations and ground-truth Koos grades of contrast-enhanced T1-weighted (ceT1) and high-resolution T2-weighted (hrT2) MR images from subjects with a single sporadic VS, acquired on a single scanner with a standardized protocol. The first stage of the pipeline is a convolutional neural network (CNN) that segments the VS and 7 adjacent structures. For the second stage, we propose two complementary approaches that are combined in an ensemble: the first applies a second CNN to the segmentation output to predict the Koos grade; the other extracts handcrafted features that are passed to a Random Forest classifier. The pipeline results were compared to those achieved by two neurosurgeons.

Results: Eligible patients (n = 308) were pseudo-randomly split into 5 groups to evaluate model performance with 5-fold cross-validation. The weighted macro-averaged mean absolute error (MA-MAE), weighted macro-averaged F1 score (F1), and accuracy of the ensemble model on the test sets were MA-MAE = 0.11 ± 0.05, F1 = 89.3 ± 3.0%, and accuracy = 89.3 ± 2.9%, comparable to the average performance of the two neurosurgeons: MA-MAE = 0.11 ± 0.08, F1 = 89.1 ± 5.2%, accuracy = 88.6 ± 5.8%. Inter-rater reliability was assessed by calculating Fleiss' generalized kappa (k = 0.68) on all 308 cases; intra-rater reliabilities of annotator 1 (k = 0.95) and annotator 2 (k = 0.82) were calculated according to the weighted kappa metric with quadratic (Fleiss-Cohen) weights on 15 randomly selected cases.

Conclusions: We developed the first AI framework to automatically classify VS according to the Koos scale. The excellent results show that the accuracy of the framework is comparable to that of neurosurgeons and may therefore facilitate management of patients with VS. The models, code, and ground-truth Koos grades for a subset of publicly available images (n = 188) will be released upon publication.
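The abstract's second stage fuses two classifiers in an ensemble and reports a weighted macro-averaged MAE. A minimal sketch of both ideas, under stated assumptions: Koos grades run 1 to 4, the ensemble averages class probabilities from the CNN and Random Forest branches with equal weight, and "weighted macro-averaged MAE" is read as the per-grade mean absolute error weighted by each grade's prevalence. The paper's exact definitions may differ; the function names here are illustrative, not from the released code.

```python
def ensemble_grade(cnn_probs, rf_probs, w=0.5):
    """Fuse class probabilities from the two branches; return a Koos grade (1-4)."""
    fused = [w * a + (1 - w) * b for a, b in zip(cnn_probs, rf_probs)]
    return 1 + max(range(len(fused)), key=fused.__getitem__)

def weighted_macro_mae(y_true, y_pred):
    """Per-grade MAE, weighted by how often each grade occurs in y_true."""
    total = len(y_true)
    score = 0.0
    for grade in sorted(set(y_true)):
        errors = [abs(t - p) for t, p in zip(y_true, y_pred) if t == grade]
        score += (len(errors) / total) * (sum(errors) / len(errors))
    return score

# Example: the two branches mildly disagree and the ensemble arbitrates.
cnn = [0.10, 0.60, 0.20, 0.10]  # CNN branch favors grade 2
rf = [0.05, 0.40, 0.45, 0.10]   # Random Forest branch leans toward grade 3
print(ensemble_grade(cnn, rf))  # fused probabilities favor grade 2
```

Soft-voting of this kind rewards branches that are confident for different reasons (image appearance vs. handcrafted segmentation features), which is consistent with the abstract's description of the two approaches as complementary.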

Topics
  • impedance spectroscopy
  • random
  • size-exclusion chromatography
  • microwave-assisted extraction