Materials Map

Discover the materials research landscape. Find experts, partners, networks.

  • About
  • Privacy Policy
  • Legal Notice
  • Contact

The Materials Map is an open tool for improving networking and interdisciplinary exchange within materials research. It enables cross-database searches for cooperation and network partners and helps users discover the research landscape.

The dashboard provides detailed information about the selected scientist, e.g. publications. It can be filtered and visualizes relationships to co-authors in various diagrams. In addition, a link is provided for finding contact information.


Materials Map under construction

The Materials Map is still under development. In its current state, it is based on a single data source and is therefore incomplete and contains duplicates. We are working on incorporating additional open data sources, such as ORCID, to improve the quality and timeliness of our data. We will update the Materials Map as soon as possible and kindly ask for your patience.

To Graph

1,080 Topics available

To Map

977 Locations available

693,932 People

Show results for the 693,932 people selected by your search filters.

  • Naji, M. · 2 · 13 · 3 · 2025
  • Motta, Antonella · 8 · 52 · 159 · 2025
  • Aletan, Dirar · 1 · 1 · 0 · 2025
  • Mohamed, Tarek · 1 · 7 · 2 · 2025
  • Ertürk, Emre · 2 · 3 · 0 · 2025
  • Taccardi, Nicola · 9 · 81 · 75 · 2025
  • Kononenko, Denys · 1 · 8 · 2 · 2025
  • Petrov, R. H. (Madrid) · 46 · 125 · 1k · 2025
  • Alshaaer, Mazen (Brussels) · 17 · 31 · 172 · 2025
  • Bih, L. · 15 · 44 · 145 · 2025
  • Casati, R. · 31 · 86 · 661 · 2025
  • Muller, Hermance · 1 · 11 · 0 · 2025
  • Kočí, Jan (Prague) · 28 · 34 · 209 · 2025
  • Šuljagić, Marija · 10 · 33 · 43 · 2025
  • Kalteremidou, Kalliopi-Artemi (Brussels) · 14 · 22 · 158 · 2025
  • Azam, Siraj · 1 · 3 · 2 · 2025
  • Ospanova, Alyiya · 1 · 6 · 0 · 2025
  • Blanpain, Bart · 568 · 653 · 13k · 2025
  • Ali, M. A. · 7 · 75 · 187 · 2025
  • Popa, V. · 5 · 12 · 45 · 2025
  • Rančić, M. · 2 · 13 · 0 · 2025
  • Ollier, Nadège · 28 · 75 · 239 · 2025
  • Azevedo, Nuno Monteiro · 4 · 8 · 25 · 2025
  • Landes, Michael · 1 · 9 · 2 · 2025
  • Rignanese, Gian-Marco · 15 · 98 · 805 · 2025

Mustafa, Hafiz Tayyab (Google) · 1 · 1 · 76

Cooperation score: 37%

Topics

Publications (1/1 displayed)

  • 2019 · Multi-scale convolutional neural network for multi-focus image fusion · 76 citations

Places of action

[Chart of shared publications: Yang, Jie, 1 / 9 shared]
[Chart of publication period: 2019]

Co-Authors (by relevance)

  • Yang, Jie

Article

Multi-scale convolutional neural network for multi-focus image fusion

  • Yang, Jie
  • Mustafa, Hafiz Tayyab
Abstract

In this study, we present a new deep learning (DL) method for fusing multi-focus images. Current multi-focus image fusion (MFIF) approaches based on DL methods mainly treat MFIF as a classification task. These methods use a convolutional neural network (CNN) as a classifier to identify pixels as focused or defocused. However, due to the unavailability of labeled data to train networks, existing DL-based supervised models for MFIF add Gaussian blur to focused images to produce training data. DL-based unsupervised models are also too simple and only applicable to fusion tasks other than MFIF. To address these issues, we propose a new MFIF method that aims to learn the feature extraction, fusion, and reconstruction components together, producing a complete unsupervised, end-to-end trainable deep CNN. To enhance the feature extraction capability of the CNN, we introduce a Siamese multi-scale feature extraction module that achieves promising performance. In our proposed network, we apply multi-scale convolutions along with skip connections to extract more useful common features from a multi-focus image pair. Instead of using basic loss functions to train the CNN, our model uses the structural similarity (SSIM) measure as its training loss. Moreover, the fused images are reconstructed in a multi-scale manner to guarantee more accurate restoration. Our proposed model can process images of variable size during testing and validation. Experimental results on various test images validate that our proposed method yields fused images of better quality, superior to those generated by compared state-of-the-art image fusion methods.
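To make the approach concrete, below is a minimal, illustrative PyTorch sketch of the two core ideas from the abstract: a Siamese multi-scale feature extractor whose weights are shared across the image pair, and an unsupervised SSIM-based training loss that needs no focus labels. All module names, layer sizes, and hyperparameters here are assumptions for illustration only; the paper's skip connections and full multi-scale reconstruction are omitted, so this is not a reproduction of the authors' actual architecture.

```python
# Illustrative sketch only: a Siamese multi-scale encoder and an SSIM training
# loss for unsupervised multi-focus image fusion. Names and sizes are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

def ssim(x, y, c1=0.01**2, c2=0.03**2, win=11):
    """Mean structural similarity with a uniform window (images in [0, 1])."""
    mu_x = F.avg_pool2d(x, win, 1, win // 2)
    mu_y = F.avg_pool2d(y, win, 1, win // 2)
    var_x = F.avg_pool2d(x * x, win, 1, win // 2) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, win // 2) - mu_y ** 2
    cov_xy = F.avg_pool2d(x * y, win, 1, win // 2) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

class MultiScaleBlock(nn.Module):
    """Parallel 3x3 / 5x5 / 7x7 convolutions, concatenated: multi-scale features."""
    def __init__(self, cin, cout):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(cin, cout, k, padding=k // 2) for k in (3, 5, 7)
        )
    def forward(self, x):
        return torch.cat([F.relu(b(x)) for b in self.branches], dim=1)

class FusionNet(nn.Module):
    """Siamese encoder (shared weights on both inputs), concat, reconstruct."""
    def __init__(self):
        super().__init__()
        self.encode = MultiScaleBlock(1, 16)  # same module applied to both inputs
        self.reconstruct = nn.Sequential(     # 2 * (3 * 16) = 96 input channels
            nn.Conv2d(96, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, a, b):
        fa, fb = self.encode(a), self.encode(b)  # Siamese: shared weights
        return self.reconstruct(torch.cat([fa, fb], dim=1))

# Unsupervised training step: maximize SSIM of the fused image against both
# differently focused inputs, so no focused/defocused labels are required.
net = FusionNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
a, b = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)  # toy image pair
fused = net(a, b)
loss = 2.0 - ssim(fused, a) - ssim(fused, b)  # SSIM-based loss, no labels
opt.zero_grad(); loss.backward(); opt.step()
```

Because the network is fully convolutional, it accepts inputs of variable size at test time, which matches the abstract's claim; the SSIM objective against both inputs is what removes the need for synthetically blurred training labels.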

Topics
  • extraction