Materials Map

Discover the materials research landscape. Find experts, partners, networks.

  • About
  • Privacy Policy
  • Legal Notice
  • Contact

The Materials Map is an open tool for improving networking and interdisciplinary exchange within materials research. It enables cross-database searches for cooperation and network partners and supports discovery of the research landscape.

The dashboard provides detailed information about the selected scientist, e.g. their publications. It can be filtered and shows relationships to co-authors in different diagrams. In addition, a link is provided for finding contact information.


Materials Map under construction

The Materials Map is still under development. In its current state, it is based on a single data source and is therefore incomplete and contains duplicates. We are working on incorporating new open data sources such as ORCID to improve the quality and timeliness of our data. We will update the Materials Map as soon as possible and kindly ask for your patience.

To Graph

1,080 Topics available

To Map

977 Locations available

693,932 People

Showing results for the 693,932 people selected by your search filters.

Page 1 of 27,758
People | Locations | Statistics
Naji, M.
  • 2
  • 13
  • 3
  • 2025
Motta, Antonella
  • 8
  • 52
  • 159
  • 2025
Aletan, Dirar
  • 1
  • 1
  • 0
  • 2025
Mohamed, Tarek
  • 1
  • 7
  • 2
  • 2025
Ertürk, Emre
  • 2
  • 3
  • 0
  • 2025
Taccardi, Nicola
  • 9
  • 81
  • 75
  • 2025
Kononenko, Denys
  • 1
  • 8
  • 2
  • 2025
Petrov, R. H. (Madrid)
  • 46
  • 125
  • 1k
  • 2025
Alshaaer, Mazen (Brussels)
  • 17
  • 31
  • 172
  • 2025
Bih, L.
  • 15
  • 44
  • 145
  • 2025
Casati, R.
  • 31
  • 86
  • 661
  • 2025
Muller, Hermance
  • 1
  • 11
  • 0
  • 2025
Kočí, Jan (Prague)
  • 28
  • 34
  • 209
  • 2025
Šuljagić, Marija
  • 10
  • 33
  • 43
  • 2025
Kalteremidou, Kalliopi-Artemi (Brussels)
  • 14
  • 22
  • 158
  • 2025
Azam, Siraj
  • 1
  • 3
  • 2
  • 2025
Ospanova, Alyiya
  • 1
  • 6
  • 0
  • 2025
Blanpain, Bart
  • 568
  • 653
  • 13k
  • 2025
Ali, M. A.
  • 7
  • 75
  • 187
  • 2025
Popa, V.
  • 5
  • 12
  • 45
  • 2025
Rančić, M.
  • 2
  • 13
  • 0
  • 2025
Ollier, Nadège
  • 28
  • 75
  • 239
  • 2025
Azevedo, Nuno Monteiro
  • 4
  • 8
  • 25
  • 2025
Landes, Michael
  • 1
  • 9
  • 2
  • 2025
Rignanese, Gian-Marco
  • 15
  • 98
  • 805
  • 2025

Thorup, Mikkel

  • Google
  • 1
  • 5
  • 0

Cooperation score: 37%

Topics

Publications (1/1 displayed)

  • 2020: No Repetition (citations)

Places of action

Chart of shared publications
  • Rasmussen, Peter: 1 / 3 shared
  • Kipouridis, Evangelos: 1 / 1 shared
  • Das, Debarati: 1 / 1 shared
  • Aamand, Anders: 1 / 1 shared
  • Knudsen, Jakob Bæk Tejs: 1 / 1 shared
Chart of publication period: 2020

Co-Authors (by relevance)

  • Rasmussen, Peter
  • Kipouridis, Evangelos
  • Das, Debarati
  • Aamand, Anders
  • Knudsen, Jakob Bæk Tejs
Organizations | Location | People

article

No Repetition

  • Rasmussen, Peter
  • Kipouridis, Evangelos
  • Das, Debarati
  • Thorup, Mikkel
  • Aamand, Anders
  • Knudsen, Jakob Bæk Tejs
Abstract

To get estimators that work within a certain error bound with high probability, a common strategy is to design one that works with constant probability, and then boost the probability using independent repetitions. Important examples of this approach are small-space algorithms for estimating the number of distinct elements in a stream, or estimating the set similarity between large sets. Using standard strongly universal hashing to process each element, we get a sketch-based estimator where the probability of a too large error is, say, 1/4. By performing $r$ independent repetitions and taking the median of the estimators, the error probability falls exponentially in $r$. However, running $r$ independent experiments increases the processing time by a factor $r$.

Here we make the point that if we have a hash function with strong concentration bounds, then we get the same high probability bounds without any need for repetitions. Instead of $r$ independent sketches, we have a single sketch that is $r$ times bigger, so the total space is the same. However, we only apply a single hash function, so we save a factor $r$ in time, and the overall algorithms just get simpler.

Fast practical hash functions with strong concentration bounds were recently proposed by Aamand et al. (to appear in STOC 2020). Using their hashing schemes, the algorithms thus become very fast and practical, suitable for online processing of high-volume data streams.
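For illustration, the following is a minimal Python sketch of the repetition-based baseline the abstract contrasts against: a simple min-value distinct-elements estimator with 2-wise independent hashing, boosted by taking the median of $r$ independent repetitions. The estimator, the hash construction, and the constants are illustrative assumptions, not the paper's method; the paper's point is that one strongly concentrated hash function with a single $r$-times-larger sketch gives the same guarantee while hashing each element only once.

```python
import random
import statistics

P = 2**61 - 1  # prime field size for the illustrative hash family

def make_hash(rng):
    """Draw a 2-wise independent hash h: int -> (0, 1] (illustrative choice)."""
    a = rng.randrange(1, P)
    b = rng.randrange(0, P)
    return lambda x: ((a * x + b) % P + 1) / (P + 1)

def min_value_estimate(stream, h):
    """Single-sketch estimator: over m distinct elements the minimum hash value
    concentrates around 1/(m + 1), so 1/min - 1 estimates m, but only with
    constant success probability."""
    smallest = min(h(x) for x in stream)
    return 1.0 / smallest - 1.0

def median_of_repetitions(stream, r, seed=0):
    """Boosting step from the abstract: run r independent repetitions and take
    the median, so the error probability drops exponentially in r -- at the
    cost of hashing every stream element r times."""
    rng = random.Random(seed)
    estimates = [min_value_estimate(stream, make_hash(rng)) for _ in range(r)]
    return statistics.median(estimates)

if __name__ == "__main__":
    stream = [i % 10_000 for i in range(100_000)]  # 10,000 distinct elements
    print(median_of_repetitions(stream, r=9))
```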

Topics
  • impedance spectroscopy
  • experiment