People | Locations | Statistics |
---|---|---|
Naji, M. | | |
Motta, Antonella | | |
Aletan, Dirar | | |
Mohamed, Tarek | | |
Ertürk, Emre | | |
Taccardi, Nicola | | |
Kononenko, Denys | | |
Petrov, R. H. | Madrid | |
Alshaaer, Mazen | Brussels | |
Bih, L. | | |
Casati, R. | | |
Muller, Hermance | | |
Kočí, Jan | Prague | |
Šuljagić, Marija | | |
Kalteremidou, Kalliopi-Artemi | Brussels | |
Azam, Siraj | | |
Ospanova, Alyiya | | |
Blanpain, Bart | | |
Ali, M. A. | | |
Popa, V. | | |
Rančić, M. | | |
Ollier, Nadège | | |
Azevedo, Nuno Monteiro | | |
Landes, Michael | | |
Rignanese, Gian-Marco | | |
Bobbert, Peter A.
Cooperation score: 37%
Topics
Publications (4)
- 2021: Dopant Network Processing Units: Towards Efficient Neural-network Emulators with High-capacity Nanoelectronic Nodes
- 2019: Short-channel vertical organic field-effect transistors with high on/off ratios
- 2018: Simulating phase separation during spin coating of a polymer–fullerene blend
- 2018: Simulating phase separation during spin coating of a polymer–fullerene blend: a joint computational and experimental investigation
Article
Dopant Network Processing Units: Towards Efficient Neural-network Emulators with High-capacity Nanoelectronic Nodes
Abstract
The rapidly growing computational demands of deep neural networks require novel hardware designs. Recently, tuneable nanoelectronic devices were developed based on hopping electrons through a network of dopant atoms in silicon. These ‘dopant network processing units’ (DNPUs) are highly energy-efficient and have potentially very high throughput. By adapting the control voltages applied to its electrodes, a single DNPU can solve a variety of linearly non-separable classification problems. However, using a single device has limitations due to the implicit single-node architecture. This paper presents a promising novel approach to neural information processing by introducing DNPUs as high-capacity neurons and moving from a single to a multi-neuron framework. By implementing and testing a small multi-DNPU classifier in hardware, we show that feed-forward DNPU networks improve the performance of a single DNPU from 77% to 94% test accuracy on a binary classification task with concentric classes on a plane. Furthermore, motivated by the integration of DNPUs with memristor crossbar arrays, we study the potential of using DNPUs in combination with linear layers. We show by simulation that an MNIST classifier with only 10 DNPU nodes achieves over 96% test accuracy. Our results pave the way towards hardware neural network emulators that offer atomic-scale information processing with low latency and energy consumption.
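The abstract describes an architecture in which DNPUs act as high-capacity neuron-like nodes combined with linear layers into a feed-forward classifier, with only each node's control voltages being tuned. The sketch below is a hypothetical software emulation of that idea, not the authors' implementation: each DNPU is stood in for by a frozen nonlinear surrogate whose trainable control voltages steer its response, and ten such nodes feed a linear readout, loosely mirroring the 10-node MNIST setup mentioned above. All class names, layer sizes, electrode counts, and the surrogate form are illustrative assumptions; PyTorch is used only for convenience.

```python
# Hypothetical sketch of a DNPU-style network (not the authors' code).
import torch
import torch.nn as nn

class DNPUSurrogate(nn.Module):
    """Stand-in for one dopant-network node: a frozen nonlinear 'device model'
    whose behaviour is steered only by trainable control voltages."""
    def __init__(self, n_data_inputs: int, n_controls: int = 5):
        super().__init__()
        # Frozen random surrogate acting as placeholder device physics.
        self.device_model = nn.Sequential(
            nn.Linear(n_data_inputs + n_controls, 32), nn.Tanh(),
            nn.Linear(32, 1),
        )
        for p in self.device_model.parameters():
            p.requires_grad_(False)
        # Only the control voltages are optimised, as in DNPU tuning.
        self.controls = nn.Parameter(torch.zeros(n_controls))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ctrl = self.controls.expand(x.shape[0], -1)
        return self.device_model(torch.cat([x, ctrl], dim=1))  # one output per sample

class DNPUClassifier(nn.Module):
    """Ten DNPU surrogates on a shared input, followed by a linear readout."""
    def __init__(self, n_inputs: int = 784, n_nodes: int = 10, n_classes: int = 10):
        super().__init__()
        # A linear layer projects the image onto each node's two data electrodes.
        self.projection = nn.Linear(n_inputs, n_nodes * 2)
        self.nodes = nn.ModuleList(DNPUSurrogate(2) for _ in range(n_nodes))
        self.readout = nn.Linear(n_nodes, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.projection(x).view(x.shape[0], len(self.nodes), 2)
        node_out = torch.cat([node(z[:, i]) for i, node in enumerate(self.nodes)], dim=1)
        return self.readout(node_out)

if __name__ == "__main__":
    model = DNPUClassifier()
    logits = model(torch.randn(4, 784))   # batch of 4 dummy "images"
    print(logits.shape)                   # torch.Size([4, 10])
```

In a real setting the frozen surrogate would be replaced by a trained device model or by the hardware itself, and training would adjust only the control voltages and the linear layers.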