Invited speakers

Prof Kohei Nakajima

Kohei Nakajima is an Associate Professor in the Graduate School of Information Science and Technology at the University of Tokyo. He received BS, MS, and PhD degrees from the University of Tokyo in 2004, 2006, and 2009, respectively. After obtaining his PhD, he spent five years as a postdoctoral fellow and a JSPS Postdoctoral Fellow for Research Abroad at the University of Zurich and ETH Zurich in Switzerland. In 2013, he was awarded the title of Hakubi researcher at Kyoto University, and until 2017 he was an Assistant Professor at the Hakubi Center for Advanced Research at Kyoto University. He was also a JST PRESTO researcher from 2015 to 2019. His research interests include nonlinear dynamical systems, information theory, reservoir computing, physical reservoir computing, and soft robotics.

Title: Physical reservoir computing

Dynamical systems can be used as information processing devices, and reservoir computing (RC) is one of the recent approaches that puts this perspective into practice. In this framework, a low-dimensional input is projected into a high-dimensional dynamical system, typically referred to as a reservoir. If the dynamics of the reservoir involve adequate nonlinearity and memory, then emulating nonlinear dynamical systems requires only adding a linear, static readout from the high-dimensional state space of the reservoir. Because of its generic nature, RC is not limited to digital simulations of neural networks: any high-dimensional dynamical system can serve as a reservoir if it has the appropriate properties. The approach that uses a physical entity, rather than abstract computational units, as a reservoir is called physical reservoir computing (PRC). Engineering applications of PRC have recently been proposed across the full range of physical substrates, from mechanical to photonic and quantum systems. This presentation will focus in particular on how the RC/PRC framework can provide a novel view of information processing in general.
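To make the framework concrete, the following minimal echo state network sketch (in Python/NumPy) illustrates the RC recipe described above; it is an illustrative toy, not any specific system from the talk, and the reservoir size, spectral radius, regularization, and target task are arbitrary assumptions. The input and recurrent weights stay fixed and random; only the linear, static readout is trained, here by ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 1-D input, 200-node reservoir, 2000 time steps.
n_in, n_res, T = 1, 200, 2000

# Fixed random weights: only the readout will be trained.
W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1 for fading memory

# Drive the reservoir with a random input sequence and record its states.
u = rng.uniform(-1, 1, (T, n_in))
x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])        # nonlinear dynamics with memory
    states[t] = x

# Toy target: product of current and previous input (needs memory + nonlinearity).
y = np.roll(u[:, 0], 1) * u[:, 0]

# Training = one linear ridge regression for the static readout.
wash = 100                                   # discard the initial transient
S, Y = states[wash:], y[wash:]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ Y)
print("train MSE:", np.mean((S @ W_out - Y) ** 2))
```

In a physical reservoir the `for` loop would be replaced by the free evolution of a physical system driven by the input; only the readout regression is done in software.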

Dr. Julie Grollier

Julie Grollier is a research director at the CNRS/Thales joint laboratory in France. Her PhD was dedicated to the study of a then-new spintronics effect: spin transfer torque. After two years of postdoctoral research, first at the University of Groningen (Netherlands, group of B.J. van Wees) and then at the Institut d’Electronique Fondamentale (France, group of C. Chappert), she joined CNRS in 2005. Her current research interests include spintronics (the dynamics of nanomagnets under the spin torque effect) and new devices for cognitive computation (in particular memristors).
Julie has over 100 publications and is a frequent invited speaker at international conferences. She is also a Fellow of the American Physical Society. In 2010 she was awarded the Jacques Herbrand prize of the French Academy of Sciences, and in 2018 the Silver Medal of CNRS for her pioneering work on spintronics and brain-inspired computing. She is the recipient of two prestigious European Research Council grants: the "NanoBrain" project (Memristive Artificial Synapses and their Integration in Neural Networks, 2010-2015) and the "BioSPINSpired" project (Bio-inspired Spin-Torque Computing Architectures, 2016-2021).
Julie now leads the nanodevices for bio-inspired computing team, which she founded in 2009. She also chairs the interdisciplinary research network GDR BioComp, coordinating national efforts to build bio-inspired hardware systems.

Title: Equilibrium Propagation for Intrinsically Learning Hardware

Neuromorphic chips open a path to low-power AI at the edge. However, training them on-chip remains a challenge. The flagship algorithm for training neural networks, backpropagation, is not hardware-friendly: it requires a mathematical procedure to compute gradients, external memories to store them, and a dedicated circuit, external to the neural network, to change the weights according to these gradients.

The brain, from which neural networks are inspired, clearly does not work like that. It learns intrinsically: the synapses evolve directly through the spikes applied by the neurons they connect, with no memory other than the synapses themselves and no circuitry external to the neurons. This is very advantageous in terms of energy efficiency and component density.

In this talk I will introduce our approach towards intrinsic on-chip learning. I will show through simulations how we take advantage of the physical roots of an algorithm called Equilibrium Propagation (1) to design dynamical circuits that learn intrinsically with high accuracy (2–4).

(1) B. Scellier and Y. Bengio, Front. Comput. Neurosci. 11 (2017).
(2) M. Ernoult, J. Grollier, D. Querlioz, Y. Bengio, and B. Scellier, in Advances in Neural Information Processing Systems 32, H. Wallach et al., Eds. (2019), pp. 7081–7091.
(3) A. Laborieux et al., Front. Neurosci. 15 (2021).
(4) E. Martin et al., iScience 24 (2021).
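For readers unfamiliar with Equilibrium Propagation (1), the Python/NumPy sketch below illustrates its two-phase structure on a toy network; the layer sizes, simplified leaky rate dynamics (standing in for gradient descent on the network energy of the original paper), and hyperparameters are illustrative assumptions, not the circuits of the talk. The network first settles freely, is then weakly nudged toward the target, and each weight is updated locally from the change in its pre/post activity correlation between the two phases.

```python
import numpy as np

rng = np.random.default_rng(1)
rho = lambda s: np.clip(s, 0.0, 1.0)        # hard-sigmoid activation

n_x, n_h, n_y = 4, 16, 2                     # illustrative layer sizes
W1 = rng.normal(0, 0.1, (n_h, n_x))          # input  -> hidden
W2 = rng.normal(0, 0.1, (n_y, n_h))          # hidden -> output

def relax(x, target=None, beta=0.0, steps=50, dt=0.5):
    """Let the network settle to a fixed point; beta > 0 nudges the output."""
    h, y = np.zeros(n_h), np.zeros(n_y)
    for _ in range(steps):
        dh = -h + rho(W1 @ x + W2.T @ rho(y))   # bidirectional influence
        dy = -y + W2 @ rho(h)
        if target is not None:
            dy += beta * (target - y)           # weak clamping toward the target
        h += dt * dh
        y += dt * dy
    return h, y

# One Equilibrium Propagation update for a single (input, target) pair.
x = rng.uniform(0, 1, n_x)
target = np.array([1.0, 0.0])
beta, lr = 0.5, 0.05

h0, y0 = relax(x)                  # free phase (beta = 0)
hb, yb = relax(x, target, beta)    # nudged phase (beta > 0)

# Local contrastive update: difference of pre/post correlations, scaled by 1/beta.
W1 += (lr / beta) * (np.outer(rho(hb), x) - np.outer(rho(h0), x))
W2 += (lr / beta) * (np.outer(rho(yb), rho(hb)) - np.outer(rho(y0), rho(h0)))
```

The appeal for hardware is visible in the update rule: each weight change depends only on the activities of the two neurons it connects, measured in two successive relaxations of the same physical dynamics, with no gradient memory or external update circuit.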

Prof Sylvain Gigan

Sylvain Gigan obtained an engineering degree from Ecole Polytechnique (Palaiseau, France) in 2000. After a Master's specialization in physics from University Paris XI (Orsay, France), he obtained a PhD in physics in 2004 from University Pierre and Marie Curie (Paris, France) in quantum and nonlinear optics.
From 2004 to 2007, he was a postdoctoral researcher at the University of Vienna (Austria), working on quantum optomechanics in the group of Markus Aspelmeyer and Anton Zeilinger. In 2007, he joined ESPCI ParisTech as an Associate Professor at the Langevin Institute, where he started working on optical imaging in complex media and wavefront shaping techniques.

Since 2014, he has been a full professor at Sorbonne Université and a group leader in Laboratoire Kastler-Brossel at Ecole Normale Supérieure (ENS, Paris). His research interests range from fundamental investigations of light propagation in complex media to biomedical imaging, sensing, signal processing, and quantum optics and quantum information in complex media.

Title: An Optical Random Machine for inference and training