In today’s era of artificial intelligence (AI), machine/deep learning and neuromorphic computing algorithms typically require enormous computational and memory resources for training model parameters and/or inference. The back-and-forth data transfer between processing cores and memory via limited I/O poses a “memory wall” problem for the entire system. A paradigm shift in computing toward “compute-in-memory (CIM)” has therefore emerged as an attractive solution. This approach integrates logic and memory arrays in a fine-grained fashion, offloading data-intensive computations to the memory components.

Memory arrays (including, e.g., emerging resistance-change nanodevices) can be customized as synaptic arrays to parallelize the matrix-vector multiplication, or weighted-sum, operations in neural networks. AI hardware demands co-design across devices, circuits, and algorithms, and offers the potential for orders-of-magnitude improvements in speed and energy efficiency on intelligent tasks such as image classification and language translation. We are focusing on the following topics:
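As a concrete illustration of the weighted-sum operation described above, the sketch below (with made-up conductance and voltage values, not measured device data) shows how a resistive crossbar performs a matrix-vector product in a single analog read step: input voltages drive the rows, and each column current accumulates the products by Kirchhoff’s current law.

```python
import numpy as np

# Hypothetical 4x3 crossbar: each cell stores a conductance G[i, j] in siemens.
G = np.array([[1.0e-6, 2.0e-6, 0.5e-6],
              [3.0e-6, 1.0e-6, 2.0e-6],
              [0.5e-6, 4.0e-6, 1.0e-6],
              [2.0e-6, 2.0e-6, 3.0e-6]])

# Input voltages applied to the word lines (one per row).
v = np.array([0.2, 0.1, 0.3, 0.05])

# Kirchhoff's current law: each bit-line current is the weighted sum
# I_j = sum_i V_i * G[i, j] -- i.e., a matrix-vector product computed
# in place, in one parallel read, rather than element by element.
i_out = v @ G
# i_out -> [0.75e-6, 1.8e-6, 0.75e-6] amperes
```

In a real array, the analog bit-line currents would then be digitized by peripheral ADCs; this sketch only captures the in-memory multiply-accumulate itself.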

  1. Engineering synaptic devices for multilevel state tuning and symmetric, linear incremental programming; engineering neuronal devices for oscillation, spiking, etc.
  2. Integrating synaptic arrays with complementary metal-oxide-semiconductor (CMOS) circuits.
  3. Evaluating system-level performance by using design automation tools to benchmark various synaptic devices and array architectures.
  4. Exploring emerging compute paradigms beyond deep learning, including the probabilistic implementation of the Ising model for combinatorial optimization.
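Topic 1 concerns how far real devices deviate from the ideal linear, symmetric weight update. A common behavioral way to model this (the function and parameter values below are illustrative assumptions, not data from any particular device) is an exponential-saturation conductance trace, where a nonlinearity parameter of zero recovers the ideal linear device:

```python
import numpy as np

def conductance_trace(n_pulses, g_min=0.0, g_max=1.0, nl=2.0):
    """Conductance vs. identical potentiation pulses (behavioral model).

    nl = 0 gives the ideal linear update; larger nl means the device
    saturates early, so equal pulses produce unequal conductance steps.
    """
    n = np.arange(n_pulses + 1)
    if nl == 0:
        return g_min + (g_max - g_min) * n / n_pulses  # ideal linear device
    norm = 1.0 - np.exp(-nl)  # normalize so the trace still spans g_min..g_max
    return g_min + (g_max - g_min) * (1.0 - np.exp(-nl * n / n_pulses)) / norm

linear = conductance_trace(32, nl=0)    # ideal reference
real = conductance_trace(32, nl=3.0)    # early-saturating nonlinear device
```

Because the nonlinear curve rises above the linear one at intermediate pulse counts, identical programming pulses no longer map to uniform weight increments, which is exactly the training-accuracy problem that linear, symmetric device engineering targets.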
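For topic 4, a minimal software sketch of the probabilistic Ising approach is a Metropolis-style annealer: spins flip stochastically under a decreasing temperature, driving the system toward a low-energy configuration that encodes a solution of the combinatorial problem. The coupling matrix below is a toy 4-spin instance chosen for illustration, not from the original text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy antiferromagnetic couplings on a 4-node ring (illustrative values).
J = np.array([[ 0, -1, -1,  0],
              [-1,  0,  0, -1],
              [-1,  0,  0, -1],
              [ 0, -1, -1,  0]], dtype=float)

def energy(s, J):
    # Ising energy H = -(1/2) s^T J s (no external field term).
    return -0.5 * s @ J @ s

# Probabilistic single-spin updates with a geometric cooling schedule.
s = rng.choice([-1.0, 1.0], size=4)
for T in np.geomspace(4.0, 0.05, 200):
    i = rng.integers(4)
    dE = 2.0 * s[i] * (J[i] @ s)  # energy change if spin i is flipped
    if dE < 0 or rng.random() < np.exp(-dE / T):
        s[i] = -s[i]
```

In a hardware implementation, the stochastic accept/reject step is what candidate probabilistic devices would provide natively, replacing the pseudorandom draws used in this software sketch.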