
Deep Learning: The Task

𝒪(N³) computational complexity

Matrices must be shuttled between memory and processor

Analog Deep Learning

Local (in-memory) processing
𝒪(N) computational complexity (Fully-parallel operation)
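A minimal sketch of this contrast in Python/NumPy (illustrative only; the function and variable names below are ours, not any hardware API): for an N x N layer, a digital processor performs N^2 multiply-accumulates per input vector, on the order of N^3 for matrix-matrix products, and must fetch every weight from memory, whereas a crossbar of programmable resistors holds the weights in place as conductances and produces all output currents in one parallel physical step, leaving only O(N) work to apply inputs and read outputs.

import numpy as np

# Digital baseline: every weight is fetched from memory and multiplied explicitly.
# One input vector costs N*N multiply-accumulates; a batch of N vectors ~ N^3.
def digital_layer(W, x):
    N = len(x)
    y = np.zeros(N)
    for i in range(N):
        for j in range(N):
            y[i] += W[i, j] * x[j]
    return y

# Analog crossbar stand-in: weights live inside the array as conductances G,
# inputs are applied as voltages v, and the output currents i = G @ v appear
# in a single parallel step (Ohm's law per device, Kirchhoff's law per column),
# so no matrices are shuttled between memory and processor.
def crossbar_model(G, v):
    return G @ v  # the matrix product here stands in for the array's physics

rng = np.random.default_rng(0)
N = 8
W = rng.standard_normal((N, N))
x = rng.standard_normal(N)
assert np.allclose(digital_layer(W, x), crossbar_model(W, x))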


Previous Roadblocks Before Analog Training Processors

Unit Devices
Programmable Resistive Elements
Si-Incompatible, Slow, or Uncontrollable

Architectures
Analog Core & Digital Periphery
Redundant Circuitry or Serial Operations

Algorithms
Gradient-Descent-Type Optimizer
Highly Sensitive to Nonidealities
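Why nonidealities are such a problem can be seen in the toy Python experiment below (an illustrative sketch under simplified assumptions, not a model of any particular device or of the training algorithm described later): when the weight updates prescribed by stochastic gradient descent are applied with even a modest asymmetry between increase and decrease steps, the kind of imperfection a programmable resistor can exhibit, the learned weight drifts away from the correct answer on even a one-parameter problem.

import numpy as np

# Toy problem: learn w so that w*x matches noisy samples of y = 2x using SGD.
# 'asymmetry' scales downward weight updates, mimicking a device whose
# conductance decreases less readily than it increases (illustrative only).
def train(asymmetry=1.0, steps=20000, lr=0.01, seed=0):
    rng = np.random.default_rng(seed)
    w = 0.0
    for _ in range(steps):
        x = rng.standard_normal()
        y = 2.0 * x + rng.standard_normal()   # noisy target, true slope = 2
        grad = (w * x - y) * x                # gradient of 0.5 * (w*x - y)^2
        dw = -lr * grad
        if dw < 0:
            dw *= asymmetry                   # nonideal: weaker decrease steps
        w += dw
    return w

print(round(train(asymmetry=1.0), 2))   # ideal updates: settles near the target 2.0
print(round(train(asymmetry=0.3), 2))   # asymmetric updates rectify gradient noise
                                        # into a systematic bias above 2.0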
 


All key components are finally here.

3 Major Breakthroughs

Si-compatible technology

P-SiO2 solid-state proton electrolyte

Ultrafast ideal devices

Nanosecond-femtojoule protonics

Novel training algorithm

High-accuracy deep learning
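As rough orientation only (round numbers assumed purely for illustration; measured device figures are reported in the July 29, 2022 publication below), "nanosecond-femtojoule" translates into very small per-update costs:

# Back-of-the-envelope arithmetic on the "nanosecond-femtojoule" wording above
# (illustrative round numbers, not measured device parameters).
pulse_energy = 1e-15   # ~1 femtojoule per programming pulse
pulse_width = 1e-9     # ~1 nanosecond per pulse

print(f"{pulse_energy / pulse_width:.1e} W during a pulse")            # ~1.0e-06 W, i.e. about a microwatt
print(f"{1e6 * pulse_energy:.1e} J to pulse a million devices once")   # ~1.0e-09 J, i.e. about a nanojoule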


July 29, 2022


Nanosecond Protonic Programmable Resistors for Analog Deep Learning

MIT Best PhD Dissertation Award


Our Supporters

ARPA-E
Activate
Fontinalis
HCVC
ISV