Optoelectronic Vision for Neuromorphic Architecture and Multi-Spectral Deep Learning for Biomedical Imaging
The increasing demand for energy-efficient and adaptive computing is pushing conventional silicon-based architectures to their limits. As devices become more compact and power constraints intensify, traditional electronic systems struggle to deliver the efficiency and scalability required for modern applications in neuromorphic computing, artificial vision, and biomedical sensing. Biological neural networks offer an energy-efficient alternative, inspiring new computing paradigms that integrate optoelectronic synapses, spiking neural networks (SNNs), and spectral deep learning. However, realizing hardware-driven neuromorphic intelligence remains a challenge due to instabilities in memory retention, inefficient spiking neuron implementations, and limited spectral data utilization in biomedical classification.
We address these challenges by developing a platform-level neural network scheme that stabilizes conductance states in optoelectronic synapses, mitigating the natural decay of memory in photoactive materials. This approach, validated using black phosphorus (BP) and doped indium oxide (In₂O₃) devices, significantly improves memory retention and energy efficiency without requiring additional power-intensive training techniques. Our findings establish a scalable foundation for adaptive, light-driven artificial synapses in neuromorphic computing.
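As a minimal illustration of the retention issue this scheme targets, the following Python sketch models a photo-programmed synaptic conductance relaxing exponentially toward its dark state; the time constant, conductance range, and weight values are assumed for illustration only and are not measured BP or In₂O₃ device characteristics.

    import numpy as np

    # Hypothetical illustration of conductance-state decay in a photoactive
    # synapse. The relaxation time constant and conductance range below are
    # assumed values, not device data from this work.
    TAU_RETENTION_S = 50.0          # assumed relaxation time constant (s)
    G_DARK, G_MAX = 0.1, 1.0        # assumed dark and fully potentiated conductance

    def conductance_at(t_s, g_programmed):
        """Conductance read t_s seconds after optical programming."""
        return G_DARK + (g_programmed - G_DARK) * np.exp(-t_s / TAU_RETENTION_S)

    rng = np.random.default_rng(0)
    g0 = rng.uniform(G_DARK, G_MAX, size=8)     # programmed synaptic weights
    for t in (0, 10, 100):
        drift = np.abs(conductance_at(t, g0) - g0).mean()
        print(f"t = {t:4d} s  mean weight drift = {drift:.3f}")

Without stabilization, the read-out weights drift away from their programmed values over time, which is the degradation the network-level scheme is designed to counteract.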
Further, we demonstrate artificial neurons based on monolayer molybdenum disulfide (MoS₂) that exhibit biologically inspired leaky integrate-and-fire (LIF) behavior. By engineering the material's photoelectric response and charge dynamics, we create a hardware-based spiking neuron that integrates seamlessly into neuromorphic vision systems. Deployed in SNNs, these neurons enable low-power static and dynamic image classification, demonstrating robust performance on datasets such as CIFAR-10 and DVS128 gesture recognition.
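For reference, the discrete-time LIF dynamics underlying such a spiking neuron can be written in a few lines; the membrane time constant, threshold, and drive current below are placeholder values rather than parameters extracted from the MoS₂ devices.

    import numpy as np

    # Minimal discrete-time leaky integrate-and-fire (LIF) neuron.
    # All constants are illustrative placeholders, not MoS2 device parameters.
    DT = 1e-3            # time step (s)
    TAU_M = 20e-3        # membrane time constant (s)
    V_TH = 1.0           # firing threshold
    V_RESET = 0.0        # reset potential after a spike

    def lif_run(input_current, v0=0.0):
        """Integrate an input current trace and return the emitted spike train."""
        v, spikes = v0, []
        for i in input_current:
            # leaky integration: decay toward rest while accumulating input
            v = v + DT / TAU_M * (-v + i)
            if v >= V_TH:            # threshold crossing -> spike and reset
                spikes.append(1)
                v = V_RESET
            else:
                spikes.append(0)
        return np.array(spikes)

    spike_train = lif_run(np.full(200, 1.5))   # constant suprathreshold drive
    print("spikes emitted:", spike_train.sum())

In an SNN, layers of such neurons replace the continuous activations of a conventional network, so that information is carried by sparse spike events rather than dense analog values.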
Extending these principles to biomedical applications, we leverage multi-spectral imaging (MSI) and 3D convolutional neural networks (3D-CNNs) to improve cell classification accuracy. By incorporating spectral and spatial feature extraction, our approach optimizes wavelength selection for deep learning-based biomedical diagnostics, enhancing classification sensitivity in medical imaging.
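A compact sketch of the joint spectral-spatial feature extraction that a 3D-CNN performs on multi-spectral patches is given below (PyTorch); the band count, patch size, layer widths, and class count are arbitrary assumptions, not the architecture evaluated in this work.

    import torch
    import torch.nn as nn

    # Toy spectral-spatial classifier: 3D convolutions treat the wavelength
    # axis as a depth dimension alongside the two spatial axes. Band count,
    # patch size, channel widths, and class count are assumed values.
    N_BANDS, PATCH, N_CLASSES = 16, 32, 4

    model = nn.Sequential(
        nn.Conv3d(1, 8, kernel_size=(5, 3, 3), padding=(2, 1, 1)),  # spectral x spatial
        nn.ReLU(),
        nn.MaxPool3d((2, 2, 2)),
        nn.Conv3d(8, 16, kernel_size=(3, 3, 3), padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool3d(1),          # collapse spectral and spatial dims
        nn.Flatten(),
        nn.Linear(16, N_CLASSES),
    )

    # One batch of multi-spectral patches: (batch, channel, bands, height, width)
    x = torch.randn(2, 1, N_BANDS, PATCH, PATCH)
    print(model(x).shape)                 # -> torch.Size([2, 4])

Because the first convolution spans neighboring wavelength bands as well as pixels, the learned filters indicate which spectral ranges carry discriminative information, which is the basis for the wavelength selection described above.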
These advancements contribute to the development of low-power, biologically inspired artificial intelligence (AI) systems, paving the way for scalable neuromorphic hardware, intelligent vision sensors, and energy-efficient biomedical computing.