Prior Work

Unsupervised learning of neural population codewords with latent variable models

[Research conducted remotely under Dr. Michael J. Berry II @Princeton Neuroscience Institute. Results not publicly available at time of writing.]
Machine learning with spiking neural networks can potentially benefit from learning principles drawn from neuroscience, since many conventional learning approaches require differentiable, and thus non-bio-plausible, neuron activation functions. Approximating the population code of the lateral geniculate nucleus (LGN) with a latent variable model organizes the population responses to discrete stimuli into clusters that can be learned in an unsupervised fashion. This approach is attractive because it is inherently error-correcting: stochastic fluctuations in the population response to the same stimulus still robustly elicit the same associated cluster.
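As an illustration of the latent variable approach, the sketch below fits a mixture of independent Bernoulli distributions to binary population "words" with EM, so that each mixture component acts as a candidate codeword mode and noisy responses to the same stimulus tend to map to the same mode. The function name, parameter values, and initialization are illustrative assumptions, not the model actually used in this work.

```python
import numpy as np

def fit_bernoulli_mixture(words, n_modes=10, n_iter=100, seed=0):
    """EM for a mixture of independent Bernoullis over binary population
    'words' (trials x neurons). Each component is a latent codeword mode.
    Minimal illustrative sketch, not the lab's actual model."""
    rng = np.random.default_rng(seed)
    n_trials, n_neurons = words.shape
    pi = np.full(n_modes, 1.0 / n_modes)                     # mode weights
    p = rng.uniform(0.25, 0.75, size=(n_modes, n_neurons))   # per-mode firing probs
    for _ in range(n_iter):
        # E-step: responsibility of each mode for each observed word
        log_lik = (words @ np.log(p).T) + ((1 - words) @ np.log(1 - p).T) + np.log(pi)
        log_lik -= log_lik.max(axis=1, keepdims=True)
        resp = np.exp(log_lik)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update mode weights and per-mode firing probabilities
        Nk = resp.sum(axis=0)
        pi = np.clip(Nk / n_trials, 1e-12, None)
        p = np.clip((resp.T @ words) / Nk[:, None], 1e-3, 1 - 1e-3)
    return pi, p, resp.argmax(axis=1)  # cluster label = most likely mode per word

# Example: cluster synthetic binary spike words from 60 neurons.
words = (np.random.default_rng(1).random((5000, 60)) < 0.2).astype(float)
pi, p, labels = fit_bernoulli_mixture(words)
```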

Learning evolution operators of partial differential equations with spiking neural networks

[Research conducted under Dr. Guillermo Reyes Souto @USC. Report (PDF), GitHub repo]
Partial differential equations (PDEs) govern a wide variety of fundamental processes in nature, spanning fields such as fluid mechanics, thermodynamics, and quantum mechanics, among many others. The ability to predict how systems governed by PDEs evolve in time is therefore of significant importance to engineers and scientists alike. Prior work has shown that residual networks (ResNets) can effectively learn the evolution operator of an unknown PDE in modal space, allowing prediction without prior knowledge of the equation's structure. Following this approach, spiking neural networks (SNNs) offer a potentially more computationally efficient learning paradigm for this problem, owing to their natural affinity for time-series data. Recasting the learning of modal coefficients as a spatio-temporal sequence learning problem makes the SuperSpike learning algorithm (Zenke and Ganguli, 2018) applicable.
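The core ingredient is the surrogate gradient that SuperSpike substitutes for the derivative of the non-differentiable spike nonlinearity. A minimal PyTorch sketch of that piece is shown below; the scale constant and the usage comments are illustrative assumptions, not details taken from the report.

```python
import torch

class SuperSpike(torch.autograd.Function):
    """Heaviside spike nonlinearity with the SuperSpike surrogate gradient
    (fast-sigmoid derivative) of Zenke & Ganguli (2018). The scale value is
    an illustrative choice, not one taken from the report."""
    scale = 10.0

    @staticmethod
    def forward(ctx, u):
        ctx.save_for_backward(u)
        return (u > 0.0).float()  # hard, non-differentiable spike in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        # surrogate derivative: 1 / (1 + scale * |u|)^2
        return grad_output / (1.0 + SuperSpike.scale * u.abs()) ** 2

spike_fn = SuperSpike.apply

# Example usage inside a leaky integrate-and-fire simulation step:
#   v = beta * v + input_current        # membrane potential update
#   spikes = spike_fn(v - threshold)    # spikes with surrogate gradient in backward
#   v = v - spikes * threshold          # soft reset after spiking
```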

Investigating the topological dimension of a novel manufacturing method for neuromorphic hardware

[Research conducted for PHYS 760, a PhD-level course focused on pursuing an original research topic. Report (PDF), GitHub repo]
Recent work by Pantone et al. at Rain Neuromorphics showed that their novel manufacturing method for neuromorphic hardware naturally produces small-world networks, a hallmark characteristic of real connectomes. However, the criticality hypothesis implies that small-worldness alone does not fully describe the network topology of the brain, and motivates hierarchical modular networks (HMNs) as a viable alternative. This work reproduces the results of Pantone et al. from scratch, then extends the analysis to investigate whether the same nanowire deposition process can produce HMNs.
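For context, the small-world analysis amounts to comparing a network's clustering and characteristic path length against size-matched random graphs. The sketch below computes the standard small-world coefficient sigma with networkx; the helper name, the Erdos-Renyi baseline, and the parameter choices are illustrative assumptions rather than the exact procedure of Pantone et al. or of this project's report.

```python
import networkx as nx

def small_world_sigma(G, n_random=20, seed=0):
    """Small-world coefficient sigma = (C / C_rand) / (L / L_rand), measured
    against Erdos-Renyi graphs with the same node and edge counts. Sigma well
    above 1 is the usual indicator of small-world structure. Sketch only."""
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)
    C_rand, L_rand = [], []
    n, m = G.number_of_nodes(), G.number_of_edges()
    for i in range(n_random):
        R = nx.gnm_random_graph(n, m, seed=seed + i)
        if not nx.is_connected(R):
            # path length is only defined on a connected graph; keep the giant component
            R = R.subgraph(max(nx.connected_components(R), key=len)).copy()
        C_rand.append(nx.average_clustering(R))
        L_rand.append(nx.average_shortest_path_length(R))
    C_rand = sum(C_rand) / len(C_rand)
    L_rand = sum(L_rand) / len(L_rand)
    return (C / C_rand) / (L / L_rand)

# Sanity check: a Watts-Strogatz graph is a textbook small-world network.
G = nx.watts_strogatz_graph(200, 8, 0.1, seed=0)
print(small_world_sigma(G))
```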