Minding the math

Howard E. Zimmerman Assistant Professor of Engineering and Brain Science Leo Kozachkov studies how intelligence works through the lens of engineering, mathematics and computer science.

More than two years ago, Leo Kozachkov, then a postdoctoral researcher at MIT, first visited the Institute for Computational and Experimental Research in Mathematics (ICERM) building in Providence for a workshop on mathematical challenges in neuronal networks. The workshop proved pivotal: he liked the vibe of the Renaissance City and the community he met at Brown. “That put Brown in my head already as a potential place,” he said. “And a year later, when I was applying for faculty jobs, I saw the Carney Institute was leading a search for a faculty member who would share a home in another department, with options that included engineering. I knew immediately this could be a good fit for me, because one of my Ph.D. advisors was a mathematical control theorist, a mathematician who works in engineering.”

At his core, Kozachkov sees himself as a researcher who uses the mathematics of engineering to do interesting and useful things. Using and developing engineering tools to understand the brain and machine learning/AI made Brown a natural bridge to his first tenure-track faculty position. He readily lists a handful of current faculty he is looking forward to working with: David Borton, Nora Ayanian, Arto Nurmikko, the BrainGate team, Pedro Felzenszwalb, and Miguel Bessa. “Miguel focuses on materials, but he’s very interested in machine learning and AI, and we’ve been in contact about potentially using some of the stuff that I’ve been developing for his applications. I’m very interested in learning what people in engineering, but outside of brain science, use machine learning for – things like dynamical systems theory and control theory, which are mathematical disciplines.”

Kozachkov, who moved into his Barus & Holley office in August, says his Dynamic Intelligence Lab is purposefully lean. “All I need is a pen and paper, and then sometimes GPUs,” he said. “I mean, I can work in my head a lot, which is nice and meshes well with enjoying the outdoors. A lot of my thinking is done walking my dog or hiking, before, of course, the computational aspect where you implement the math and see what works and what doesn’t.” Building his lab out with students will be a slow process; his first graduate student, Hanson Liu, is already on board. “One of my biggest aspirations is that I’ll continue being able to do research for many, many years. One thing that happens sometimes when you have a big lab is that your responsibilities change and you become more focused on administrative work, which is obviously important and is a valid strategy. But for me right now, I just love doing research so much, I don’t want to give it up. My goal is to have a tight-knit, core group of people doing research together.”

He has a pair of projects at the ready, demonstrating how he has hit the ground running since coming to College Hill. The first, he says, is a top-down approach to understanding the brain: start with a task you want an artificial system to do, and train a model to do it. A collaborative paper with researchers from Carnegie Mellon, “Intrinsic Goals for Autonomous Agents: Model-Based Exploration in Virtual Zebrafish Predicts Ethological Behavior and Whole-Brain Dynamics,” was presented at the annual Conference and Workshop on Neural Information Processing Systems (NeurIPS) in December. “We built a model of the zebrafish, which is a type of experimental animal that people use a lot in its larval form.” (The larval zebrafish is considered one of the premier model organisms for calcium imaging, primarily because it offers a “natural window” into the vertebrate brain.)

“We already have whole brain datasets for this animal. My collaborators and I built a model of this animal with an artificial brain that we could train, and then did virtual neuroscience on the model. We had our virtual animal swim around and do things, and then built a lab inside of the computer to put our virtual animal in. We then compared the artificial brain cells in this animal to the real brain cells that experimental neuroscientists have recorded from. And we could do quantitative mappings between the two to show similarities. 

“We’re trying to show that this is actually possible. And this model that we’ve come up with and published is the first of its kind, in the sense that it’s the first task-optimized, embodied model that has been directly compared to data. We call it a task-optimized model because its interactions between neurons and a special type of glial cell called astrocytes have been optimized to do something interesting. So it’s a proof of concept that you can even do this.” Kozachkov said the aspiration is to have a good model of the brain – whether of a larval zebrafish or, perhaps someday, a human – for experimentation, in hopes of yielding practical insights into the biological brain.
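To make the top-down recipe concrete, here is a minimal, hypothetical sketch – emphatically not the published zebrafish model, whose architecture, training objective and “virtual lab” are described in the NeurIPS paper. A tiny recurrent “brain” steers a one-dimensional swimmer toward food, and its weights are optimized directly against the task reward; simple random-search hill climbing stands in for the real training procedure, and every name and parameter below is invented for illustration:

    import numpy as np

    rng = np.random.default_rng(1)
    H = 8  # hidden units in the toy recurrent "brain"

    def rollout(theta, steps=50):
        """One episode: the swimmer should reach food at x = 1."""
        W = theta[:H * H].reshape(H, H)   # recurrent weights
        u = theta[H * H:H * H + H]        # input weights (sense distance to food)
        w = theta[-H:]                    # motor readout weights
        h, x = np.zeros(H), 0.0
        for _ in range(steps):
            h = np.tanh(W @ h + u * (1.0 - x))  # sensory input: distance to food
            x += 0.1 * np.tanh(w @ h)           # motor output moves the swimmer
        return -abs(1.0 - x)                    # task reward: negative final distance

    # "Train the model to do it": hill-climb the weight vector on the task reward.
    theta = rng.normal(0, 0.1, size=H * H + 2 * H)
    best = rollout(theta)
    for _ in range(2000):
        cand = theta + rng.normal(0, 0.05, size=theta.size)
        r = rollout(cand)
        if r > best:
            theta, best = cand, r

    print("final distance to food:", -best)  # approaches 0 as the task is learned

In the actual work, the analogue of this trained network is then compared, unit by unit, against recorded zebrafish neurons; the point here is only the shape of the recipe: task first, model second.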

His other recent work exemplifies his interest in bottom-up models. “The difference between the two is that in a bottom-up model, you start with a concrete mathematical picture, and you do things like incorporate the known anatomy of how glial cells and neurons wire up. You put that into a mathematical model and ask, ‘Okay, what provable things can I say? What guarantees can I give about a neural network model that has these glial cells in it?’” he said.

He points to another recent first-author paper, in the Proceedings of the National Academy of Sciences (PNAS), called “Neuron-astrocyte associative memory,” which showed that if you take a classic neural network model in computational neuroscience called a Hopfield network (named after John Hopfield, the physics Nobel laureate who created the celebrated network), “you can really prove stuff about it, which is extremely rare in neural network land,” he said.
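For readers who have not met the model, here is a minimal sketch of a classical, neurons-only Hopfield network – textbook material, not the neuron-astrocyte network of the PNAS paper. Binary patterns are stored with a Hebbian outer-product rule and recalled by updating one neuron at a time, a dynamic that provably converges because each flip can only lower the network’s energy – the kind of guarantee Kozachkov is referring to:

    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 100, 5                          # 100 neurons, 5 stored patterns
    patterns = rng.choice([-1, 1], size=(P, N))

    # Hebbian outer-product rule; zero the diagonal (no self-connections).
    W = (patterns.T @ patterns) / N
    np.fill_diagonal(W, 0.0)

    def recall(x, sweeps=10):
        """Asynchronous sign updates; each flip lowers the energy -x.W.x/2."""
        x = x.copy()
        for _ in range(sweeps):
            changed = False
            for i in rng.permutation(N):
                s = 1 if W[i] @ x >= 0 else -1
                if s != x[i]:
                    x[i] = s
                    changed = True
            if not changed:                # fixed point reached: a stored memory
                break
        return x

    # Corrupt 10% of one stored pattern's bits, then let the network clean it up.
    probe = patterns[0].copy()
    flip = rng.choice(N, size=N // 10, replace=False)
    probe[flip] *= -1
    print("overlap after recall:", recall(probe) @ patterns[0] / N)  # ~1.0

Because five patterns sit well below the classical capacity limit (roughly 0.14·N, or about 14 patterns here), the corrupted probe converges back to the stored memory.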

“We took biological facts that are known about how astrocytes and neurons are wired together, and we tried to say something mathematical about the network with an astrocyte interaction. After a lot of work, we were able to prove some interesting things, which, again, is rare to be able to prove anything. But the main finding was that just the way that astrocytes are connected with neurons could potentially enhance the memory capacity of the brain dramatically compared to a brain that was made of just neurons. 

“At a very high level, the way this works is that if two synapses – the connections between neurons – want to share information in the brain, they have to go through the neurons. There are a lot of messages. You could greatly increase the information processing capacity of the brain if you allowed synapses to talk to one another directly.

“But when you think about how an astrocyte connects up to neurons, a single astrocyte can connect to over a million nearby synapses. So it’s one astrocyte connecting like an octopus with a bunch of tentacles, and each tentacle wraps around a nearby synapse. And if you look at the biology of what goes on inside the astrocyte, it can act as a way for signals to go from synapse to synapse just through this astrocyte connection. So it’s a more direct way for synapses to talk to one another. And we were able to leverage that biological observation and the mathematical observation to prove that, at least in this simplified, idealized mathematical world, having an astrocyte talking with neurons can dramatically increase memory capacity – meaning, the number of memories that you could reliably store in your network and then later reliably recall.”
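For context, the scale of such a claim can be anchored with two standard results from the associative-memory literature (background facts, not the specific bounds proved in the PNAS paper). A classical pairwise Hopfield network of N neurons reliably stores about

    P_{\max} \approx 0.138\,N

patterns, while higher-order associative memories, in which groups of K \ge 2 units interact jointly (K is notation introduced here for the interaction order), scale superlinearly:

    P_{\max} \propto N^{K-1}

Astrocyte tentacles that span many synapses are one biologically grounded way to implement such higher-order couplings, which is the sense in which memory capacity can grow dramatically rather than linearly.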

As excited as he is about all this research, he is equally enthused when talking about teaching for the first time. In the fall of 2025, he led ENGN 1570 - Linear Systems Analysis and will teach ENGN 2520 - Pattern Recognition and Machine Learning in the spring. “As much as I think of myself as a researcher first, I see such value in teaching. I want to be in a position where I can teach new things, and that will expand my own knowledge because it will force me to keep ahead of the students. The Brown undergrads are amazing. They are so talented. I don’t know if I should say this, but I was definitely not at that level when I was an undergrad.”

Kozachkov grew up in New Jersey. His parents were refugees from the Soviet Union, born in Kiev, Ukraine, and his older brother was also born there before the family immigrated to the U.S. His brother earned a Ph.D. in physics from Caltech and is an applied physicist. Kozachkov’s grandfather, the Leo he is named after, was a professor of cybernetics back in the Soviet Union. And his father was in the process of completing his Ph.D. in applied mathematics before leaving his dissertation behind in Kiev to come to America. “So I come from an academic family,” he said. “In that sense, I was on a course, but actually, I was and still am very into the arts. So I think of it as an odd path to academia. Growing up, I was a musician. I’ve been playing guitar since I was 12, and I love music. Pretty much all of my childhood friends who are still my best friends are musicians and artists.

“I grew up playing punk music, which was the scene in New Jersey when I was there. But I love all kinds of stuff. My favorite band of all time is Pink Floyd, which I got from my dad, and that’s also my brother’s favorite band. Music is kind of my first love. Science, I really only got interested in around my junior year of high school, which is pretty late. And I came to it just by reading – probably Carl Sagan’s series Cosmos, which introduced a lot of people to science,” he explained. He remains an active reader and includes book recommendations on his personal website.

“My dad’s advice when I went to college at Rutgers was to take math classes – calculus and algebra – because no matter what I chose, he knew I would be interested in quantitative stuff. This turned out to be good advice. Eventually, I realized I really wanted to understand physics. Then my senior year, I took a new computational neuroscience class taught by Konstantinos Michmizos, who had just joined Rutgers from a postdoc position at MIT. That was it. I knew this was the subject. I took his class and just immediately fell in love, because it combined the quantitative stuff I liked in physics with the everyday stuff of thinking about how the brain works. After I graduated, I stayed on for a year as a research assistant in his lab, and that’s when I fell in love with research too, because he brought me to the frontier very quickly.”

This frontier has now expanded to Brown Engineering, where Kozachkov will work at the intersection of engineering’s quantitative modeling and neuroscience, applying those insights to the next generation of artificial intelligence and neuro-engineering technologies.