Harnessing brainpower to refine and enhance artificial intelligence
While we may think of brains as “biological computers,” biology is not primarily concerned with computation. There are animals without brains or with only rudimentary ones. In these organisms – and in us, too – brains are a result of evolutionary pressures that balanced computation against its costs, such as energy consumption.
So, what features of our brains truly exist for computation, and not as part of a compromise? Put another way, if we were to design a brain from scratch for the sole purpose of computation, how would we do it differently?
Some computer scientists are building computers to be more and more brain-like. Face ID on your phone works similarly to the human visual system. Deep learning, a machine learning technique based on the architecture of the human brain, curates your Twitter feeds and social media ads. This approach of creating brain-like computers makes sense because brains were the first known systems capable of intelligent behavior. Engineers naturally looked to a preexisting example to create intelligence in their computers. Human brains in particular, with the benefit of evolutionary time, may be highly optimized for computation.
I am among the researchers in Jason MacLean’s lab at the University of Chicago who want to discover the principles of computation by working out why brains are particularly good at it. The features we identify could then be built into next-generation software and computers. We also look for the brain's limitations, to better understand where biological trade-offs occur, so that future software engineers can avoid copying those compromises. If our efforts are successful, the computers you use in the future may be quite different from today's.
My research combines methods from artificial intelligence with data from neurobiology. I’m building a series of models of the brain, some with more brain-like features, some with fewer, and training them to perform a variety of tasks – that is, to gain the ability to compute. Then I examine how the brain-like features (or lack thereof) impact task performance, which helps us understand the computational benefit or detriment of each feature. I am focusing on neuronal spiking, adaptation and sparse connectivity. Evidence suggests that all three traits play important roles in enhancing computation in the brain.
Spiking is an efficient and powerful way for neurons to communicate
The brain is made of neurons, which are connected to one another and communicate using short bursts of electrical activity, or "spikes." When the input to a neuron exceeds a threshold, the neuron emits a spike to its downstream neurons. Why should the brain communicate through cascades of spiking activity? The inside of your laptop doesn’t spike, but it might work better if it did.
Spikes are highly precise, one-millisecond events that travel rapidly from neuron to neuron. Building our models with spiking neurons enables them to perform tasks that require precise timing or rapid responses. Spiking activity also lends greater precision to memories, in particular for associating events that happen close together in time.
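To make the threshold idea concrete, here is a minimal sketch, in Python, of a single "leaky integrate-and-fire" neuron, a standard simplified spiking model. It is a toy illustration rather than one of our lab's actual models, and the threshold, leak and input values are chosen purely for demonstration.

import numpy as np

def simulate_lif_neuron(inputs, threshold=1.0, leak=0.9):
    # Toy leaky integrate-and-fire neuron.
    # inputs: one input value per time step (think one value per millisecond).
    # Returns a 0/1 spike train of the same length.
    membrane = 0.0
    spikes = np.zeros(len(inputs))
    for t, current in enumerate(inputs):
        membrane = leak * membrane + current   # accumulate input, with some leak
        if membrane >= threshold:              # input has crossed the threshold
            spikes[t] = 1.0                    # emit an all-or-nothing spike
            membrane = 0.0                     # reset after spiking
    return spikes

# A brief, strong input produces a few precisely timed spikes; weak input produces none.
drive = np.concatenate([np.zeros(5), np.full(10, 0.4), np.zeros(5)])
print(simulate_lif_neuron(drive))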
The all-or-nothing nature of spikes is energy efficient – a big plus for biology. Their discrete nature also suggests a means for adaptation by changing the spike threshold for different conditions. This is exactly what occurs in your brain, so I added this feature to my models. Neurons adapt after they spike by increasing their spike thresholds slightly, making it more difficult for a neuron to spike if the input remains the same.
In daily life, that sort of adaptation is why you mostly do not feel your socks on your feet, for example. After some initial spiking when you put on your socks, the threshold for sensory neurons in the skin of your feet adjusts to be higher. A new input must be added – e.g., a sharp tack underfoot – to exceed the higher threshold and cause new spikes and new sensation. I am currently testing my hypothesis that models built with adaptive thresholds are better at tasks that involve identifying changes rapidly and consistently.
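Adaptation can be added to the same toy neuron in a few lines. This is again only an illustrative sketch with made-up parameters: after each spike the threshold jumps up and then slowly relaxes back toward its baseline, so a constant input (the socks) drives sparser and sparser spiking, while a stronger input (the tack) can still break through.

import numpy as np

def simulate_adaptive_neuron(inputs, base_threshold=1.0, leak=0.9,
                             adapt_jump=0.5, adapt_decay=0.95):
    # Spiking neuron whose threshold rises after each spike,
    # then relaxes back toward its baseline between spikes.
    membrane = 0.0
    threshold = base_threshold
    spikes = np.zeros(len(inputs))
    for t, current in enumerate(inputs):
        membrane = leak * membrane + current
        threshold = base_threshold + adapt_decay * (threshold - base_threshold)
        if membrane >= threshold:
            spikes[t] = 1.0
            threshold += adapt_jump    # spiking raises the bar for the next spike
            membrane = 0.0
    return spikes

# Constant input ("socks"): spikes become sparser as the neuron adapts.
# A stronger input at the end ("tack") drives renewed spiking.
drive = np.full(30, 0.35)
drive[25:] = 1.2
print(simulate_adaptive_neuron(drive))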
Although spiking carries the benefits just described, it actually happens fairly infrequently. Why is that? Research has found that computing with fewer spikes is more powerful than computing with many. The intuitive explanation is that the rarer an event is, the more information it carries. If I cried "wolf" all the time, whether I saw one on TV, in a book or in real life, I would not be telling my neighbors anything urgent. However, if I cried out only when a wolf was truly present, then my neighbors would be more likely to respond. Similarly, if a neuron spikes only under specific circumstances, each spike signals an event of real value to the brain. The low spiking activity we see across the brain lets it both conserve energy and make each spike maximally informative. When I impose a spiking limit on my models, they train more rapidly and perform better on a variety of tasks.
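There are several ways to impose such a spiking limit during training. The sketch below shows one simple option, with a hypothetical function name and illustrative numbers rather than any particular library's API: add a penalty to the training loss whenever the network's average spike rate exceeds a small budget.

import numpy as np

def loss_with_spike_budget(task_loss, spike_trains, target_rate=0.02, weight=1.0):
    # task_loss:    the ordinary loss for the task (e.g., classification error).
    # spike_trains: array of 0/1 spikes with shape (neurons, time_steps).
    # target_rate:  fraction of time steps on which a neuron is allowed to spike.
    # The penalty is zero while the network stays under its spike budget,
    # so training is nudged toward solutions that use few, informative spikes.
    mean_rate = spike_trains.mean()
    excess = max(0.0, mean_rate - target_rate)
    return task_loss + weight * excess ** 2

# A network spiking on 10% of time steps pays a penalty;
# one spiking on 1% of time steps does not.
rng = np.random.default_rng(0)
busy = (rng.random((100, 200)) < 0.10).astype(float)
quiet = (rng.random((100, 200)) < 0.01).astype(float)
print(loss_with_spike_budget(0.5, busy), loss_with_spike_budget(0.5, quiet))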
Low connectivity between neurons protects against noisy signals
Just as neurons save their spikes for the most useful information, they are also selective about making connections with one another. The brain contains about 100 trillion connections, but the number of possible connections is roughly 100 million times that. Why are there so many fewer actual connections than potential ones? Connections are physical structures that must be built and maintained, which is costly, and the farther a connection travels, the higher the cost. To manage this cost, neurons in the brain aggregate into local, interconnected hubs that unify information, and only a small subset of neurons in each hub sends a few connections to faraway targets. Researchers have found that this pattern may have the added benefit of making the brain more resistant to noise, the random activity that obscures useful signals.
This is helpful because our world is noisy and inconsistent. Consider how Monet painted the same scenery under different lighting, and yet we can still identify which of his paintings depict the same landscape. The sparsity of our brains' connections, and the ways those connections can change, can help us handle variation in what we perceive without losing our understanding of the whole. By building models with fewer overall connections, I am testing my hypothesis that they are not just cost-efficient but also more robust to noise.
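One way to build this kind of wiring into a model, sketched below with made-up sizes and probabilities, is to generate a sparse connection mask that is relatively dense within local hubs and very sparse between them; multiplying a network's weight matrix elementwise by such a mask keeps the wiring sparse throughout training.

import numpy as np

def clustered_connectivity(n_neurons=120, n_hubs=6,
                           p_local=0.3, p_long_range=0.005, seed=0):
    # Build a 0/1 connection mask: dense within local hubs,
    # with only a few long-range connections between hubs.
    rng = np.random.default_rng(seed)
    hub_of = np.repeat(np.arange(n_hubs), n_neurons // n_hubs)
    same_hub = hub_of[:, None] == hub_of[None, :]
    # Within a hub, connections are relatively common; across hubs, rare.
    probs = np.where(same_hub, p_local, p_long_range)
    mask = (rng.random((n_neurons, n_neurons)) < probs).astype(float)
    np.fill_diagonal(mask, 0.0)    # no self-connections
    return mask

mask = clustered_connectivity()
print("fraction of possible connections used:", mask.mean())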
Most brain-like models used in computer science today do not spike, do not adapt, and are fully connected. Research suggests that the "biological" features these models leave out, namely spiking, adaptation and sparse connectivity, deserve reconsideration because of the computational benefits they offer. Since computers are here to stay, we should hope that they will improve over time, both by increasing their computational power and by lowering their energy consumption. Our results on computing in the brain can show engineers how to do both.