The LiNC Lab has three major research areas, which you can read about below.
Credit assignment in the brain
Deep learning has had a major impact on AI, leading to human-level performance in image/speech recognition and motor control. One of the keys to deep learning is its solution to the credit assignment problem: for learning to be successful, each neuron in a deep neural network must receive “credit” for its contribution to any behaviour. Credit assignment in a hierarchical and/or recurrent network is non-trivial because the impact of a neuron in an early stage of processing depends on its downstream effects. In AI, the credit assignment problem is typically solved with the backpropagation-of-error algorithm (backprop). However, there are several problems with backprop that make it less than ideal, both for AI and for understanding the brain.
From an AI perspective, backprop makes it difficult to learn online, because it requires a pass of information forward through the entire network, followed by a backward propagation of errors for credit assignment. The requirement for a full pass through the network (or backwards through time) also makes it difficult to learn in a modular fashion. Moreover, traditional backprop operates on differentiable, real-valued functions, which makes its calculations energy-intensive to implement in hardware. From a neuroscience perspective, backprop is biologically infeasible: the brain does not use a full forward pass followed by a backward pass, nor does it compute with differentiable functions. Moreover, the brain does not possess the symmetric feedforward and feedback pathways that backprop assumes, and it is far more energy efficient than current deep neural networks.
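The two-phase structure and weight-symmetry assumption described above can be made concrete with a toy two-layer network. This is a generic NumPy sketch for illustration only (the network sizes, learning rate, and squared-error loss are arbitrary choices, not anything specific to our research): note how every update needs a complete forward pass, and how assigning credit to the hidden layer reuses the transpose of the forward weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: x -> h = tanh(W1 x) -> y = W2 h
W1 = rng.normal(scale=0.5, size=(4, 3))
W2 = rng.normal(scale=0.5, size=(2, 4))

def forward(x):
    h = np.tanh(W1 @ x)   # hidden layer
    y = W2 @ h            # output layer
    return h, y

x = rng.normal(size=3)
target = np.array([1.0, -1.0])

# A few gradient steps: the loss falls, but only because every step
# repeats the full forward pass followed by the backward pass.
losses = []
for _ in range(200):
    h, y = forward(x)               # full forward pass through all layers
    e = y - target                  # output error (squared-error gradient)
    losses.append(0.5 * float(e @ e))

    # Credit assignment for the hidden layer uses W2.T -- the *same*
    # weights as the forward path. This is the symmetry the brain lacks.
    delta_h = (W2.T @ e) * (1 - h**2)   # tanh'(z) = 1 - tanh(z)^2

    W2 -= 0.1 * np.outer(e, h)
    W1 -= 0.1 * np.outer(delta_h, x)
```

Biologically inspired alternatives typically attack exactly these two features: the strict forward-then-backward sequencing, and the reuse of `W2.T` on the feedback path.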
If we could understand the brain’s solution to the credit assignment problem, then we could achieve two major steps forward. First, we could improve AI, making it capable of online, modular, energy efficient credit assignment. Second, we could better understand the learning algorithms of the brain, opening the door to more accurate mathematical models of human cognition and more sophisticated brain-computer interfaces. One of the major research areas in the LiNC Lab is the quest to develop new, biologically inspired algorithms for solving the credit assignment problem.
Memory for action and insight
We are explicitly aware of events from our past. Our memories provide us with a sense of self, and perhaps more importantly, they allow us to leverage our experience to intelligently plan future actions. For example, if you remember that you lost your keys yesterday, you will arrange with someone else to let you into your apartment. However, memories are not simply static records of events as they happened; they are far more complex. Previous neuroscience research, including from the LiNC Lab, demonstrated that the brain combines multiple memories into a statistical amalgamation, modifies memories when they are recalled, and actively forgets information to help make us more flexible. Thus, memories stored in our brains are malleable combinations of our past experiences, and forgetting is a feature of our memory, not a bug.
It is now commonplace for AI practitioners to augment artificial agents with brain-like memory systems to improve the agents' capacity for rapid learning, reasoning, decision making, and reinforcement learning. However, there are aspects of natural memories that have yet to be tapped. For example, AI memory systems do not yet possess the intricate consolidation processes that exist in the brain. As well, forgetting in an appropriate manner may actually enhance the flexibility of artificial agents, particularly those that, like people and animals, have limited computational resources. Thus, another major area of research in the LiNC Lab is to apply our ever-increasing knowledge about natural memory to develop memory systems for artificial neural networks that endow them with greater flexibility and insight.
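The idea that active forgetting is a feature under limited capacity can be sketched with a tiny episodic buffer. This is an illustrative toy, not one of our models: "salience" here is a made-up stand-in for whatever signal (reward, surprise, recency) a real agent would use to decide which memories are worth keeping.

```python
import heapq
import itertools

class EpisodicBuffer:
    """A capacity-limited memory store that actively forgets.

    When full, storing a new event evicts the lowest-salience memory,
    so the buffer stays relevant without growing without bound.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []                     # min-heap ordered by salience
        self._counter = itertools.count()   # tie-breaker for equal salience

    def store(self, event, salience):
        entry = (salience, next(self._counter), event)
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
        elif salience > self._heap[0][0]:
            # Forget the weakest memory to make room for a stronger one.
            heapq.heapreplace(self._heap, entry)

    def recall(self):
        # Most salient memories first.
        return [event for _, _, event in sorted(self._heap, reverse=True)]

buffer = EpisodicBuffer(capacity=3)
for event, salience in [("lost keys", 0.9), ("weather report", 0.1),
                        ("missed meeting", 0.7), ("lunch menu", 0.3),
                        ("project deadline", 0.8)]:
    buffer.store(event, salience)
```

After the five events above, only the three most salient survive; the mundane ones have been forgotten, which is exactly the point.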
Tools for understanding brains
Neuroscience has historically sought clear, interpretable signatures of computation in the brain. For example, the seminal work by Hubel & Wiesel in the 1950s and 1960s found evidence for cells in primary visual cortex that computed the presence of edges at particular orientations in an animal’s visual field. Their work formed the core of investigations into visual cortex for decades. However, the majority of activity in the brain is uninterpretable to humans. Within visual cortex, for example, many neurons are not selective for edges, and instead exhibit responses to visual stimuli from which human observers cannot discern any patterns. Nonetheless, preliminary analyses of these “uninterpretable” neurons in the brain show that they can carry as much information as, or even more than, those neurons that humans can interpret. This points to an immense difficulty that neuroscience faces: evolution would have no reason to select for computations in the brain that were easy for us to interpret, so we have no reason a priori to believe that we should be able to understand the brain’s operations by focusing on those components of brain activity that are easy for us to intuit.
How can we overcome this situation? The most promising path forward is to develop machine learning tools that can be applied to neuroscience data to help us discern principles of neural computation that we humans could not easily discover ourselves. For example, based on recent advances in deep neural networks for data analysis, we are working on new varieties of variational autoencoders for 2-photon fluorescence microscopy applications. The goal is to enable the identification of patterns in brain activity that generalize across individuals to predict behaviour and memory. Thus, the third objective of the LiNC Lab's research is to develop new machine learning algorithms for neuroscientists to understand human-uninterpretable neural computation.
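For readers unfamiliar with variational autoencoders, the core machinery can be sketched in a few lines. This is a generic illustration, not our actual models: the "fluorescence trace" is random data, the encoder and decoder are untrained linear maps standing in for deep networks, and the objective shown is the standard negative evidence lower bound (reconstruction error plus a KL penalty toward a standard-normal prior), computed via the reparameterization trick.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for one frame of a fluorescence trace: 20 "neurons".
n_obs, n_latent = 20, 2
x = rng.normal(size=n_obs)

# Linear encoder/decoder weights (a real model would be deep and trained).
W_mu = rng.normal(size=(n_latent, n_obs))
W_logvar = rng.normal(size=(n_latent, n_obs)) * 0.1
W_dec = rng.normal(size=(n_obs, n_latent))

# Encode: approximate posterior q(z|x) = N(mu, diag(exp(logvar)))
mu = W_mu @ x
logvar = W_logvar @ x

# Reparameterization trick: sample z differentiably as z = mu + sigma * eps
eps = rng.normal(size=n_latent)
z = mu + np.exp(0.5 * logvar) * eps

# Decode and score with the negative ELBO: reconstruction + KL terms.
x_hat = W_dec @ z
recon = 0.5 * np.sum((x - x_hat) ** 2)
kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
neg_elbo = recon + kl
```

Training drives `neg_elbo` down, which forces the low-dimensional latent `z` to capture the structure in the activity; it is latents of this kind that one would then test for generalization across individuals.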