Relationship Between Neuroscience and AI — Computational Neuroscience

Sheryl Li
6 min read · Apr 8, 2021


Neuroscience and AI work hand in hand: neuroscience plays a key role in AI and serves as an inspiration for building human-like AI, most visibly through neural networks that mimic brain structure. Computational neuroscience is an approach to understanding the development and function of nervous systems, describing how the brain uses electrical and chemical signals to interpret and process information. The better we understand the brain, the more advanced AI algorithms can become.

A neuron is basically a leaky bag of charged liquid: its contents are enclosed within a cell membrane, a lipid bilayer that is impermeable to charged ions such as sodium, chloride, and potassium. The neuron is made up of three main parts: the soma (the cell body), the dendrites (the input ends), and the axon (the output end). Each neuron is connected to others through synapses, forming a large network altogether. Chemicals and chemical reactions drive all of the neuron's spikes and synaptic signaling.

The presence of certain chemicals determines the opening of gated ion channels, which allow ions to pass into and out of the neuron; each type of gated channel permits only specific ions through. These gated channels are what allow neuronal signaling and communication between neurons in the brain. Sodium ions are at a higher concentration outside the cell, so when their channels open they diffuse inward. Because sodium ions are positively charged, this inflow increases the local membrane potential, and the increase in membrane potential in turn causes voltage-gated channels to open or close.

The opening or closing of voltage-gated channels in turn results in depolarization (a positive change in the membrane potential). When there is strong enough depolarization (strong enough excitation), the membrane potential reaches a threshold and the neuron fires a spike, or action potential (AP). This causes even more channels to open, because they are voltage-gated.

At roughly the same time, the sodium channels inactivate and the potassium channels open. Since potassium ions are at a higher concentration inside the cell, they flow outward, which decreases the membrane potential. This outflow is responsible for the downward slope of the action potential.
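
To make this threshold-and-spike behavior concrete, here is a minimal sketch of a leaky integrate-and-fire neuron in Python. It deliberately collapses the individual sodium and potassium channel dynamics into a single membrane equation, and every parameter value below is illustrative rather than taken from the article.

```python
# Minimal leaky integrate-and-fire (LIF) sketch of the spike-generation
# behavior described above. It collapses the sodium/potassium channel
# dynamics into one membrane equation; all parameter values are illustrative.
dt = 0.1          # time step (ms)
tau_m = 10.0      # membrane time constant (ms)
v_rest = -70.0    # resting potential (mV)
v_thresh = -55.0  # spike threshold (mV)
v_reset = -75.0   # reset potential after a spike (mV)
drive = 20.0      # constant depolarizing drive (mV), e.g. from synaptic input

v = v_rest
spike_times = []
for step in range(int(200 / dt)):          # simulate 200 ms
    t = step * dt
    dv = (-(v - v_rest) + drive) / tau_m   # leak toward rest + input-driven rise
    v += dv * dt
    if v >= v_thresh:                      # threshold crossed: emit an action potential
        spike_times.append(t)
        v = v_reset                        # repolarize (reset) after the spike

print(f"{len(spike_times)} spikes in 200 ms; first at {spike_times[0]:.1f} ms")
```

Richer models (such as Hodgkin-Huxley) simulate the channels explicitly, but this integrate-and-fire abstraction is the workhorse used in most spiking-network work.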

What happens to the action potential after it’s generated at the initial segment of the axon?

Some cells grow insulating sheaths, called myelin, around the axons of neurons. These sheaths do not allow charge to pass through, but they leave certain areas of the axon uncovered. A myelinated axon therefore supports fast, long-range communication via spikes: the action potential generated near the cell body essentially hops from one non-myelinated region to the next.

Quick recap: neurons have branched inputs called dendrites. A neuron also has a long, slender fiber called an axon, which carries its output. These outputs are electrical impulses called spikes or action potentials, and they are delivered at the junctions between neurons, called synapses.

Now, the question is: how is this information interpreted?

Mathematically, the base model for an irregular spike train specifies the probability of spiking within a small time interval (t, t + Δt) as

P(spike in (t, t + Δt)) ≈ λΔt,

where λ is the neuron's firing rate and Δt is small.

However, this model is insufficient: neurons have refractory periods following a spike, during which the probability of spiking drops to zero and then gradually recovers, often over tens of milliseconds (the relative refractory period). Neurons are unresponsive to synaptic activity during the action potential and the refractory period.
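
As a concrete illustration, here is a sketch of the base model in Python: each small time bin of width Δt gets an independent chance λΔt of containing a spike. Nothing in this model prevents two spikes from landing back to back, which is exactly the refractory-period problem described above. The rate, bin width, and seed are arbitrary choices for the example.

```python
import numpy as np

# Base model: one independent Bernoulli draw per small bin, with
# P(spike in bin) ~ rate * dt. Rate, duration, and seed are illustrative.
rng = np.random.default_rng(0)
dt = 0.001                      # bin width (s)
rate = 20.0                     # constant firing rate (spikes/s)
duration = 1.0                  # total simulated time (s)

spike_bins = rng.random(int(duration / dt)) < rate * dt
spike_times = np.nonzero(spike_bins)[0] * dt
isis = np.diff(spike_times)     # inter-spike intervals

print(f"{spike_times.size} spikes")
if isis.size > 0:
    # A real neuron could not fire again this quickly right after a spike.
    print(f"shortest inter-spike interval: {isis.min() * 1000:.1f} ms")
```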

Neurons respond to a stimulus by increasing their firing rates. The measured firing rate of a neuron within a time interval is the number of spikes in the interval divided by the length of the interval, giving spikes per second (Hz, Hertz).
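
In code, that firing-rate estimate is just a count divided by a duration; the spike times below are made up purely for illustration.

```python
# Firing rate over an observation window = spike count / window length.
spike_times = [0.012, 0.057, 0.101, 0.148, 0.230]   # example spike times (s), made up
t_start, t_end = 0.0, 0.25                           # observation window (s)

n_spikes = sum(t_start <= s < t_end for s in spike_times)
rate_hz = n_spikes / (t_end - t_start)
print(f"{n_spikes} spikes in {t_end - t_start:.2f} s -> {rate_hz:.1f} Hz")
```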

The point process framework centers on the theoretical instantaneous rate, which takes the expected value of this ratio and passes to the limit as the length of the time interval goes to zero, giving an intensity function for the process. To accurately model a neuron's spiking behavior, the intensity function typically must itself evolve over time, depending on changing inputs, experimental conditions, and so on. It is therefore called a conditional intensity function, written in the form

λ(t | X_t) = lim over Δt → 0 of P(N_(t, t+Δt) = 1 | X_t) / Δt.

Here N_(t, t+Δt) is the number of spikes in the interval (t, t+Δt), and the vector X_t includes both the spiking history prior to time t and other quantities that affect the neuron's current spiking behavior. Because X_t is random, the conditional intensity is also random.

If X_t includes unobserved random variables, the process is often called doubly stochastic. When the conditional intensity depends on the spiking history H_t, the process is often called self-exciting. The vector X_t may be high-dimensional.
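
Here is a hedged sketch of what a history-dependent conditional intensity can look like in simulation: right after each spike the intensity is suppressed and then recovers over a few milliseconds, mimicking the relative refractory period discussed earlier. The recovery function and all parameter values are illustrative assumptions, not the specific model from the referenced paper.

```python
import numpy as np

# Conditional intensity lambda(t | H_t): suppressed just after a spike,
# recovering with time constant tau_rec. All values are illustrative.
rng = np.random.default_rng(1)
dt = 0.001        # bin width (s)
base_rate = 50.0  # intensity when fully recovered (spikes/s)
tau_rec = 0.010   # recovery time constant (s)

spike_times = []
last_spike = -np.inf
for i in range(int(2.0 / dt)):                   # simulate 2 seconds
    t = i * dt
    recovery = 1.0 - np.exp(-(t - last_spike) / tau_rec)
    lam = base_rate * recovery                   # depends on spiking history
    if rng.random() < lam * dt:                  # P(spike in bin) ~ lambda * dt
        spike_times.append(t)
        last_spike = t

print(f"{len(spike_times)} spikes; mean rate {len(spike_times) / 2.0:.1f} Hz")
```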

How does this relate to AI?

A biological neural network is composed of chemically connected neurons, each of which may connect to many others, forming a network. In deep learning, a subfield of machine learning, an artificial neural network (ANN) is designed to simulate the way the human brain analyzes and processes information, using weighted directed graphs to represent neuron inputs and outputs.

The nodes are the artificial neurons. Each node receives an input signal (a pattern or image represented as a vector), and each input is multiplied by its corresponding weight. A predetermined bias is added to the weighted sum, and the result is passed through an activation function.
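
A minimal sketch of that computation for a single artificial neuron, assuming a sigmoid activation; the weights, bias, and input values are arbitrary.

```python
import numpy as np

# One artificial neuron: weighted sum of the input vector, plus a bias,
# passed through an activation function. All numbers here are arbitrary.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])     # input pattern as a vector
w = np.array([0.4, 0.1, -0.6])     # one weight per input
b = 0.2                            # predetermined bias

output = sigmoid(np.dot(w, x) + b)
print(f"neuron output: {output:.3f}")
```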

Connecting back to the idea that a better understanding of how the brain computes benefits AI: spiking neural networks (SNNs) are more biologically realistic than ANNs.

ANNs (artificial neural networks) communicate with continuous-valued activations, but SNNs have the advantage of being sensitive to the timing of information transmission. The precise timing of every spike is highly reliable, which makes spike timing a crucial coding strategy in sensory information processing areas and in neural motor control areas.

Pattern recognition in the brain occurs through multi-layer neural circuits which communicate by spiking events. The SNN architecture consists of spiking neurons and interconnecting synapses that are modeled by adjustable scalar weights.

SNNs often rely on STDP (spike-timing-dependent plasticity): if a presynaptic neuron fires right before the postsynaptic neuron, the weight connecting them is strengthened (LTP, long-term potentiation). If the presynaptic neuron fires right after the postsynaptic neuron, the weight is weakened (LTD, long-term depression).

In the standard pair-based form, the weight update is ΔW = A·e^(−Δt/τ₊) when the presynaptic spike precedes the postsynaptic spike (Δt > 0) and ΔW = B·e^(Δt/τ₋) when it follows (Δt < 0), where Δt is the postsynaptic spike time minus the presynaptic spike time, W represents the synaptic weight, A > 0 and B < 0 are parameters indicating the learning rates, and τ₊, τ₋ are time constants. The first equation represents LTP, while the second represents LTD.
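
Here is that rule as a small runnable Python sketch; the learning rates, time constants, and spike times below are illustrative choices, not values from the article.

```python
import numpy as np

# Pair-based STDP: pre-before-post strengthens the weight (LTP),
# post-before-pre weakens it (LTD). A > 0 and B < 0 are the learning
# rates from the text; tau values and spike times are illustrative.
A, B = 0.01, -0.012
tau_plus, tau_minus = 20.0, 20.0    # STDP time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    delta = t_post - t_pre
    if delta > 0:                                # pre fires before post -> LTP
        return A * np.exp(-delta / tau_plus)
    return B * np.exp(delta / tau_minus)         # pre fires after post -> LTD

w = 0.5                                          # initial synaptic weight
w += stdp_dw(t_pre=10.0, t_post=15.0)            # potentiation
w += stdp_dw(t_pre=30.0, t_post=22.0)            # depression
print(f"updated weight: {w:.4f}")
```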

Traditional deep learning approaches are computationally expensive and difficult to implement on hardware for portable devices. SNNs, on the other hand, are power-efficient models because of their sparse, spike-based communication.

Though the spiking neurons' transfer function in SNNs is non-differentiable, which prevents the direct use of backpropagation, workarounds are being developed. With SNNs and continued advances in neuroscience, AI algorithms can become more brain-like, and thus more efficient.
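
One widely used workaround, not named in the article, is the surrogate gradient: keep the non-differentiable step function in the forward pass but substitute a smooth function's derivative during backpropagation. A minimal numpy sketch, with an arbitrary threshold and steepness:

```python
import numpy as np

# Surrogate-gradient idea: spike with a hard threshold in the forward pass,
# but use the derivative of a steep sigmoid as a stand-in gradient when
# backpropagating. The threshold and steepness values are illustrative.
def spike_forward(v, threshold=1.0):
    return (v >= threshold).astype(float)        # non-differentiable step

def spike_surrogate_grad(v, threshold=1.0, beta=5.0):
    s = 1.0 / (1.0 + np.exp(-beta * (v - threshold)))
    return beta * s * (1.0 - s)                  # smooth pseudo-derivative

v = np.array([0.4, 0.9, 1.3])                    # example membrane potentials
print("spikes:         ", spike_forward(v))
print("pseudo-gradient:", spike_surrogate_grad(v))
```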

Credits:

https://www.researchgate.net/publication/321695315_Computational_Neuroscience_Mathematical_and_Statistical_Perspectives
