Observation of nature has always inspired the development of scientific thought. Existing methods of mathematical modeling and today's computational power provide solid ground for addressing a wide range of open questions in both modern biology and computer science using models of biological neurons and their connections (synapses). Several fields, such as computational neuroscience, brain-computer interface studies, and machine learning, when merged together, yield new research methods, such as modeling of single-neuron activity, and pose new questions: for example, can we predict how a chemical agent would affect a neural system?


The problem is that classic artificial neural networks do not let us see how single neurons interact through different types of signals during development and learning; on the other hand, today's biological experiments are not yet able to tell us the full story of a signal, from a molecule to behavior, within one organism. The Biological Cellular Neural Network Modeling (BCNNM) project aims to provide new insights and solutions in the neural modeling field by applying biological principles to the description of both the cellular level of network physiology and learning strategies. Such a technique would open new possibilities in machine learning research and would be a first step toward modeling biological experiments with high accuracy.


The main goal of the BCNNM project is to develop and implement a simulation model of the evolutionary development of neural tissue in living organisms, together with the molecular processes inside the cells of that tissue. The resulting networks could represent a replica of the connectome of a specific real organism, or an artificial connectome with a given threshold of biological reliability. For one model organism this could be the whole connectome, the map of neural connections within an organism's nervous system; for another, just a small part of a real brain. Such a simulation would help in selecting parameter-optimized models of biologically plausible neuronal networks. The resulting networks could be used in various applied and scientific solutions, e.g. pattern recognition in visual and other kinds of signals, clustering big data, creating knowledge bases, data correlation and prediction, and analysis-based decision making (including analysis of decisions made by humans). Analysis of such network models and comparison with real biological connectomes would help us find similarities between neural tissues of different organisms related to their functional purposes.

Project description

BCNNM includes several major aspects:

  • Main model that simulates the development of the network: neurogenesis (including cell division and proliferation), histogenesis, tissue development and synaptogenesis. After the growth phase there is a learning process somewhat similar to supervised learning in artificial neural networks (ANN). During this stage, some connections strengthen and others weaken according to the principles of synaptic and neuronal plasticity. Useless synapses are eliminated by biological pruning processes; neurons without any axonal or dendritic connections are eliminated as well. However, the network always keeps a reserve in a "stem cell pool", so it can slightly increase its neuron and synapse counts.
  • Network evolution and selection. At this stage, the model ranks all individual networks in a generation and chooses the top N to recombine their genes (development and learning parameters). The ranking parameters are configurable and depend on the specific task.
  • Statistical and graph analysis. First, we analyse all the results the model produces for the different individual networks. Our goal is to identify learning patterns and correlate them with the average network structure and initial parameters for each class of tasks, e.g. visual recognition, associative memory, decision making, etc. The next goal is to compare a network's structure and its dynamic molecular processes with known biological systems and model organisms, and so to determine the "biological power" of each network.
  • Development of a selected, known network of a model organism. Currently we are trying to model the full structural development of the C. elegans network using our simulation model.


Our model is dynamic and spatial, and it is based on intracellular and extracellular interactions and processes. As the universal, standard information unit at every model level, we chose the chemical signaling pathway. A unique signaling pathway precedes every single action in a cell. Each pathway has its own duration, which correlates with the duration of the real biological process (e.g. neurotransmitter synthesis takes longer than neurotransmitter release), and each pathway triggers the activation of its own "gene set". Each of these genes regulates exactly one specific chemical process, which may appear in a number of signaling pathways. A single gene activation also has its own duration, which depends on the gene's "structure" and imitates real gene specificity in our model. The duration of each action is the sum of the durations of all its chemical processes and depends on the "gene set" activation process, so a change in the structure of a particular gene affects several pathways and may change the structure and behavior of the whole simulated network.
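The relation above, where an action's duration is the sum of the process durations of the genes in its pathway, can be sketched in a few lines. All gene names, pathway names, and durations below are illustrative assumptions, not values from the model.

```python
# Hypothetical sketch: an action's duration is the sum of the durations of
# the chemical processes in its signaling pathway, and each process
# duration is set by the single gene that regulates it.

GENE_DURATIONS = {      # assumed per-gene activation durations (ticks)
    "g_synthesis": 5,
    "g_transport": 2,
    "g_release": 1,
}

PATHWAYS = {            # assumed pathway -> gene set mapping
    "neurotransmitter_synthesis": ["g_synthesis", "g_transport"],
    "neurotransmitter_release": ["g_release"],
}

def action_duration(pathway: str) -> int:
    """Total duration of an action = sum of its genes' process durations."""
    return sum(GENE_DURATIONS[g] for g in PATHWAYS[pathway])

# As in the text: synthesis takes longer than release.
assert action_duration("neurotransmitter_synthesis") > action_duration("neurotransmitter_release")
```

Because one gene (here `g_transport`) can appear in several pathways, changing its duration automatically shifts the timing of every action that depends on it, which is exactly how a gene-structure change propagates through the simulated network.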

To simplify all the calculations involving cell sizes and spatial interactions, including the physical level of the system description, we restricted the model space to a cube with an integer lattice. All movement algorithms, such as cell migration, axonal growth, chemical signal spreading, etc., are based on this lattice and its allowed biases. We start from a single cell per individual and a set of biological rules and pathways that describe the intracellular and extracellular interactions and processes. During the simulation the network evolves: new neural cells form, some of them die (apoptotic processes), and others form synaptic connections. Each cell has a type and a set of signaling pathways. Depending on the spatial chemical signaling and its intracellular preset, each cell has its own number of receptors of a given type, can affect other cells' current processes, and reacts differently to different neurotransmitters.
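A minimal sketch of movement on such an integer lattice follows. The cube size, the 6-neighborhood of allowed biases, and the boundary handling are all assumptions chosen for illustration; the real model may allow other displacements.

```python
# Sketch of movement on the integer lattice (all parameters assumed).
# A cell occupies a lattice node; a move applies one allowed unit bias
# while keeping the cell inside the cubic model space.

SIZE = 100  # assumed edge length of the cubic model space

# allowed unit biases along the axes (6-neighborhood assumption)
BIASES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
          (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def step(pos, bias):
    """Move a cell by one allowed lattice bias, staying inside the cube."""
    x, y, z = (p + b for p, b in zip(pos, bias))
    if all(0 <= c < SIZE for c in (x, y, z)):
        return (x, y, z)
    return pos  # move blocked at the boundary

pos = (0, 0, 0)
pos = step(pos, (1, 0, 0))    # -> (1, 0, 0)
pos = step(pos, (0, -1, 0))   # blocked: stays at (1, 0, 0)
```

Axonal growth and chemical diffusion can reuse the same primitive: an axon tip or a signal front is just another lattice walker with its own rule for choosing the next bias.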

Depending on our goal, the simulation can output purely spatial (topological) information, information about tissue composition, information about neuronal and synaptic plasticity (we can simply simulate spikes with a parameterized frequency), EEG-like readings from each neuron at an exact time, and information about the learning process.

The model's learning process includes both stimulating a specific sensor layer and reading spikes from specific cells for decision purposes. Because of the dynamic and spatial principles, we built the Arbiter, which detects all kinds of spikes in the network and the signal strength of each particular neuron. The Arbiter reports all the important information to the model's Learning System, which decides when and how the sensors should be stimulated next. As a test task for model learning we chose recognition of handwritten digits from the MNIST dataset. We are currently working on reinforcement learning algorithms, including the dopaminergic circuits.
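The Arbiter's role can be sketched as a spike aggregator whose strongest-responding output neuron yields the decision. The class name, the spike representation, and the "most spikes wins" rule are illustrative assumptions, not the project's actual implementation.

```python
# Hypothetical sketch of the Arbiter: it observes spikes from output
# cells and exposes per-neuron signal strength so a learning system can
# read a decision and plan the next stimulation. Names are assumptions.

from collections import Counter

class Arbiter:
    def __init__(self):
        self.spike_counts = Counter()   # neuron id -> observed spike count

    def observe(self, spikes):
        """Record a batch of spikes (iterable of output-neuron ids)."""
        self.spike_counts.update(spikes)

    def decision(self):
        """Strongest-responding output neuron, e.g. the predicted digit."""
        return self.spike_counts.most_common(1)[0][0]

arbiter = Arbiter()
arbiter.observe([3, 7, 3, 3, 1])    # spikes collected during one trial
predicted = arbiter.decision()      # neuron 3 responded most strongly
```

In an MNIST-style setup, a learning system would compare `predicted` with the true digit and use the mismatch to drive the next round of sensor stimulation.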

To obtain more efficient networks for a given class of tasks, we are developing an evolutionary algorithm that selects the best networks by the following parameters:

  • Total learning error is the main filter parameter: networks with the smallest error get the highest score and are placed at the top of the generation list.
  • Learning speed lets us choose the faster network among the top group of networks whose total errors differ by only a small delta. Faster networks get higher priority, so we reduce the simulation time of each subsequent generation.
  • Computational efficiency is the last main filter; it selects networks with a lower physical memory footprint, which depends on the total cell count and cell remoteness in each network, axon lengths, total synapse count, etc. By decreasing physical memory usage we select more topologically efficient networks, and selecting by this filter also lets us increase the diversity of individual networks in the next generation.
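The three filters above can be composed into a single sort key: error first, then learning time among near-ties, then memory. The bucketing of errors by a small delta and all field names are assumptions made for this sketch.

```python
# Sketch of the three selection filters applied in order (all names and
# the error-bucketing scheme are assumptions): total error, then learning
# speed among near-ties, then memory footprint as a final tie-breaker.

def rank(networks, error_delta=0.01):
    """Sort candidate networks by (bucketed error, learning time, memory)."""
    # Bucketing errors makes networks within `error_delta` compare as
    # ties, letting the speed and memory filters decide among them.
    return sorted(
        networks,
        key=lambda n: (round(n["error"] / error_delta),
                       n["learn_time"],
                       n["memory"]),
    )

population = [
    {"id": "A", "error": 0.102, "learn_time": 50, "memory": 900},
    {"id": "B", "error": 0.100, "learn_time": 30, "memory": 800},
    {"id": "C", "error": 0.050, "learn_time": 90, "memory": 700},
]
top_n = [n["id"] for n in rank(population)[:2]]  # best two for recombination
```

Here A and B land in the same error bucket, so B wins on learning time, giving the top-2 list C, B for gene recombination.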

The key aim of model evolution is to obtain duration parameters for each gene (each gene being part of certain signaling pathways) such that every action caused by those pathways yields a more efficient network: fewer neurons with fewer synapses, better topology, fast expression of factors and neurotransmitters, and better plasticity. As a stopping criterion for the evolutionary algorithm, we always carry the best network of the previous generation into the new progeny and compare it with the newcomer networks. If the top network's configuration does not change for N generations, we assume a maximum has been reached and stop the simulation. If that network's total error is still unacceptable, we create a new "pathways-to-genes layout" and simulate again.
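The elitism-plus-stability stopping rule described above can be sketched as a short loop. The fitness field, the toy generation function, and the stability threshold are all assumptions for illustration only.

```python
# Sketch of the stopping rule (assumed names): carry the best network of
# each generation forward, and stop once the champion configuration has
# been stable for `n_stable` consecutive generations.

def evolve(initial_best, next_generation, n_stable=3, max_gens=100):
    """Run generations until the champion is stable for `n_stable` rounds."""
    best, stable = initial_best, 0
    for gen in range(max_gens):
        # Elitism: the previous champion competes with the newcomers.
        candidates = [best] + next_generation(gen)
        new_best = min(candidates, key=lambda n: n["error"])
        stable = stable + 1 if new_best["id"] == best["id"] else 0
        best = new_best
        if stable >= n_stable:
            break
    return best

# Toy generator: newcomers stop improving after a couple of generations.
def toy_generation(gen):
    return [{"id": f"net{gen}", "error": max(0.2, 1.0 - 0.5 * gen)}]

champion = evolve({"id": "seed", "error": 0.9}, toy_generation)
```

If `champion["error"]` is still unacceptable after convergence, the outer procedure would regenerate the pathways-to-genes layout and restart, as the text describes.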


We use the imitational modeling principle with shared modeling time and a small discrete step dt for each modeling step. In general, we use ordinary differential equations (ODEs) to describe the complex dynamical processes and link them to the pathways. On each simulation step, dependent and independent actions occur in every cell of the simulation model. We define the common functionality of a cell in a separate logic unit and then split the per-dt calculations for each cell into separate agents that run in parallel with the other agents.

This approach solves scalability issues as our network grows from hundreds to tens of thousands of cells. In addition, it allows us to build a fully distributed system in the future and to use more independent hardware for the simulation.
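The shared-time stepping scheme can be sketched as follows: every cell advances by the same dt, and because the per-cell work within one step is independent, it can be farmed out to parallel workers. The decay ODE, the worker count, and all names are assumptions for the sketch.

```python
# Sketch of shared-time stepping (all names assumed): each cell advances
# by the same small dt, and per-cell updates within a step are
# independent, so they can run on a pool of parallel workers.

from concurrent.futures import ThreadPoolExecutor

DT = 0.01  # shared discrete time step

def step_cell(state):
    """Advance one cell's state by dt (toy exponential-decay ODE)."""
    # dx/dt = -k * x, integrated with one explicit Euler step.
    k, x = state["k"], state["x"]
    return {"k": k, "x": x + DT * (-k * x)}

def step_network(cells, workers=4):
    """One simulation step: all cells updated in parallel for the same dt."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(step_cell, cells))

cells = [{"k": 1.0, "x": 1.0}, {"k": 2.0, "x": 1.0}]
cells = step_network(cells)  # each x decays by its own rate after one dt
```

Because each agent only reads its own state within a step, the same map can later be distributed across machines instead of threads, which is the scalability path mentioned above.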


When working with multiple different models of neural networks, it is essential to have a set of techniques for comparing connectomes. We address the problem of neural network analysis with the well-established toolkit of graph and network theory; its application to biological systems has been well covered in the works of Olaf Sporns et al.

These analysis techniques are currently applied to the simulation results, specifically to the resulting neural networks at each stage of development and learning. The analysis of such topological structure is closely related to connectomics, the branch of biology devoted to analysing the neural networks of living creatures. Analysis of neural network topology can highlight properties of the network as a whole as well as shed light on individual neurons, synapses, or their combinations.

It is trivial to distinguish an ANN from a biological network. A typical artificial network contains easily detectable clusters and is often designed with a layered structure. In contrast, a biological network looks like a mess at first sight: there are multiple pairs of edges between nodes and two-vertex loop subgraphs, and no easily identifiable clusters or layers.
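One such distinguishing signature, the two-vertex loops mentioned above, can be measured directly as edge reciprocity. This particular metric and both toy edge lists are assumptions chosen for illustration; the project's actual toolkit may use other graph measures.

```python
# Sketch of one simple topological signature (an illustrative assumption):
# reciprocal connections (A->B together with B->A), common in biological
# connectomes and absent from a strictly feed-forward layered ANN.

def reciprocity(edges):
    """Fraction of directed edges whose reverse edge also exists."""
    edge_set = set(edges)
    mutual = sum(1 for (a, b) in edge_set if (b, a) in edge_set)
    return mutual / len(edge_set)

# Feed-forward "layered" net: edges only point toward the next layer.
ann_edges = [(0, 2), (0, 3), (1, 2), (1, 3), (2, 4), (3, 4)]
# Messy "biological" net with two-vertex loops.
bio_edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 0)]

assert reciprocity(ann_edges) == 0.0
assert reciprocity(bio_edges) > 0.5
```

A battery of such scalar signatures (reciprocity, clustering, motif counts) gives a quantitative footing to the "biological power" comparison described earlier.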

We are trying to blend connectomics and artificial networks together. As an ultimate goal, we want to obtain a set of techniques that describes the roles of neurons, clusters, and communities. These techniques should be applicable to decision-making networks, both artificial and biological. We believe the results of our work will increase the transparency of artificial networks and will help biologists analyse connectomes.

Further work

At the moment, we are working on the evolutionary algorithm that produces successive generations of neural networks, as well as on learning algorithms that allow networks to solve specific tasks and select the best configuration for each class of tasks. We are currently focused on the biological description and identification of valuable signalling pathways. As a next milestone, we plan to bring precision on classification tasks to a level comparable with what classic artificial neural networks achieve. Another goal we want to reach in the near future is to grow a neural network similar to that of a biological model organism (the popular nematode C. elegans) using the BCNNM model.

Long-term goals include:

  • Mapping a network prepared by our model onto an ANN, which should reduce the computational cost of the solution and bring it to the level of applied artificial neural networks.
  • Using our model for biophysical process control and for bio-computer interfaces: interfaces that use biological and physiological parameters, such as heart rate, brain activity, and muscle activity, for feedback.

The core principle of our modeling allows us to model other kinds of tissue simply by describing the specific cells and their signaling pathways. This option may be of interest for modeling different kinds of histogenesis and especially embryogenesis. For those purposes, however, we would need to define the modeling space more precisely: for instance, by using a floating-point (double) lattice instead of an integer one, and by defining cell sizes and morphology.


Dmitry Bozhko

Project head, single simulation model, learning, programming

Georgy Galumov

Infrastructure, evolutionary algorithm, configuration, programming

Victoria Stelmakh

Biology consultation, model learning, interpretation of analysis

Alex Polovian

Mathematical analysis, visualization, connectome, programming