From insect neuroanatomy to nanophotonic computers

Lund University’s Stanley Heinze describes how his study of neuroanatomy in insects is leading to the development of nanophotonic computers.

When I began to study biology about two decades ago, I thought that neuroanatomy was not only boring but a thing of the distant past. It was much more exciting to ask how brains work, especially as there had been much progress in new functional imaging techniques, molecular methods, and electrophysiology. Yet there was something about the astounding beauty of neurons and the meditative experience of spending hours with an old camera lucida system to draw a single branch of a dendritic tree that deeply fascinated me. Indeed, 15 years later, neuroanatomy is moving faster than most fields in neuroscience. What happened?

Personally, while delving deeper into neuroscience, I realised that understanding the constraints imposed by anatomy – the brain’s hardware – makes unravelling neural function much easier. Knowing the projections of neurons means that the information flow within the brain can be delineated, a first step towards revealing its overall function – but the billions of neurons in our brains make mapping all projection patterns a daunting task. With our limited technology, one can therefore either restrict this mapping to a small part of a large brain, or turn towards smaller and numerically simpler brains.

From projectome to connectome

Despite possessing only between 100,000 and 1,000,000 neurons in their brains (where humans have around 100 billion), insects are able to carry out complex behaviours that rival those of mammals in many ways. This numerical simplicity offers the chance to understand how these brains generate such behaviours at the level of identified neurons, making them models for decision making, navigation, sensory integration, and learning. However, simply knowing the projection patterns of neurons is not sufficient to delineate the functions of neural circuits. Even if we knew each neuron’s morphology in full detail, two overlapping cells could still be wired up in countless different ways. This means that knowing all projections in the brain (the ‘projectome’) is only a first step. What we eventually need as a solid anatomical basis of brain function at the circuit level is the full map of all synaptic connections (the ‘connectome’).
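The distinction between projectome and connectome can be made concrete with a toy example (all names here are hypothetical, not from any real dataset): two neurons with an identical projectome, arborising in the same region, are still consistent with entirely different synaptic wirings.

```python
# Hypothetical illustration: the same projectome admits many connectomes.
# Two neurons, A and B, whose arbors overlap in one brain region; the
# overlap alone does not tell us who synapses onto whom.
projectome = {"A": ["region_X"], "B": ["region_X"]}  # where each neuron projects

# Two of the possible synaptic wirings consistent with that projectome,
# written as adjacency entries (presynaptic, postsynaptic) -> synapse count:
connectome_1 = {("A", "B"): 1, ("B", "A"): 0}  # A drives B
connectome_2 = {("A", "B"): 0, ("B", "A"): 1}  # B drives A

# Identical projectomes, different circuits:
same_projectome = projectome["A"] == projectome["B"]
different_wiring = connectome_1 != connectome_2
```

Only electron-microscopy-level data resolves which of these wirings the brain actually uses.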

In the first decade of the 21st century, serial section electron microscopy was perfected as a method for obtaining 3D datasets at a resolution good enough to visualise synaptic connections between neurons across large volumes of brain tissue. This technological milestone started the era of ‘connectomics’ in species with complex behaviour – and tiny insects were the most promising models to attempt whole-brain synaptic mapping. Ten years later, we are now in possession of the first near-complete connectome of the fruit fly brain; and the complexity of the neuroanatomical data is mind-blowing.

Comparative connectomics

Like many neuroscientists, my research goal is to understand the fundamental features of animal brains. Once we know which neural circuits are responsible for the core computations that allow a brain to carry out its most essential tasks, we can ask how core circuits have evolved to mediate functions that are specific to a species’ ecology, sensory environment, or behavioural strategies, ultimately leading to the question of what makes our brains distinctly human.

So far, insect neuroanatomy has been pushed to the level of connectomics only in the fruit fly; and a deep understanding of the fruit fly brain in isolation does not permit us to distinguish fundamentally important circuits from fly-specific circuits. Work in my research group therefore focuses on pioneering a comparative approach to connectomics. Although insect brains are small, the brains of species with interesting behaviours, such as bees, ants, or migratory butterflies, are many times larger than that of the fruit fly.

Despite the many technological advances, the work required for a full brain connectome would still take decades and is thus out of reach for now. We therefore focus our efforts on the central complex of the brain, which exists in all insects and is considered the brain’s decision-making centre, with circuits responsible for generating internal representations of the current travel direction and of behavioural goals, and for initiating appropriate actions. Understanding the connectome of this brain region across species will illuminate how the insect brain guides planned movements for a wide range of behaviours.

So far, we have generated datasets of five species of bees and ants, including key model species such as the honeybee; expert navigators like the desert ant; and capable engineers like the tropical army ants, famous for collaboratively building rafts and bridges with their bodies. By tracing thousands of neurons in these datasets, we have generated projectomes of all major neurons in this brain region. Taking advantage of its modular layout, we additionally construct local connectomes of individual computational units of the central complex which, together, now allow us to infer overall connectivity principles and their variation across species.

Building computational models

The accumulated data is invaluable for a functional understanding of the neural computations underlying navigation. Indeed, the first generation of projectomics and connectomics data from bumblebees has already aided the development of an advanced computational model – the first computational model of this enigmatic region to be completely constrained by the anatomical reality of the brain.

In this model we uncovered that the connectivity of the central complex resembled an artificially evolved neural circuit that had been engineered to perform a navigation behaviour called ‘path integration’ (returning to a point of origin in a straight line after a convoluted journey). Not only was the biologically constrained version of this circuit able to steer a virtual bee and a real robot back to the nest, but emergent properties resulting from previously unexplained anatomical peculiarities yielded a performance surpassing that of its artificially engineered counterpart. It produced automatic search behaviour when the nest was missed, superior estimation of flight speed, and the ability to negotiate obstacles. Overall, we demonstrated that the key to understanding the neural computations underlying elementary navigation decisions is the anatomical wiring of the central complex.
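The core computation of path integration can be illustrated with a minimal sketch. In the actual model, compass and speed signals are integrated by neural populations of the central complex; here, purely for illustration, the same idea is reduced to plain vector arithmetic (all variable names are hypothetical).

```python
import math

def path_integrate(steps):
    """Accumulate a home vector from (heading_radians, speed) samples.

    Minimal sketch of path integration: sum the displacement of each
    step, then read out the distance and heading that point back to
    the origin. Not the published neural model, just the arithmetic
    it implements.
    """
    x = y = 0.0
    for heading, speed in steps:
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    distance = math.hypot(x, y)            # how far from the nest
    home_heading = math.atan2(-y, -x)      # direction pointing home
    return distance, home_heading

# Outbound journey: three unit steps east, then three unit steps north.
dist, home = path_integrate([(0.0, 1.0)] * 3 + [(math.pi / 2, 1.0)] * 3)
```

After this L-shaped outbound trip, the home vector points south-west, the straight-line shortcut an ant or bee would take.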

Nanophotonic insect neuroanatomy

Realising that complex and robust navigation decisions can be achieved with a neural network of fewer than 150 neurons, we started collaborating with physicists and engineers at the NanoLund Center (Lund, Sweden) with the aim of implementing our model circuit using nanotechnology. In simulations, we have already succeeded in designing a nanophotonic element that performs the tasks of one of our model neurons. It receives specific wavelengths of light as either inhibitory or excitatory signals and sends out an integrated light signal to its neighbouring neurons. As light propagation is highly predictable, no wiring is needed to enable communication between specific nano-neurons. In fact, we can use the geometry of the arrangement of the neural elements (their anatomy) to define the desired computation. This makes our circuit both fast and extremely energy efficient. With a footprint of only a few micrometres, we have designed this network to mimic the internal compass of the insect central complex, a first step towards a complete decision-making circuit that can be used for navigation. Once realised in hardware, these nanophotonic devices could be highly useful for any autonomous navigating agent, from self-driving cars to disaster relief robots.
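The behaviour of such an element can be sketched abstractly: it sums light intensities from excitatory and inhibitory wavelength channels and re-emits a signal only when net excitation clears a threshold. This is a toy model of the described input-output behaviour, not of the actual nanophotonic physics; all numbers and parameter names are hypothetical.

```python
def photonic_neuron(intensities, signs, threshold=1.0):
    """Toy model of a nanophotonic neuron-like element.

    intensities: received light intensity per wavelength channel.
    signs:       +1 for excitatory channels, -1 for inhibitory ones.
    The element integrates its inputs and emits light only when net
    excitation exceeds the threshold (a simple rectifying nonlinearity).
    """
    net = sum(i * s for i, s in zip(intensities, signs))
    return max(0.0, net - threshold)  # emitted output intensity

# Two excitatory channels and one inhibitory channel:
out = photonic_neuron([1.5, 0.8, 0.6], [+1, +1, -1])
```

Because each element responds only to its designated wavelengths, the spatial arrangement of elements, rather than physical wiring, determines which outputs reach which inputs.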

While we still have a long way to go, utilising detailed neuroanatomical insights from many insect species allows us to harvest the accumulated ingenuity resulting from more than 500 million years of insect evolution – a process that has generated extremely efficient organisms capable of robustly performing tasks that engineers only dream about for autonomous robots.

Further reading

Stone, T. et al. (2017). Current Biology, 27, 3069-3085.e11 https://dx.doi.org/10.1016/j.cub.2017.08.052

Honkanen, A. et al. (2019). The Journal of Experimental Biology, 222 https://dx.doi.org/10.1242/jeb.188854

Winge, D. et al. (2020). ACS Photonics https://dx.doi.org/10.1021/acsphotonics.0c01003

Stanley Heinze
Researcher
Lund Vision Group/NanoLund
Lund University
+46 46 222 95 78
stanley.heinze@biol.lu.se
Tweet @NanoLund
www.nano.lu.se
