Global Highlights in Neuroengineering 2005-2018


Logan Thrasher Collins

Optogenetic stimulation using ChR2

(Boyden, Zhang, Bamberg, Nagel, & Deisseroth, 2005)

  • Ed S. Boyden, Karl Deisseroth, and colleagues developed optogenetics, a revolutionary technique for stimulating neural activity.
  • Optogenetics involves engineering neurons to express light-gated ion channels. The first channel used for this purpose was ChR2 (a protein originally found in green algae which responds to blue light). In this way, a neuron exposed to an appropriate wavelength of light will be stimulated.
  • Over time, optogenetics has gained a place as an essential experimental tool for neuroscientists across the world. It has been expanded upon and improved in numerous ways and has even allowed control of animal behavior via implanted fiber optics and other light sources. Optogenetics may eventually be used in the development of improved brain-computer interfaces.


Blue Brain Project cortical column simulation

(Markram, 2006)

  • In the early stages of the Blue Brain Project, ~ 10,000 neurons were mapped in 2-week-old rat somatosensory neocortical columns with sufficient resolution to show rough spatial locations of the dendrites and synapses.
  • After constructing a virtual model, algorithmic adjustments refined the spatial connections between neurons to increase accuracy (over 10 million synapses).
  • The cortical column was emulated using the Blue Gene/L supercomputer and the emulation was highly accurate compared to experimental data.

Cortical Column

Optogenetic silencing using halorhodopsin

(Han & Boyden, 2007)

  • Ed Boyden continued developing optogenetic tools to manipulate neural activity. Along with Xue Han, he expressed a codon-optimized version of an archaeal halorhodopsin (along with the ChR2 protein) in neurons.
  • Upon exposure to yellow light, halorhodopsin pumps chloride ions into the cell, hyperpolarizing the membrane and inhibiting neural activity.
  • Using halorhodopsin and ChR2, neurons could be easily activated and inhibited using yellow and blue light respectively.

Halorhodopsin and ChR2 wavelengths


Brainbow: combinatorial fluorescent labeling of neurons

(Livet et al., 2007)

  • Lichtman and colleagues used Cre/Lox recombination tools to create genes which express a randomized set of three or more differently-colored fluorescent proteins (XFPs) in a given neuron, labeling the neuron with a unique combination of colors. About ninety distinct colors were emitted across a population of genetically modified neurons.
  • The detailed structures within neural tissue equipped with the Brainbow system can be imaged much more easily since neurons can be distinguished via color contrast.
  • As a proof-of-concept, hundreds of synaptic contacts and axonal processes were reconstructed in a selected volume of the cerebellum. Several other neural structures were also imaged using Brainbow.
  • The fluorescent proteins expressed by the Brainbow system are usable in vivo.


High temporal precision optogenetics

(Gunaydin et al., 2010)

  • Karl Deisseroth, Peter Hegemann, and colleagues used protein engineering to improve the temporal resolution of optogenetic stimulation.
  • Glutamic acid at position 123 in ChR2 was mutated to threonine, producing a new ion channel protein (dubbed ChETA).
  • The ChETA protein allows for induction of spike trains with frequencies up to 200 Hz and greatly decreases the incidence of unintended spikes. Furthermore, ChETA eliminates plateau potentials (a phenomenon which interferes with precise control of neural activity).

Ultrafast optogenetics

Hippocampal prosthesis in rats

(Berger et al., 2012)

  • Theodore Berger and his team developed an artificial replacement for neurons which transmit information from the CA3 region to the CA1 region of the hippocampus.
  • This cognitive prosthesis employs recording and stimulation electrodes along with a multi-input multi-output (MIMO) model to encode the information in CA3 and transfer it to CA1.
  • The hippocampal prosthesis was shown to restore and enhance memory in rats as evaluated by behavioral testing and brain imaging.
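
The MIMO idea can be illustrated with a toy linear–nonlinear sketch: each output spike train is predicted by convolving the input spike trains with temporal kernels, summing, and thresholding. Berger's actual prosthesis uses nonlinear Volterra-kernel models fit to recorded hippocampal data; all kernel values, sizes, and the threshold below are illustrative assumptions, not the published model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 4 "CA3" input trains, 2 "CA1" output trains,
# 200 time bins, kernels spanning 10 bins. All values are toy assumptions.
n_in, n_out, T, K = 4, 2, 200, 10

ca3 = (rng.random((n_in, T)) < 0.1).astype(float)        # binary input spike trains
kernels = rng.normal(0.0, 0.5, size=(n_out, n_in, K))    # feedforward temporal kernels

def mimo_predict(spikes, kernels, threshold=0.8):
    """Predict output spikes: convolve each input train with its kernel,
    sum across inputs, then pass through a threshold nonlinearity."""
    n_out, n_in, K = kernels.shape
    T = spikes.shape[1]
    drive = np.zeros((n_out, T))
    for o in range(n_out):
        for i in range(n_in):
            drive[o] += np.convolve(spikes[i], kernels[o, i])[:T]
    return (drive > threshold).astype(int)

ca1_pred = mimo_predict(ca3, kernels)
print(ca1_pred.shape)  # one predicted binary spike train per output channel
```

In the prosthesis itself, the analogous predicted output patterns drive the CA1 stimulation electrodes.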

In vivo superresolution microscopy for neuroimaging

(Berning, Willig, Steffens, Dibaj, & Hell, 2012)

  • Stefan Hell (2014 Nobel laureate in chemistry) developed stimulated emission depletion microscopy (STED), a type of superresolution fluorescence microscopy which allows imaging of synapses and dendritic spines.
  • STED microscopy overlays the excitation laser with a donut-shaped depletion beam; stimulated emission switches off fluorophores around the periphery of the focal spot, confining fluorescence to a sub-diffraction region at the donut’s center. Although light’s wavelength would ordinarily limit the resolution (the diffraction limit), this targeted switching overcomes that limitation.
  • Neurons expressing fluorescent protein in transgenic mice (equipped with glass-sealed holes in their skulls) were imaged using STED. Synapses and dendritic spines were observed up to fifteen micrometers below the surface of the brain tissue.

Superresolution microscopy in vivo

Eyewire: crowdsourcing method for retina mapping

(Marx, 2013)

  • The Eyewire project was created by Sebastian Seung’s research group. It is a crowdsourcing initiative for connectomic mapping within the retina towards uncovering neural circuits involved in visual processing.
  • Laboratories first collect data via serial electron microscopy as well as functional data from two-photon microscopy.
  • In the Eyewire game, images of tissue slices are provided to players who then help reconstruct neural morphologies and circuits by “coloring in” the parts of the images which correspond to cells and stacking many images on top of each other to generate 3D maps. Artificial intelligence tools help provide initial “best guesses” and guide the players, but the people ultimately perform the task of reconstruction.
  • By November 2013, around 82,000 participants had played the game. Its popularity continues to grow.


The BRAIN Initiative

(“Fact Sheet: BRAIN Initiative,” 2013)

  • The BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies) provided neuroscientists with $110 million in governmental funding and $122 million in funding from private sources such as the Howard Hughes Medical Institute and the Allen Institute for Brain Science.
  • The BRAIN Initiative focused on funding research which develops and utilizes new technologies for functional connectomics. It helped to accelerate research on tools for decoding the mechanisms of neural circuits in order to understand and treat mental illness, neurodegenerative diseases, and traumatic brain injury.
  • The BRAIN Initiative emphasized collaboration between neuroscientists and physicists. It also pushed forward nanotechnology-based methods to image neural tissue, record from neurons, and otherwise collect neurobiological data.

The CLARITY method for making brains translucent

(Chung & Deisseroth, 2013)

  • Karl Deisseroth and colleagues developed a method called CLARITY to make samples of neural tissue optically translucent without damaging the fine cellular structures in the tissue. Using CLARITY, entire mouse brains have been turned transparent.
  • Mouse brains were infused with hydrogel monomers (acrylamide and bisacrylamide) as well as formaldehyde and some other compounds for facilitating crosslinking. Next, the hydrogel monomers were crosslinked by incubating the brains at 37°C. Lipids in the hydrogel-stabilized mouse brains were extracted using hydrophobic organic solvents and electrophoresis.
  • CLARITY allows antibody labeling, fluorescence microscopy, and other optically-dependent techniques to be used for imaging entire brains. In addition, it renders the tissue permeable to macromolecules, which broadens the types of experimental techniques that these samples can undergo (i.e. macromolecule-based stains, etc.)

CLARITY Imaging Technique

Telepathic rats engineered using hippocampal prosthesis

(S. Deadwyler et al., 2013)

  • Berger’s hippocampal prosthesis was implanted in pairs of rats. When “donor” rats were trained to perform a task, they developed neural representations (memories) which were recorded by their hippocampal prostheses.
  • The donor rat memories were run through the MIMO model and transmitted to the stimulation electrodes of the hippocampal prostheses implanted in untrained “recipient” rats. After receiving the memories, the recipient rats showed significant improvements on the task that they had not been trained to perform.

Rat Telepathy

Integrated Information Theory 3.0

(Oizumi, Albantakis, & Tononi, 2014)

  • Integrated information theory (IIT) was originally proposed by Giulio Tononi in 2004. IIT is a quantitative theory of consciousness which may help explain the hard problem of consciousness.
  • IIT begins by assuming the following phenomenological axioms: each experience is characterized by how it differs from other experiences; an experience cannot be reduced to independent parts; and the boundaries which distinguish individual experiences are describable as having defined “spatiotemporal grains.”
  • From these phenomenological axioms and the assumption of causality, IIT identifies maximally irreducible conceptual structures (MICS) associated with individual experiences. MICS represent particular patterns of qualia that form unified percepts.
  • IIT also outlines a mathematical measure of an experience’s quantity. This measure is called integrated information or ϕ.

Expansion Microscopy

(F. Chen, Tillberg, & Boyden, 2015)

  • The Boyden group developed expansion microscopy, a method which enlarges neural tissue samples (including entire brains) with minimal structural distortions and so facilitates superior optical visualization of the scaled-up neural microanatomy. Furthermore, expansion microscopy greatly increases the optical translucency of treated samples.
  • Expansion microscopy operates by infusing a swellable polymer network into brain tissue samples along with several chemical treatments to facilitate polymerization and crosslinking and then triggering expansion via dialysis in water. With 4.5-fold enlargement, expansion microscopy only distorts the tissue by about 1% (computed using a comparison between control superresolution microscopy of easily-resolvable cellular features and the expanded version).
  • Before expansion, samples can express various fluorescent proteins to facilitate superresolution microscopy of the enlarged tissue once the process is complete. Furthermore, expanded tissue is highly amenable to fluorescent stains and antibody-based labels.
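
The ~1% distortion figure comes from comparing feature geometry before and after expansion. A toy version of that measurement is sketched below: pairwise landmark distances after expansion are rescaled by the fitted expansion factor and compared to the pre-expansion distances. The landmark coordinates are synthetic stand-ins for real imaging data, not measurements from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic landmarks (positions in µm): ~4.5x expansion plus small random distortion.
pre = rng.random((30, 2)) * 100.0
post = pre * 4.5 + rng.normal(0.0, 0.3, pre.shape)

def pairwise_distances(pts):
    """All-pairs Euclidean distances."""
    diff = pts[:, None, :] - pts[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

d_pre = pairwise_distances(pre)
d_post = pairwise_distances(post)
mask = ~np.eye(len(pre), dtype=bool)  # ignore zero self-distances

scale = np.median(d_post[mask] / d_pre[mask])  # fitted expansion factor
distortion = np.mean(np.abs(d_post[mask] / scale - d_pre[mask]) / d_pre[mask])
print(f"expansion ~{scale:.2f}x, mean relative distortion {distortion:.4f}")
```

The median ratio recovers the expansion factor robustly, and the residual relative error plays the role of the distortion estimate.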

Expansion microscopy

Japan’s Brain/MINDS project

(Okano, Miyawaki, & Kasai, 2015)

  • In 2014, the Brain/MINDS (Brain Mapping by Integrated Neurotechnologies for Disease Studies) project was initiated to further neuroscientific understanding of the brain. This project received nearly $30 million in funding for its first year alone.
  • Brain/MINDS focuses on studying the brain of the common marmoset (a non-human primate abundant in Japan), developing new technologies for brain mapping, and understanding the human brain with the goal of finding new treatments for brain diseases.


The OpenWorm project

(Szigeti et al., 2014)

  • The anatomical C. elegans connectome was originally mapped in 1976 by Albertson and Thomson. More data has since been collected on neurotransmitters, electrophysiology, cell morphology, and other characteristics.
  • Szigeti, Larson, and their colleagues made an online platform for crowdsourcing research on C. elegans computational neuroscience, with the goal of completing an entire “simulated worm.”
  • The group also released software called Geppetto, a program that allows users to manipulate both multicompartmental Hodgkin-Huxley models and highly efficient soft-body physics simulations (for modeling the worm’s electrophysiology and anatomy).

C. elegans Connectome

The TrueNorth Chip from DARPA and IBM

(Akopyan et al., 2015)

  • The TrueNorth neuromorphic computing chip was constructed by IBM with DARPA funding and then validated. TrueNorth uses circuit modules which mimic neurons. Inputs to these fundamental circuit modules must overcome a threshold in order to trigger “firing.”
  • The chip can emulate up to a million neurons with over 250 million synapses while requiring far less power than traditional computing devices.
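
The threshold-firing behavior of such neuron circuits can be sketched with a simple leaky integrate-and-fire update. The weights, leak, and threshold below are illustrative assumptions, not TrueNorth's actual parameters.

```python
def step(potential, input_spikes, weights, threshold=1.0, leak=0.05):
    """One time step of a leaky threshold neuron: integrate weighted
    input spikes, fire and reset if the potential crosses threshold."""
    potential = potential - leak + sum(w * s for w, s in zip(weights, input_spikes))
    if potential >= threshold:
        return 0.0, 1               # reset potential, emit a spike
    return max(potential, 0.0), 0   # clamp at zero, no spike

v, spikes = 0.0, []
weights = [0.4, 0.3, 0.5]
inputs = [(1, 0, 1), (0, 1, 0), (1, 1, 1), (0, 0, 0)]  # input spikes per step
for x in inputs:
    v, s = step(v, x, weights)
    spikes.append(s)
print(spikes)  # → [0, 1, 1, 0]
```

The first input volley (total drive 0.9) leaves the potential just under threshold; the next volley pushes it over, producing a spike and a reset.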

Human Brain Project cortical mesocircuit reconstruction and simulation

(Markram et al., 2015)

  • The HBP digitally reconstructed a 0.29 mm3 region of rat cortical tissue (~ 31,000 neurons and 37 million synapses) based on morphological data, “connectivity rules,” and additional datasets. The cortical mesocircuit was emulated using the Blue Gene/Q supercomputer.
  • This emulation was sufficiently accurate to reproduce emergent neurological processes and yield insights on the mechanisms of their computations.

Cortical Mesocircuit

Neural lace

(Liu et al., 2015)

  • Charles Lieber’s group developed a syringe-injectable electronic mesh made of submicrometer-thick wiring for neural interfacing.
  • The meshes were constructed using novel soft electronics for biocompatibility. Upon injection, the neural lace expands to cover and record from centimeter-scale regions of tissue.
  • Neural lace may allow “invasive” brain-computer interfaces which circumvent the need for open surgical implantation. Lieber has continued to develop this technology towards clinical application.

Neural Lace

Expansion FISH

(F. Chen et al., 2016)

  • Boyden, Chen, Marblestone, Church, and colleagues combined fluorescent in situ hybridization (FISH) with expansion microscopy to image the spatial localization of RNA in neural tissue.
  • The group developed a chemical linker to covalently attach intracellular RNA to the infused polymer network used in expansion microscopy. This allowed for RNAs to maintain their relative spatial locations within each cell post-expansion.
  • After the tissue was enlarged, FISH was used to fluorescently label targeted RNA molecules. In this way, RNA localization was more effectively resolved.
  • As a proof-of-concept, expansion FISH was used to reveal the nanoscale distribution of long noncoding RNAs in nuclei as well as the locations of RNAs within dendritic spines.

Expansion FISH

Neural dust

(Seo et al., 2016)

  • Michel Maharbiz’s group invented implantable, ~ 1 mm biosensors for wireless neural recording and tested them in rats.
  • This neural dust could be miniaturized to less than 0.5 mm or even to microscale dimensions using customized electronic components.
  • Neural dust motes consist of two recording electrodes, a transistor, and a piezoelectric crystal.
  • The neural dust received external power from ultrasound. Neural signals were recorded by measuring disruptions to the piezoelectric crystal’s reflection of the ultrasound waves. Signal processing mathematics allowed precise detection of activity.

Neural Dust

The China Brain Project

(Poo et al., 2016)

  • The China Brain Project was launched to help understand the neural mechanisms of cognition, develop brain research technology platforms, develop preventative and diagnostic interventions for brain disorders, and to improve brain-inspired artificial intelligence technologies.
  • This project will take place from 2016 until 2030 with the goal of completing mesoscopic brain circuit maps.
  • China’s population of non-human primates and preexisting non-human primate research facilities give the China Brain Project an advantage. The project will focus on studying rhesus macaques.

Somatosensory cortex stimulation for spinal cord injuries

(Flesher et al., 2016)

  • Gaunt, Flesher, and colleagues found that microstimulation of the primary somatosensory cortex (S1) partially restored tactile sensations to a patient with a spinal cord injury.
  • Electrode arrays were implanted into the S1 regions of a patient with a spinal cord injury. The array performed intracortical microstimulation over a period of six months.
  • The patient reported locations and perceptual qualities of the sensations elicited by microstimulation. The patient did not experience pain or “pins and needles” from any of the stimulus trains. Overall, 93% of the stimulus trains were reported as “possibly natural.”
  • Results from this study might be used to engineer upper-limb neuroprostheses which provide somatosensory feedback.

Somatosensory Stimulation

Hippocampal prosthesis in monkeys

(S. A. Deadwyler et al., 2017)

  • Theodore Berger continued developing his cognitive prosthesis and tested it in rhesus macaques.
  • As with the rats, monkeys with the implant showed substantially improved performance on memory tasks.

The $100 billion Softbank Vision Fund

(Lomas, 2017)

  • Masayoshi Son, the CEO of Softbank (a Japanese telecommunications corporation), announced a plan to raise $100 billion in venture capital to invest in artificial intelligence. This plan involved partnering with multiple large companies in order to raise this enormous amount of capital.
  • By the end of 2017, the Vision Fund successfully reached its $100 billion goal. Masayoshi Son has since announced further plans to continue raising money with a new goal of over $800 billion.
  • Masayoshi Son’s reason for these massive investments is the Technological Singularity. He agrees with Kurzweil that the Singularity will likely occur at around 2045 and he hopes to help bring the Singularity to fruition. Though Son is aware of the risks posed by artificial superintelligence, he feels that superintelligent AI’s potential to tackle some of humanity’s greatest challenges (such as climate change and the threat of nuclear war) outweighs those risks.

Bryan Johnson launches Kernel

(Regalado, 2017)

  • Entrepreneur Bryan Johnson invested $100 million to start Kernel, a neurotechnology company.
  • Kernel plans to develop implants that allow for recording and stimulation of large numbers of neurons at once. The company’s initial goal is to develop treatments for mental illnesses and neurodegenerative diseases. Its long-term goal is to enhance human intelligence.
  • Kernel originally partnered with Theodore Berger and intended to utilize his hippocampal prosthesis. Unfortunately, Berger and Kernel parted ways after about six months because Berger’s vision was reportedly too long-range to support a financially viable company (at least for now).
  • Kernel was originally a company called Kendall Research Systems. This company was started by a former member of the Boyden lab. In total, four members of Kernel’s team are former Boyden lab members.

Elon Musk launches Neuralink

(Etherington, 2017)

  • Elon Musk (CEO of Tesla, SpaceX, and a number of other successful companies) initiated a neuroengineering venture called Neuralink.
  • Neuralink will begin by developing brain-computer interfaces (BCIs) for clinical applications, but the ultimate goal of the company is to enhance human cognitive abilities in order to keep up with artificial intelligence.
  • Though many of the details around Neuralink’s research are not yet open to the public, it has been rumored that injectable electronics similar to Lieber’s neural lace might be involved.

Facebook announces effort to build brain-computer interfaces

(Constine, 2017)

  • Facebook revealed research on constructing non-invasive brain-computer interfaces (BCIs) at a company-run conference in 2017. The initiative is run by Regina Dugan, the head of Facebook’s R&D division, Building 8.
  • Facebook’s researchers are working on a non-invasive BCI which may eventually enable users to type one hundred words per minute with their thoughts alone. This effort builds on past investigations which have been used to help paralyzed patients.
  • The Building 8 group is also developing a wearable device for “skin hearing.” Using just a series of vibrating actuators which mimic the cochlea, test subjects have so far been able to recognize up to nine words. Facebook intends to vastly expand this device’s capabilities.

DARPA funds research to develop improved brain-computer interfaces

(Hatmaker, 2017)

  • The U.S. government agency DARPA awarded $65 million in total funding to six research groups.
  • The recipients of this grant included five academic laboratories (headed by Arto Nurmikko, Ken Shepard, Jose-Alain Sahel and Serge Picaud, Vincent Pieribone, and Ehud Isacoff) and one small company called Paradromics Inc.
  • DARPA’s goal for this initiative is to develop a nickel-sized bidirectional brain-computer interface (BCI) which can record from and stimulate up to one million individual neurons at once.

Human Brain Project analyzes brain computations using algebraic topology

(Reimann et al., 2017)

  • Investigators at the Human Brain Project utilized algebraic topology to analyze the reconstructed ~ 31,000 neuron cortical microcircuit from their earlier work.
  • The analysis involved representing the cortical network as a digraph, finding directed cliques (complete directed subgraphs belonging to a digraph), and determining the net directionality of information flow (by computing the sum of the squares of the differences between in-degree and out-degree for all the neurons in a clique). In algebraic topology, directed cliques of n neurons are called directed simplices of dimension n-1.
  • Vast numbers of high-dimensional directed cliques were found in the cortical microcircuit (as compared to null models and other controls). Spike correlations between pairs of neurons within a clique were found to increase with the clique’s dimension and with the proximity of the neurons to the clique’s sink. Furthermore, topological metrics allowed insights into the flow of neural information among multiple cliques.
  • Experimental patch-clamp data supported the significance of the findings. In addition, similar patterns were found within the C. elegans connectome, suggesting that the results may generalize to nervous systems across species.
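
The clique-counting and directionality computations described above can be illustrated on a toy digraph. This brute-force sketch enumerates directed cliques (vertex sets admitting an ordering in which every earlier node sends an edge to every later node) and computes the sum-of-squares directionality measure; the edge set is made up for illustration, and the actual HBP analysis used specialized software on a far larger network.

```python
from itertools import combinations, permutations

# A small made-up digraph (edges as ordered pairs), not HBP data.
edges = {(0, 1), (0, 2), (1, 2), (2, 3), (0, 3), (1, 3), (3, 4)}

def is_directed_clique(nodes):
    """True if some ordering of the nodes has an edge from every
    earlier node to every later node (a directed simplex)."""
    return any(all((order[i], order[j]) in edges
                   for i in range(len(order)) for j in range(i + 1, len(order)))
               for order in permutations(nodes))

def directionality(nodes):
    """Sum of squared (in-degree - out-degree), degrees counted
    within the clique only."""
    total = 0
    for v in nodes:
        indeg = sum((u, v) in edges for u in nodes)
        outdeg = sum((v, u) in edges for u in nodes)
        total += (indeg - outdeg) ** 2
    return total

vertices = {v for e in edges for v in e}
cliques = [c for k in range(2, len(vertices) + 1)
           for c in combinations(sorted(vertices), k)
           if is_directed_clique(c)]
largest = max(cliques, key=len)
print(largest, directionality(largest))  # → (0, 1, 2, 3) 20
```

Here the 4-neuron clique {0, 1, 2, 3} (a directed simplex of dimension 3) has one pure source (node 0) and one pure sink (node 3), which contribute 9 each to the directionality of 20.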

HBP algebraic topology

Early testing of hippocampal prosthesis algorithm in humans

(Song, She, Hampson, Deadwyler, & Berger, 2017)

  • Dong Song (who was working alongside Berger) tested the MIMO algorithm on human epilepsy patients using implanted recording and stimulation electrodes. The full hippocampal prosthesis was not implanted, but the electrodes acted similarly, though in a temporary capacity. Although only two patients were tested in this study, many trials were performed to compensate for the small sample size.
  • Hippocampal spike trains from individual cells in CA1 and CA3 were recorded from the patients during a delayed match-to-sample task. The patients were shown various images while neural activity data were recorded by the electrodes and processed by the MIMO model. The patients were then asked to recall which image they had been shown previously by picking it from a group of “distractor” images. Memories encoded by the MIMO model were used to stimulate hippocampal cells during the recall phase.
  • In comparison to controls in which the same two epilepsy patients were not assisted by the algorithm and stimulation, the experimental trials demonstrated a significant increase in successful pattern matching.

Brain imaging factory in China

(Cyranoski, 2017)

  • Qingming Luo started the HUST-Suzhou Institute for Brainsmatics, a brain imaging “factory.” Each of the numerous machines in Luo’s facility performs automated processing and imaging of tissue samples. The devices make ultrathin slices of brain tissue using diamond blades, treat the samples with fluorescent stains or other contrast-enhancing chemicals, and image them using fluorescence microscopy.
  • The institute has already demonstrated its potential by mapping the morphology of a previously unknown neuron which “wraps around” the entire mouse brain.

China Brain Mapping Image

Automated patch-clamp robot for in vivo neural recording

(Suk et al., 2017)

  • Ed S. Boyden and colleagues developed a robotic system to automate patch-clamp recordings from individual neurons. The robot was tested in vivo using mice and achieved a data collection yield similar to that of skilled human experimenters.
  • By continuously imaging neural tissue using two-photon microscopy, the robot can adapt to a target cell’s movement and shift the pipette to compensate. This adaptation is driven by a novel “imagepatching” algorithm: as the pipette approaches its target, the algorithm adjusts the pipette’s trajectory based on the real-time two-photon microscopy.
  • The robot can be used in vivo so long as the target cells express a fluorescent marker or otherwise fluoresce corresponding to their size and position.

Automated Patch Clamp System

Genome editing in the mammalian brain

(Nishiyama, Mikuni, & Yasuda, 2017)

  • Precise genome editing in the brain has historically been challenging because most neurons are postmitotic (non-dividing) and the postmitotic state prevents homology-directed repair (HDR) from occurring. HDR is a mechanism of DNA repair which allows for targeted insertions of DNA fragments with overhangs homologous to the region of interest (by contrast, non-homologous end-joining is highly unpredictable).
  • Nishiyama, Mikuni, and Yasuda developed a technique which allows genome editing in postmitotic mammalian neurons using adeno-associated viruses (AAVs) and CRISPR-Cas9.
  • The AAVs delivered ssDNA sequences encoding a single guide RNA (sgRNA) and an insert. Inserts encoding a hemagglutinin tag (HA) and inserts encoding EGFP were both tested. Cas9 was supplied endogenously, by transgenic host cells in vitro and by transgenic host animals in vivo.
  • The technique achieved precise genome editing in vitro and in vivo with a low rate of off-target effects. Inserts did not cause deletion of nearby endogenous sequences for 98.1% of infected neurons.

Genome Editing Neurons

Near-infrared light and upconversion nanoparticles for optogenetic stimulation

(S. Chen et al., 2018)

  • Upconversion nanoparticles absorb two or more low-energy photons and emit a higher energy photon. For instance, multiple near-infrared photons can be converted into a single visible spectrum photon.
  • Shuo Chen and colleagues injected upconversion nanoparticles into the brains of mice and used them to convert externally applied near-infrared (NIR) light into visible light within the brain tissue. In this way, optogenetic stimulation was performed without the need for surgical implantation of fiber optics or similarly invasive procedures.
  • The authors demonstrated stimulation via upconversion of NIR to blue light (to activate ChR2) and inhibition via upconversion of NIR to green light (to activate a rhodopsin called Arch).
  • As a proof-of-concept, this technology was used to alter the behavior of the mice by activating hippocampally-encoded fear memories.

Upconversion nanoparticles and NIR

Map of all neuronal cell bodies within mouse brain

(Murakami et al., 2018)

  • Ueda, Murakami, and colleagues combined methods from expansion microscopy and CLARITY to develop a protocol called CUBIC-X which both expands and clears entire brains. Light-sheet fluorescence microscopy was used to image the treated brains and a novel algorithm was developed to detect individual nuclei.
  • Although expansion microscopy causes some increased tissue transparency on its own, CUBIC-X greatly improved this property in the enlarged tissues, facilitating more detailed whole-brain imaging.
  • Using CUBIC-X, the spatial locations of all the cell bodies (but not dendrites, axons, or synapses) within the mouse brain were mapped. This process was performed upon several adult mouse brains as well as several developing mouse brains to allow for comparative analysis.
  • The authors made the spatial atlas publicly available in order to facilitate global cooperation towards annotating connectivity among the neural cell bodies within the atlas.


Clinical testing of hippocampal prosthesis algorithm in humans

(Hampson et al., 2018)

  • Further clinical tests of Berger’s hippocampal prosthesis were performed. Twenty-one patients took part in the experiments. Seventeen patients underwent CA3 recording so as to facilitate training and optimization of the MIMO model. Eight patients received CA1 stimulation so as to improve their memories.
  • Electrodes with the ability to record from single neurons (10-24 single-neuron recording sites) and via EEG (4-6 EEG recording sites) were implanted such that recording and stimulation could occur at CA3 and CA1 respectively.
  • Patients performed behavioral memory tasks. Both short-term and long-term memory showed an average improvement of 35% across the patients who underwent stimulation.

Precise optogenetic manipulation of fifty neurons

(Mardinly et al., 2018)

  • Mardinly and colleagues engineered a novel excitatory optogenetic ion channel called ST-ChroME and a novel inhibitory optogenetic ion channel called IRES-ST-eGtACR1. The channels were localized to the somas of host neurons and generated stronger photocurrents over shorter timescales than previously existing opsins, allowing for powerful and precise optogenetic stimulation and inhibition.
  • 3D-SHOT is an optical technique in which light is shaped by a device called a spatial light modulator along with several other optical components. Using 3D-SHOT, light was precisely projected upon targeted neurons within a volume of 550 × 550 × 100 μm³.
  • By combining novel optogenetic ion channels and the 3D-SHOT technique, complex patterns of neural activity were created in vivo with high spatial and temporal precision.
  • Simultaneously, calcium imaging allowed measurement of the induced neural activity. More custom optoelectronic components helped avoid optical crosstalk of the fluorescent calcium markers with the photostimulating laser.

Optogenetic control of fifty neurons

Whole-brain Drosophila connectome data acquired via serial electron microscopy

(Zheng et al., 2018)

  • Zheng, Bock, and colleagues collected serial electron microscopy data on the entire adult Drosophila connectome, providing the data necessary to reconstruct a complete structural map of the fly’s brain at the resolution of individual synapses, dendritic spines, and axonal processes.
  • The data are in the form of 7050 transmission electron microscopy images (187,500 × 87,500 pixels and 16 GB per image), each representing a 40 nm-thick slice of the fly’s brain. In total the dataset requires 106 TB of storage.
  • Although much of the data still must be processed to reconstruct a 3-dimensional map of the Drosophila brain, the authors did create 3-dimensional reconstructions of selected areas in the olfactory pathway of the fly. In doing so, they discovered a new cell type as well as several previously unknown features of the organization of Drosophila’s olfactory circuitry.

Drosophila connectome with SEM



Akopyan, F., Sawada, J., Cassidy, A., Alvarez-Icaza, R., Arthur, J., Merolla, P., … Modha, D. S. (2015). TrueNorth: Design and Tool Flow of a 65 mW 1 Million Neuron Programmable Neurosynaptic Chip. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 34(10), 1537–1557.

Berger, T. W., Song, D., Chan, R. H. M., Marmarelis, V. Z., LaCoss, J., Wills, J., … Granacki, J. J. (2012). A Hippocampal Cognitive Prosthesis: Multi-Input, Multi-Output Nonlinear Modeling and VLSI Implementation. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 20(2), 198–211.

Berning, S., Willig, K. I., Steffens, H., Dibaj, P., & Hell, S. W. (2012). Nanoscopy in a Living Mouse Brain. Science, 335(6068), 551.

Boyden, E. S., Zhang, F., Bamberg, E., Nagel, G., & Deisseroth, K. (2005). Millisecond-timescale, genetically targeted optical control of neural activity. Nature Neuroscience, 8, 1263.

Chen, F., Tillberg, P. W., & Boyden, E. S. (2015). Expansion microscopy. Science, 347(6221), 543–548.

Chen, F., Wassie, A. T., Cote, A. J., Sinha, A., Alon, S., Asano, S., … Boyden, E. S. (2016). Nanoscale imaging of RNA with expansion microscopy. Nature Methods, 13, 679.

Chen, S., Weitemier, A. Z., Zeng, X., He, L., Wang, X., Tao, Y., … McHugh, T. J. (2018). Near-infrared deep brain stimulation via upconversion nanoparticle–mediated optogenetics. Science, 359(6376), 679–684.

Chung, K., & Deisseroth, K. (2013). CLARITY for mapping the nervous system. Nature Methods, 10, 508.

Constine, J. (2017). Facebook is building brain-computer interfaces for typing and skin-hearing. TechCrunch.

Cyranoski, D. (2017). China launches brain-imaging factory. Nature, 548(7667), 268–269.

Deadwyler, S. A., Hampson, R. E., Song, D., Opris, I., Gerhardt, G. A., Marmarelis, V. Z., & Berger, T. W. (2017). A cognitive prosthesis for memory facilitation by closed-loop functional ensemble stimulation of hippocampal neurons in primate brain. Experimental Neurology, 287, 452–460.

Deadwyler, S., Hampson, R., Sweat, A., Song, D., Chan, R., Opris, I., … Berger, T. (2013). Donor/recipient enhancement of memory in rat hippocampus. Frontiers in Systems Neuroscience.

Etherington, D. (2017). Elon Musk’s Neuralink wants to boost the brain to keep up with AI. TechCrunch.

Fact Sheet: BRAIN Initiative. (2013).

Flesher, S. N., Collinger, J. L., Foldes, S. T., Weiss, J. M., Downey, J. E., Tyler-Kabara, E. C., … Gaunt, R. A. (2016). Intracortical microstimulation of human somatosensory cortex. Science Translational Medicine.

Gunaydin, L. A., Yizhar, O., Berndt, A., Sohal, V. S., Deisseroth, K., & Hegemann, P. (2010). Ultrafast optogenetic control. Nature Neuroscience, 13, 387.

Hampson, R. E., Song, D., Robinson, B. S., Fetterhoff, D., Dakos, A. S., Roeder, B. M., … Deadwyler, S. A. (2018). Developing a hippocampal neural prosthetic to facilitate human memory encoding and recall. Journal of Neural Engineering, 15(3), 36014.

Han, X., & Boyden, E. S. (2007). Multiple-Color Optical Activation, Silencing, and Desynchronization of Neural Activity, with Single-Spike Temporal Resolution. PLOS ONE, 2(3), e299.

Hatmaker, T. (2017). DARPA awards $65 million to develop the perfect, tiny two-way brain-computer interface. TechCrunch.

Liu, J., Fu, T.-M., Cheng, Z., Hong, G., Zhou, T., Jin, L., … Lieber, C. M. (2015). Syringe-injectable electronics. Nature Nanotechnology, 10, 629.

Livet, J., Weissman, T. A., Kang, H., Draft, R. W., Lu, J., Bennis, R. A., … Lichtman, J. W. (2007). Transgenic strategies for combinatorial expression of fluorescent proteins in the nervous system. Nature, 450, 56.

Lomas, N. (2017). Superintelligent AI explains Softbank’s push to raise a $100BN Vision Fund. TechCrunch.

Mardinly, A. R., Oldenburg, I. A., Pégard, N. C., Sridharan, S., Lyall, E. H., Chesnov, K., … Adesnik, H. (2018). Precise multimodal optical control of neural ensemble activity. Nature Neuroscience, 21(6), 881–893.

Markram, H. (2006). The Blue Brain Project. Nature Reviews Neuroscience, 7, 153.

Markram, H., Muller, E., Ramaswamy, S., Reimann, M. W., Abdellah, M., Sanchez, C. A., … Schürmann, F. (2015). Reconstruction and Simulation of Neocortical Microcircuitry. Cell, 163(2), 456–492.

Marx, V. (2013). Neuroscience waves to the crowd. Nature Methods, 10, 1069.

Murakami, T. C., Mano, T., Saikawa, S., Horiguchi, S. A., Shigeta, D., Baba, K., … Ueda, H. R. (2018). A three-dimensional single-cell-resolution whole-brain atlas using CUBIC-X expansion microscopy and tissue clearing. Nature Neuroscience, 21(4), 625–637.

Nishiyama, J., Mikuni, T., & Yasuda, R. (2017). Virus-Mediated Genome Editing via Homology-Directed Repair in Mitotic and Postmitotic Cells in Mammalian Brain. Neuron, 96(4), 755–768.e5.

Oizumi, M., Albantakis, L., & Tononi, G. (2014). From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0. PLOS Computational Biology, 10(5), e1003588.

Okano, H., Miyawaki, A., & Kasai, K. (2015). Brain/MINDS: brain-mapping project in Japan. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 370(1668).

Poo, M., Du, J., Ip, N. Y., Xiong, Z.-Q., Xu, B., & Tan, T. (2016). China Brain Project: Basic Neuroscience, Brain Diseases, and Brain-Inspired Computing. Neuron, 92(3), 591–596.

Regalado, A. (2017). The Entrepreneur with the $100 Million Plan to Link Brains to Computers. MIT Technology Review.

Reimann, M. W., Nolte, M., Scolamiero, M., Turner, K., Perin, R., Chindemi, G., … Markram, H. (2017). Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function. Frontiers in Computational Neuroscience.

Seo, D., Neely, R. M., Shen, K., Singhal, U., Alon, E., Rabaey, J. M., … Maharbiz, M. M. (2016). Wireless Recording in the Peripheral Nervous System with Ultrasonic Neural Dust. Neuron, 91(3), 529–539.

Song, D., She, X., Hampson, R. E., Deadwyler, S. A., & Berger, T. W. (2017). Multi-resolution multi-trial sparse classification model for decoding visual memories from hippocampal spikes in human. In 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (pp. 1046–1049).

Suk, H.-J., van Welie, I., Kodandaramaiah, S. B., Allen, B., Forest, C. R., & Boyden, E. S. (2017). Closed-Loop Real-Time Imaging Enables Fully Automated Cell-Targeted Patch-Clamp Neural Recording In Vivo. Neuron, 95(5), 1037–1047.e11.

Szigeti, B., Gleeson, P., Vella, M., Khayrulin, S., Palyanov, A., Hokanson, J., … Larson, S. (2014). OpenWorm: an open-science approach to modeling Caenorhabditis elegans. Frontiers in Computational Neuroscience.

Zheng, Z., Lauritzen, J. S., Perlman, E., Robinson, C. G., Nichols, M., Milkie, D., … Bock, D. D. (2018). A Complete Electron Microscopy Volume of the Brain of Adult Drosophila melanogaster. Cell, 174(3), 730–743.e22.




Notes on Banach and Hilbert Spaces


PDF Version: Notes on Banach and Hilbert Spaces – Logan Thrasher Collins

  • Banach spaces are normed vector spaces with the property of completeness.
  • Hilbert spaces are normed vector spaces with the property of completeness for which the norm is determined via an inner product.
  • Hilbert spaces are always Banach spaces, but Banach spaces are not always Hilbert spaces.
  • A vector space is a set equipped with vector addition and scalar multiplication which also satisfies the following axioms for all scalars a,b and all vectors u,v,w.
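The referenced axioms, written out in standard form (for all a, b ∈ F and u, v, w ∈ V):

```latex
\begin{aligned}
& u + (v + w) = (u + v) + w && \text{(associativity of addition)} \\
& u + v = v + u && \text{(commutativity of addition)} \\
& \exists\, 0 \in V : v + 0 = v && \text{(additive identity)} \\
& \exists\, (-v) \in V : v + (-v) = 0 && \text{(additive inverse)} \\
& a(bv) = (ab)v && \text{(compatibility of scalar multiplication)} \\
& 1v = v && \text{(scalar multiplicative identity)} \\
& a(u + v) = au + av && \text{(distributivity over vector addition)} \\
& (a + b)v = av + bv && \text{(distributivity over field addition)}
\end{aligned}
```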


  • A norm on a vector space V over a field F (often the real or complex numbers) is a function p: V → ℝ defined below. The conditions which must be satisfied include (i) the triangle inequality, (ii) absolute scalability, and (iii) positive definiteness.
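In standard notation, for all vectors u, v ∈ V and all scalars a ∈ F:

```latex
p : V \to \mathbb{R}, \qquad
\begin{cases}
p(u + v) \le p(u) + p(v) & \text{(i) triangle inequality} \\
p(av) = |a|\, p(v) & \text{(ii) absolute scalability} \\
p(v) \ge 0, \quad p(v) = 0 \iff v = 0 & \text{(iii) positive definiteness}
\end{cases}
```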


  • Norms are often denoted by double bars as ‖·‖, with the dot indicating an arbitrary argument. Vectors and functions of vectors can also be placed between the bars to indicate the norm of said vector or vector-valued function.
  • The inner product on a vector space V is a function ⟨·,·⟩ which takes two elements of the vector space and maps them to the field F over which the vector space is defined. An inner product must satisfy the axioms of (i) conjugate symmetry, (ii) linearity in the first argument, and (iii) positive definiteness.
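In standard notation, for all u, v, w ∈ V and scalars a, b ∈ F:

```latex
\begin{aligned}
& \langle u, v \rangle = \overline{\langle v, u \rangle} && \text{(i) conjugate symmetry} \\
& \langle au + bv, w \rangle = a\langle u, w \rangle + b\langle v, w \rangle && \text{(ii) linearity in the first argument} \\
& \langle v, v \rangle \ge 0, \quad \langle v, v \rangle = 0 \iff v = 0 && \text{(iii) positive definiteness}
\end{aligned}
```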


  • Note that the linearity axiom of an inner product is sometimes defined with respect to the second argument (rather than the first), particularly in physics disciplines.
  • When an inner product is defined, the norm and the inner product are related by the following formula.
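Namely:

```latex
\lVert v \rVert = \sqrt{\langle v, v \rangle}
```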


  • Completeness is a mathematical property of metric spaces. Normed vector spaces are a type of metric space, though vector spaces in general are not necessarily metric spaces. A metric space M is called complete if every Cauchy sequence of points in M converges to a limit L which is also in M.
  • Cauchy sequences are sequences (x_n)_{n∈ℕ} within normed vector spaces for which distinct terms can be made arbitrarily close to each other by going far enough into the sequence. This is expressed by the following relation, where ε denotes a distance.
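In symbolic form:

```latex
\forall\, \varepsilon > 0 \;\; \exists\, N \in \mathbb{N} \;\; \forall\, m, n > N : \;\; \lVert x_m - x_n \rVert < \varepsilon
```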


  • Convergent sequences are sequences (x_n)_{n∈ℕ} within normed vector spaces for which the following holds. For every distance ε, there exists an index N∈ℕ such that all terms beyond N have a distance to the limit L less than ε. This is expressed below in symbolic form.
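In symbolic form:

```latex
\forall\, \varepsilon > 0 \;\; \exists\, N \in \mathbb{N} \;\; \forall\, n > N : \;\; \lVert x_n - L \rVert < \varepsilon
```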


  • Every convergent sequence is a Cauchy sequence and every convergent sequence has a unique limit.
  • Some examples of Banach spaces include (ℝ, |·|), (ℂ, |·|), and (ℝ^d, ‖·‖₂). For the first two cases, the norm is given by the absolute value |x| of a real or complex number x. The third case uses the Euclidean norm ‖x‖₂ = (x₁² + x₂² + … + x_d²)^(1/2), where the x_i are the components of a d-dimensional vector x.
  • Some examples of Hilbert spaces include ℝⁿ with the vector dot product ⟨u,v⟩, ℂⁿ with the dot product ⟨u,v⟩ in which the complex conjugate of the second argument is taken, and the infinite-dimensional Hilbert space L² consisting of all real-valued functions for which the inner product given below does not diverge.
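In the standard form, with the integral taken over the real line:

```latex
\langle f, g \rangle = \int_{-\infty}^{\infty} f(x)\, g(x)\, dx, \qquad
f \in L^2 \iff \langle f, f \rangle = \int_{-\infty}^{\infty} f(x)^2\, dx < \infty
```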





Panpsychic epistemology: a physical basis for knowledge and justification


     Modern epistemology has generated a myriad of debatable conclusions from assumptions about knowledge and justification. For instance, René Descartes used his epistemological propositions to develop arguments for substance dualism, a philosophy which classifies the mind and the material as distinct substances (Descartes, Cottingham, & Williams, 1996). However, the possibility of an immaterial substance grows less and less likely with increased understanding of neuroscience and physics (Barrett, 2014; Oizumi, Albantakis, & Tononi, 2014). Given the almost certainly causal relationship between physical influences on the brain and cognitive changes, substance dualism seems highly unlikely at best. In addition, the existence of an immaterial substance may be inherently contradictory since any immaterial substance capable of interacting with the material world would gain materiality by virtue of its communication with the material (Poland, 1994). John Locke developed an epistemology in which he suggested that innate Ideas do not exist (Locke, 1836). But contemporary scientific data shows that many organisms do exhibit instinctual behavior patterns from birth, contradicting Locke’s position (Batki, Baron-Cohen, Wheelwright, Connellan, & Ahluwalia, 2000; Manoli, Meissner, & Baker, 2006; Schoolland, 1942). I propose that physical panpsychism may entail a form of revised empiricism, circumvent the issues associated with modern epistemology, and provide a superior framework for understanding the basis of knowledge and justification.

     Interest in panpsychism is rekindling among contemporary thinkers as an explanation for consciousness (Strawson, 2006). Integrated information theory or IIT (Oizumi et al., 2014) is a mathematical formulation which seeks to quantify consciousness using the information arising from dynamical systems. Galen Strawson has argued that IIT implies panpsychism since all physical structures contain some amount of information (Strawson, 2006). Furthermore, Adam Barrett has proposed modifications to IIT which may help account for fundamental physical interpretations of the universe like quantum field theory (Barrett, 2014). Panpsychic descriptions of reality are reentering philosophical and scientific discourse as new data are acquired and new theoretical interpretations develop.

     But many still view the idea that inanimate objects may possess primitive qualia as ludicrous. To counter this presumption, consider a fragment of quartz resting on a ridge. As the sun rises, photons excite the atoms on the crystal’s surface, causing thermal oscillations to propagate into the quartz. This thermal diffusion is modulated by crystallographic defects, causing a heterogeneous distribution of heat inside the rock. As dusk falls, the quartz fragment begins to cool, emitting heat at varying rates across the surface. The particular rates are influenced directly by this quartz specimen’s pattern of internal defects. Next, consider a mouse, also located on the ridge. As the sun rises, photons excite the retinaldehyde molecules in the mouse’s eyes, triggering signal transduction via electrochemical systems. This signal moves into the mouse’s brain, where it propagates through a series of neural pathways, causing a heterogeneous distribution of neural activity. Soon, the signal’s interaction with preexisting brain structures is translated into a motor action; the mouse blinks and looks away from the bright illumination. The particular motor response is modulated by the structural organization of this mouse’s brain at the given time. The quartz and the mouse both receive sensory inputs, process them according to internal properties, and then give motor outputs. Although the rock’s “brain” is much more disorganized and chaotic than the mouse’s brain, it operates by the same basic principles and could plausibly experience a primitive form of consciousness. As such, the possibility of panpsychism cannot be readily dismissed as absurd or metaphysical.

     Another prominent objection to panpsychism arises from brain processes which occur in a subconscious fashion. For instance, activity in the primary visual cortex (V1) does not correlate with conscious visual experience except for a few special cases (Boehler, Schoenfeld, Heinze, & Hopf, 2008; Boyer, Harrison, & Ro, 2005; Crick & Koch, 1995). However, the presence of subconscious neural events does not necessarily indicate that the said events are subconscious from the viewpoint of their associated anatomies. Instead, anatomical structures like V1 may experience their own independent qualia. The full informational content of their perceptions may not be transmitted or translated into the brain areas like the prefrontal cortex (PFC) which are often identified with a patient’s sense of self (Mitchell, Banaji, & Macrae, 2005). Of course, some data does transfer into higher brain regions to facilitate processes like vision, but the information undergoes an extensive series of modifications before arriving at the PFC and other regions associated with conscious processing. For this reason, the “unconscious anatomies” objection is insufficient to invalidate panpsychism.

     An alternative formulation of empiricism may hold in a panpsychic universe. If all matter contains at least some primitive level of consciousness, then all channels of information transfer might be regarded as sensory. To understand this, consider the lateral geniculate nucleus (LGN), a neural structure which behaves as an intermediary point between the retina and V1 (Tortora & Derrickson, 2013). In the context of rationalism and empiricism, some may claim that only the retina provides sensory information. However, I would argue that this represents an arbitrary and ultimately false distinction. Rather, I propose that every point at which data undergoes some transformation (including spatial translation) represents its own “slice” of sensory processing. As such, the LGN itself may act as a sensory organ which receives data from afferent retinal ganglion cells.

     Much finer-grained slices of matter may contain slices of sensory processing as well. Nanoscale distances along axons contribute to the propagation of action potentials via the movement of ions. Whenever information undergoes transfer over even a minuscule distance, it acts as a sensory input into the adjacent spatial structure. In this way, the entire cosmos can be thought of as a series of interlinked sensory organs. Depending on whether physics exhibits fundamentally discrete or continuous character, there may or may not be a way of partitioning individual sensory quanta into distinct structures. It should be noted that the description of physics as sensory might be a slight misrepresentation since the term “sensory” carries connotations related to more traditional physiological concepts of sensory organs. Although every data-transfer event can be thought of as a sensory input, describing such events as a form of information processing is equally valid. Nonetheless, for the purposes of dissecting rationalism and empiricism, it is useful to describe the universe as sensory. If all data live in a sensory field, then all knowledge and justification must be sensory. As such, empiricism holds but possesses radically different properties compared to the versions espoused by most modern empiricists.

     Many distinct subsets of the universe experience intercommunication, though the level of said intercommunication is often limited. To visualize this, consider a hypothetical pair of humans named Nathaly and Ernesto. Nathaly does not possess the ability to read Ernesto’s mind. However, Ernesto may explain the content of his mind verbally. Despite this explanation representing an incomplete dataset which undergoes heavy transformation as he maps the information into sound waves and Nathaly maps the sound waves back into patterns of neural firing, some of Ernesto’s mental content is roughly reconstructed in Nathaly’s brain. I suggest that this type of interaction lies at the heart of knowledge and justification. Informational connectivity provides an empirical link by which subsystems of reality create representations of other subsystems and gain knowledge of the universe.

     On a larger scale, knowledge and justification may permeate the cosmos, but in a mostly stochastic and primitive fashion. Any given subset of reality may possess some knowledge and justification about itself and about other subsets of reality. However, most structures likely exhibit extremely limited understanding of external data. For example, consider a cloud of dust particles drifting in outer space. The cloud may possess a very primitive level of consciousness. I speculate that such an entity’s experience may resemble the static white noise which sometimes appears on television screens. This cloud may acquire knowledge and justification by thermal excitation from incoming starlight. The thermal excitation represents a direct causal influence from the stars onto the dust cloud. As a result, the cloud’s state undergoes an alteration with a direct relationship to the states of the stars. Of course, the cloud’s cognitive simulation of the distant stars is vastly different from the kind of cognitive simulation a human would experience, though new qualia likely occur in the cloud relative to its unexcited state. Nonetheless, the cloud possesses a form of knowledge. To extend this concept, the cloud only has a very specific form of justification for this knowledge. The cloud gains empirical justification that it experiences a state change, but does not possess the level of cognitive machinery necessary to develop a veridical epistemic model of the distant stars. Subsystems within the universe continuously generate empirical models of other subsystems, but the epistemic precision of said models varies depending on the cognitive construction of the given subsystem.

     More intelligent structures may possess superior epistemic abilities. Here, I specifically refer to the type of intelligence which allows a system to create accurate representative models of other structures. As an illustrative example, imagine that a group of engineers builds an “empathy machine.” The empathy machine moves about and gathers cellular-resolution scans of people’s functional brain activity. After a scan, the empathy machine simulates the person’s brain with exquisite detail, creating a near-perfect replica of that person’s experiences. In this way, the empathy machine can acquire veridical knowledge of other subsystems. Its knowledge and justification are empirical, the product of sensory experience, but possess far more accuracy than the dust cloud’s knowledge from the scenario described previously. Recall that sensory experience extends beyond sensory organs and includes every data-transfer event within an intelligent system. Note that the veridicality of knowledge and justification may or may not be practically important. I emphasize veridicality here in order to provide a descriptive philosophical classification scheme rather than to make claims about epistemic virtue. The empathy machine shows that appropriately organized systems may acquire fairly complex and veridical information about each other.

     Although the majority of subsystems within the universe only propagate distorted versions of their own experiential patterns into neighboring subsystems, the cosmos may have the potential for vastly greater connectivity. In a possible future scenario, the universe may reach a point at which it is saturated with engineered intelligence (Kurzweil, 2005). If all matter is incorporated into programmable computronium structures, intercommunication processes may allow any subset of reality to accurately simulate any other subset of reality. Distant subsets may communicate with relative efficiency by altering the spacetime metric since changing distance itself does not violate the lightspeed barrier (Alcubierre, 1994). Even in this scenario, simultaneous sensory knowledge of all states in the cosmos may present a challenge since a limited amount of computational resources can be packed into given spatial bounds (Bremermann, 1967) and so a limited amount of knowledge could undergo simulation at any given spatiotemporal location. However, a possibility remains for the universe to behave as an enormous brain, partitioning its attentional mechanisms to different subsets of the total knowledge and justification as the need arises. Furthermore, an entity on this scale would likely have abilities which current humans cannot conceptualize, so the constraints involved might be very different from those described here. This extreme situation demonstrates that the most crucial limit to empirical knowledge and justification arises from informational connectivity and the capacity for state emulation.

     Using panpsychism as an explanation for the existence of consciousness in a physical universe, an alternative form of empiricism emerges. Sensory knowledge can be reframed as events by which information undergoes transformation. For this reason, any cognitive process might be considered empirical, regardless of whether said process is actually associated with traditional sensory organs. Distinct subsystems within the cosmos gain knowledge and justification by receiving data from each other and undergoing state alterations which modify their qualia. Veridicality is achieved when the state alterations emulate the states of the structure which induced the change. If the entire universe was engineered to maximize informational connectivity, it may acquire a form of omniscient or nigh-omniscient understanding in which any knowledge can be accessed and understood as needed. Panpsychic epistemology provides a new perspective on knowledge and justification which may facilitate the convergence of scientific and philosophical approaches to epistemological questions.


Alcubierre, M. (1994). The warp drive: hyper-fast travel within general relativity. Classical and Quantum Gravity, 11(5), L73.

Barrett, A. (2014). An integration of integrated information theory with fundamental physics. Frontiers in Psychology.

Batki, A., Baron-Cohen, S., Wheelwright, S., Connellan, J., & Ahluwalia, J. (2000). Is there an innate gaze module? Evidence from human neonates. Infant Behavior and Development, 23(2), 223–229.

Boehler, C. N., Schoenfeld, M. A., Heinze, H.-J., & Hopf, J.-M. (2008). Rapid recurrent processing gates awareness in primary visual cortex. Proceedings of the National Academy of Sciences, 105(25), 8742–8747.

Boyer, J. L., Harrison, S., & Ro, T. (2005). Unconscious processing of orientation and color without primary visual cortex. Proceedings of the National Academy of Sciences of the United States of America, 102(46), 16875–16879.

Bremermann, H. J. (1967). Quantum noise and information. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 4: Biology and Problems of Health (pp. 15–20). Berkeley, Calif.: University of California Press.

Crick, F., & Koch, C. (1995). Are we aware of neural activity in primary visual cortex? Nature, 375(6527), 121–123.

Descartes, R., Cottingham, J., & Williams, B. (1996). Descartes: Meditations on First Philosophy: With Selections from the Objections and Replies. Cambridge University Press.

Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

Locke, J. (1836). An essay concerning human understanding. T. Tegg and Son.

Manoli, D. S., Meissner, G. W., & Baker, B. S. (2006). Blueprints for behavior: genetic specification of neural circuitry for innate behaviors. Trends in Neurosciences, 29(8), 444–451.

Mitchell, J. P., Banaji, M. R., & Macrae, C. N. (2005). The Link between Social Cognition and Self-referential Thought in the Medial Prefrontal Cortex. Journal of Cognitive Neuroscience, 17(8), 1306–1315.

Oizumi, M., Albantakis, L., & Tononi, G. (2014). From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0. PLOS Computational Biology, 10(5), e1003588.

Poland, J. (1994). Physicalism, the Philosophical Foundations (Vol. 57). Oxford University Press.

Schoolland, J. B. (1942). Are there any innate behavior tendencies? Genetic Psychology Monographs, 25, 219–289.

Strawson, G. (2006). Realistic monism: why physicalism entails panpsychism. Journal of Consciousness Studies, 13(10–11), 3–31.

Tortora, G. J., & Derrickson, B. H. (2013). Principles of Anatomy and Physiology, 14th Edition: 14th Edition. Wiley Global Education.


Algebraic Graph Mappings


an exploration by Logan Thrasher Collins.

     Algebraic manipulation of mathematical objects provides a rich array of tools for deriving insights about said objects. Graph theory concerns networks of vertices and edges and finds applications in diverse areas such as computer science, quantitative sociology, electrical engineering, and neurobiology. The use of algebraic methods in graph theory has received surprisingly little attention. While mappings are sometimes studied in graph theory, such transformations are often considered in the abstract, without a method for describing the content of any given mapping. Here, I define an algebraic framework for specifying transformations between graphs. This framework might be extended to further capitalize upon the tools of analysis and abstract algebra, leading to potential new paths of investigation.

     Let G and H be simple, labeled graphs. The transformation ϕ: 𝔾 → 𝔾 is a mapping which transforms G into H. (Here, 𝔾 represents the set of all simple, labeled graphs.) To specify the precise content of this process, I define a general formula for graph mappings. Note that any term may take on either a positive or a negative value. Positive terms indicate the creation of a new vertex or edge within H, while negative terms indicate the removal of an existing vertex or edge from G. In this formula, v∈{V(G),V(H)} and e∈{E(G),E(H)}.
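One schematic way to write such a formula (a hedged sketch; the notation here is illustrative rather than exact) is as a signed sum of vertex and edge terms:

```latex
H \;=\; \phi(G) \;=\; G \;+\; \sum_{i} \pm\, v(i) \;+\; \sum_{j,k} \pm\, e(j,k)
```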


In order for the equation above to be defined, any edge must connect a pair of vertices in H. That is, no edge e can be created without also creating a pair of corresponding vertices that are connected by e. Below, I will provide a specific example of a mapping between graphs to show this formula’s mechanics.



Figure 1 Illustrative example of an algebraic graph mapping ϕ(G)=H and its corresponding labeled graphs.

     The algebraic manipulations enabled by this approach remain somewhat limited since every new vertex and edge and every deleted vertex and edge must be explicitly specified. However, as with manipulations of other mathematical objects, taking advantage of patterns can allow abstraction to larger and more complicated systems. For example, sequences and series are often utilized for such purposes in the field of analysis. To move towards analogous investigations on graphs, I will describe a framework for graph labeling which eases the manipulation of complex networks using my algebraic technique.

     Let 𝒵 represent a set of “potential vertices” equipped with labels corresponding to the integers. It should be noted that this setup allows infinitely many isomorphic graphs with different labelings to be constructed. Each point in 𝒵 is labeled as x∈ℤ. The points x∈𝒵 define the set of possible vertices v∈H which might be created when ϕ operates upon G. In this way, symbolic extrapolations to large networks can be made.


Figure 2 (A) The layout of “potential vertices” described by 𝒵 with bounds x∈[-5,5]. (B) An example of a labeled graph defined on 𝒵 with the same bounds.

     Using the labeling scheme 𝒵, functions on ℤ can facilitate the construction of complex networks according to defined rules. Let the variable x take on integer values such that x∈ℤ. To map between graphs on 𝒵, use functions which satisfy F𝒵:ℤ→ℤ to specify the vertices and edges of H. Note that such functions will create infinite graphs unless bounds are defined on x. To facilitate the construction of networks, this scheme will follow three useful rules. (1) If repeated vertices and edges are created, the copies of the repeated vertices and edges will be ignored. (2) If a nonexistent vertex or edge is subtracted, giving a negative vertex or edge, the negative object will be ignored. (3) Any edges or vertices which do not belong to G or H will be ignored.
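The three rules can be sketched in Python (an illustrative implementation with hypothetical function and argument names; sets naturally enforce rules 1 and 2):

```python
def apply_mapping(vertices, edges, add_v=(), del_v=(), add_e=(), del_e=()):
    """Apply a graph mapping phi(G) = H following the three rules above."""
    V = set(vertices) | set(add_v)                 # rule 1: repeated vertices collapse (set union)
    V -= set(del_v)                                # rule 2: subtracting an absent vertex is a no-op
    E = {frozenset(e) for e in edges} | {frozenset(e) for e in add_e}
    E -= {frozenset(e) for e in del_e}
    E = {e for e in E if e <= V}                   # rule 3: edges must join existing vertices
    return V, E
```

For example, starting from a single vertex 0, adding vertices 1 and 2 with edges (0,1) and (1,2), and then deleting vertex 2 leaves the vertex set {0, 1} and the single edge {0, 1}, since the edge (1,2) loses an endpoint and is dropped by rule 3.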



Figure 3 (A) The graph G consists of a single vertex v(0). (B) The function F𝒵 maps G to a graph H using the labeling scheme 𝒵. Edges are created linking v(3) and the rest of the vertices, all edges linking v(3) with negative-labeled vertices are removed, and then all edges linking v(0) and any existing negative-labeled vertices are created. (C) Isomorphism on H to provide a cleaner visualization of the result.

     Here, I have developed an algebraic toolset for describing mappings between graphs. This toolset allows the creation of new vertices and edges as well as the removal of existing vertices and edges. To allow rapid generation of networks with many vertices and edges, I have provided a method for consistently labeling graphs that works synergistically with the core algebraic technique. This method may provide the basis for more generalized explorations which utilize such mathematical resources as groups and group-like structures. Because the method lends itself to creating infinite graphs, it may also stimulate investigation from the perspective of analysis using concepts like limits and convergence. The relationships between my algebraic graph mappings and matrix-based representations of graphs may also provide new territory. Applied computational fields may benefit from the mapping technique since it provides an alternative set of resources for investigating and modeling networks. Algebraic graph mappings may provide a nucleation point for a myriad of possible research directions.


Disclaimer: because I come from a biological background and have very limited formal mathematical training, this text may appear quite rough to more mathematically-experienced individuals. However, I feel that the technique has potential to develop further and be applied to problems in science and engineering. I would welcome any constructive critiques so long as they are presented in a respectful manner.

Notes on Abstract Algebra


Binary Operations

  • Definition: a binary operation is an operation performed between any pair of elements from a set S that maps into the same set S.


  • The mapping must be defined for every pair of elements in S and uniquely assign each pair of elements in S to another element in S. (The result may equal one of the elements being operated upon.)
  • Addition on the set of real numbers is an example of a binary operation, since adding any two real numbers gives another real number: ∀ a, b ∈ ℝ, a + b ∈ ℝ. Note that the “upside-down A” symbol ∀ denotes “for all.”


  • Addition on the set of complex numbers as well as addition on the set of integers are binary operations.
  • Matrix addition on the set of all matrices with real entries is not a binary operation since matrix addition is undefined for matrices with different numbers of rows or columns.
  • However, matrix addition on the set of all 2×2 matrices with real entries is a binary operation since adding any given pair of 2×2 matrices will generate another 2×2 matrix. The same holds for the set of all 2×2 matrices with complex entries.
  • Subtraction on the set of natural numbers (1,2,3,4…) is not a binary operation because some results will fall outside of the natural numbers (for instance, negative values).


  • Binary operations can take more exotic forms. Any operation that fits the definition of a binary operation is a binary operation.
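As a computational illustration of the definition (a hypothetical helper, not part of the original notes), a short Python check can test whether an operation is defined for every pair in a finite set and maps back into that set:

```python
from itertools import product

def is_binary_operation(S, op):
    """Check that op is defined for every pair in S and maps back into S."""
    try:
        return all(op(a, b) in S for a, b in product(S, repeat=2))
    except Exception:
        return False  # op is undefined for some pair of elements

Z5 = set(range(5))                      # integers mod 5
addition_mod5 = lambda a, b: (a + b) % 5

naturals_to_4 = {1, 2, 3, 4}
subtraction = lambda a, b: a - b        # 1 - 4 = -3 falls outside the set

is_binary_operation(Z5, addition_mod5)       # True
is_binary_operation(naturals_to_4, subtraction)  # False
```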


  • A group G is an algebraic structure consisting of a finite or infinite set and a binary operation which acts between any pair of the elements in that set. Groups have the following properties.
  • Closure: given that a and b are in G, the group operation generates a result c that is also in G. (Note that all binary operations are defined to have closure regardless of whether they belong to a group).


  • Associativity: the group operation ∗ is associative: (a ∗ b) ∗ c = a ∗ (b ∗ c) for all a, b, c ∈ G.


  • Identity: there is an identity element e such that e ∗ a = a ∗ e = a for every a ∈ G.


  • Inverse: every element a ∈ G must have an inverse a⁻¹ ∈ G such that a ∗ a⁻¹ = a⁻¹ ∗ a = e.


  • As such, groups are sets equipped with a generalized addition operation.
  • To prove that G is a group, simply demonstrate in general terms (by manipulating symbols rather than specific values) that the properties above hold for all elements of G. Note that the identity element e may need to be algebraically solved for as a separate exercise from the proof itself.
  • Abelian groups are groups for which the group operation is commutative on the set G: a ∗ b = b ∗ a for all a, b ∈ G.
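The group axioms can be checked mechanically on a finite set. This Python sketch (an illustrative helper, not from the original notes) verifies closure, associativity, identity, and inverses, using ℤ/5 under addition mod 5 as an example:

```python
from itertools import product

def is_group(G, op, e):
    """Verify the group axioms for a finite set G under operation op with identity e."""
    closure = all(op(a, b) in G for a, b in product(G, repeat=2))
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a, b, c in product(G, repeat=3))
    identity = e in G and all(op(e, a) == a and op(a, e) == a for a in G)
    inverses = all(any(op(a, b) == e and op(b, a) == e for b in G) for a in G)
    return closure and assoc and identity and inverses

Z5 = set(range(5))
add_mod5 = lambda a, b: (a + b) % 5

is_group(Z5, add_mod5, 0)                     # True: (Z/5, +) is a group
is_group({0, 1, 2}, lambda a, b: a * b, 1)    # False: 2*2 = 4 breaks closure
```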


  • Some group-like structures commonly occur in abstract algebra; in order of increasing structure, a magma requires only closure, a semigroup adds associativity, a monoid adds an identity element, and a group adds inverses.



  • Many number systems (e.g. the set of real numbers, the set of polynomials, etc.) share the binary operations of addition and multiplication.
  • Addition is a binary operation with the properties of closure (by definition of a binary operation), associativity, additive identity a + 0 = 0 + a = a, additive inverse a + (−a) = (−a) + a = 0, and commutativity. Addition forms an Abelian group on a set where it is defined.
  • Multiplication is a binary operation with the properties of closure (by definition of a binary operation), associativity, multiplicative identity a•1 = 1•a = a, multiplicative inverse a•(1/a) = (1/a)•a = 1 for a ≠ 0, and distributivity of multiplication over addition a•(b + c) = a•b + a•c and (a + b)•c = a•c + b•c. Multiplication is commutative on some sets (e.g. the reals) but not others (e.g. real 2×2 matrices). Note that when the set contains 0, multiplication cannot form a group because 0 has no multiplicative inverse.
  • A ring is a set R equipped with the binary operations of addition and multiplication, satisfying the following properties: (R, +) is an Abelian group with additive identity 0; multiplication is associative; ∃ a multiplicative identity 1 ∈ R such that 1•a = a•1 = a; and multiplication distributes over addition on both sides. Note that the “backwards E” symbol ∃ denotes “there exists.” The elements denoted by 1 and 0 are generalized and do not always refer to numbers.


  • Some examples of rings include the set of real numbers equipped with addition and multiplication, the set of complex numbers equipped with addition and multiplication, the set of rational numbers equipped with addition and multiplication, the set of polynomials R[x] equipped with addition and multiplication, and the set of square matrices equipped with addition and multiplication. Note that, since rings do not require a multiplicative inverse or commutativity with respect to multiplication, they can be defined on a wider variety of sets than would be possible otherwise. A ring whose multiplication is not assumed to be commutative is sometimes called a noncommutative ring.
  • Commutative rings are rings which have the additional property that multiplication is commutative: a•b = b•a for all a, b ∈ R.



  • Consider a ring R containing elements a ≠ 0 and b ≠ 0. The element a is called a zero divisor if a•b = 0 or b•a = 0.


  • Commutative rings without any zero divisors are called integral domains. Some examples of integral domains include ℝ, ℂ, ℤ, and ℚ.
  • Multiplicative cancellation holds for integral domains: if a ≠ 0 and a•b = a•c, then b = c. Note that here R denotes a ring and not the real numbers.
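A quick computational illustration (hypothetical code, not from the original notes): ℤ/6 is a commutative ring but not an integral domain, since for instance 2•3 ≡ 0 (mod 6). This Python snippet enumerates the zero divisors of ℤ/n:

```python
# Find zero divisors in Z/n: nonzero a such that a*b ≡ 0 (mod n) for some nonzero b.
def zero_divisors(n):
    return sorted({a for a in range(1, n)
                   for b in range(1, n) if (a * b) % n == 0})

zero_divisors(6)  # 2*3 ≡ 0, 3*4 ≡ 0, so 2, 3, and 4 are zero divisors
zero_divisors(7)  # a prime modulus gives no zero divisors
```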


  • Consider a ring R and a nonzero element a ∈ R. If there also exists an element b ∈ R such that ab = ba = 1, then the elements a and b are multiplicative inverses of each other and are called invertible. Invertible elements of R are also called units of the ring R.
  • If all the nonzero elements of an integral domain are invertible, then the integral domain is called a field. As such, fields are sets equipped with a division operation as well as addition and multiplication operations.
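As a small illustration (hypothetical code, not from the original notes): ℤ/n is a field exactly when every nonzero element has a multiplicative inverse mod n, which happens precisely when n is prime.

```python
def is_field_mod(n):
    """Z/n is a field iff every nonzero element has a multiplicative inverse mod n."""
    return all(any((a * b) % n == 1 for b in range(1, n))
               for a in range(1, n))

is_field_mod(5)  # True: Z/5 is a field (5 is prime)
is_field_mod(6)  # False: e.g. 2 has no inverse mod 6
```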


Reference: A Gentle Introduction to Abstract Algebra