Author: logancollins

Emotion as a Property of Information Part 1: The Physical Basis for Panpsychism


     When considering the fundamental substance of the universe, I am inclined to propose an information-based description. All physics may arise as a consequence of informatic processes. To unify this description with the experience of consciousness, I suggest that information and consciousness are synonymous. This would support panpsychism, the idea that “everything has at least some level of consciousness.”

     To explain more completely, consider a rock. Any rock encodes information in its crystal structure. When sunlight hits the rock, energy is transferred into the atoms on the stone’s surface. From there, the energy propagates through the rock as heat. The pattern of thermal diffusion is dependent on factors such as the distribution of temperature in the rock at t=0 (immediately prior to contact with the sunlight), the relative densities and compositions of different parts of the rock, and the types of impurities present in the rock. As the sunlight shifts away from the rock, leaving it in shadow once again, heat begins to radiate back out. Depending on the processing inside the rock, the heat will radiate from different patches on its surface at different rates.

     Now consider a human. When light contacts the human’s retina, a signal is transduced by opsin proteins and a cis-trans isomerization of the cofactor retinal. After several more steps, a signal encoding the pattern of light on the retina is transferred into the brain, where it is processed by an elaborate series of excitatory and inhibitory neural interactions. These neural processes take into account the individual’s past experiences, other sensory information, and more. The data is repeatedly transformed until it yields instructions for a motor response, perhaps turning the head away or blinking.

     The rock and the human are similar in that they both are subsystems of the universe that take in data, transform it depending on internal structures, and generate some output. Of course, the rock does not experience the world in the same way that humans or even insects do. The rock’s experience is far more primitive. Compared to most biological organisms, rocks possess poor memories. The rock can store some hazy memories in its distribution of residual thermal energy from a previous encounter with heat, but these data are highly disorganized and difficult to retrieve in a form that resembles the original heat stimuli. Consequently, a rock probably lives “in the moment” and does not reflect upon its past experiences. Perhaps the stone experiences a fuzzy, often randomly changing, procession of sensations and mild swells of emotion, never really pausing to consider their implications.

     By comparison, a human will experience more directed responses to specific stimuli. If a human sees someone she knows, some brain regions will be predictably activated. However, the human brain’s output is dependent on all current sensory information as well as its state at time t, leading to a colossal space of possible responses to an individual stimulus. The brain’s structure evolves over time as experiences accumulate, leading to variable responses even given identical sensory data. Unlike the rock, humans recall past events and so construct a continuous temporal context. With this context, humans can reflect upon their own experiences as well as predict future events.

     Given these parallels between biological and non-biological information processing, I suggest that physical panpsychism may represent an accurate description of reality. This could provide a generalizable path to the neural correlates of consciousness, in which specific patterns of information are synonymous with specific conscious experiences. For instance, stable positive feedback loops might be involved in positive emotions like curiosity, excitement, and love. Of course, the human brain’s vast meshwork of data-transforming pathways gives rise to far more nuanced types of curiosity, excitement, and love than could be generated by an individual positive feedback loop. However, I would postulate that stable positive feedback loops could form the backbone for more complicated sentiments. It should be noted that some positive feedback loops can give rise to negative emotions (e.g., as in OCD). In these cases, the positive feedback might be coupled to other patterns of information which possess intrinsically unpleasant properties, overriding the intrinsic goodness of the loop. Another item to note is that positive feedback loops might only retain their pleasantness for as long as they are stable. If the loop can no longer reproduce or propagate its pattern (such as during habituation processes), the positive emotions may begin to fade. With this method of understanding consciousness, I argue that information structures may correspond to emotions in a quantifiable manner.


Notes on Topology


Topological Spaces and Topologies

Given a set X and a collection of its subsets τ (this τ is a topology on X), a topological space on X is denoted by (X,τ) and follows the rules below.

  • X and the empty set are contained in τ.


  • Any union of subsets in τ is contained in τ. The union may be taken over infinitely many subsets.


  • Any finite intersection of subsets in τ is contained in τ.



  • Given a topological space (X,τ) containing an element A, the neighborhood N_A is any subset of X such that N_A contains an open set of τ which itself contains A.
  • Often, neighborhoods are defined around a point p in X. In these cases, the neighborhood N_p is any subset of X such that N_p contains an open set of τ which itself contains p.
  • To visualize neighborhoods, consider a subset V of a plane X. The subset V contains a point p. If an arbitrarily small disk around p fits inside V, then V is a neighborhood of p.
  • Note that, for a given set S and any point p on the boundary of the set, that set cannot be a neighborhood of p.


Open Sets and Closed Sets 

  • In practice, open sets do not include their boundaries and closed sets do include their boundaries. However, more complicated definitions exist.
  • The union of any number of open sets is itself open.
  • The intersection of a finite number of open sets is itself open.
  • The complement of an open set relative to the rest of the space is a closed set.
  • Some sets may be both open and closed. Examples of such sets are the entire topological space X and the empty set.

Closure, Interior, and Boundary of a Set

  • Informally, the boundary of a subset S includes the points on its “outline.”
  • Slightly more formally, the boundary consists of the points for which every open ball around the point contains points both inside S and outside S.


  • The interior of a subset S includes all points in S that are not part of its boundary.
  • The closure of a subset S on a topology is defined as the union of the boundary points of S and the interior points of S.

Topological Bases

  • A basis B for a topology τ on X is a family of subsets of X such that every open subset of X is the union of some members of B.
  • The empty set arises as the union of no members of B (the empty union), so it need not belong to B itself.
  • For any two base elements B1 and B2 and any point x in their intersection, there must be some base element B3 containing x that lies entirely inside the intersection of B1 and B2.
  • Many bases may generate the same topology.
  • A common example of a basis is the set of all possible open balls (in 2D they actually are disks, though they are called balls no matter the dimensionality) that union to form a plane.
  • Below, just some of the (infinite) open balls forming the basis for a topology consisting of a 2D shape are illustrated.


Continuous Functions between Topological Spaces: Homeomorphisms

  • Functions between topological spaces generalize the notion of real and complex-valued mappings to any rule that assigns abstract objects in a domain to abstract objects in a codomain.
  • Continuous functions between topological spaces are called homeomorphisms when they are bijective and the inverse of the given mapping is also continuous.
  • Given a mapping from a topological space (X,τ) to a topological space (Y,σ), the mapping is continuous if and only if the preimage of any open set in the codomain is open in the domain.
  • Another property to note is that homeomorphisms are bijective (each element in the domain maps to exactly one element in the codomain, and every element in the codomain is the image of exactly one element in the domain).


  • Given that (X,τ) and (Y,σ) are homeomorphic, any topological property (such as connectedness) of one space must be shared by both spaces.
  • If (X,τ) and (Y,σ) differ in even a single topological property, then they are not homeomorphic.
  • The common joke about how topologists consider donuts and coffee cups to be the same arises because those items are homeomorphic (if they are assumed to be topological spaces).

Hausdorff Spaces, Compactness, Paracompactness, and σ-Compactness

  • Hausdorff spaces are topological spaces in which any two distinct points have disjoint neighborhoods. Disjoint neighborhoods are neighborhoods that do not share any elements (the intersection of disjoint neighborhoods U and V is the null set).
  • Note that the term “disjoint” can also refer to sets. Disjoint sets are sets that do not share any elements.


  • Compact spaces are spaces in which every open cover has a finite subcover. An open cover of a set X is a collection of open sets whose union contains X.
  • Given a cover, a subcover of X is a subset of the cover whose members still cover X. Note that a subcover is not the same as a refinement: a refinement is a new cover, each of whose sets is contained in some set of the original cover.


  • Paracompact topological spaces are spaces for which every open cover has a locally finite open refinement. A collection of subsets is locally finite when each point in the space has a neighborhood that intersects only finitely many sets in the collection.
  • σ-compact topological spaces are topological spaces which can be described as the union of countably many compact spaces.
  • Note that a countable set is a set for which each element is associated with a unique natural number. More formally, countable sets have the same cardinality (number of elements) as some subset of the natural numbers. Countable sets may be finite or countably infinite. When countably infinite, each element can be counted one at a time, but the counting will never finish.

First-Countable and Second-Countable Spaces

  • First-countable spaces are topological spaces in which every point has a countable collection of neighborhoods such that every neighborhood of that point contains a member of the collection. For each point, this collection is called a neighborhood basis or a local basis.
  • For example, consider a topological space on a set X consisting of four points x1, x2, x3, x4. (Assume that the set is equipped with a topology). Each point has a countable number of overlapping neighborhoods. In the figure, the neighborhoods which include point x4 are shown as an example.


  • All metric spaces are first-countable: for any point in a metric space, the open balls centered at that point with radii 1/n (for n = 1, 2, 3, …) form a countable neighborhood basis.
  • Second-countable spaces are topological spaces with a countable basis.
  • Euclidean space is a second-countable space because the set of open balls forming its basis can be restricted to the set of open balls with rational radii and centers with rational coordinates.
  • Every second-countable space is first-countable (but not every first-countable space is second-countable).


  • Connected spaces are topological spaces that cannot be represented as the union of two disjoint nonempty open subsets.
  • For instance, the usual topology (described in the next section) on Rn is connected as any union of disjoint open subsets will always exclude at least part of the n-dimensional real numbers.
  • Disconnected spaces are topological spaces which do not satisfy the definition of connected spaces.
  • Path-connected spaces are topological spaces in which any two points x and y can be joined by a path.
  • A path is defined as a continuous function f mapping the unit interval [0,1] into the topological space X such that f(0) = x and f(1) = y.


  • Every path-connected space is connected (but not every connected space is path-connected).
  • Simply connected spaces are path-connected topological spaces in which every path between a given pair of points can be continuously deformed into any other path between that pair (equivalently, every loop can be contracted to a point).
  • Given a topological space V, it is said to be locally connected at a point x when every neighborhood of x contains a connected open neighborhood of x (a neighborhood that is itself an open set). When V is locally connected at every point it contains, then V is called a locally connected topological space.

Essential Examples of Topological Spaces

  • The standard (or usual) topology on Rn is the topology generated by the set of all possible open balls on n-dimensional Euclidean space (the open balls form a basis).
  • Given a set X, the discrete topology on X is the topology in which every subset of X is open. In particular, each point forms an open set called a singleton (a set with only one element).
  • Given a set X, the indiscrete (or trivial) topology is a topological space in which the only open sets are X itself and the empty set. If X has multiple elements, the space cannot be equipped with a metric (any distance between elements must be zero).
  • Given a topological space (X,τ) and a subset V of X, the subspace topology on V consists of the intersections of V with the open sets in τ.


  • Topological manifolds are a type of topological space which must satisfy the conditions of Hausdorffness, second-countability, and paracompactness.
  • In addition, topological manifolds must be locally homeomorphic to Euclidean space, Rn.
  • In order to be locally homeomorphic to Euclidean space, the formalisms below must hold.
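The formal statement appears to have been an image that did not survive. One standard way to write local Euclidean-ness, reconstructed to match the explanation that follows (my notation, not necessarily the original), is:

```latex
\forall x \in X \;\; \exists \, U \subseteq X \text{ open with } x \in U,
\text{ and a homeomorphism } h : \mathbb{R}^n \to U
```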


  • This means that, for every point x in a locally Euclidean topological space X, there exists an open set U such that x belongs to U and there exists a homeomorphism h from Rn to U. These homeomorphisms are called charts. The combination of charts which covers X is called an atlas.
  • Below is a simplified representation of a manifold (only a few homeomorphisms are shown). In this case, the manifold is a 2-manifold, also called a surface.


  • Manifolds cannot “cross themselves.” (Apparent exceptions like the Klein bottle only self-intersect when immersed in R3; as abstract manifolds they do not cross themselves.) For instance, a 1-manifold cannot have curves which intersect and a 2-manifold cannot pass part of its surface through another part of its surface. The reason for this is that no neighborhood of a self-intersection point can be homeomorphic to Euclidean space.
  • However, as the dimension of a manifold increases, the manifold can cross itself when projected into lower dimensions while not actually crossing itself in its highest dimension. Consider that a surface may “fold over” to some degree without intersecting itself in R3. If projected into R2, then this manifold will appear to cross itself. But when looking back at the R3 depiction, it is clear that the manifold does not actually cross itself. The same holds true for higher dimensions.
  • For an atlas, two charts can overlap on a manifold. The intersection of the two chart domains is an open set that is mapped to Euclidean space in two different ways. Transition maps are the composition functions f∘g⁻¹ and g∘f⁻¹, which map an open set of Rn to the manifold (by one chart’s inverse) and then back to Euclidean space (by the other chart’s homeomorphism). This is also called a coordinate transformation.


  • If a manifold’s transition maps are all at least once differentiable (C^1), then the manifold is called a differentiable manifold.
  • If the transition maps are all infinitely differentiable (C^∞), then the manifold is called a smooth manifold.



Understanding the Hippocampus Visually


I have always enjoyed looking at images that display biological systems in terms of their functional components. But oftentimes, existing diagrams show too much detail, too little detail, the wrong types of detail, or confusing captions/explanations. This seems especially common in neuroscience. Inspired by this challenge, I put together the image below to explain the fundamentals of hippocampal circuitry in a clean, unified manner. 

Hippocampus Diagram

List of Exciting Fields


Some exciting/beautiful/inspiring topics to explore!

Abnormal psychology, aerospace engineering, aesthetics, affective neuroscience, algebraic geometry, algebraic topology, algorithms, algorithmic art and writing, American education system, American government, amygdala, arachnology, architecture, arthropod neurobiology, artificial intelligence, artificial superintelligence, astrochemistry, auditory neurobiology, Bayesian statistics, biochemical engineering, biochemistry, bioconjugates, bioelectronics, bioengineering, bioethics, biomaterials, biomathematics, biomedical imaging, biomimetic architecture, biomimetics, bionics, biophysics, biopunk, brain-computer interfaces, category theory, cell signaling, cellular biomechanics, Christian demonology, civil engineering, cliodynamics, cognitive neuroscience, combinatorics, complex analysis, complex geometry, complex manifolds, complex networks, computational biology, computational chemistry, computational neuroscience, computational protein engineering, computer architecture, connectomics, contemporary American poetry, contemporary installation art, control theory, cosmology, cybernetics, cyberpoetry, cyberpunk, developmental biology, developmental neurobiology, differential equations, differential geometry, differential topology, digital art and graphic design, digital filtering, DNA nanotechnology, Dubai, dynamical network models, dynamical systems, ecological engineering, economic theory, economics, education, electrical engineering, electrochemistry, electronica, electronic communication systems, electrostatics, entrepreneurship, epigenetics, epistemology, experimental art, experimental poetry, F. 
Scott Fitzgerald, femtotechnology, fluorescence microscopy, Fourier analysis, fractional calculus, future studies, gender and sexuality, gender theory, general relativity, geoengineering, gifted education, gifted psychology, graph theory, graphic novels, group theory, healthcare, herpetological neurobiology, herpetology, Hindu demonology, history of science, human physiology, human sexuality, hypercomplex manifolds, immunology, in vitro meat, information geometry, information theory, inorganic biochemistry, inorganic chemistry, insect-based robotics, integral equations, intellectual property law, intelligence, Islamic demonology, Japan, Japanese education system, Judaic demonology, linear algebra, literary science fiction, logic, macroeconomics, malaria, materials science, mathematical neuroscience, mathematical statistics, MATLAB, Matrioshka brains, mechanics of materials, megascale engineering, metabolic engineering, microeconomics, microscopy, molecular dynamics, MRI, myrmecology, nanopunk, nanotechnology, neurobiology of learning and memory, neurochemistry, neurogeometry, neuroimmunology, neuromorphic electronics, NEURON, neuropharmacology, neuroscience, NMR, ologs, ontology, optimization, optogenetics, organic synthesis, organometallic chemistry, partial differential equations, personality disorders, philosophy of mind, philosophy of science, philosophy of technology, photovoltaics, physical chemistry, physical panpsychism, poetry writing, political science, polymer chemistry, polymer physics, prefrontal cortex, premotor cortex, probability, psychology, public speaking, Python, quantitative human physiology, quantum chemistry, quantum field theory, robotics, romanticism, sensorimotor cortex, sensory neurobiology, Shanghai, signal processing, smooth manifolds, social media, social movements, sociology, soil chemistry, solid-state physics, speculative poetry, speleology, statistical thermodynamics, structural biology, supercomputer 
architecture, surgery theory, systems biology, systems ecology, tensors, the Enlightenment, theoretical computer science, thermodynamics, tissue engineering, tomography, topological dynamics, topology, ultrasound, urban ecology, urban planning, utilitarianism, virtual reality, world politics, writing science fiction novels, X-ray microtomography

Useful Equations Glossary


Table of Contents

  • Multivariable Calculus
  • Differential Equations
  • Complex Analysis
  • Graph Theory
  • Matrix Methods
  • Fourier Analysis
  • Mechanics
  • Electrostatics
  • Diffusion
  • Electrochemistry
  • Quantum Chemistry
  • Computational Neuroscience
  • Probability
  • Information Theory
  • Electrical Engineering

Multivariable Calculus

Line integrals in a scalar field give the area under a space curve. The formula can be generalized for more variables.

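The original equation image is missing; a standard form of the scalar-field line integral, reconstructed from the description above (for a curve C parameterized by r(t), a ≤ t ≤ b), is presumably:

```latex
\int_C f \, ds = \int_a^b f\big(\mathbf{r}(t)\big) \, \left| \mathbf{r}'(t) \right| \, dt
```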

When a line integral is taken using a vector field, the work performed by the vector field on a particle moving along a space curve in the field can be computed. This is applicable to numerous analogous situations as well. The line integral in a vector field is given by the equation below. F defines the vector field and the curve C is a vector function given by r(t).
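The equation image did not survive; the usual form matching this description (my reconstruction) is:

```latex
\int_C \mathbf{F} \cdot d\mathbf{r} = \int_a^b \mathbf{F}\big(\mathbf{r}(t)\big) \cdot \mathbf{r}'(t) \, dt
```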

The gradient is the vector of the partial derivatives of a function with respect to each of its variables.


The gradient can be operated on in a variety of ways. For instance, the dot product of the gradient and a unit vector gives the directional derivative (the rate of change in the direction of that unit vector).
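The images for the gradient and the directional derivative are missing; standard forms consistent with the two descriptions above (in three variables, with unit vector u) are presumably:

```latex
\nabla f = \left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z} \right),
\qquad
D_{\mathbf{u}} f = \nabla f \cdot \mathbf{u}
```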


Divergence is a measure of the “flow” emerging from any point in a vector field. Positive divergence indicates a “source,” while negative divergence indicates a “sink.”
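Reconstructed (the original image is missing), the standard divergence formula in 3D reads:

```latex
\nabla \cdot \mathbf{F} = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z}
```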


Curl describes the amount of counterclockwise rotation around any point in a vector field. Positive curl indicates more counterclockwise rotation, while negative curl indicates more clockwise rotation. 2D curl has a simple formula (below, top) and 3D curl has a somewhat more complicated formula (below, bottom).
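The two formulas referenced here are missing; standard forms (my reconstruction: 2D scalar curl first, then 3D curl) are presumably:

```latex
\operatorname{curl} \mathbf{F} = \frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y},
\qquad
\nabla \times \mathbf{F} = \left( \frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z},\;
\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x},\;
\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y} \right)
```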


Integration by parts is a way to “reverse the product rule.” For integration by parts, let f(x)=u and g(x)=v.
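The equation image is missing; with the stated substitutions u = f(x) and v = g(x), the standard rule is:

```latex
\int u \, dv = uv - \int v \, du
```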


The Laplacian is the sum of the unmixed second partial derivatives of a multivariable function. Roughly speaking, evaluating the Laplacian at a point measures the “degree to which that point acts as a minimum.” Consider f(x,y,z). When a high positive value comes from evaluating the Laplacian at a point, that point will likely have a lower value for f than most of the neighboring points.
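The equation image is missing; the standard Laplacian of f(x,y,z), as described, is:

```latex
\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}
```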


Differential Equations

Given a first order linear differential equation that can be arranged into the form given by the top equation below, the solution to that differential equation is given by the bottom equation below.
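Both referenced equations are missing; the standard form and its integrating-factor solution (my reconstruction of what the top and bottom equations presumably showed) are:

```latex
\frac{dy}{dx} + P(x)\, y = Q(x),
\qquad
y = \frac{1}{\mu(x)} \left( \int \mu(x)\, Q(x) \, dx + C \right),
\quad \mu(x) = e^{\int P(x) \, dx}
```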


The Laplace transform is useful for solving differential equations with known initial values for y(t), y’(t), y’’(t), and any higher order derivatives present. It is defined by the top equation, but it is often easier to employ tables of Laplace transforms of common functions and modify the results for the problem of interest. Laplace transforms have a variety of useful properties described by the rest of the equations below. Furthermore, the Laplace transform is a linear operator.
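The equation images are missing; the standard definition and the two most common derivative properties (a reconstruction, since the original likely listed several more properties) are:

```latex
\mathcal{L}\{f(t)\} = F(s) = \int_0^\infty e^{-st} f(t) \, dt,
\qquad
\mathcal{L}\{y'\} = sY(s) - y(0),
\qquad
\mathcal{L}\{y''\} = s^2 Y(s) - s\, y(0) - y'(0)
```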


Complex Analysis

Euler’s formula relates e, i, and θ to the trigonometric functions.
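The equation image is missing; the standard statement is:

```latex
e^{i\theta} = \cos\theta + i\sin\theta
```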


For a complex valued function, f(z) = w, the complex variable z maps to another complex variable w. Complex valued functions can be described as a transformation from a point (x,y) to a point (u,v) as given by the equation below.


Trigonometric functions of a complex variable z can be represented in terms of exponentials. In addition, a complex function f(z) raised to the power of another complex function g(z) may be expressed as the bottom equation.


Complex integrals are also called contour integrals. Integrals of complex functions can be expressed in terms of line integrals if f(z) is continuous on a parameterized space curve.
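Reconstructed (the image is missing), the standard parameterized form, for a curve z(t) with a ≤ t ≤ b, is presumably:

```latex
\int_C f(z) \, dz = \int_a^b f\big(z(t)\big) \, z'(t) \, dt
```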


Cauchy’s integral formula states that, for a simply connected domain D and a curve C which lies within D and contains a point z0, the equation below holds. (Simply connected domains have no “holes” and their boundaries do not cross each other).
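The equation image is missing; the standard statement of Cauchy’s integral formula is:

```latex
f(z_0) = \frac{1}{2\pi i} \oint_C \frac{f(z)}{z - z_0} \, dz
```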


In complex analysis, residues come from a generalized (to complex numbers) version of Taylor series called Laurent series. They are useful in evaluating contour integrals. If the principal part of a Laurent series contains a finite number of terms, then z = z0 is called a pole of order n. Here, n is the highest power to which (z – z0) is raised in the denominator of the given Laurent series. If n=1, the pole is called a simple pole and its residue can be computed by the formulas at top or middle. For higher order poles, the formula at bottom can compute the residue.
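The three formula images are missing; standard forms matching the description (top: simple pole; middle: simple pole of a ratio p/q with q(z0) = 0 and q′(z0) ≠ 0; bottom: pole of order n) are presumably:

```latex
\operatorname{Res}_{z=z_0} f = \lim_{z \to z_0} (z - z_0) f(z),
\qquad
\operatorname{Res}_{z=z_0} \frac{p(z)}{q(z)} = \frac{p(z_0)}{q'(z_0)},
\qquad
\operatorname{Res}_{z=z_0} f = \frac{1}{(n-1)!} \lim_{z \to z_0} \frac{d^{n-1}}{dz^{n-1}} \left[ (z - z_0)^n f(z) \right]
```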


Cauchy’s residue theorem requires a simply connected domain D with a closed contour C inside and a function f(z) which is differentiable on and within C except at a finite number of points z1, z2, … zn. If these conditions are met, then the contour integral of f(z) may be computed by the equation below.
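The equation image is missing; the standard statement is:

```latex
\oint_C f(z) \, dz = 2\pi i \sum_{j=1}^{n} \operatorname{Res}_{z=z_j} f
```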



Graph Theory

The handshaking lemma states that, for any graph G(V,E), the sum of the node degrees equals twice the number of edges.
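The equation image is missing; the standard statement of the handshaking lemma is:

```latex
\sum_{v \in V} \deg(v) = 2\,|E|
```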


Euler’s formula holds for simple (undirected and no self-edges) planar graphs. Planar graphs are graphs that can be represented without any edges crossing each other. V represents the number of nodes, E represents the number of edges, and F represents the number of faces including the infinite face outside of the graph.
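The equation image is missing; with V, E, and F as described, Euler’s formula for connected planar graphs reads:

```latex
V - E + F = 2
```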


The clustering coefficient for a graph is given below. Here, n is the number of edges among the neighbors of node i and k is the number of neighbors of node i.
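Reconstructed from the description (the image is missing), with n edges among the neighbors of node i and k neighbors of node i, the local clustering coefficient is presumably:

```latex
C_i = \frac{2n}{k(k-1)}
```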



Matrix Methods

The matrix product is described by the equation below. Note that the inner dimensions of the matrices must match for the matrix product to be defined.
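The equation image is missing; the standard entrywise definition (for A of size m×n and B of size n×p, so the inner dimensions match) is:

```latex
(AB)_{ij} = \sum_{k=1}^{n} a_{ik}\, b_{kj}
```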


The transpose of any matrix is applied as in the example below.


The determinant of a 2×2 matrix is given by the equation below.
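The equation image is missing; the standard 2×2 determinant is:

```latex
\det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc
```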


Eigenvalues and eigenvectors are very useful for a variety of applications. They are related to a matrix A and a vector x by the equation at top. To find the eigenvalues of a matrix, apply the formula at middle and find the zeros of the resulting polynomial. The zeros are the eigenvalues of the matrix. The eigenvectors of an eigenvalue can be found by solving the system at bottom for x1…xn. The set of all scalar multiples of the solved vector [x1…xn] is the set of eigenvectors for the given eigenvalue.
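The three referenced equations are missing; standard forms matching the description (top: defining relation; middle: characteristic polynomial; bottom: system solved for each eigenvector) are presumably:

```latex
A\mathbf{x} = \lambda \mathbf{x},
\qquad
\det(A - \lambda I) = 0,
\qquad
(A - \lambda I)\,\mathbf{x} = \mathbf{0}
```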



Fourier Analysis

Fourier series are weighted sums of sines and cosines that can represent essentially any periodic signal (one satisfying mild regularity conditions). Here, the period of the signal s is given by T.


The Fourier series coefficients are the weights of the sines and cosines. They are given by the integrals below.
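The images for the series and its coefficient integrals are missing; a standard trigonometric form consistent with the two descriptions above (my reconstruction, with period T) is:

```latex
s(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos\!\left( \frac{2\pi n t}{T} \right) + b_n \sin\!\left( \frac{2\pi n t}{T} \right) \right],
\qquad
a_n = \frac{2}{T} \int_0^T s(t) \cos\!\left( \frac{2\pi n t}{T} \right) dt,
\quad
b_n = \frac{2}{T} \int_0^T s(t) \sin\!\left( \frac{2\pi n t}{T} \right) dt
```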


The continuous Fourier transform takes in a signal function s(t) and decomposes it into its component weighted sines and cosines. It employs the relationship between complex numbers, sines, and cosines to achieve this. Fourier transforms are said to convert from the time domain to the frequency domain. By observing the frequency domain, a signal can be more easily analyzed and decoded.
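The equation image is missing; a standard convention for the continuous Fourier transform (one of several; the original may have used angular frequency instead) is:

```latex
S(f) = \int_{-\infty}^{\infty} s(t) \, e^{-2\pi i f t} \, dt
```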


Since the output of a Fourier transform is usually complex, the equation below can be used to convert the output to a power spectrum.


The signal can be reconstructed using the inverse Fourier transform. This is useful in data compression since the data can undergo a Fourier transform and then later be reconstructed using the inverse Fourier transform, given below.
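The equation image is missing; in the same convention as the forward transform, the inverse Fourier transform is presumably:

```latex
s(t) = \int_{-\infty}^{\infty} S(f) \, e^{2\pi i f t} \, df
```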


The discrete Fourier transform has the same purpose as the continuous version, but it operates on discrete data s(k) and so uses summation rather than integration.
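Reconstructed (the image is missing), the standard N-point discrete Fourier transform of s(k) reads:

```latex
S(n) = \sum_{k=0}^{N-1} s(k) \, e^{-2\pi i n k / N}
```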


Power spectra can also be computed for discrete Fourier transforms.


By applying the discrete Fourier transform and then the inverse discrete Fourier transform (below), a discrete signal can be compressed and then reconstructed.




Mechanics

Multidimensional equation of motion (constant acceleration).
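The equation image is missing; the standard constant-acceleration kinematic equation in vector form is:

```latex
\mathbf{r}(t) = \mathbf{r}_0 + \mathbf{v}_0 t + \tfrac{1}{2} \mathbf{a} t^2
```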


Momentum in kg·m·s⁻¹ (top), work in J given constant force (middle), circular motion at constant speed (bottom).


Newton’s second law in terms of mass and acceleration (top) and in terms of momentum (bottom). Force is measured in Newtons.
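The two equation images are missing; the standard forms matching the description are:

```latex
\mathbf{F} = m\mathbf{a},
\qquad
\mathbf{F} = \frac{d\mathbf{p}}{dt}
```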


Work-energy theorem.


Velocity of an object undergoing circular motion given its period.


Force from potential energy in 3D.


Potential energy function for spring (top) and spring force formula (bottom).
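The two equation images are missing; the standard spring (Hooke’s law) formulas are presumably:

```latex
U(x) = \tfrac{1}{2} k x^2,
\qquad
F = -kx
```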


Objects move to minimize their potential energy. When the net force is zero, so is the derivative of the potential energy. Note that stable and unstable fixed points apply here.


Energy of isolated system with thermal energy U.


Multidimensional power with constant force. Power is measured in J/s.


Multidimensional work with variable force (given by a vector field) as a line integral.




Electrostatics

Suppose that a constellation of point charges exists at a given time. If a new charge, q0, is brought into the vicinity of the other charges and placed at a location x,y,z, then the force from the other charges on q0 is given by the equation below. The ȓ0j represents a unit vector extending from qj to the point x,y,z. The ϵ0 represents a constant known as the permittivity of free space and is equal to 8.85×10⁻¹².
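The equation image is missing; a standard form of Coulomb’s law for a collection of point charges, reconstructed from the description (with r0j the distance from qj to the location of q0), is presumably:

```latex
\mathbf{F} = \frac{q_0}{4\pi\epsilon_0} \sum_j \frac{q_j}{r_{0j}^2} \, \hat{\mathbf{r}}_{0j}
```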


The electric field in the above scenario is defined as the equation below, where F has been divided by q0.


To calculate the electric field from a continuous charge distribution at a point x0,y0,z0, the triple integral below is used over the volume region D. The unit vector ȓ points from (x,y,z) to (x0,y0,z0).


The electrostatic potential energy of a point charge q brought into the vicinity of an electric field E is given by a line integral describing the work performed on the point charge to move it from position r0 to r.


Given an electrostatic potential function UE, the electric field is given by the negative gradient of UE.
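The equation image is missing; the standard relation (assuming UE denotes the potential per unit charge) is:

```latex
\mathbf{E} = -\nabla U_E
```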




Diffusion

Fick’s first law of diffusion describes the relationship between the flux J of particles across an area and the concentration of the particles. D is a proportionality constant called the diffusion coefficient. The law is given in 1D at top and in 3D at bottom.
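The equation images are missing; standard forms (1D, then 3D, with C the concentration) are:

```latex
J = -D \frac{\partial C}{\partial x},
\qquad
\mathbf{J} = -D \nabla C
```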


Fick’s second law of diffusion describes how the concentration changes over time as particles flow down a concentration gradient. The law is given in 1D at top and in 3D at bottom. This is equivalent to the heat equation.
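The equation images are missing; standard forms (1D, then 3D) are:

```latex
\frac{\partial C}{\partial t} = D \frac{\partial^2 C}{\partial x^2},
\qquad
\frac{\partial C}{\partial t} = D \nabla^2 C
```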




Electrochemistry

The Nernst equation describes the equilibrium membrane voltage given an ion’s concentration on each side of the membrane. R is the gas constant (8.314 J/(mol·K)), F is Faraday’s constant (96,485 C/mol), z is the charge of the ion, and T is the temperature in kelvin.
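The equation image is missing; a standard form (sign conventions vary with which side is taken as “inside”) is:

```latex
V_m = \frac{RT}{zF} \ln\!\left( \frac{[\text{ion}]_{\text{out}}}{[\text{ion}]_{\text{in}}} \right)
```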


The Goldman equation expresses the membrane voltage given multiple types of ions and relative permeabilities of the membrane to those ions.
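The equation image is missing; the standard Goldman-Hodgkin-Katz voltage equation for the usual three ions (a reconstruction — the original may have been written for an arbitrary set of ions) is:

```latex
V_m = \frac{RT}{F} \ln\!\left(
\frac{P_K [K^+]_{\text{out}} + P_{Na} [Na^+]_{\text{out}} + P_{Cl} [Cl^-]_{\text{in}}}
     {P_K [K^+]_{\text{in}} + P_{Na} [Na^+]_{\text{in}} + P_{Cl} [Cl^-]_{\text{out}}}
\right)
```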


The Nernst-Planck equation describes ion flow with electrical potential and concentration gradients. Jp indicates particle flux over an area, D is a diffusion coefficient, UE is the electrical potential, e represents the elementary charge of 1.6×10⁻¹⁹ C, z is the charge of the given type of ion, and u is the mobility constant of the ion in the solution.


The free energy required for an ion to cross a semipermeable membrane (like a phospholipid membrane) is given by the equation below. This equation accounts for a preexisting membrane voltage from another ion. If the free energy is negative, the ions in the first term will spontaneously cross the semipermeable membrane. Note that the numerator and denominator within the natural logarithm are switched relative to the Nernst equation.
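For moving the ion from outside to inside across a membrane held at voltage Vm, the standard form is:

```latex
\Delta G = RT\ln\frac{[X]_{\text{in}}}{[X]_{\text{out}}} + zFV_m
```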



Quantum Chemistry

The Rydberg equation gives the wavelength emitted when an electron moves from an excited state ni to a lower energy level nf. Here, R is the Rydberg constant, 1.097•10⁷ m⁻¹.
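In LaTeX:

```latex
\frac{1}{\lambda} = R\left(\frac{1}{n_f^2} - \frac{1}{n_i^2}\right)
```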


Given an electromagnetic wave, its velocity (the speed of light c), frequency, and wavelength are related by the equation below.
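In LaTeX:

```latex
c = \lambda\nu
```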


Particles have wavelengths known as de Broglie wavelengths. They can be computed using Planck’s constant 6.626•10⁻³⁴ m²kg/s over the momentum mv.
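In LaTeX:

```latex
\lambda = \frac{h}{mv}
```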


The energy of a wave (in J) is given by the formulas below. Note that frequency is equivalent to c/λ.
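In LaTeX:

```latex
E = h\nu = \frac{hc}{\lambda}
```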


The time-independent Schrödinger equation relates the wavefunctions of a particle and its energy. Solving this equation gives a formula for the wavefunction of a given particle. The symbol ħ represents h/2π. V(x) is the potential energy, m is the mass, and E is the energy. The 1D version is shown at top and the 3D version is shown at bottom.
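In LaTeX, the 1D form (top) and 3D form (bottom) are:

```latex
-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + V(x)\,\psi = E\psi

-\frac{\hbar^2}{2m}\nabla^2\psi + V(x,y,z)\,\psi = E\psi
```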


The Hamiltonian operator describes the kinetic and potential energy of a quantum system. It can be applied to wavefunctions to generate the Schrödinger equation. The Hamiltonian in 1D is given at top and the Hamiltonian in 3D is given at bottom.
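In LaTeX, the 1D form (top) and 3D form (bottom) are:

```latex
\hat{H} = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V(x)

\hat{H} = -\frac{\hbar^2}{2m}\nabla^2 + V(x,y,z)
```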


The wavefunction solution of a 1D particle in a box with boundary conditions ψ(0)=0 and ψ(L)=0 is given at top. The n represents the discrete energy level of the particle and may take on values of 1,2,3… The energy of the particle at energy level n is given at bottom.
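In LaTeX, the wavefunction (top) and energy (bottom) are:

```latex
\psi_n(x) = \sqrt{\frac{2}{L}}\,\sin\frac{n\pi x}{L}

E_n = \frac{n^2h^2}{8mL^2}
```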


The wavefunction solution of a 3D particle in a box with boundary conditions ψ(0,y,z)=0, ψ(Lx,y,z)=0, ψ(x,0,z)=0, ψ(x,Ly,z)=0, ψ(x,y,0)=0, ψ(x,y,Lz)=0 is given at top. The nx, ny, and nz represent the x,y,z components of the particle’s discrete energy level n. Each component must still be a positive integer. The energy of the particle at energy level n is given at bottom.
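In LaTeX, the wavefunction (top) and energy (bottom) are:

```latex
\psi(x,y,z) = \sqrt{\frac{8}{L_x L_y L_z}}\,\sin\frac{n_x\pi x}{L_x}\sin\frac{n_y\pi y}{L_y}\sin\frac{n_z\pi z}{L_z}

E = \frac{h^2}{8m}\left(\frac{n_x^2}{L_x^2} + \frac{n_y^2}{L_y^2} + \frac{n_z^2}{L_z^2}\right)
```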


The integral of the squared magnitude of the wavefunction gives the probability that a particle will be found within a given region. Since wavefunctions may be complex, the square must be computed using the complex conjugate. This is the 1D version of the equation.
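In LaTeX:

```latex
P(a \le x \le b) = \int_a^b \psi^*(x)\,\psi(x)\,dx
```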


The average or expectation value of a measurable property (associated with an operator) is given in 1D and 3D by the integrals below. Note that the integral in the denominator acts to normalize the result.
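For an operator Â, the 1D form (left) and 3D form (right) are:

```latex
\langle A \rangle = \frac{\int \psi^*\hat{A}\psi\,dx}{\int \psi^*\psi\,dx}
\qquad\qquad
\langle A \rangle = \frac{\iiint \psi^*\hat{A}\psi\,dV}{\iiint \psi^*\psi\,dV}
```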


The time-dependent Schrödinger equation describes the evolution of a wavefunction over time.
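In LaTeX:

```latex
i\hbar\frac{\partial\Psi}{\partial t} = \hat{H}\Psi
```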


The general solution to the time-dependent Schrödinger equation is the time-independent wavefunction times a complex exponential.
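In LaTeX:

```latex
\Psi(x,t) = \psi(x)\,e^{-iEt/\hbar}
```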



Computational Neuroscience

Consider the spikes of a single neuron being recorded over a time interval t. This experiment is carried out for K trials. The firing rate v in trial k is given by the number of spikes nk over t. Repeating the experiment several times allows the variability of the spike count nk across trials to be quantified. This variability, the Fano factor, is the variance of the spike count divided by the mean spike count.
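In LaTeX, the trial firing rate (left) and Fano factor (right) are:

```latex
\nu_k = \frac{n_k}{t}
\qquad\qquad
F = \frac{\sigma_{n}^2}{\langle n \rangle}
```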


To construct a peri-stimulus-time histogram, consider a spike train over time interval t with k trials. Split the interval into subintervals (t; t + ∆t) and sum the number of spikes over all trials within each subinterval. Then divide by k∆t, the number of trials times the size of each subinterval. 
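Writing n(t; t+∆t) for the spike count summed over all k trials within a subinterval, the histogram value is:

```latex
\text{rate}(t) = \frac{n(t;\,t+\Delta t)}{k\,\Delta t}
```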


Average neural population activity can be computed by simultaneously measuring j different neurons over a time interval t. Split the time interval into subintervals (t; t + ∆t) and sum the number of spikes over the whole population for each subinterval. Then divide by j∆t to obtain the average population activity at each subinterval.
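Writing A(t) for the population activity (a symbol assumed here) and n(t; t+∆t) for the spike count summed over all j neurons within a subinterval:

```latex
A(t) = \frac{n(t;\,t+\Delta t)}{j\,\Delta t}
```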


Integrate-and-fire neurons are modeled by a linear differential equation. The solution to that differential equation given an initial condition v(0) = vrest is shown below. The variable v represents the membrane voltage, R is the membrane resistance, I0 is the injected current, and τ is a time constant for the membrane. When using integrate-and-fire neurons, a firing threshold is often set.
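In LaTeX, the differential equation (top) and its solution with v(0) = vrest (bottom) are:

```latex
\tau\frac{dv}{dt} = -(v - v_{\text{rest}}) + RI_0

v(t) = v_{\text{rest}} + RI_0\left(1 - e^{-t/\tau}\right)
```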


The Hodgkin-Huxley equations are a system of differential equations that model neural electrophysiology. The system must be solved numerically. The C(dv/dt) term, in which capacitance is multiplied by the change in voltage with respect to time, is equivalent to current across the membrane. Note that membrane voltage is a function of time. Eion is the equilibrium potential for that ion across the membrane, Iin is the injected current, and gion is the conductance value for that ion across the membrane. The parameters m, n, and h help describe the gating of the ion channels and fit the model to data.
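The standard voltage equation is reproduced below (the gating variables m, n, and h obey their own first-order kinetic equations, omitted here for brevity):

```latex
C\frac{dv}{dt} = I_{\text{in}} - \bar{g}_{Na}m^3h\,(v - E_{Na}) - \bar{g}_{K}n^4\,(v - E_{K}) - \bar{g}_{L}\,(v - E_{L})
```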




Probability
Conditional probability (the probability of B happening given A).
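In LaTeX:

```latex
P(B \mid A) = \frac{P(A \cap B)}{P(A)}
```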


Joint probability (the probability of A and B occurring). This is described using an intersection. Joint probability for three events and four events is also given. Further extrapolation follows the same pattern.
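For two, three, and four events, the chain rule of probability gives:

```latex
P(A \cap B) = P(A)\,P(B \mid A)

P(A \cap B \cap C) = P(A)\,P(B \mid A)\,P(C \mid A \cap B)

P(A \cap B \cap C \cap D) = P(A)\,P(B \mid A)\,P(C \mid A \cap B)\,P(D \mid A \cap B \cap C)
```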


The probability of A or B, where the “or” is inclusive (described using a union). The corresponding probability for three events A, B, and C is also given. Further extrapolation follows the inclusion-exclusion pattern.
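By inclusion-exclusion, the two- and three-event forms are:

```latex
P(A \cup B) = P(A) + P(B) - P(A \cap B)

P(A \cup B \cup C) = P(A) + P(B) + P(C) - P(A \cap B) - P(A \cap C) - P(B \cap C) + P(A \cap B \cap C)
```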


Bayes’ theorem gives the probability of an event based on prior knowledge of other conditions that may influence the event.
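In LaTeX:

```latex
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}
```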


For a discrete random variable, a probability mass function gives the probability associated with each outcome of a random experiment.
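In LaTeX:

```latex
p_X(x) = P(X = x)
```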


Cumulative distribution functions (CDFs) give the probability that a random variable X will give a value less than or equal to x.
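In LaTeX:

```latex
F_X(x) = P(X \le x)
```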


For a continuous random variable, the integral of a probability density function fX gives the probability that the outcome lies on the chosen interval.
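In LaTeX:

```latex
P(a \le X \le b) = \int_a^b f_X(x)\,dx
```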


The expected value is the average of each outcome times its probability. The discrete case is given at top and the continuous case is given at bottom.
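In LaTeX, the discrete case (top) and continuous case (bottom) are:

```latex
E[X] = \sum_i x_i\,p(x_i)

E[X] = \int_{-\infty}^{\infty} x\,f_X(x)\,dx
```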


The variance is a measure of the spread of data. The square root of variance is standard deviation. The general equation for variance is given at top. To compute discrete variance, use the equation at middle. To compute continuous variance, use the equation at bottom.
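With μ = E[X], the general, discrete, and continuous forms are:

```latex
\mathrm{Var}(X) = E\!\left[(X - \mu)^2\right]

\mathrm{Var}(X) = \sum_i (x_i - \mu)^2\,p(x_i)

\mathrm{Var}(X) = \int_{-\infty}^{\infty} (x - \mu)^2\,f_X(x)\,dx
```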


Given a set of n total elements, a permutation is a selection of k elements from the set such that the order does matter. The number of permutations of a set is given below.
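In LaTeX:

```latex
P(n,k) = \frac{n!}{(n-k)!}
```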


Given a set of n total elements, a combination is a selection of k elements from the set such that the order does not matter. The number of combinations of a set is given below.
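In LaTeX:

```latex
C(n,k) = \binom{n}{k} = \frac{n!}{k!\,(n-k)!}
```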



Information Theory

Shannon information is a measure of “surprise” associated with an outcome. The units are bits and p(x) is the probability of the given outcome.
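In LaTeX:

```latex
I(x) = -\log_2 p(x)
```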


Entropy is the average amount of Shannon information from a stochastic data source.
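In LaTeX:

```latex
H(X) = -\sum_x p(x)\log_2 p(x)
```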


Joint entropy is the entropy for multiple random variables (the joint entropy for two random variables is given below).
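In LaTeX:

```latex
H(X,Y) = -\sum_x \sum_y p(x,y)\log_2 p(x,y)
```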


Conditional entropy is the average entropy remaining in X once the value of the other variable Y is known.
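In LaTeX:

```latex
H(X \mid Y) = -\sum_{x,y} p(x,y)\log_2 p(x \mid y)
```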


Mutual information measures the mutual dependence of two random variables. It does not necessarily imply causality, though sometimes it is associated with causality. The mutual information I(X;Y) is given by the sum of individual entropies minus the joint entropy.
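In LaTeX:

```latex
I(X;Y) = H(X) + H(Y) - H(X,Y)
```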



Electrical Engineering

Electrical current is defined as the amount of charge in coulombs that flows through a conductor or circuit element per unit time. The coulomb is a unit of charge equivalent to about 6.24•10¹⁸ elementary charges; one electron has a charge of -1.602•10⁻¹⁹ C. Current is measured in amperes, which are C/s. Voltage is defined as the amount of energy per unit charge that moves through a given circuit element. It is measured in J/C or volts, and can be thought of as electrical potential energy per unit charge. Resistance has units of V/A or ohms, represented by Ω. Current, voltage, and resistance are related by Ohm’s law (top). Conductance refers to the inverse of resistance, is represented by G, and is measured in inverse ohms Ω⁻¹ or siemens S. Ohm’s law in terms of conductance is given at bottom.
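In LaTeX, Ohm's law (top) and its conductance form (bottom) are:

```latex
V = IR

I = GV
```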


Power is the rate of energy transfer in J/s or watts. Power is given by the product of current and voltage.
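In LaTeX:

```latex
P = IV
```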


Energy delivered to a circuit element is computed by integrating power over a time interval.
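Over a time interval from t0 to t1:

```latex
E = \int_{t_0}^{t_1} P(t)\,dt
```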


Kirchhoff’s current law states that the net current entering a node equals zero. The net current equals the sum of currents entering minus the sum of currents leaving. Alternatively, this law states that the sum of the currents entering a node equals the sum of the currents leaving the node. If several circuit elements are connected to a pair of nodes which are themselves connected, then the circuit elements can be considered as connected to a common node.
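In LaTeX:

```latex
\sum I_{\text{in}} - \sum I_{\text{out}} = 0
```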


The closed path from a node, through some circuit elements, and back to the same node is called a loop. Kirchhoff’s voltage law states that the sum of the voltage changes around a closed loop in a circuit equals zero. If the path taken around the loop starts in the direction of a circuit element that undergoes a voltage drop, that circuit element is given a positive value. If the path starts by crossing a circuit element such that there is a voltage gain, that circuit element is given a negative value.
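In LaTeX:

```latex
\sum_{\text{loop}} V_k = 0
```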


If the length L of a resistor is much larger than its cross sectional area A, then the resistance can be approximated by the formula below. The letter ρ is a constant for the material of the given resistor called the resistivity. The resistivity is measured in ohm meters Ωm.
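In LaTeX:

```latex
R = \frac{\rho L}{A}
```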


Resistances in series can be summed into a single resistance. This single equivalent resistance can replace the original set of resistances in the analysis.
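For n resistances in series:

```latex
R_{eq} = R_1 + R_2 + \cdots + R_n
```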


Resistances in parallel can be converted into a single equivalent resistance using the equation below.
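For n resistances in parallel:

```latex
\frac{1}{R_{eq}} = \frac{1}{R_1} + \frac{1}{R_2} + \cdots + \frac{1}{R_n}
```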


The equation for conductance in series is given below.
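For n conductances in series:

```latex
\frac{1}{G_{eq}} = \frac{1}{G_1} + \frac{1}{G_2} + \cdots + \frac{1}{G_n}
```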


The equation for conductance in parallel is given below.
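For n conductances in parallel:

```latex
G_{eq} = G_1 + G_2 + \cdots + G_n
```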


The capacitance for a parallel plate capacitor, measured in farads (coulombs per volt), is given by the equation below. Here, ε is the permittivity of the material between the plates (the dielectric constant times the vacuum permittivity ε0), A is the plate area in square meters, and d is the plate separation in meters.
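In LaTeX:

```latex
C = \frac{\varepsilon A}{d}
```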