Author: logancollins

Want to learn biology? Recommended texts from beginner to advanced



Preface:

I have seen a variety of online resources which recommend books for learning physics and mathematics (e.g. Chicago undergraduate mathematics bibliography, Susan Fowler’s So You Want To Learn Physics, How to Learn Math and Physics, etc.), yet there seems to be a paucity of similar resources for biological fields. To help fill this gap, I have compiled a handpicked list of textbooks which may aid those with a desire to learn biology.

I have also included books from fields such as mathematics, computer science, chemistry, physics, imaging, and nanotechnology which are important in biology. The books from adjacent fields recommended here are mostly targeted at readers who come from backgrounds that are not strongly quantitative. For this reason, books filled with detailed mathematics are located in the advanced category. That said, I do assume some familiarity with mathematics and physics at the lower levels as well.

Though this page so far does not include resources beyond textbooks, there are many other useful tools for learning about biology. Video lectures, educational books which are not textbooks (e.g. Thieme FlexiBooks, Lippincott’s Illustrated Reviews, etc.), scientific journal articles (especially review papers), reputable scientific news articles (e.g. Nature News and Views, Science Daily, Neuroscience News, etc.), Wikipedia, other educational websites, and research experience come to mind.

While this list is certainly not comprehensive, I have tried to cover as much ground as possible for the interested autodidact. These books represent the ones that I personally feel are the best for the given subjects at the given levels (beginner, lower intermediate, upper intermediate, and advanced). There are a lot of texts related to microbiology, biochemistry, and neuroscience. This bias reflects my own background in synthetic biology, nanobiotechnology, and connectomics. My list is currently lacking in ecology and evolutionary biology texts. If anyone is interested in contributing their own recommendations for these or other missing topics, feel free to contact me and we can figure out how to incorporate your texts.

One point that I would like to make is that you by no means need to read these books from cover to cover. It is much more efficient to learn biology by creating a curriculum for yourself and reading selected chapters and sections as they interest you. Over time, the knowledge will build up and you will start to see how it all connects. You will eventually begin to gain the ability to think critically about biological mechanisms and how perturbing them may influence the systems. I would recommend practicing this kind of thinking early on. You can begin to do thought experiments even when you are starting out. As you carry out these thought experiments, you can explore your books and the internet to try and figure out any missing pieces. This will exercise your ability to understand and make predictions about biological systems.

Biology is an expansive, interdisciplinary, and extremely exciting field. I hope that you enjoy your journey into the biological sciences!

Beginner:

These represent foundational texts which introduce biology and associated fields which are essential for understanding biology (i.e. chemistry, physics, and mathematics). They are at a high school or maybe college freshman level.

Biology

Campbell Biology – by Urry, Cain, Wasserman, Minorsky, Reece || An authoritative introduction to biology and its subdisciplines. It features clear explanations, good organization, and helpful illustrations. Though lengthy, you can often read desired subsections in any order. That said, I would recommend reading some molecular biology and genetics chapters before diving into physiology. It should be noted that this text is a primary source for the high school Biology Olympiad competition.

Chemistry

Chemistry – by Zumdahl, Zumdahl, and DeCoste || An introductory chemistry text which has good organization and illustrations. Though other general chemistry books could work just as well, I have a mild personal preference for this one.

Mathematics

Calculus – by Stewart || Though lengthy, this book is a good introduction to calculus. It explains single-variable calculus and multivariable calculus and even gives a small taste of differential equations. This is excellent since calculus and differential equations are so central to computational modeling of biological systems.

Physics

Physics for Scientists and Engineers: A Strategic Approach with Modern Physics – by Knight || While I have not used this book personally, I have heard good things with regard to its applicability for biology. As such, I picked out Knight’s text for this list entry because of its organization, its inclusion of modern physics, and its emphasis on practical applications.

Lower intermediate:

These books introduce a range of key subfields in biology. Though some of the texts are quite long (e.g. 800+ pages), I will say again that they do not need to be read cover to cover. These do not require greatly specialized knowledge to understand. They are typically used for first year or second year university courses. As with the previous section, I have included some non-biology texts covering fields adjacent to biology. Note that, because biology is an interdisciplinary enterprise, these adjacent fields are vitally important for understanding and applying biological knowledge. 

Biochemistry

Lehninger’s Biochemistry – by Nelson and Cox || Great textbook which discusses biochemistry with both depth and breadth. It is not as detailed as Voet’s book (see the upper intermediate section), but it is not a light treatment either. This text features beautiful illustrations which are very helpful for gaining a deeply visual appreciation of how biochemistry works. In my opinion, it also has well-written treatments of the mathematics of enzyme kinetics and related topics.

Computer science

MATLAB: A Practical Introduction to Programming and Problem Solving – by Attaway || Since computer science is an integral part of biology research, it is important to have at least some understanding of programming and modeling. For those who are not already familiar with programming, Attaway’s MATLAB book provides an excellent entry point. It instructs on how to use MATLAB in a clear and concise way and also discusses essential mathematics that come up in scientific computing. Another strength of this text is its clean organization, which allows one to jump around the different sections more easily as required by one’s explorations in MATLAB coding. MATLAB is one of the most user-friendly programming languages and so it is great for beginners. Though MATLAB is not as grounded in the fundamentals of computational logic as some languages, it is quite useful as a tool for many scientific computing applications such as modeling, image processing, and data analysis. It should be noted that MATLAB itself is not free, though if you are affiliated with a university, the school will probably pay for your license.

Python Programming: An Introduction to Computer Science – by Zelle || This text provides another excellent entry point into programming. Zelle’s book acts as a well-organized reference for learning the basics of Python. It is clear and reasonably concise. By contrast to MATLAB, Python is freely available. Another benefit of Python is the wide array of user-created software packages that you can easily install into your Python infrastructure. Many of these packages provide tools that handle specific areas of computational biology such as nucleic acid sequence analysis or biologically realistic neuron simulation.

Genetics

Essentials of Genetics – by Klug, Cummings, Spencer, Palladino, Killian || A standard text which introduces the various branches of genetics. Though there is perhaps not enough focus on modern techniques for my personal taste, I do appreciate the clarity of this book’s molecular genetics sections.

Gene Cloning and DNA Analysis: An Introduction – by Brown || Excellent book which describes molecular genetics techniques. It is concise and clear and yet still covers a lot of important methods in sufficient detail to convey real understanding.

Immunology

Cellular and Molecular Immunology – by Abbas, Lichtman, Pillai || Explains immunological principles in a thorough yet digestible way. It features very consistent diagrams which carefully represent specific molecules and cell types with the same images throughout the book.

Basic Immunology: Functions and Disorders of the Immune System – by Abbas, Lichtman, Pillai || This text is essentially a more concise version of Cellular and Molecular Immunology. Since it is written by the same authors, it also features its sister text’s helpfully consistent diagrams. 

Mathematics

Fundamentals of Differential Equations and Boundary Value Problems – by Nagle, Saff, and Snider || Differential equations are vitally important for modeling and simulation in biology, so if you want to go into any kind of biotechnology-related field, you should learn about this branch of mathematics. This text covers differential equations in a clear manner, provides lots of good exercises, and focuses on application rather than theory.

Linear Algebra: Step by Step – by Kuldeep Singh || Linear algebra is another area of mathematics which is vitally important for modeling and simulation in biology and bioengineering fields. This book goes over linear algebra in a clear fashion, has some illustrations to aid intuitive understanding, includes many good exercises, and emphasizes application rather than theory.

Microbiology

Brock Biology of Microorganisms – by Madigan, Bender, Buckley, Sattley, Stahl || For those who want to explore infectious disease and/or synthetic biology, it can be valuable to get acquainted with microbiology. This authoritative text is friendly to beginners in biology and has strong illustrations.

Molecular and Cellular Biology of Viruses – by Lostroh || This is a good book for virology in general. It has very pretty illustrations which are quite helpful to the reader. I do think that the book meanders too much in its explanations. The organization of the book as a whole seems a little haphazard as well. Nonetheless, this text can serve as a good reference if you want to read up on a specific type of virus and are looking for intuitive comprehension of its mechanisms.

Molecular biology

Molecular Biology of the Cell – by Alberts, Johnson, Lewis, Raff, Roberts, Walter || A comprehensive and yet approachable book on molecular biology. It has numerous excellent illustrations, a crucial feature in any molecular biology text. It thoroughly covers a large array of important topics. There are even supplemental digital chapters on further topics in molecular biology for interested readers.  

Essential Cell Biology – by Alberts, Hopkin, Johnson, Morgan, Raff, Roberts, Walter || Though this book is somewhat less detailed and thorough than the Molecular Biology of the Cell, it provides a more concise introduction to cell biology, while still covering enough detail to grant a good understanding of the subject. It also has great illustrations.

Neuroscience

Neuroscience: Exploring the Brain – by Bear, Connors, Paradiso || This book talks about a wide range of topics in neurobiology, so it is useful for introducing neuroscience as a broad field of study. I found the chapters on sensory neuroscience to be especially strong. In my admittedly biased opinion, the book neglects computational neuroscience and modern neuroscientific techniques. If you are coming from a highly mathematical background and/or wanting to go into a mathematically-focused field of neuroscience, you might want to supplement this text with some computational neuroscience books (see the intermediate and advanced sections of this page).

Organic chemistry

Organic Chemistry as a Second Language: First Semester Topics – by Klein || Klein’s short books on organic chemistry are amazing at helping the reader to understand the core principles of the subject. The first semester topics text is especially good for explaining the principles governing structure and mechanisms in organic chemistry.

Organic Chemistry as a Second Language: Second Semester Topics – by Klein || The second installment in Klein’s short texts on organic chemistry is similarly fantastic for gaining intuitive understanding. It goes into more depth on why certain reaction mechanisms happen as well as covering spectroscopy topics.

Organic Chemistry – by Klein || Klein’s full-length textbook provides further detail on organic chemistry while still emphasizing skills and principles rather than memorization.

Physiology

Principles of Anatomy and Physiology – by Tortora and Derrickson || Very long book, but wonderfully illustrated, clearly explained, and highly informative. I really appreciate how this text discusses molecular biology and biochemistry in the context of human physiology. It includes a wealth of fascinating details on how physiology works from the molecular level on up to the whole body. I especially enjoyed the chapter on endocrinology. For those who are medically inclined, there is also a lot of detail on the anatomical terminology (but this can easily be skimmed if you are not planning on going into medicine). Finally, there are numerous boxes which discuss specific diseases and other clinical subjects of special interest.

Plant biology

Raven Biology of Plants – by Evert and Eichhorn || An authoritative text on plant biology. Though I never got into this book much, I have heard great reviews from others. It covers a wide range of topics in botany and offers clear explanations as well as very nice illustrations and photographs. It spends a lot of time reviewing content from other areas of biology, which can be good or bad depending on your level of background.

Upper intermediate:

Books which cover more specialized topics in various subfields of biology or cover broader fields of biology in more depth. In contrast to the previous texts, these books tend to go into more detail and assume that the reader has more background. They are often employed in upper-level undergraduate elective courses. It should be noted that the degree of background required for my “lower intermediate” and “upper intermediate” categories is a matter of opinion. People may find certain texts more challenging or less challenging depending on their background and learning style. That said, I think that these categories can still serve as a rough guide for those seeking to expand their knowledge of the biological sciences.

Biochemistry

Introduction to Proteins: Structure, Function, and Motion – by Kessel and Ben-Tal || Discusses protein biochemistry and biophysics. This text does not go into great mathematical detail (it only includes relatively simple equations), but it does discuss the conceptual underpinnings of biophysical phenomena in a lot of detail. As an example, it contains some excellent biophysical explanations of why protein folding is such a challenging computational problem. The book also provides a wealth of information about how proteins operate in the larger cellular and physiological contexts. The illustrations are only moderately attractive, but still helpful from a practical perspective.

Biochemistry – by Voet and Voet || Though I have not personally used this book much, I have heard from a number of sources that it is an excellent text, so I wanted to include it here. Voet’s textbook is known for going into a lot of detail, so it should serve you well if you are looking for a comprehensive discussion of general biochemistry. It also has very good illustrations.

An Introduction to Medicinal Chemistry – by Patrick || Beautiful book on drug design, drug development, and how drugs interact with the body. This textbook is really great because it clearly explains the fundamental principles of medicinal chemistry in a highly generalizable fashion. Its writing and diagrams really help the reader to understand the “why” underlying pharmacology. The text is also quite concise, direct, and practical in its presentation.

Developmental biology

Developmental Biology – by Gilbert and Barresi || This book contains impressive details on the development of various organisms. It has beautiful diagrams and describes complicated signaling pathways in an engaging and meaningful manner. When I read Gilbert’s text, I get excited about how the process of organismal development follows a gorgeously complex extrapolation of fundamental chemical logic.

Imaging

Fluorescence Microscopy: From Principles to Biological Applications – edited by Ulrich Kubitscheck || An excellent introduction to the engineering principles of fluorescence microscopy. This book provides background on optical physics, explains the physical mechanisms behind key types of modern fluorescence microscopy systems (e.g. confocal microscopy, light-sheet microscopy, etc.), and discusses how fluorescence itself works and is applied. While the text does not shy away from using the necessary mathematical tools to properly explain the subject, it is clear enough that even readers with relatively light backgrounds in physics should find it reasonably understandable.

Introduction to Medical Imaging: Physics, Engineering and Clinical Applications – by Barrie Smith and Webb || Clear and well-organized introduction to the main modalities of medical imaging. This text explains physical principles behind the operation of technologies such as magnetic resonance imaging, x-ray computed tomography, ultrasound, and more. It also discusses some important concepts in computational image processing. While mathematics certainly plays a key role in this book, it is overall fairly light on quantitative aspects. Depending on your goals, this can be advantageous or a drawback. The illustrations are helpful from a practical perspective, though not especially lush.

Microbiology

Bacterial Pathogenesis: A Molecular Approach – by Wilson, Winkler, Ho || Really nice book on the molecular mechanisms of bacterial pathogenesis. This book has a fair amount of detail on the subject but explains it clearly. I own the 3rd edition rather than the more recent 4th edition, but I have had a chance to look through the 4th edition. It should be noted that the 4th edition has major updates including beautiful full-color illustrations which greatly enhance its explanatory power. The 3rd edition already had quite helpful diagrams, but the 4th edition appears to have taken this to a new level entirely.

Virology: Principles and Applications – by Carter and Saunders || This virology text is less comprehensive than many other virology books, but it makes up for this in that it explains viruses in a highly concise and pragmatic manner. The sections on bacteriophages and HIV are especially strong. For the reader who seeks to gain clear and direct understanding of the key molecular mechanisms used by viruses, this text is excellent.

Principles of Virology – by Flint, Racaniello, Rall, Skalka, Enquist || This book comes in two volumes. The first emphasizes molecular biology of viruses and the second emphasizes the pathogenesis and control of viruses. The diagrams are quite consistent, beautiful, and helpful. The text explains clearly and covers a lot of valuable topics. As a result of its thoroughness, this book may seem somewhat overwhelming, but it still is excellent as a reference and as a general source of virology knowledge.

Molecular biology and genetics

Molecular Biology of the Gene – by Watson, Baker, Bell, Gann, Levine, Losick || Classic text which discusses molecular genetics at a somewhat higher level than a typical introductory molecular biology book. Great illustrations and clear explanations aid the reader’s understanding of the intricate molecular machines which tirelessly carry out the myriad of tasks necessary to run the genome and transcriptome. The book is fairly long, but if you already know some molecular biology, you can certainly jump around to learn more details about specific areas of interest.

Molecular Genetics of Bacteria – by Snyder, Peters, Henkin, Champness || Similar to Watson’s text (above), but specifically covering bacterial molecular genetics rather than molecular genetics in general. In the 4th edition, the illustrations convey strong understanding of molecular mechanisms, though they are not as sumptuous as the diagrams in some biology books. In the 5th edition, the illustrations are both sumptuous and convey strong understanding of molecular mechanisms. There is a lot of great material here which can be especially useful for biohackers (and other researchers) who want to use the bacterial cell as a chassis for synthetic biology.

Neuroscience

Cognition, Brain, and Consciousness: Introduction to Cognitive Neuroscience – by Baars and Gage || Discusses cognitive neuroscience from both neuropsychological and neurophysiological perspectives. This text goes over a lot of psychological experiments for those who are interested in behavioral neuroscience, but also discusses mechanisms for those who want to focus more on the underlying ways that the brain operates. In my opinion, the largest drawback of this book is that it is weak on cellular neurophysiology.

Fundamentals of Computational Neuroscience – by Trappenberg || An excellent introduction to computational neuroscience for someone coming into the area from a less quantitatively-focused background. You will still need to know calculus and maybe a small amount of differential equations, but the book is less mathematically intense than most other computational neuroscience texts. Furthermore, the book explains key ideas from areas of mathematics such as linear algebra and probability so that the reader does not necessarily have to already know these subjects. It is fairly concise yet still clearly explains a wide variety of topics from the field.

Physical chemistry

Molecular Driving Forces: Statistical Thermodynamics in Biology, Chemistry, Physics, and Nanoscience – by Dill and Bromberg || This book features elegant explanations of how statistical thermodynamics and molecular physics apply to biology and nanotechnology. In my opinion, one of its strengths is its excellent organization. The text also features very clean formatting which makes it a smoother read. Though this text is mathematics-focused, it reviews key concepts in probability and multivariable calculus for readers who have less quantitative backgrounds. There are some great chapters on foundational topics (e.g. entropy, the Boltzmann distribution, electrostatics, etc.) as well as numerous chapters on exciting applications such as polymer physics, biochemical machines and nanomachines, and cooperative binding.

Physiology

The Biology of Cancer – by Weinberg || This book provides an amazing introduction to the molecular biology, genetics, biochemistry, and treatment of cancer. Lots of great content on tumor pathogenesis from perspectives of cell signaling, DNA repair and recombination, tissue microenvironment, immunobiological aspects, virology, and more. The book features a wealth of breathtaking diagrams and histological photographs which are colorful, detailed, and highly informative. Though some of the book goes through a lot of basic molecular biology review, readers who feel comfortable with that material can easily skip to more advanced sections.

Advanced:

Books that cover specialized topics in depth and books that involve somewhat complicated mathematics are listed here. These texts typically assume that you have a fair amount of background. They are usually employed at the senior undergraduate level or at the graduate level (but please do not let this discourage you from trying them out regardless). Note that a few of these might be called monographs rather than textbooks. Because of the breadth of the biological sciences, there are many thousands of possible titles to include in this section, so please realize that these texts represent a small sampling.

Biochemistry

Protein Actions: Principles and Modeling – by Bahar, Jernigan, Dill || Excellent text on the biophysics of proteins. This book goes through a lot of challenging content on physical chemistry and computational modeling, yet it is presented in a very understandable way. Full color illustrations, clearly organized equations, and elegant explanations contribute to its pedagogical strength.

Genetics

Epigenetics – by Allis, Caparros, Jenuwein, Reinberg || Very detailed but also very rewarding, this book goes over epigenetics in a series of engaging chapters written by expert authors. Despite having different authors for different chapters, the book uses consistent illustrations throughout. The illustrations are also of high quality and are in full color, which helps to motivate the reader and aids understanding. This text covers the epigenetics of a series of model organisms as well as a myriad of key topics in mammalian epigenetic research.

Mobile DNA III – edited by Craig, Chandler, Gellert, Lambowitz, Rice, Sandmeyer || Very long and highly technical, this monograph delves deep into research on topics such as transposons, recombination, and programmed DNA rearrangements. Despite its technical character, this book still includes a myriad of helpful (and colorful) diagrams and usually has good explanations. I especially enjoyed the chapter on integrons.  

Imaging

Fundamentals of Biomedical Optics || A good text on microscopy and other forms of imaging as well as the underlying optical physics involved in the engineering of imaging systems. The book is well-organized, engagingly illustrated, detailed, and emphasizes generalizable principles. Many parts of this text can be a struggle for a reader without a strong physics background, but this makes sense given the subject matter and level of depth.

Nanotechnology

Bioconjugate Techniques – by Hermanson || A great reference text for those interested in nanobiotechnology, drug delivery, contrast agents, and other areas involving bioconjugates. This book is filled with beautiful diagrams which aid understanding. The explanations are less concise than would be ideal, though they are still effective. The text also provides lots of clear laboratory protocols for interested researchers.

The Nature of the Mechanical Bond: From Molecules to Machines – by Bruns and Stoddart || Beautiful and comprehensive text on supramolecular chemistry, an area which is highly relevant to bioengineering disciplines. It focuses on the synthesis and dynamics of supramolecular structures which perform desired mechanical actions. The book is somewhat long due to its high level of detail and coverage, but it is gorgeously illustrated and well-written. There is a fair amount of historical content included throughout and the first chapter discusses some connections between supramolecular chemistry and art. I would recommend having a strong understanding of your chemical thermodynamics, chemical kinetics, organic chemistry, and perhaps even some organometallic chemistry when reading this book. While this kind of background knowledge is not absolutely necessary, it can certainly help to get more out of the text.

Neuroscience

Dendrites – edited by Stuart, Spruston, Häusser || Beautiful text which goes through the biology of dendrites in a series of engaging chapters by expert authors. Exceptionally well-made diagrams (with full color also) help the reader to understand concepts and useful tables facilitate referencing of detailed information. One drawback of the book is that it is lacking in concision, though this is partly due to the need to discuss ambiguity in content at the frontiers of dendrite research.  

Handbook of Brain Microcircuits – edited by Shepherd and Grillner || This book provides a series of short reviews on the mechanistic workings of neuronal microcircuits in both vertebrate and invertebrate systems. Though brief, each chapter packs in a lot of interesting information. As with many of the texts I have chosen for this list, the text features many full color diagrams to aid the reader. If you want to see a myriad of examples of the precise mechanisms which produce cognition and behavior, this book is excellent. Of course, the book is far from comprehensive; there are many papers which examine other neural circuits and there remains a vast universe of neural circuits still waiting to be uncovered.

Neuronal Dynamics: From single neurons to networks and models of cognition – by Gerstner, Kistler, Naud, Paninski || An elegantly-written computational neuroscience book which has been made freely available by the authors online. Lots of mathematical modeling is discussed in this text, but it explains the mathematics clearly and does not muddle understanding through unnecessary digressions. Note that this book focuses much more on the mathematical models than on actual coding (depending on your goals, you may find this beneficial or detrimental). This textbook is great for facilitating deeper understanding of computational neuroscience.

Fundamentals of Brain Network Analysis – by Fornito, Zalesky, Bullmore || An excellent text on using graph theory in neuroscience. It is beautifully illustrated, well-organized, and clearly explained. The mathematical tools of graph theory and complex networks are made accessible to those coming from a biological background. My only complaint about this book is that it is somewhat lacking in conciseness. My personal view is that it would have been possible to explain the subject more concisely without losing out on the depth and other beneficial qualities. Nonetheless, the book can be very rewarding (and enjoyable).

cover images are from Amazon.com

banner image is from ThailandTatler.com

List of some interesting people



Disclaimer: this list is non-comprehensive; these are just the people who happened to pop into my head. There are certainly many more people who may belong on this list. Furthermore, everyone is “interesting” in his/her own unique way, so please do not feel bad if you do not happen to have made it into this personal compilation.

Adam Marblestone, Albert Einstein, Aleksei Aksimentiev, Allen Ginsberg, Anders Sandberg, Anton Arkhipov, Anushree Chatterjee, Arthur C. Clarke, Aubrey de Grey, Barack Obama, Bertrand Russell, Bill Gates, Bobby Fischer, Brian David Johnson, Carson Bruns, Charles Lieber, Christof Koch, Christopher Voigt, Cole Hugelmeyer, Conrad Farnsworth, Craig Venter, David Baker, David Pearce, Deblina Sarkar, Dhash Shrivathsa, Donald Ingber, Donna Haraway, Drue Kataoka, Easton LaChappelle, Ed Boyden, Elon Musk, Emily St. John Mandel, Emma Watson, Eric Betzig, Erik Drexler, Erin Smith, Fei Chen, Feng Zhang, Francis Collins, Francis Crick, Freeman Dyson, F. Scott Fitzgerald, Garry Kasparov, Gene Roddenberry, George Church, George Whitesides, Greg Bear, Greg Egan, Greta Thunberg, Grimes, Hannu Rajaniemi, Henry Markram, Hod Lipson, Isaac Asimov, Jack Andraka, Jacob Barnett, James Collins, James Watson, Jay Keasling, Jeff Gore, Jennifer Doudna, J.K. Rowling, John von Neumann, Kai Kloepfer, Karan Jerath, Karl Deisseroth, Karl Friston, Kazuo Ishiguro, Ken Rinaldo, Kevin Esvelt, Kurt Gödel, Kwanghun Chung, Laura Deming, Liz Parrish, Magnus Carlsen, Mark Bathe, Martin Fussenegger, M.C. Escher, Neri Oxman, Nicole Ticea, Nikola Tesla, Octavia Butler, Orit Peleg, Pamela Silver, Peter Diamandis, Peter Singer, Poppy, Ray Kurzweil, Raymond Wang, Robert Langer, Robert McCall, Ron Weiss, Rosalind Franklin, Ryan Robinson, Sanath Devalapurkar, Sebastian Seung, Simon Stålenhag, Srinivasa Ramanujan, Stephen Baxter, Steven Pinker, Suganth Kannan, Taylor Swift, Taylor Wilson, Terence Tao, Theodore Berger, Thomas Crowther, Tracy K. Smith, Wei-Chung Allen Lee, William Shih, Yayoi Kusama

Cover image: the photographs which comprise the cover image were taken from various online sources. If you own one of these pictures and would like for it to be removed from the cover image, feel free to let me know and I will do so.

Note on links: as this page ages, some of the links may begin to break due to changes at the target sites. Feel free to let me know if you see this happen with any particular entries and I will see what I can do about fixing them.

Notes on Classical Mechanics



PDF version: Notes on Classical Mechanics – by Logan Thrasher Collins

These notes will cover the basics of Lagrangian mechanics and Hamiltonian mechanics using linear oscillatory motion as a lens (including coupled oscillations). I will assume that the reader already has knowledge of Newtonian mechanics at the level of a typical introductory physics course. That said, these notes are targeted towards readers who want to apply mechanics in engineering disciplines. I will not go much into derivations or at all into proofs, but rather present mechanics as a tool for solving engineering problems.

Lagrangian mechanics

Overview of Lagrangian mechanics

Although Newtonian mechanics is useful in many situations, there exist many mechanical systems for which Newtonian methods are difficult to apply. One way of circumventing a cumbersome Newtonian problem is to utilize the Lagrangian method instead.

Lagrange’s method centers around a quantity known as the Lagrangian L. This quantity is equal to the system’s kinetic energy T minus the system’s potential energy V (see the first expression below). The Lagrangian is needed for a differential equation called the Euler-Lagrange equation (see the second expression below). Here, q represents the generalized coordinates of the system. The concept of generalized coordinates will be explained subsequently. As one example, q could equal the one-dimensional position x of a free particle. When the Euler-Lagrange equation is simplified, it reduces to a differential equation that describes the motion of the desired system. Solving that differential equation gives the equations of motion for the system.

L = T – V

(d/dt)(∂L/∂q̇) – ∂L/∂q = 0

Generalized coordinates

Generalized coordinates qi are any set of independent coordinates that can uniquely specify the configuration of a system. Because they are independent variables, generalized coordinates cannot exhibit any functional relationship to each other. Because they specify the configuration of the system, if all the generalized coordinates are known, all positions of every part of the system can be found.

In practice, generalized coordinates are typically displacements and angles. Cartesian coordinates, polar coordinates, and more can be equivalent to generalized coordinates. Note that one system can have many sets of generalized coordinates. However, it is typically most useful to choose a set that has the fewest possible generalized coordinates.

When the fewest possible generalized coordinates are chosen, they can be used to determine the system’s configuration in any other coordinate system. For instance, consider a 2D pendulum with a particle at the end of a swinging rod of length p. One of the sets of generalized coordinates with the fewest possible variables consists of just θ, the angle between the pendulum and its equilibrium position (straight down). To obtain the Cartesian coordinates in terms of θ, one can use the expression (x, y) = (p sinθ, p cosθ). As is clear from this example, the single generalized coordinate fully specifies the configuration of the pendulum system.

To further explore generalized coordinate systems, some key examples of generalized coordinates are given in the following table. While there are infinite possible systems to explore, these examples should help to give some intuition regarding how to implement generalized coordinates.

[Table: examples of generalized coordinates, including the free particle in 1D, 2D, and 3D, the single 2D pendulum, the double 2D pendulum, and the 1D spring–mass system.]

Lagrangian mechanics is typically most useful for constrained systems rather than unconstrained systems. Among the systems in the table above, the free particle systems are unconstrained, while the single 2D pendulum, the double 2D pendulum, and the 1D spring system are constrained.

Using the Euler-Lagrange equation

As mentioned earlier, employing the Euler–Lagrange equation first requires computing the Lagrangian L = T – V. Next, one must find the derivatives ∂L/∂q and ∂L/∂q̇. Note that, since q̇ is the time derivative of a position coordinate, it is a velocity variable. Once these derivatives are found, one must differentiate ∂L/∂q̇ with respect to time. Putting these results back into the Euler–Lagrange equation and simplifying produces a differential equation that describes the motion of the system. Solving the differential equation gives the system’s equations of motion. Note that it is often necessary to solve the resulting differential equation numerically.

To further illustrate how to apply the Euler–Lagrange equation, consider the system of a mass attached to a fixed spring. The following series of equations shows how to find the differential equation describing this system. Recall that the kinetic energy of a moving mass is (1/2)mv² and the potential energy of a spring is (1/2)kx².

T = (1/2)mẋ²,  V = (1/2)kx²,  L = T – V = (1/2)mẋ² – (1/2)kx²

∂L/∂x = –kx,  ∂L/∂ẋ = mẋ,  (d/dt)(∂L/∂ẋ) = mẍ

mẍ + kx = 0
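For readers who prefer to verify such derivations computationally, a computer algebra system can apply the Euler–Lagrange equation mechanically. Below is a minimal sketch using Python with the SymPy library (my choice of tool here, not something prescribed by these notes) that recovers mẍ + kx = 0 for the spring–mass system.

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')

# Lagrangian L = T - V for a mass on a fixed spring
L = sp.Rational(1, 2) * m * sp.diff(x(t), t)**2 - sp.Rational(1, 2) * k * x(t)**2

# Euler-Lagrange equation: d/dt(dL/d(xdot)) - dL/dx = 0
eom = sp.diff(sp.diff(L, sp.diff(x(t), t)), t) - sp.diff(L, x(t))
print(sp.simplify(eom))  # k*x(t) + m*Derivative(x(t), (t, 2)), i.e. m*x'' + k*x = 0
```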

When applying the Lagrangian method, it is useful to understand ignorable coordinates. A coordinate qᵢ is ignorable if L does not depend on it, so that ∂L/∂qᵢ = 0. By the Euler–Lagrange equation, the corresponding generalized momentum pᵢ = ∂L/∂q̇ᵢ must then be constant. Ignorable coordinates can simplify Lagrangian problems since L no longer depends on the ignorable coordinate q. However, it should be noted that q̇ is not always a constant, so the Lagrangian often still depends on q̇.

Another advantage of the Lagrangian method over the Newtonian method is that any set of generalized coordinates q can be transformed to a new set of generalized coordinates Q(q) where each new Qi is some function of the original q1 … qn and the Euler-Lagrange equations will still be valid with respect to the new coordinates.

Linear oscillations

Simple harmonic motion

Simple harmonic motion (SHM) is an important type of oscillation which happens when the acceleration of a mass is linearly proportional to its displacement from an equilibrium position and is directed towards the equilibrium position. In SHM, there is no loss of energy. SHM in 1D is mathematically described by the following differential equation. Some examples of SHM include the oscillations of simple springs and pendulums. For a simple spring system, ω² = k/m. For a simple pendulum system, ω² = g/L. (The constant ω is the angular frequency).

ẍ = –ω²x

There are several equivalent ways of writing the solution to the SHM differential equation, each of which has benefits and drawbacks. The exponential solution to the SHM equation is the first expression given below. The sine and cosine solutions arise by using Euler’s formula e^(iωt) = cos(ωt) + i sin(ωt) and are given by the second expression below. B₁ is the initial position and ωB₂ is the initial velocity.

x(t) = C₁e^(iωt) + C₂e^(–iωt)

x(t) = B₁cos(ωt) + B₂sin(ωt)

Another equivalent way of writing the solution to the SHM equation is to use the phase-shifted cosine solution. The phase-shifted cosine solution is given as the first equation below. Here, A is a constant describing the amplitude of the oscillations. The constant A can also be computed using the constants B₁ and B₂ which were described above. Finally, the solution to the SHM equation can be written as the real part of a complex exponential. This version of the solution is given by the second equation below.

x(t) = A cos(ωt – δ),  A = √(B₁² + B₂²),  δ = arctan(B₂/B₁)

x(t) = Re[A e^(i(ωt – δ))]
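As a quick numerical check on these equivalences (a sketch using NumPy, which these notes do not require), the sine–cosine form and the phase-shifted cosine form trace identical motions when A = √(B₁² + B₂²) and δ = arctan(B₂/B₁):

```python
import numpy as np

omega, B1, B2 = 2.0, 1.0, 0.5                  # example parameter values
t = np.linspace(0.0, 10.0, 1000)

x_sincos = B1 * np.cos(omega * t) + B2 * np.sin(omega * t)

A = np.hypot(B1, B2)                           # A = sqrt(B1^2 + B2^2)
delta = np.arctan2(B2, B1)                     # phase shift delta
x_phase = A * np.cos(omega * t - delta)

print(np.allclose(x_sincos, x_phase))          # True: the two forms agree
```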

To extend SHM to the 3D case, the differential equation describing the system is split into three independent differential equations for the x, y, and z directions. Solving these differential equations gives the equations of SHM for x, y, and z. The 2D case is the same, but with only two independent differential equations. There are a variety of interesting graphical phenomena that come out of plotting 2D and 3D SHM equations, especially when there are different values of ω, A, or δ for x, y, and z.

ẍ = –ω_x²x,  ÿ = –ω_y²y,  z̈ = –ω_z²z

x(t) = A_x cos(ω_x t – δ_x),  y(t) = A_y cos(ω_y t – δ_y),  z(t) = A_z cos(ω_z t – δ_z)

Damped oscillations

When some force resists oscillatory motion (e.g. friction, air resistance, etc.), causing energy loss over time, the resulting system undergoes damped oscillations. This type of system is described by the differential equation where b is a damping constant (see the first expression below). To make later calculations easier, the differential equation can be rewritten with alternative constants 2β = b/m and ω₀² = k/m. The general solution to this differential equation is given by the second formula below.

mẍ + bẋ + kx = 0  →  ẍ + 2βẋ + ω₀²x = 0

x(t) = e^(–βt)[C₁e^(t√(β² – ω₀²)) + C₂e^(–t√(β² – ω₀²))]

To understand the solution above, it is helpful to consider three cases: underdamping where β < ω₀, overdamping where β > ω₀, and critical damping where β = ω₀. The solution above simplifies to different forms depending on whether β < ω₀, β > ω₀, or β = ω₀. These results are summarized in the following table. After the table, plots of x(t) for the underdamped, overdamped, and critically damped cases are given.

Underdamped (β < ω₀): x(t) = Ae^(–βt)cos(ω₁t – δ) with ω₁ = √(ω₀² – β²); decaying oscillations.

Overdamped (β > ω₀): x(t) = C₁e^(–(β – √(β² – ω₀²))t) + C₂e^(–(β + √(β² – ω₀²))t); non-oscillatory decay.

Critically damped (β = ω₀): x(t) = (C₁ + C₂t)e^(–βt); fastest non-oscillatory return to equilibrium.

[Figure: plots of x(t) for the underdamped, overdamped, and critically damped cases.]
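These three regimes are also easy to explore numerically. The sketch below (using SciPy’s solve_ivp, my choice of integrator rather than anything prescribed by these notes) integrates ẍ + 2βẋ + ω₀²x = 0 for underdamped, critically damped, and overdamped values of β.

```python
import numpy as np
from scipy.integrate import solve_ivp

omega0 = 1.0                     # natural frequency
x0, v0 = 1.0, 0.0                # initial position and velocity

def damped(t, y, beta):
    x, v = y
    return [v, -2.0 * beta * v - omega0**2 * x]   # x' = v, v' = -2*beta*v - omega0^2*x

t_eval = np.linspace(0.0, 20.0, 500)
for beta, label in [(0.2, 'underdamped'), (1.0, 'critically damped'), (3.0, 'overdamped')]:
    sol = solve_ivp(damped, (0.0, 20.0), [x0, v0], args=(beta,), t_eval=t_eval)
    print(f'{label}: x(t=20) = {sol.y[0, -1]:.6f}')
```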

Driven damped oscillations

When an external force influences a damped oscillating system, driven damped oscillations occur. Mathematically, this is described by setting the differential equation for the damped oscillator equal to a function f(t) instead of zero. Here, f(t) represents the amount of external force acting on the system as a function of time.

ẍ + 2βẋ + ω₀²x = f(t)

To find the general solution to the above differential equation, one must first solve the differential equation where f(t) = 0. This solution, called the homogeneous solution xₕ, is already known from the undriven damped oscillation case. Next, one must find the particular solution xₚ. The particular solution is any solution which solves the differential equation for the given nonzero force function f(t). The general solution to the differential equation of driven damped oscillations is equal to xₕ + xₚ.

x(t) = xₕ(t) + xₚ(t)

One useful special case to consider is when a driving force of the form f(t) = f₀cos(ωt) is applied. In this case, f₀ is the amplitude of the driving force divided by the oscillator’s mass and ω is the driving force’s frequency. Note that ω is a distinct parameter from the oscillator’s natural frequency ω₀. The differential equation for this system and its general solution are given below. Note that the non-cosine term in the expression for x(t) is the homogeneous solution. Because this non-cosine term decays over time, it only contributes to the waveform during the early stages of the oscillations.

ẍ + 2βẋ + ω₀²x = f₀cos(ωt)

x(t) = A cos(ωt – δ) + e^(–βt)[C₁e^(t√(β² – ω₀²)) + C₂e^(–t√(β² – ω₀²))]

A² = f₀²/[(ω₀² – ω²)² + 4β²ω²],  δ = arctan(2βω/(ω₀² – ω²))

Resonance

Consider the previously described case of the driven damped oscillator where the driving force is a sinusoidal function (which includes cosine). More specifically, let β take on a fairly small value. In this situation, when the frequency ω of the driving force is close to the frequency of the oscillator ω₀, the amplitude of the driven oscillations grows very large.

The reason for this comes from the denominator of the amplitude A (see previous section). When β is small, the (ω₀² – ω²)² term is responsible for determining most of the value of the denominator. If ω₀ and ω are close together, (ω₀² – ω²)² takes on a very small value. Since this term is in the denominator, a very small value leads to a very large amplitude A. This phenomenon is called resonance. To better understand resonance, see the plot of A² versus ω below.

[Figure: plot of A² versus ω, showing a sharp resonance peak near ω = ω₀.]

Resonance can be further characterized by computing the maximum amplitude Amax of the oscillations, which occurs where ω = ω₀. This quantity is given by the following equation.

Amax = f₀/(2βω₀)

Another way to characterize resonance is by finding the quality factor or Q factor. The Q factor describes the sharpness of the resonance peak and is often defined by the equation below. Note that 2β approximately equals the full width at half maximum (FWHM), the width of the resonance peak where A² ≥ Amax²/2. When Q is large, the resonance peak is narrow and vice versa. The Q factor is also useful because Q/π equals the number of cycles the oscillator completes during one decay time. The decay time is defined as the amount of time it takes for the amplitude to drop to 1/e of its initial value.

Q = ω₀/(2β)
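To make these relationships concrete, the following sketch (NumPy assumed; the parameter values are arbitrary) evaluates the amplitude formula across driving frequencies, compares the peak to f₀/(2βω₀), and checks that the full width at half maximum of A² is roughly 2β.

```python
import numpy as np

omega0, beta, f0 = 1.0, 0.05, 1.0              # arbitrary example values
omega = np.linspace(0.5, 1.5, 100001)

# steady-state amplitude squared of the driven damped oscillator
A2 = f0**2 / ((omega0**2 - omega**2)**2 + 4.0 * beta**2 * omega**2)

print(np.sqrt(A2.max()), f0 / (2.0 * beta * omega0))   # peak amplitude vs Amax formula

band = omega[A2 >= A2.max() / 2.0]             # frequencies within half maximum of A^2
print(band[-1] - band[0], 2.0 * beta)          # FWHM is approximately 2*beta

print('Q =', omega0 / (2.0 * beta))            # Q factor = 10 here
```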

Finally, it can be useful to note that the phase shift at resonance is π/2. The reason for this is that ω₀² – ω² = 0 at resonance and the equation for the phase shift is arctan(2βω/(ω₀² – ω²)). The zero in the denominator results in an arctangent of infinity, which equals π/2.

Coupled linear oscillations

Case of two masses linked by springs

To understand coupled linear oscillations, it is often helpful to consider the case of two masses linked by springs that are fixed to walls as seen in the image below. Here, m₁ and m₂ refer to the masses and k₁, k₂, k₃ are the spring constants.

[Figure: two masses m₁ and m₂ connected in a line by three springs with spring constants k₁, k₂, and k₃, with the outer springs attached to fixed walls.]

This system can be solved by Newtonian or Lagrangian methods. Here, the Lagrangian approach will be employed. Recall that the Lagrangian L = T – V. The kinetic energy T is found as the sum of the kinetic energies of the masses as shown in the first equation below. The potential energy V requires carefully evaluating the extensions of the springs. In this system, the respective extensions of the three springs are x₁, x₂ – x₁, and –x₂. Using this information and Hooke’s law Fₛ = –kx, the potential energy is given by the second equation below. The Lagrangian L is given by the third equation below.

T = (1/2)m₁ẋ₁² + (1/2)m₂ẋ₂²

V = (1/2)k₁x₁² + (1/2)k₂(x₂ – x₁)² + (1/2)k₃x₂²

L = (1/2)m₁ẋ₁² + (1/2)m₂ẋ₂² – (1/2)k₁x₁² – (1/2)k₂(x₂ – x₁)² – (1/2)k₃x₂²

Using the Euler-Lagrange equation (see the first equation below), the Lagrangian above reduces to the equations of motion for the system (see the second equation below). By rearranging the spring constants, these equations of motion can be written in matrix form (see the equivalent third and fourth equations below).

(d/dt)(∂L/∂ẋᵢ) – ∂L/∂xᵢ = 0

m₁ẍ₁ = –(k₁ + k₂)x₁ + k₂x₂
m₂ẍ₂ = k₂x₁ – (k₂ + k₃)x₂

Mẍ = –Kx

M = [m₁, 0; 0, m₂],  K = [k₁ + k₂, –k₂; –k₂, k₂ + k₃]

Solutions to this system of equations can be written in the complex form as seen below. Here, p₁ and p₂ are arbitrary constants and the actual motions of the masses are determined by Re(z(t)). Note that, although the frequency ω is assumed to be the same for z₁(t) and z₂(t), there are actually two solutions for ω (this will be explained subsequently).

z(t) = (z₁(t), z₂(t)) = (p₁, p₂)e^(iωt)

By substituting the above equation into the matrix equation for the coupled oscillator system, the following eigenvalue equation can be obtained.

det(K – ω²M) = 0

The characteristic polynomial which results after taking the above determinant is a quadratic equation with two solutions for ω². As a result, there are two frequencies ω₁ and ω₂ at which the masses can oscillate. These are called the normal frequencies of the system. The equations governing the motion of the system at each normal frequency are called the normal modes of the system.

The general solution for the case of two masses linked by springs is given as follows. The vectors a₁ and a₂ are the eigenvectors from the eigenvalue equation of K. Note that this solution is a linear combination of the two normal mode solutions. As usual, the constants A₁, A₂, δ₁, and δ₂ are determined by initial conditions.

x(t) = A₁a₁cos(ω₁t – δ₁) + A₂a₂cos(ω₂t – δ₂)

To better understand normal modes, it can be helpful to investigate the specific case where k₁ = k₂ = k₃ = k and m₁ = m₂ = m. In this situation, the eigenvalue equation reduces to the first expression below. The normal frequencies are the solutions to this eigenvalue equation and are given by the second expression below.

det[2k – mω², –k; –k, 2k – mω²] = 0

ω₁ = √(k/m),  ω₂ = √(3k/m)
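The same normal frequencies can be recovered numerically by treating det(K – ω²M) = 0 as a generalized eigenvalue problem. Here is a minimal sketch using SciPy (my choice; these notes do not specify software) for the equal-mass, equal-spring case.

```python
import numpy as np
from scipy.linalg import eigh

m, k = 1.0, 1.0
M = np.diag([m, m])                        # mass matrix
K = np.array([[2.0 * k, -k],
              [-k, 2.0 * k]])              # stiffness matrix for k1 = k2 = k3 = k

# Solve K a = omega^2 M a for the squared normal frequencies and mode vectors
omega_sq, modes = eigh(K, M)
print(np.sqrt(omega_sq))                   # [1.0, 1.732...] = [sqrt(k/m), sqrt(3k/m)]
print(modes)                               # columns: in-phase and out-of-phase normal modes
```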

Similarly, there are two normal frequencies in the general case with arbitrary masses and spring constants. However, these normal frequencies are much more elaborate expressions and so will not be written out here. If one needs them, they can be obtained by solving the characteristic polynomial equation (most easily by using a computer algebra system).

Going beyond the case of the two masses linked by springs, for a similar system with N coupled masses, there are N normal frequencies and the equation of motion for each mass consists of superpositions of N normal modes. This principle also extends to other types of oscillators such as coupled pendulums.

Hamiltonian mechanics

Overview of Hamiltonian mechanics

To understand Hamiltonian mechanics, it can be helpful to first further examine Lagrangian mechanics. With Lagrangian mechanics, the n generalized position coordinates and their n derivatives define a set of possibilities called a state space. By using the Euler-Lagrange equation, the state space reduces to the equations of motion for the system. Each set of initial conditions then determines a unique path of the components of the system through state space.

For Hamiltonian mechanics, it is also important to reiterate the definition of the generalized momentum (see below), where qᵢ are the generalized coordinates of the system. Note that, if the qᵢ are Cartesian coordinates, then the generalized momentum is equivalent to the usual momentum.

pᵢ = ∂L/∂q̇ᵢ

While Lagrangian mechanics employs n generalized position coordinates and their n derivatives, Hamiltonian mechanics instead uses n generalized position coordinates and n generalized momenta. These n generalized position coordinates and n generalized momenta define the phase space of the system. Each set of initial conditions determines a unique path of the components of the system through phase space.

The Hamiltonian and Hamilton’s equations

To achieve this, the Hamiltonian H and Hamilton’s equations are used. The Hamiltonian is a quantity that is often equal to the total energy of the system. It is defined below, where the pᵢ are the generalized momenta, L is the Lagrangian, and the q̇ᵢ are the time derivatives of the generalized position coordinates.

H = Σᵢ pᵢq̇ᵢ – L

When the relationship between the generalized coordinates and the underlying Cartesian coordinates is independent of time (often the case), the Hamiltonian is equal to the total energy, H = T + V. However, the more general equation above should be used when the conversion between the generalized coordinates and Cartesian coordinates might depend on time.

Hamilton’s equations use the Hamiltonian to derive equations of motion for a system. By contrast to the Euler-Lagrange method which reduces a system with n degrees of freedom to n second-order differential equations, Hamilton’s method instead reduces a system to 2n first-order differential equations, which can sometimes be advantageous. Note that degrees of freedom are the number of independent parameters needed to define the state of a system. For many systems, the degrees of freedom are equal to the number of generalized coordinates (when this is the case, the system is called holonomic). Hamilton’s equations for i = 1, 2, 3… n are given as follows.

q̇ᵢ = ∂H/∂pᵢ,  ṗᵢ = –∂H/∂qᵢ

The results of Hamilton’s equations can be combined with each other to produce the equations of motion for a given system.

Using Hamilton’s equations

As an example of how to apply Hamilton’s equations, consider the system of two masses linked by three springs with fixed walls at the edges (see the image in the previous section). For this system, the Hamiltonian is equivalent to the total energy, H = T + V (since the generalized coordinates x₁ and x₂ are already Cartesian coordinates and their relationship to the underlying Cartesian coordinates does not depend on time). The Hamiltonian is given by the first equation below. Hamilton’s equations and their results are given by the subsequent lines of the equations below. Recall that the derivative of momentum is equivalent to force.

H = p₁²/(2m₁) + p₂²/(2m₂) + (1/2)k₁x₁² + (1/2)k₂(x₂ – x₁)² + (1/2)k₃x₂²

ẋ₁ = ∂H/∂p₁ = p₁/m₁,  ẋ₂ = ∂H/∂p₂ = p₂/m₂

ṗ₁ = –∂H/∂x₁ = –(k₁ + k₂)x₁ + k₂x₂

ṗ₂ = –∂H/∂x₂ = k₂x₁ – (k₂ + k₃)x₂
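Because Hamilton’s equations are first-order, they can be handed directly to a numerical integrator. The sketch below (SciPy assumed, with arbitrary masses and spring constants) integrates the four equations above.

```python
import numpy as np
from scipy.integrate import solve_ivp

m1, m2 = 1.0, 1.0
k1, k2, k3 = 1.0, 1.0, 1.0

def hamilton(t, y):
    x1, x2, p1, p2 = y
    # xdot_i = dH/dp_i and pdot_i = -dH/dx_i
    return [p1 / m1,
            p2 / m2,
            -(k1 + k2) * x1 + k2 * x2,
            k2 * x1 - (k2 + k3) * x2]

# start with mass 1 displaced and everything at rest
sol = solve_ivp(hamilton, (0.0, 20.0), [1.0, 0.0, 0.0, 0.0], max_step=0.01)
print(sol.y[:2, -1])                       # positions x1 and x2 at t = 20
```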

As with the Lagrangian method, when applying Hamilton’s equations, it is useful to understand ignorable coordinates. If a coordinate qᵢ is ignorable, the corresponding generalized momentum pᵢ must be constant. When the generalized momentum pᵢ is constant, –∂H/∂qᵢ = ∂L/∂qᵢ = 0 for the generalized coordinate qᵢ. Note that, if a generalized coordinate is ignorable for the Lagrangian approach, it is also ignorable for the Hamiltonian approach and vice versa.

Ignorable coordinates lead to an elegant simplification of the Hamiltonian. If a system has an ignorable coordinate q, then the Hamiltonian H will no longer depend on q and the corresponding momentum p will be absorbed into the Hamiltonian as a constant. As an example, consider a system with two generalized coordinates q₁ and q₂ where q₂ is an ignorable coordinate. The Hamiltonian will then take the form H(q₁, p₁, k) where k = p₂ is a constant. As a result, each ignorable coordinate decreases the number of degrees of freedom by one when employing the Hamiltonian approach. By contrast, this is not always true for the Lagrangian approach since, even if q is ignorable and p is a constant, q̇ is not always a constant.

An advantage of the Hamiltonian method over the Newtonian and Lagrangian methods is that Hamilton’s equations are even more flexible than the Euler–Lagrange equation when it comes to coordinate changes. Under certain conditions, changes of both the generalized coordinates and the generalized momenta of the forms Q(q,p) and P(q,p) preserve the validity of Hamilton’s equations (with respect to the new coordinates and momenta). When these changes preserve the validity of Hamilton’s equations, they are called canonical transformations. But as mentioned, canonical transformations only work under certain conditions. The conditions for a transformation to be canonical are given by the equations below. The subscripts denote variables which must be held constant when evaluating the derivatives in parentheses.

(∂Q/∂q)_p = (∂p/∂P)_Q

(∂Q/∂p)_q = –(∂q/∂P)_Q

The Hamiltonian method and phase space

Another advantage of the Hamiltonian method over the Lagrangian method is that Hamilton’s equations are automatically of the form dz/dt = h(z). There are many mathematical tools available for working with differential equations of this form. One of the most important of these tools is phase space analysis.

In Hamiltonian mechanics, the phase space vector is a 2n-dimensional vector z(q,p) where q comprises all of the generalized coordinates and p comprises all of the generalized momenta. Each value of z identifies a unique set of initial conditions for the system. With this notation, the equation dz/dt = h(z) expresses Hamilton’s equations as a single first-order differential equation. Here, h is a vector of the functions fᵢ = ∂H/∂pᵢ and gᵢ = –∂H/∂qᵢ.

dz/dt = h(z),  z = (q₁, …, qₙ, p₁, …, pₙ)

h(z) = (f₁, …, fₙ, g₁, …, gₙ),  fᵢ = ∂H/∂pᵢ,  gᵢ = –∂H/∂qᵢ

Trajectories in the phase space with axes given by the elements of z(q,p) are vital in Hamiltonian mechanics. Any point z₀ at a time t₀ defines a unique trajectory z(t) through phase space. Since phase space vectors have 2n elements, it is difficult to visualize phase space for systems with more than one generalized coordinate, though there are methods to aid such visualization.

It is important to note that, for a given point in phase space, only a single trajectory can pass through that point across all times t. If there appear to be two trajectories crossing the same point, the trajectories must represent the same path looping back on itself. This property follows from Hamilton’s equations.

As an example of a phase space trajectory, consider the one-dimensional harmonic oscillator. For this system, the Hamiltonian is H = T + V = p²/(2m) + (1/2)mω²x² where k = mω². Hamilton’s equations give ẋ = ∂H/∂p = p/m and ṗ = –∂H/∂x = –mω²x. Differentiating ẋ with respect to time gives ẍ = ṗ/m, so the equation of motion for the system is ẍ = –ω²x. The solution to this equation of motion is x = A cos(ωt – δ). As a result, the momentum of the system is given by p = mẋ = –mωA sin(ωt – δ). These expressions for x and p act as parametric equations that define phase space trajectories. The phase space trajectories for this system take the form of ellipses which each start from unique phase points (x, p) and which never cross over each other. Different values of A determine the unique trajectories.

[Figure: elliptical phase space trajectories of the one-dimensional harmonic oscillator for several values of A.]
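The elliptical trajectories are straightforward to generate from the parametric solution. This sketch (NumPy assumed, arbitrary parameter values) traces (x(t), p(t)) for several amplitudes and confirms that the Hamiltonian stays constant along each ellipse.

```python
import numpy as np

m, omega, delta = 1.0, 2.0, 0.0
t = np.linspace(0.0, 2.0 * np.pi / omega, 400)     # one full period

for A in [0.5, 1.0, 1.5]:
    x = A * np.cos(omega * t - delta)              # position along the trajectory
    p = -m * omega * A * np.sin(omega * t - delta) # conjugate momentum
    H = p**2 / (2.0 * m) + 0.5 * m * omega**2 * x**2
    # H should be constant, equal to 0.5*m*omega^2*A^2, all the way around
    print(A, H.min(), H.max())
```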

Reference: Taylor, J.R. (2005). Classical Mechanics. University Science Books.

Cover image: originally created by John Harris.

Introduction to the Physical Chemistry of Protein Folding



PDF version: Introduction to the Physical Chemistry of Protein Folding

Native and denatured states

Under a given set of conditions, a protein will exhibit a stable equilibrium configuration. Stable equilibrium configurations are conformational states which depend only on thermodynamics and not on kinetics. That is, a stable equilibrium configuration can be reached by allowing a protein to achieve thermodynamic equilibrium (under the given set of conditions). The steps that the protein takes to reach this configuration and the rates at which these steps occur are not relevant to stable equilibrium conformations since such factors involve kinetics.

The native structure of a protein and the denatured structure of a protein (under native and denaturing conditions respectively) represent two important types of equilibrium configurations. Because of this, many properties of folded and unfolded proteins can be expressed in terms of equilibrium thermodynamics rather than needing to involve kinetics. Recall that native structures occur under typical biological conditions and denatured structures occur under harsh conditions such as high salt or temperature.

Changes in protein structure are either reversible or irreversible. In the case of a reversible change, a protein will eventually return to its initial state if the initial conditions are reestablished. In the case of an irreversible change, a protein will not return to its initial state even if initial conditions are reestablished. As an example, the formation of covalent bonds can induce irreversible changes in proteins.

Equilibrium and protein folding

To measure protein stability experimentally, solutions of a protein are mixed with varying levels of denaturing agents or incubated at varying temperatures. The fraction of folded protein fN and the fraction of unfolded protein fD = 1 – fN are measured via spectroscopy or other techniques. These data can be used to make a denaturation curve.

To gain insight from this experiment, the data are used to find the folding free energy ΔGfold. The folding free energy can be computed using the following equation. K is a folding equilibrium constant, R is the gas constant, and T is the temperature in Kelvin.

eq.1:  ΔGfold = –RT·ln(K) = –RT·ln(fN/fD)
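As a brief illustration of this equation, the following Python sketch computes ΔGfold from folded-fraction measurements; the folded fractions and denaturant concentrations below are made-up numbers for illustration, not real data.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def delta_G_fold(f_N, T):
    """Folding free energy from the measured folded fraction f_N at
    temperature T (Kelvin), using K = f_N/f_D with f_D = 1 - f_N."""
    f_D = 1.0 - f_N
    K = f_N / f_D
    return -R * T * math.log(K)

# Hypothetical denaturation data: folded fractions at 298 K for
# increasing denaturant concentrations (illustrative numbers only).
for c, f_N in [(0.0, 0.99), (1.0, 0.90), (2.0, 0.50), (3.0, 0.10)]:
    print(f"c = {c} M: dG_fold = {delta_G_fold(f_N, 298.0)/1000:.2f} kJ/mol")
```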

When denaturing proteins via chemical methods, the denaturation curve is given by the function ΔGfold(c) where c is the concentration of the denaturant. ΔGfold(c) is typically a linear function.

Fig.1

When denaturing proteins via heat, the denaturation curve is given by the function ΔGfold(T) where T is the temperature in Kelvin. An instrument called a differential scanning calorimeter can apply varying temperatures to a protein solution and measure the amount of heat taken up or given off by the solution. As temperature increases, the heat absorption rises to a maximum level and then decreases, indicating that energy is invested to unfold the proteins until they are fully denatured. The point at which heat absorption reaches a maximum is called the denaturation temperature or melting temperature Tm. The excess heat capacity of unfolding ΔCp is the difference between the pre-denaturation baseline heat capacity and post-denaturation baseline heat capacity.

Since ΔGfold is the free energy change associated with the folding of a protein, it can be related to the enthalpy change ΔHfold and the entropy change ΔSfold using the following Gibbs free energy change equation.

eq.2:  ΔGfold = ΔHfold – T·ΔSfold

Protein folding energy landscapes

Energy landscapes are often used to visualize protein folding. Energy or free energy is plotted along the vertical axis while the configurational space of the protein chain is represented by the horizontal axes. Protein folding energy landscapes are roughly funnel-shaped, but with a variety of further peaks and valleys along the sides of the funnels. This illustrates that the protein tends towards a global energetic minimum (its folded state), but that to get to the folded state, the polypeptide chain must navigate past local energetic minima and energetic maxima.

Fig.4

Although energy landscapes are typically represented as 3-dimensional plots, configurational space often exists in hundreds of dimensions or more. For instance, a peptide consisting of 10 amino acids might have 150 atoms, each with x, y, and z coordinates. If this were used as the basis for the configurational space, the peptide’s configurational space would be 450-dimensional and its energy landscape would be 451-dimensional. While there are other ways of describing configurational space, it is difficult to reduce the number of dimensions to 2. As a result, the visual picture of a 3-dimensional energy landscape is usually more of a helpful conceptual aid than a quantitative portrait.

Driving forces of protein folding

Native protein structures typically have hydrophobic cores while exposing more polar moieties to the solvent. Hydrogen bonds are also important for protein structures as they are key in stabilizing α-helices and β-sheets as well as in a variety of other aspects of protein folding. Van der Waals interactions contribute to the tight packing of protein chains. Salt bridges (ionic interactions) can stabilize protein structures via electrostatic attraction.

There are several other important ways of classifying interactions between amino acids in proteins. Local interactions are those between amino acid residues that are close to each other in a polypeptide chain (e.g. within the same helix or turn) while nonlocal interactions occur between amino acid residues that are farther apart in a sequence (e.g. between β-sheet strands). Short-ranged interactions are those which are close together in space such as van der Waals interactions that depend on 1/r⁶ (where r is the distance of the interaction). Long-ranged interactions are those which are far apart in space such as Coulombic interactions that depend on 1/r.

Beyond these factors, the entropies of proteins make major contributions to protein folding. However, chain entropies are not observable from structures alone. Statistical mechanics models allow one to gain insights into protein entropy.

Statistical mechanics and protein folding

According to statistical mechanics, the free energy of a system can be computed using the partition function Q, a description of the microstates of the system. Microstates consist of specific configurations of a system and the associated energy levels of those configurations. Because quantum mechanics dictates that there are discrete energy levels, there exists a finite number of microstates for any given system. As an example, consider a protein in a specific conformational state with specific energy levels across all its constituent particles. This protein exists in a single microstate until it moves and changes to a different microstate. To compute the free energy of a system from the partition function, the following equation is employed.

eq.3:  F = –kT·ln(Q)  (where k is Boltzmann’s constant and T is the temperature)

The partition function itself is a sum of the relative statistical weights of all of the system’s possible microstates. Here, εj is the energy of a given microstate and ω(εj) is the number of microstates that have energy εj. Note that ω(εj) is also called the degeneracy of the energy level εj.

eq.4:  Q = Σj ω(εj)·e^(–εj/kT)

Making a statistical mechanics model requires first knowing all of the system’s microstates. This includes the configurations, the energies εj of those configurations, and the number of configurations ω(εj) that result in a specific energy level εj. When the microstates are known, the probability of a given microstate can be computed using the equation below.

eq.5:  pj = ω(εj)·e^(–εj/kT)/Q

After computing these probabilities, it is possible to use them to compute weighted averages (known as ensemble averages) for desired properties of the system such as energy or fraction of folded proteins. For some property A that takes on the value Aj when in state j, the ensemble average is given by the following equation.

eq.6:  ⟨A⟩ = Σj Aj·pj
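The following Python sketch expresses eqs. 3–6 directly: given a list of (energy, degeneracy) pairs, it computes the partition function, the free energy, the probabilities, and an ensemble average. The two-level system at the end is an arbitrary toy example.

```python
import math

kB = 1.380649e-23  # Boltzmann constant, J/K

def partition_function(levels, T):
    """levels: list of (energy in J, degeneracy) pairs; returns Q (eq.4)."""
    return sum(w * math.exp(-e / (kB * T)) for e, w in levels)

def free_energy(levels, T):
    """F = -kT*ln(Q) (eq.3)."""
    return -kB * T * math.log(partition_function(levels, T))

def probabilities(levels, T):
    """Probability of each energy level (eq.5)."""
    Q = partition_function(levels, T)
    return [w * math.exp(-e / (kB * T)) / Q for e, w in levels]

def ensemble_average(values, probs):
    """Weighted average <A> = sum_j A_j * p_j (eq.6)."""
    return sum(a * p for a, p in zip(values, probs))

# Toy two-level system: a ground state (0 J, degeneracy 1) and an excited
# state (4e-21 J, degeneracy 2); the numbers are arbitrary illustrations.
levels = [(0.0, 1), (4e-21, 2)]
p = probabilities(levels, 300.0)
print(ensemble_average([e for e, w in levels], p))  # average energy in J
```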

The HP model of protein folding and unfolding

Simple models of protein folding can reveal insights into how folding works. The HP model represents a protein as a chain of beads on a 2D or 3D lattice. The chain of beads can take on a variety of different configurations, but it cannot double back on itself or place two beads in the same location. In the HP model, there are two types of beads: hydrophobic beads and polar beads.

Fig.2

The model also describes contacts between beads which are adjacent to each other on the lattice but not adjacent in the sequence of the chain. When two hydrophobic beads make a contact, there is a favorable interaction energy of ε0 < 0. All other contacts have interaction energies of zero.

As an example, consider a chain with six beads on a 2D lattice. The beads at positions 1, 4, and 6 are hydrophobic while the beads at positions 2, 3, and 5 are polar. In this case, there are 36 possible configurations, each with a corresponding energy value. Each of these configurations is a specific microstate of the system.

Fig.3

The energy level of each microstate is determined by how many hydrophobic contacts are made. In this system, three energy levels are possible: 0, ε0, and 2ε0. The collections of microstates at each of these energy levels are called macrostates. The macrostate at energy 0 includes 28 microstates (a degeneracy of 28), the macrostate at energy ε0 has 7 microstates, and the macrostate at energy 2ε0 has 1 microstate. Using this information, the partition function Q for the system is given as follows.

eq.7:  Q = 28 + 7·e^(–ε0/kT) + e^(–2ε0/kT)

By using the following equation (which was described in general terms earlier) along with the partition function from above, the probabilities of the macrostates are found. Here, p1 corresponds to the unfolded state, p2 is an intermediate state, and p3 represents the folded state.

eq.8:  p1 = 28/Q,   p2 = 7·e^(–ε0/kT)/Q,   p3 = e^(–2ε0/kT)/Q

Once the probabilities of the macrostates are found, one can compute the ensemble average energy using the next equation.

eq.9:  ⟨E⟩ = Σj εj·pj = (7ε0·e^(–ε0/kT) + 2ε0·e^(–2ε0/kT))/Q

Another useful way of applying statistical thermodynamics to protein folding using the HP model is to compute the value of ΔGfold. Recall that ΔGfold = –RT·ln(fN/fD). Since fN is the native fraction and fD is the denatured fraction, fN = p3 = pN (folded state) and fD = p1 = pD (unfolded state).

eq.10:  ΔGfold = –RT·ln(pN/pD) = –RT·ln(e^(–2ε0/kT)/28)

By using the above equation for ΔGfold, the midpoint temperature of the chain’s folding transition can be found. The folding transition happens at the temperature where pN = pD. When this is the case, the term inside the natural logarithm is equal to 1, making ΔGfold = 0. As a result, solving for the temperature under these conditions gives the midpoint temperature of the chain’s folding transition.

eq.11:  pN = pD  ⇒  e^(–2ε0/kTm) = 28  ⇒  Tm = –2ε0/(k·ln 28)
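Putting eqs. 7–11 together, this Python sketch works through the six-bead example numerically. The value chosen for ε0 is an arbitrary illustration (any negative contact energy works), and Boltzmann’s constant is used in place of R since the calculation here is per molecule.

```python
import math

kB = 1.380649e-23  # Boltzmann constant, J/K
eps0 = -2.0e-21    # H-H contact energy in J; illustrative value, eps0 < 0

def hp_probabilities(T):
    """Macrostate probabilities for the 6-bead example: degeneracies are
    28 (E = 0), 7 (E = eps0), and 1 (E = 2*eps0), as in eqs. 7-8."""
    weights = [28.0,
               7.0 * math.exp(-eps0 / (kB * T)),
               1.0 * math.exp(-2 * eps0 / (kB * T))]
    Q = sum(weights)
    return [w / Q for w in weights]

T = 300.0
p1, p2, p3 = hp_probabilities(T)         # unfolded, intermediate, folded
E_avg = eps0 * p2 + 2 * eps0 * p3        # eq.9 (the E = 0 term vanishes)
dG_fold = -kB * T * math.log(p3 / p1)    # eq.10, per molecule
Tm = -2 * eps0 / (kB * math.log(28))     # eq.11 midpoint temperature
print(p1, p2, p3, E_avg, dG_fold, Tm)
```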

These statistical thermodynamics methods are applicable to many protein properties and the principle of the HP model can be extended to analogous, but more complicated, models of protein folding. Using such techniques, a broad variety of insights around protein folding can be gleaned.

Reference: Bahar, I., Jernigan, R. L., & Dill, K. A. (2017). Protein Actions: Principles and Modeling. CRC Press LLC.

 

Notes on Digital Electronics



PDF version: Notes on Digital Electronics – Logan Thrasher Collins

Logic gates

In digital electronics, circuit components (e.g. transistors, resistors, etc.) can be organized to form logic gates. Logic gates take input signals and determine output signals according to simple rules. Different logic gates exhibit distinct rules. Fundamental logic gates in digital electronics include the NOT gate (also called an inverter), AND gate, OR gate, NAND gate, NOR gate, XOR gate, and XNOR gate. The rules for these gates are given in the following table.

Table1

It should be noted that there are also versions of these logic gates which take more than two inputs. For example, an AND gate with three inputs would require a signal value of 1 for all three inputs in order to give an output of 1.
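To make the gate rules concrete, here is a small Python sketch that encodes each fundamental two-input gate as a function on bits and prints its truth table; it is only a software analogy for the hardware behavior described above.

```python
# The fundamental gates as functions on bits (0 or 1).
NOT  = lambda a: 1 - a
AND  = lambda a, b: a & b
OR   = lambda a, b: a | b
NAND = lambda a, b: 1 - (a & b)
NOR  = lambda a, b: 1 - (a | b)
XOR  = lambda a, b: a ^ b
XNOR = lambda a, b: 1 - (a ^ b)

# Print the truth table of each two-input gate.
gates = {"AND": AND, "OR": OR, "NAND": NAND, "NOR": NOR,
         "XOR": XOR, "XNOR": XNOR}
for name, gate in gates.items():
    rows = [f"{a}{b}->{gate(a, b)}" for a in (0, 1) for b in (0, 1)]
    print(f"{name:5s} " + "  ".join(rows))
```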

NAND gates and NOR gates have alternative symbols (relative to the ones in the previous table) in certain situations. The following two paragraphs describe the reasons for these alternative symbols.

When a NAND gate is used to take one or more 0 inputs (i.e. 00, 01, 10 for a device with two inputs), it acts as a “negative-OR” gate. In this case, any of the 0 inputs will give an output of 1, so the operation is similar to OR. The symbol for a negative-OR version of a NAND gate is given at right.

When a NOR gate is used to take all 0 inputs (i.e. 00 for a device with two inputs), it acts as a “negative-AND” gate. In this case, all the 0 inputs together will give an output of 1, so the operation is similar to AND. The symbol for a negative-AND version of a NOR gate is given above.

Boolean algebra

Boolean algebra provides a mathematical way of representing the behaviors of logic gates. Complementation, Boolean addition, and Boolean multiplication are important operations in Boolean algebra. These are shown in the table to the right. Combining these operations allows complex logic gate arrangements to be described. Note that any variable or its complement is referred to as a “literal” in the language of digital logic.

Table2

The distributive law, the commutative addition and multiplication laws, and the associative addition and multiplication laws are the same in Boolean algebra as in ordinary algebra. As a result, logic gate arrangements are subject to these laws.

There are twelve basic rules for Boolean algebra which are often useful for analyzing logic gates. They are listed in the box below.

Box1

DeMorgan’s theorems are useful tools for simplifying Boolean expressions. DeMorgan’s first theorem states that the complement of a Boolean product of variables equals the sum of the complements of the variables. DeMorgan’s second theorem states that the complement of a Boolean sum of variables equals the Boolean product of the complements of the variables. DeMorgan’s theorems also extend to expressions with more than two variables. DeMorgan’s first and second theorems are given below in the form with two variables and the form with n variables.

eq.1
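As a quick sanity check of both theorems (including the n-variable forms), the following Python sketch verifies them exhaustively over all input combinations for several values of n.

```python
from itertools import product

# Exhaustively check DeMorgan's theorems for n variables:
# NOT(x1*x2*...*xn) == NOT(x1) + ... + NOT(xn), and the dual statement.
def check_demorgan(n):
    for bits in product((0, 1), repeat=n):
        all_and = int(all(bits))   # Boolean product of the variables
        any_or = int(any(bits))    # Boolean sum of the variables
        first = (1 - all_and) == int(any(1 - b for b in bits))
        second = (1 - any_or) == int(all(1 - b for b in bits))
        if not (first and second):
            return False
    return True

print(all(check_demorgan(n) for n in range(2, 6)))  # -> True
```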

SOP and POS forms

There are two forms into which Boolean expressions can be converted via simple algebraic methods: the sum-of-products (SOP) form and the product-of-sums (POS) form. These forms make it easier to use Boolean expressions for digital electronics. To understand these forms, note that the domain of a Boolean expression is the set of variables (either complemented or uncomplemented) which appear in the expression.

Fig.1

The SOP form is exactly as its name suggests, a sum of terms which consist of products of literals (e.g. AB̅ + AC). The SOP form can also include terms which consist of a single literal. To implement the SOP form using logic gates, the following methods can be used. The first is to connect the output terminals of two or more AND gates to the input terminals of an OR gate. The second is to connect the output terminals of one or more NAND gates to the input terminals of another NAND gate (which operates in the negative-OR mode).

There is also a standard SOP form which is distinct from the regular SOP form. The standard SOP form is one in which all of the variables in the domain appear in each term of the expression. To convert a SOP expression into its standard form, multiply each nonstandard term by a sum of the missing variable and the missing variable’s complement. Repeat this until all of the terms contain all the variables in the domain either in complemented or uncomplemented form. As an example, consider the domain {A, B, C} and the SOP expression AB̅ + AC. Perform the operation AB̅(C + C̅) + AC(B + B̅) = AB̅C + AB̅C̅ + ABC. Now the expression is in standard SOP form (note that one of the AB̅C terms vanished as a result of the basic rules of Boolean algebra).

When looking at the binary representation of a standard SOP term, only a single combination of variable values will make the term equal to 1. Furthermore, a standard SOP expression will only equal 1 if at least one of its terms is equal to 1.

Fig.2

The POS form is also exactly as its name suggests, a product of terms which consist of sums of literals. An example of this is (A̅ + B)(A + B̅ + C). The POS form can also include terms which consist of a single literal. However, a complementation overbar cannot extend to more than one term in a POS expression. To implement the POS form using logic gates, the following methods can be used. The first is to connect the output terminals of two or more OR gates to the input terminals of an AND gate. The second is to connect the output terminals of one or more NOR gates to the input terminals of another NOR gate (which operates in the negative-AND mode).

There is a standard POS form which is distinct from the regular POS form. The standard POS form is one in which all of the variables in the domain appear in each parentheses-enclosed sum term of the expression. To convert a POS expression into its standard form, add a product of the missing variable and its complement (e.g. AA̅) inside each parentheses-enclosed sum term. Next, apply the basic Boolean algebra rule that A + BC = (A + B)(A + C). Repeat this until the POS expression is in standard form. As an example, consider the domain {A, B, C} and the POS expression (A̅ + B)(A + B̅ + C). Perform the operation (A̅ + B + CC̅)(A + B̅ + C) = (A̅ + B + C)(A̅ + B + C̅)(A + B̅ + C). Now the expression is in standard POS form.

When looking at the binary representation of a standard POS sum term (parentheses-enclosed term), only a single combination of variable values will make the term equal to 0. Furthermore, a standard POS expression will only equal 0 if at least one of its parentheses-enclosed sum terms is equal to 0.

Truth tables

When working with Boolean expressions and digital electronics, a truth table provides a valuable way of representing the logical operation of a circuit. Furthermore, standard SOP and POS expressions can be determined using truth tables. A truth table is a listing of the possible combinations of input variable values and corresponding output values for a given Boolean expression.

To convert a standard SOP or POS expression into truth table format, first list all of the possible combinations of binary values (2ⁿ possibilities where n is the number of variables). Next, place 1s in the output column for all of the binary values that make the standard SOP or POS expression equal to 1 and place 0s in the output column for all of the binary values that make the standard SOP or POS expression equal to 0. Recall that a standard SOP expression will equal 1 if at least one of its terms is equal to 1 and that a standard POS expression will equal 0 if at least one of its parentheses-enclosed sum terms is equal to 0. The following tables are examples of truth tables for a SOP expression (left) and a POS expression (right).

Table3

To determine the standard SOP expression represented by a truth table, list the binary values for which the output is 1. Convert each binary value of 1 into its corresponding variable and each binary value of 0 into its corresponding complemented variable. This will produce the terms of the SOP expression which are then added together.

To determine the standard POS expression represented by a truth table, list the binary values for which the output is 0. Convert each binary value of 0 into its corresponding variable and each binary value of 1 into its corresponding complemented variable. This will produce the parentheses-enclosed sum terms of the POS expression which are then multiplied together.
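The following Python sketch implements both of these read-off procedures for a small example; the variable names and the example output column (1 whenever at least two inputs are 1) are arbitrary choices, and primes are used in place of overbars to mark complemented literals.

```python
from itertools import product

names = ("A", "B", "C")

def truth_table_to_sop_pos(output):
    """output: dict mapping each input bit-tuple to 0 or 1."""
    sop_terms, pos_terms = [], []
    for bits in product((0, 1), repeat=len(names)):
        if output[bits] == 1:
            # SOP term: 1 -> variable, 0 -> complemented variable
            term = "".join(n if b else n + "'" for n, b in zip(names, bits))
            sop_terms.append(term)
        else:
            # POS sum term: 0 -> variable, 1 -> complemented variable
            term = " + ".join(n + "'" if b else n for n, b in zip(names, bits))
            pos_terms.append("(" + term + ")")
    return " + ".join(sop_terms), "".join(pos_terms)

# Example: output is 1 exactly when at least two inputs are 1.
table = {bits: int(sum(bits) >= 2) for bits in product((0, 1), repeat=3)}
sop, pos = truth_table_to_sop_pos(table)
print("SOP:", sop)
print("POS:", pos)
```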

Organization of Karnaugh maps

The Karnaugh map gives a systematic method for simplifying Boolean expressions. It can produce the simplest possible SOP or POS expression for a given problem. Karnaugh maps can be employed for expressions of two, three, four, or five variables. Note that there are also other algorithms, the Quine-McCluskey method and the Espresso algorithm, which work on expressions with five or more variables. These more advanced algorithms are more readily automated by software, as the sketch below illustrates. Nonetheless, Karnaugh maps represent a useful exercise for thinking about digital logic design.
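For instance, SymPy ships SOPform and POSform functions based on the Quine-McCluskey algorithm; the minterms and don’t-care terms below are arbitrary examples.

```python
# Minimization via SymPy's Quine-McCluskey implementation.
from sympy import symbols
from sympy.logic import SOPform, POSform

A, B, C, D = symbols("A B C D")
# Input combinations for which the output is 1 (minterms) and
# combinations that never occur (don't-cares); arbitrary example values.
minterms = [[0, 0, 0, 1], [0, 0, 1, 1], [0, 1, 1, 1], [1, 1, 1, 1]]
dontcares = [[1, 0, 0, 0]]

print(SOPform([A, B, C, D], minterms, dontcares))
print(POSform([A, B, C, D], minterms, dontcares))
```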

The three-variable Karnaugh map is an array of eight cells (as shown below at left) with the possible binary values for A and B given along the rows and the possible binary values for C given along the columns. The four-variable Karnaugh map is an array of sixteen cells (as shown below at right) with the possible binary values for A and B given along the rows and the possible binary values for C and D given along the columns.

Fig.3

Karnaugh maps are arranged such that there is only a single change between any two adjacent cells. Two cells are physically adjacent if they are touching via the top, bottom, left side, or right side (diagonals do not count). Adjacency also extends to the cells at opposite edges of the tables. That is, the tables “wrap around” so that cells at the top are adjacent to corresponding cells at the bottom and cells at the left side are adjacent to corresponding cells at the right side.

Karnaugh maps with SOP and POS expressions

To map a standard SOP expression, place a 1 on the Karnaugh map for each term that is part of the expression. Make sure to place the 1s on the cells which match that term of the expression (e.g. A̅BC̅ goes in the 010 cell of a three-variable Karnaugh map). To map a nonstandard SOP expression, it must first be converted to standard SOP expression. This conversion is often carried out using a binary-based version of the method outlined earlier.

Karnaugh maps are often used to construct minimized SOP expressions (after starting with standard SOP expressions). By contrast to standard SOP expressions, minimized SOP expressions contain the fewest possible terms and the fewest possible variables per term. Minimized SOP expressions are usually implementable with fewer logic gates than standard SOP expressions.

To minimize a standard SOP expression, first perform the process of grouping the 1s. The goal of grouping the 1s is to maximize the size of the groups and minimize the number of groups. For Karnaugh maps of three or four variables, any individual group must consist of 1, 2, 4, 8, or 16 cells. Each cell in a group must be adjacent to one or more cells in the same group (but not all cells in a group need to be adjacent). Each group must contain the largest possible number of 1s. Finally, each 1 on the map must be included in at least one group. Note that a 1 can be included in overlapping groups so long as each of the groups involved also have noncommon 1s. At right, an example of grouping the 1s is displayed.

Fig.4

The next step in minimizing a standard SOP expression is to analyze the groups of 1s. Each group of cells with 1s creates a product term composed of all of the variables in that group which occur in only the complemented or only the uncomplemented form. Any variables which occur in both forms within the group are eliminated. The remaining variables in a given group comprise a term within the minimized SOP expression. All of the terms from the groups are summed to find the full minimized SOP expression.

To use Karnaugh maps with truth tables for standard SOP expressions, place the 1s on the map according to the binary input values and the binary output values described by the truth table (e.g. if the truth table has a value of 101 for the SOP expression AB̅C, put a 1 on the corresponding 101 cell of the Karnaugh map).

Certain binary values are not used in certain applications. For instance, when encoding the base-10 digits in four binary bits, there are sixteen possible combinations of binary digits but only the integers 0 through 9 need to be represented (ten values). The other six combinations of binary digits are invalid. Since these binary values will never actually occur, their outputs can be treated as either 0 or 1. On Karnaugh maps, these are called “don’t care” terms. “Don’t care” terms are placed on the Karnaugh map as an X and treated as 1 if they help to make a larger group. If a “don’t care” term does not help to make a larger group, it is treated as a 0 output. By helping to make larger groups of 1s, “don’t care” terms can aid in further simplifying SOP expressions and allowing fewer logic gates to be used.

To map a standard POS expression, place a 0 on the Karnaugh map for each parentheses-enclosed sum term that is part of the expression. Make sure to place the 0s on the cells which match that term of the expression (e.g. (A̅ + B + C̅) goes in the 101 cell of a three-variable Karnaugh map). To map a nonstandard POS expression, it must first be converted to standard POS expression. This conversion is often carried out using a binary-based version of the method outlined earlier.

Karnaugh maps are often used to construct minimized POS expressions (after starting with standard POS expressions). By contrast to standard POS expressions, minimized POS expressions contain the fewest possible terms and the fewest possible variables per term. Minimized POS expressions are usually implementable with fewer logic gates than standard POS expressions.

To minimize a standard POS expression, first perform the process of grouping the 0s. The rules for doing so are identical to the rules for grouping the 1s as outlined in the previous section, except that 0s are used. The next step in minimizing a standard POS expression is to analyze the groups of 0s. This is also carried out in the same way as that described for the SOP version of the process. After doing this, the remaining variables in a given group comprise a parentheses-enclosed sum term within the minimized POS expression. All of the parentheses-enclosed sum terms from the groups are multiplied to find the full minimized POS expression. “Don’t care” terms are also applied the same way for standard POS expressions as they are for standard SOP expressions.

Fig.5

To use Karnaugh maps with truth tables for standard POS expressions, place the 0s on the map according to the binary input values and the binary output values described by the truth table (e.g. if the truth table has a value of 101 for the POS expression AB̅C, put a 0 on the corresponding 101 cell of the Karnaugh map).

Karnaugh maps can be used to convert between standard SOP and standard POS expressions. This is useful to help compare the minimized versions of the expressions and to see if one of the two can be implemented using fewer logic gates than the other. For going from SOP to POS, all of the cells which do not contain 1s must instead contain 0s. These 0 cells encode the equivalent POS expression. For going from POS to SOP, all of the cells which do not contain 0s must instead contain 1s. These 1 cells encode the equivalent SOP expression.

Basic combinational logic

As described earlier, SOP expressions can be implemented via using AND logic gates as inputs to an OR logic gate output. This is called AND-OR logic. Another important type of logic is using AND logic gates as inputs to an OR logic gate and subsequently inverting the output of the OR gate. This is called AND-OR-Invert logic. To build a XOR gate (referred to as exclusive-OR logic), two AND gates, one OR gate, and two inverters are organized as shown in the figure below. To build a XNOR gate (referred to as exclusive-NOR logic), AND gates, OR gates, and inverters can be organized in multiple ways as shown in the figure below.

Fig.6

Implementing combinational logic from a Boolean expression requires recalling that Boolean multiplication is equivalent to an AND gate, Boolean addition is equivalent to an OR gate, and complementation is equivalent to a NOT gate (inverter). To implement combinational logic from a truth table, first convert the truth table to a Boolean expression via the method described in the truth table section. If a minimized Boolean expression is required, use a Karnaugh map as described in the section on Karnaugh maps.

Universal properties of NAND and NOR gates

The NAND gate is a universal gate because it can be used to make NOT, AND, OR, and NOR functions. The configurations of NAND gates needed to construct these functions are displayed below at left. The NOR gate is a universal gate because it can be used to make NOT, AND, OR, and NAND functions. The configurations of NOR gates needed to construct these functions are displayed below at right.

Fig.7

Combinational logic with NAND and NOR gates

NAND gates can be used to construct AND-OR logic systems (which implement SOP expressions). To do this, connect the output terminals of one or more NAND gates to the input terminals of another NAND gate. Note that the latter NAND gate is acting as a negative-OR gate.

Recall that, when a NAND gate is used to take one or more 0 inputs (i.e. 00, 01, 10 for a device with two inputs), it acts as a “negative-OR” gate. In this case, any of the 0 inputs will give an output of 1, so the operation is similar to OR. Any NAND gate carrying out a negative-OR operation is represented by the alternative symbol at right.

When drawing a combinational logic circuit with both NAND gates and negative-OR gates, the symbols should be drawn with NAND gate bubbles facing negative-OR gate bubbles to help make it easier to visualize how the inversion properties of the gates are cancelling each other out (a bubble represents that a gate carries out inversion as at least part of its operation).

Fig.8

NOR gates can be used to construct logic systems which implement POS expressions. To do this, connect the output terminals of one or more NOR gates to the input terminals of another NOR gate. Note that the latter NOR gate is acting as a negative-AND gate.

Recall that, when a NOR gate is used to take all 0 inputs (i.e. 00 for a device with two inputs), it acts as a “negative-AND” gate. In this case, all the 0 inputs together will give an output of 1, so the operation is similar to AND. Any NOR gate carrying out a negative-AND operation is represented by the alternative symbol at right.

When drawing a combinational logic circuit with both NOR gates and negative-AND gates, the symbols should be drawn with NOR gate bubbles facing negative-AND gate bubbles to help make it easier to visualize how the inversion properties of the gates are cancelling each other out. (As mentioned, a bubble represents that a gate carries out inversion as at least part of its operation.)

Fig.9

Pulse waveform operation

The voltage or current signals associated with electronic circuits are often represented using trains of pulsed square waves. When a square wave is at its “HIGH” level of voltage or current, it represents a 1. When a square wave is at its “LOW” level of voltage or current, it represents a 0. Although these square waves are ideal forms of messier pulse trains, the binary qualities of digital electronic systems filter out the noise, allowing many analyses to be carried out with ideal square wave pulse trains.

In digital electronics, all waveforms are synchronized to a periodic waveform called the clock which keeps time for the system. All signals within the system are measured against the rate of periodic pulses from the clock. The clock sets the unit of time which denotes a single HIGH or LOW pulse state. This makes it so that the signals can pass through circuit elements (e.g. logic gates) at the appropriate time points and therefore perform appropriate operations.

Timing diagrams compare the signal trains within a digital system. Within each clock-defined unit of time, the waveforms for signal trains within the system might take on any combination of HIGH and LOW states.

Fig.10

In the context of logic gates, timing diagrams help to establish what inputs are arriving at which logic gates at what times. To review: the output of an AND gate at a given time is only HIGH if all of its inputs at the given time are HIGH; the output of an OR gate at a given time is only HIGH if at least one of its inputs at the given time is HIGH; the output of a NAND gate at a given time is LOW only when all of its inputs at the given time are HIGH; and the output of a NOR gate at a given time is LOW only when at least one of its inputs at the given time is HIGH.

Binary arithmetic

In digital systems, binary arithmetic serves as an essential mathematical language. It is especially vital for constructing complex combinational logic circuits which perform useful functions. As a starting point, the first sixteen binary numbers are 0, 1, 10, 11, 100, 101, 110, 111, 1000, 1001, 1010, 1011, 1100, 1101, 1110, 1111. These correspond to 0–15 in the base-10 system. Binary uses a base-2 system rather than a base-10 system. As will be explained in the following paragraphs, there are tricks which help to interconvert between binary and base-10 numbers.

Fig.11

The columns of a binary number can be thought of as representing the base-10 values which sum up to the binary number’s base-10 equivalent, where the nth column from right to left has the value 2ⁿ starting with n = 0. To explain this, consider the following example. The binary value of 1011 is equal to eleven in the base-10 system. From right to left, 2⁰•1 + 2¹•1 + 2²•0 + 2³•1 = eleven. This technique can be employed to convert any binary number to a base-10 number. Going the other way, one can convert a base-10 number to a binary number by finding the binary weights wn (each 0 or 1) that cause the sum of the 2ⁿ•wn terms to add up to the desired base-10 value.

To convert a base-10 decimal number to a binary fraction, a similar method is used, but negative binary weights are used for the part of the decimal number which is less than one. For instance, 2⁻¹ = 0.5 and 2⁻² = 0.25. In addition, a decimal point is inserted into the final binary number and the n within 2⁻ⁿ grows from left to right. As an example, consider that the base-10 decimal number 0.75 = 2⁻¹•1 + 2⁻²•1, so its binary fractional equivalent is 0.11. Likewise, a binary fraction can be converted to a base-10 decimal number by summing the 2ⁿ•wn terms of its integer part and the 2⁻ⁿ•wn terms of its fractional part.
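These weighting rules translate directly into a short Python sketch; the helper below is an illustration of the method rather than a standard library routine.

```python
# Convert a binary string (optionally with a fractional part) to base-10
# using the positive and negative powers of 2 described above.
def binary_to_decimal(s):
    """Convert a binary string such as '1011.11' to a base-10 number."""
    integer, _, fraction = s.partition(".")
    value = sum(int(b) * 2**n for n, b in enumerate(reversed(integer)))
    value += sum(int(b) * 2**-(n + 1) for n, b in enumerate(fraction))
    return value

print(binary_to_decimal("1011"))  # -> 11
print(binary_to_decimal("0.11"))  # -> 0.75
print(bin(11))                    # built-in check: '0b1011'
```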

Binary addition is summarized in the box below. The top part describes the basic rules for addition of two bits (a bit is a single binary value) and the bottom part describes the rules for the addition of two bits plus a carry bit, which is a value that is carried in the same way as in base-10 addition. The carry bits are highlighted in gray. To illustrate this concept, an example of binary addition with carrying is also shown at right.

Fig.12

Box2

Binary subtraction is summarized in the box below. When performing binary subtraction, a borrow is only needed if subtracting a 1 from a 0 digit (e.g. 110 – 1). To borrow, take a 1 from the column to the left, create a 10 in the column undergoing subtraction, and apply the rule 10 – 1 = 1. To illustrate this concept, an example of binary subtraction with borrowing is also shown at right.

Fig.13

Box3

Fig.14

Basic binary multiplication rules are summarized in the box below. To multiply two binary numbers, the top number is multiplied by each digit of the bottom number from right to left. The first of these partial products is not shifted left, the second is shifted left by one place, the third is shifted left by two places, and so on. After shifting, 0 values are put in the empty slots. The partial products are then summed to obtain the result. To illustrate this concept, an example of binary multiplication with shifting is shown at right.

Box4

To perform binary division, one must (1) set up long division. The quotient goes on the top, the dividend goes under the quotient, and the divisor goes to the side of the dividend. (2) Place a copy of the divisor below the dividend but align it to the leftmost digits of the dividend. (3) If the part of the dividend above the divisor is greater than or equal to the divisor, then subtract the divisor from that part of the dividend, place the result of the subtraction below, and concatenate a 1 to the rightmost end of the quotient. If the part of the dividend above the divisor is less than the divisor, concatenate a 0 to the rightmost end of the quotient. (4) Place another copy of the divisor at the bottom but shift it one column to the right. (5) Repeat steps 3 and 4 until the remaining part of the dividend is less than the divisor. The result is the quotient, with the remaining part of the dividend as the remainder. To illustrate this concept, an example of binary division is shown at right.

Fig.15
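The following Python sketch mirrors this shift-and-subtract procedure on binary strings (restoring division on nonnegative integers); the example values are arbitrary.

```python
def binary_divide(dividend, divisor):
    """Long division on binary strings; returns (quotient, remainder)."""
    d = int(divisor, 2)
    quotient, remainder = 0, 0
    for bit in dividend:          # bring down one dividend bit at a time
        remainder = (remainder << 1) | int(bit)
        quotient <<= 1
        if remainder >= d:        # the divisor "fits": subtract, emit a 1
            remainder -= d
            quotient |= 1         # otherwise the emitted bit stays 0
    return bin(quotient)[2:], bin(remainder)[2:]

print(binary_divide("1101", "10"))  # 13 / 2 -> ('110', '1')
```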

Half-adders and full-adders

Half-adders take in two input bits and subsequently generate a sum bit and a carry bit, performing the first step in binary addition. Half-adders implement the basic rules of binary addition: 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, and 1 + 1 = 10. Because XOR gates only produce outputs of 1 when the inputs are not equal, a XOR gate is used to generate the sum bit. Because AND gates only produce outputs of 1 when the inputs are both 1, an AND gate is used to generate the carry bit. The truth table for a half-adder is displayed at right. The logic gate diagram for a half-adder is shown below at left and the equivalent logic symbol is shown below at right.

Table4

Fig.16

Full-adders take in two input bits and an input carry and subsequently generate a sum bit and an output carry. In other words, full-adders can implement the addition of two 1-bit numbers and an input carry. To see how this corresponds to the rules for binary addition, refer to the box in the section on binary addition. Since the sum of the two input bits is A⊕B, the sum of the two input bits and the input carry is (A⊕B)⊕Cin. (Note that the circled plus indicates the XOR operation). The equation for the output carry is Cout = AB + (A⊕B)Cin. Full-adders are composed of two half-adders plus an OR gate which combines the two half-adder carries. The truth table for a full-adder is displayed at right. The logic gate diagram for a full-adder is shown below at left and the equivalent logic symbol is shown below at right.

Table8

Fig.17
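A software analogue of these circuits is straightforward; the following Python sketch builds a full-adder from two half-adders plus an OR operation and prints its complete truth table.

```python
def half_adder(a, b):
    """XOR generates the sum bit, AND generates the carry bit."""
    return a ^ b, a & b          # (sum, carry)

def full_adder(a, b, c_in):
    """Two half-adders plus an OR gate on the two carries."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, c_in)
    return s2, c1 | c2           # (sum, carry_out = AB + (A XOR B)C_in)

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, "->", full_adder(a, b, c))
```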

Parallel binary adders

Parallel binary adders are composed of two or more full-adders and are used to add binary numbers of more than one bit. To add two binary numbers, a full-adder is needed for each bit in the numbers (e.g. for a 2-bit number, two full-adders are necessary). The carry input at the least significant bit (LSB) position is often grounded to make it zero since there is no carry into that position. Note that there is also a most significant bit (MSB) position at the other end of the system. The block diagram for a 2-bit parallel adder alongside the addition operation it performs is shown below.

Fig.18

Extension of these concepts to 4-bit parallel adders and beyond is straightforward. The output carry terminal of each full-adder is linked to the input carry of the next full-adder in the lineup (these links are known as internal carries). In this way, larger binary numbers can undergo the sum operation. As an example, the block diagram for a 4-bit parallel adder and its equivalent logic symbol are shown below.

Fig.19

Table8

The truth table for the 4-bit adder example is shown at right. The subscript n represents the adder bits. Cn–1 represents the carry from the previous adder. Carries 1, 2, and 3 are internal carries while carry 4 is an output carry and carry 0 is an input carry.

Ripple carry adders are a category of parallel binary adder which have the output carry of each full-adder connected to the input carry of the next full-adder in the sequence. The sum and the output carry of any stage (a single full-adder is one stage) cannot be produced until the input carry occurs, causing time delays associated with each successive stage. The input carry to the LSB stage must move through all of the stages before a final sum is obtained. Because of this, the cumulative delay is estimated as the number of stages multiplied by the maximum amount of time for the signal to pass through each stage. For an individual ripple carry adder, this cumulative delay is often on the order of tens of nanoseconds.
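The stage-by-stage behavior of a ripple carry adder can be sketched in Python as follows; the bit ordering (LSB first) and the example operands are arbitrary choices for illustration.

```python
def full_adder(a, b, c_in):
    s = a ^ b ^ c_in
    c_out = (a & b) | ((a ^ b) & c_in)
    return s, c_out

def ripple_carry_add(a_bits, b_bits):
    """a_bits/b_bits: equal-length lists of bits, LSB first. Each stage's
    output carry feeds the next stage's input carry."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry  # sum bits (LSB first) and the final output carry

# 0110 (6) + 0111 (7) = 1101 (13); bits are listed LSB first below.
print(ripple_carry_add([0, 1, 1, 0], [1, 1, 1, 0]))
```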

Look-ahead carry adders are a category of parallel binary adder which eliminate the ripple carry delay and so operate more efficiently than ripple carry adders. To eliminate the ripple carry delay, look-ahead carry adders generate a carry only when both of the input bits are 1s. The input carry then propagates within its full-adder to the output carry. The generated carries are denoted Cg and the propagated carries are denoted Cp. The output carry of a single full-adder can be expressed in Boolean algebraic terms as Cout = Cg + CpCin. In parallel binary adders, the Cin of each successive stage equals the Cout of the previous stage. Since each Cg and Cp is expressible in terms of the A and B input bits (to the full-adders), all of the output carries are available almost immediately and it is not necessary to wait for the carries to ripple through all of the stages. The logic gate implementation of a 4-bit look-ahead carry adder is displayed below.

Fig.20

Comparators

Comparators compare two binary numbers and determine relationships between those quantities. The simplest type of comparator decides if two binary numbers are equal. An XNOR gate can act as a comparator to decide if the input bits are equal, giving an output of 1 if the inputs are equal and giving an output of 0 if the inputs are unequal. To expand this to numbers consisting of more bits, the output terminals of multiple XNOR gates are linked to the input terminals of an AND gate. In this way, all of the bits comprising the two numbers must be equal in order for the two numbers themselves to be equal. As an example, a 4-bit comparator is shown below.

Fig.21

There are also magnitude comparators which can determine whether a binary number is greater than or less than another binary number. To do this, the magnitude comparator must examine if the MSB of number A is greater than, less than, or equal to the MSB of number B. If AMSB > BMSB, then A > B, if AMSB < BMSB, then A < B, and if AMSB = BMSB, then the same process must be performed upon the next most significant bit in the number. The steps are repeated until the relationship between the two numbers is determined. The output which correctly describes the relationship between the two numbers is 1 and the other two outputs are 0. As an example, a logic gate implementation of a 2-bit magnitude comparator and its equivalent logic symbol are shown below.

Fig.22
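The MSB-first comparison procedure can be sketched in Python as follows; the bit lists and three-valued output here are illustrative stand-ins for the comparator’s input and output lines.

```python
def magnitude_compare(a_bits, b_bits):
    """a_bits/b_bits: equal-length bit lists, MSB first. Returns the three
    output lines (A>B, A=B, A<B), exactly one of which is 1."""
    for a, b in zip(a_bits, b_bits):   # scan from the MSB downward
        if a > b:
            return (1, 0, 0)           # A > B
        if a < b:
            return (0, 0, 1)           # A < B
    return (0, 1, 0)                   # all bit pairs equal: A = B

print(magnitude_compare([1, 0], [0, 1]))  # 10 vs 01 -> A > B
print(magnitude_compare([1, 1], [1, 1]))  # -> equal
```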

Decoders

Decoders give an output of 1 when a certain binary number is used as an input and they give an output of 0 for any other binary number input. For example, a decoder might detect the binary number 1001 and output 1 only when the inputs are 1001.

To perform their function, decoders must convert all the 0s of the targeted number into 1s via NOT gates and then feed every input to an AND gate. Since the AND gate will only output 1 when all inputs are 1, the targeted number will be detected through the NOT gates converting all 0s in the targeted number to 1s. As an example, the logic gate implementation of a decoder which detects 1001 is shown at right.

Fig.23

One can also construct a decoder using a NAND gate in place of the AND gate. In the NAND gate version, an output of 0 indicates that the specified binary number has been detected.

One common type of decoder configuration is a decoder which takes in n bits and decodes every one of the 2n possible combinations of those bits. These decoders typically operate by the same principle as described above, though they must send their n inputs to 2n AND gates or NAND gates (depending on whether the active output needs to be a 1 or a 0). For example, a 4-line-to-16-line decoder receives a 4-bit input and outputs a 1 from a different terminal when it detects each of the 16 distinct combinations of 4 bits. To illustrate this, the truth table of a 4-line-to-16-line decoder which uses AND gates is displayed below.

Table5

Because this type of decoder outputs a 1 (or a 0 if NAND gates are used) corresponding to each of the possible 4-bit binary inputs, it can be used to convert 4-bit binary numbers to base-10 numbers. For instance, the binary value 1110 is equivalent to the base-10 value of 14 and 1110 activates the 14th output terminal. The logic symbol for a 4-line-to-16-line decoder (with 1s as active outputs) is displayed below.

Fig.24
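A Python sketch of the 4-line-to-16-line decoder’s logic is given below; it mirrors the AND-with-inverted-inputs principle rather than any particular hardware part.

```python
def decoder(bits):
    """bits: 4 input bits (MSB first); returns 16 outputs, one active."""
    outputs = []
    for target in range(16):
        target_bits = [(target >> i) & 1 for i in range(3, -1, -1)]
        # AND gate whose inputs that should be 0 pass through NOT gates
        active = all(b == t for b, t in zip(bits, target_bits))
        outputs.append(int(active))
    return outputs

out = decoder([1, 1, 1, 0])  # binary 1110 = decimal 14
print(out.index(1))          # -> 14 (the 14th output terminal is active)
```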

There are other similar decoders that act as standard parts for binary to base-10 conversion. In particular, the BCD-to-decimal converter (“decimal” is another name for base-10 numbers and “BCD” stands for Binary Coded Decimal) or 4-line-to-10-line decoder uses the same setup as the 4-line-to-16-line decoder, but it only has ten outputs. These ten outputs correspond to the base-10 values of zero through nine. Once again, the active output is 1 if AND gates are used and is 0 if NAND gates are used. Another common decoder is the BCD-to-7-segment decoder, which takes a 4-bit input and drives seven output lines depending on the input. This decoder also operates on the same principle as the 4-line-to-16-line decoder. However, the BCD-to-7-segment decoder is used to control the images formed on a seven-segment display device such as a handheld calculator.

Encoders

An encoder first receives an active input on one of its input terminals. The input often represents a digit (such as a base-10 digit), an alphabetic character, or another symbol. Next, the encoder translates this input into a coded output such as a binary value.

One common type of encoder is the decimal-to-BCD encoder. As alluded to in the previous section, BCD code is a way of representing base-10 numbers using 4-bit binary values. The BCD code uses the first ten binary numbers (0000 to 1001) to represent base-10 values of 0 to 9.

Table6

The decimal-to-BCD encoder has inputs of 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9 and outputs of A3, A2, A1, and A0. Decimal-to-BCD encoders use OR gates to convert each input to its appropriate output. The logic of this (assuming 1 as the active value) is as follows. Bit A3 only outputs a 1 for base-10 digits 8 or 9, bit A2 only outputs a 1 for base-10 digits 4 or 5 or 6 or 7, bit A1 only outputs a 1 for base-10 digits 2 or 3 or 6 or 7, and bit A0 only outputs a 1 for digits 1 or 3 or 5 or 7 or 9. To better understand this, it is helpful to examine the decimal-to-BCD truth table above. The logic gate implementation for a decimal-to-BCD encoder is displayed below alongside its corresponding logic symbol. Note that an input line for the base-10 zero digit is not needed since, when none of the other inputs is active, the binary output is 0000.

Fig.25
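The OR-gate logic just described translates directly into the following Python sketch; the dictionary-based input format is an arbitrary convenience for representing the active input line.

```python
def decimal_to_bcd_encoder(lines):
    """lines: dict mapping digits 1-9 to 0/1, with at most one active.
    Each output bit is the OR of the decimal lines that contain it."""
    d = lambda n: lines.get(n, 0)
    A3 = d(8) | d(9)
    A2 = d(4) | d(5) | d(6) | d(7)
    A1 = d(2) | d(3) | d(6) | d(7)
    A0 = d(1) | d(3) | d(5) | d(7) | d(9)
    return (A3, A2, A1, A0)

print(decimal_to_bcd_encoder({7: 1}))  # digit 7 -> (0, 1, 1, 1)
print(decimal_to_bcd_encoder({}))      # no active input -> (0, 0, 0, 0)
```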

Another similar type of encoder is the decimal-to-BCD priority encoder. This type of encoder is configured to include a priority function. As a result, if multiple inputs are given to the decimal-to-BCD priority encoder, the binary output will represent only the largest base-10 value that was used as an input. By comparison, the decimal-to-BCD encoder without the priority function must have only one active input to work properly.

Code converters

It is often necessary in electronic systems to convert between different digital codes. Though there are others, one important type of code converter is the BCD-to-binary converter. This type of code converter can help to illustrate the general idea of code conversion in digital electronics.

From a mathematical standpoint, to perform BCD-to-binary conversion, one must first weight the BCD digits depending on their position within the BCD number. For instance, a decimal value of 27 is represented in BCD as 00100111. In BCD code, the 0010 corresponds to the 2 in the tens place and the 0111 corresponds to the 7 in the ones place. The weights of the tens place are 80, 40, 20, 10 and the weights of the ones place are 8, 4, 2, 1.

In BCD-to-binary circuits, the binary representations of the weights of the BCD bits are added to obtain the corresponding binary number. For the case of 00100111, this is 1•20 + 1•4 + 1•2 + 1•1 = 27 in decimal form and 1•10100 + 1•100 + 1•10 + 1•1 in binary form (the parts that are multiplied by 0 have been excluded here since they equal 0). The BCD-to-binary conversion can be implemented using adder circuits to sum the weighted binary numbers.
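The weighting procedure can be sketched in Python as follows; the function name and string-based input format are illustrative choices.

```python
def bcd_to_binary(bcd):
    """bcd: string of 4-bit groups, most significant digit first.
    Each group is weighted by its decimal place (1, 10, 100, ...)."""
    groups = [bcd[i:i + 4] for i in range(0, len(bcd), 4)]
    value = 0
    for power, group in enumerate(reversed(groups)):
        value += int(group, 2) * 10**power  # weight by the digit's place
    return bin(value)[2:]

print(bcd_to_binary("00100111"))  # BCD for 27 -> '11011' (27 in binary)
```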

Multiplexers

Multiplexers (or data selectors) are devices which take in data from multiple sources and route those data into a single transmission line to send them to a common destination. The way that a multiplexer works is that it receives both data-select lines and data-input lines. Bits sent to the data-select lines control which of the data-input lines is transmitted to the single data-output line.

As an example, consider a 4-bit multiplexer. The 4-bit multiplexer receives two data-select lines which control four data-input lines. The four possible inputs to the data-select lines (00, 01, 10, and 11) control which of the four data-input lines undergoes transmission to the data-output line. The truth table for this device is displayed at right where S1 and S0 are the data-select lines.

Table7

To understand how to implement a 4-bit multiplexer using logic gates, it is helpful to see that the Boolean expression below describes the multiplexer’s operation. Y represents the data output. Recall that OR gates implement Boolean addition, AND gates implement Boolean multiplication, and NOT gates implement Boolean complementation. The logic gate implementation and corresponding logic symbol of a 4-bit multiplexer are given along with the Boolean expression.

Fig.26

The principles of 4-bit multiplexers can readily be extended to any n-bit multiplexers. For a given n-bit multiplexer with n data-input lines, log₂(n) data-select lines would be necessary. The logic gate implementation would depend on a Boolean expression analogous to the one above, but with more input variables and more of their corresponding data-select bit combinations.
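A Python sketch of the 4-bit multiplexer’s Boolean expression is given below; the names D0–D3 for the data-input lines are assumed labels, since the figure defining the expression is not reproduced here.

```python
def mux4(d, s1, s0):
    """d: list of four data-input bits [D0, D1, D2, D3]; s1, s0: the
    data-select bits. Implements Y = D0*S1'*S0' + D1*S1'*S0
    + D2*S1*S0' + D3*S1*S0."""
    ns1, ns0 = 1 - s1, 1 - s0
    return ((d[0] & ns1 & ns0) | (d[1] & ns1 & s0)
            | (d[2] & s1 & ns0) | (d[3] & s1 & s0))

data = [0, 1, 1, 0]
print(mux4(data, 1, 0))  # select lines 10 -> transmits input D2 (= 1)
```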

Demultiplexers

Demultiplexers take information from a single input line and send the information to a given number of output lines. As with multiplexers, demultiplexers also take data-select inputs to determine to which output line the data are sent.

To illustrate how demultiplexers work, consider a 1-line-to-4-line demultiplexer. The logic gate implementation for a 1-line-to-4-line demultiplexer is displayed below. This demultiplexer receives a single data-input line which goes to all of the AND gates. It also receives two data-select lines which control which of the AND gates transmits the data-input line. Since all inputs to an AND gate must be 1 in order for the AND gate to transmit a 1, the data-select lines make it possible to choose which AND gate receives all 1 inputs. Using the data-select lines, only one of the data-output lines will transmit the information at a time.

Fig.27

The principles of demultiplexers in general can readily be obtained by extending the concept of the above demultiplexer. For a demultiplexer with n data-output lines, log₂(n) data-select lines are needed to control where the data-input line signal is sent.
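For comparison with the multiplexer sketch above, here is a minimal Python rendering of the 1-line-to-4-line demultiplexer’s AND-gate logic.

```python
def demux4(data_in, s1, s0):
    """The data input goes to all four AND gates; the data-select bits
    enable exactly one of them at a time."""
    ns1, ns0 = 1 - s1, 1 - s0
    return (data_in & ns1 & ns0,   # output line 0 (select = 00)
            data_in & ns1 & s0,    # output line 1 (select = 01)
            data_in & s1 & ns0,    # output line 2 (select = 10)
            data_in & s1 & s0)     # output line 3 (select = 11)

print(demux4(1, 1, 0))  # -> (0, 0, 1, 0): line 2 carries the data
```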

Modified decoders are also usable as demultiplexers. When a decoder is used as a demultiplexer, its input lines are used as data-select lines since they determine which of the output lines sends an active signal. The modification needed is an enable gate. If the enable gate is not active on both inputs, the decoder cannot transmit any active outputs. By fixing one of the enable gate’s inputs as permanently active, the other input to the enable gate can behave as the data-input line. In this way, the decoder acts as a demultiplexer.

Parity checkers

Parity is a method of error detection. For a given system, any set of bits contains either an even or an odd number of 1s. Depending on the system, the parity bit is attached to a set of bits to make it so that the total number of 1s in the given system is always even or so that the total number of 1s in the given system is always odd. If a system operates with even parity, a parity check is made on each set of bits to make sure that the total number of 1s is even. If a system operates with odd parity, a parity check is made on each set of bits to make sure that the total number of 1s is odd. In the case that these conditions are not met, the system in question reports an error.

To determine if a code has even parity or odd parity, all of the bits in that code are added together. Two bits can be summed using a single XOR gate, four bits can be summed using a pair of XOR gates with their output terminals linked to the input terminals of a third XOR gate, and so on. It is important to note that the sum is a modulo-2 sum. This is a binary sum where a 0 results whenever a carry would otherwise occur (0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, 1 + 1 = 0). If a set of bits contains an even number of 1s, the XOR gate system will produce a 0. If a set of bits contains an odd number of 1s, the XOR gate system will produce a 1. The XOR gate implementations of parity checkers for sets of two, four, and eight bits are given below.

Fig.28
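The XOR-tree behavior can be sketched in Python as a modulo-2 sum; the example bit sets are arbitrary.

```python
from functools import reduce

def parity(bits):
    """Modulo-2 sum of the bits: 1 for an odd number of 1s, 0 for even."""
    return reduce(lambda x, y: x ^ y, bits, 0)

print(parity([1, 0, 1, 1]))  # odd number of 1s  -> 1
print(parity([1, 0, 0, 1]))  # even number of 1s -> 0

# An even-parity system would append a parity bit equal to parity(bits)
# so that the total number of 1s (data plus parity bit) is always even.
```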

Reference: Floyd, T. (2015). Digital Fundamentals, Global Edition. Pearson Education Limited.

Cover image source: SciTechDaily.com