
4 Sept 2015

Boolean Basics

Intended for people with little or no exposure to digital logic, as well as those wanting a refresher or to understand Boolean algebra at a deeper level, this article walks through the very basics of Boolean algebra. The goal is to prepare you for subsequent articles that delve deeper into aspects of Boolean algebra.

Recommended Level

Beginner

What is Boolean Algebra?

Our focus is geared toward engineers and technicians rather than mathematicians and philosophers, so we describe Boolean algebra as the manipulation of two-state signals to implement useful logic functions. At its core is a deceptively simple set of concepts: signals that can take on only one of two values (states) at any moment in time, combined with three fundamental operations that can be applied to those signals. For the purposes of most engineers and technicians, the terms "digital logic" and "Boolean logic" are synonymous, while "Boolean algebra" is only slightly different, implying a more math-oriented approach to analyzing or designing digital systems.
Boolean algebra (named in honor of George Boole) involves only two values -- FALSE and TRUE. Sometimes we use different names depending on what makes sense; common names are {F, T}, {LO, HI}, {L, H}, or {0, 1}. Like normal algebra, Boolean algebra has operators that take one or two operands and produce a Boolean value. An "operand" is nothing more than a value (or signal), and an "operator" is nothing more than some manipulation of the values to produce a result. In normal arithmetic, the expression "2 + 3" has two operands (the "2" and the "3") and one operator (the "+"), which produces the result "5".
Boolean algebra involves three primitive operators, one unary (taking one operand) and two binary (taking two operands) -- the unary operator is logical negation (NOT), while the binary operators are logical disjunction (OR) and logical conjunction (AND). That's all there is to keep track of: two possible values, FALSE and TRUE, manipulated by three operators, NOT, OR, and AND.
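To make these ideas concrete before we go further, here is a minimal C sketch (C is used for all code in this article because, as noted in a later section, C-style operators are the dominant programming notation) applying the three operators to a pair of Boolean values:

    #include <stdbool.h>
    #include <stdio.h>

    int main(void)
    {
        bool a = true, b = false;
        printf("NOT a:   %d\n", !a);      /* logical negation    -> 0 */
        printf("a OR b:  %d\n", a || b);  /* logical disjunction -> 1 */
        printf("a AND b: %d\n", a && b);  /* logical conjunction -> 0 */
        return 0;
    }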
This is a good time to point out that different terminology is used in different fields. In more math-centric fields we speak about logic operators and logic functions, while in more engineering-centric fields we talk about logic gates. But in each case we are taking one or more input signals (operands) and using a logic gate (logic operator) to produce an output signal (result) that depends only on the current input signals. (For those who already know about combinatorial logic and sequential logic, we will be keeping the discussion focused on combinatorial logic; for those who don't already know about these two types of logic, forget you read this parenthetical comment.) For our purposes, because AAC is very engineering-centered, we will use the terminology most commonly found in engineering fields. However, as we introduce the basic gates in the next section we will present the operator notation used in both engineering and in math; this is useful because most engineers and technicians will have discussions from time to time with people who use non-engineering notation.

Truth Tables

Because Boolean algebra so constrains the values that signals can take on, we can describe or even define functions by exhaustively listing all of the possible combinations of the input signals along with the value of the resulting output. This list is known as a Truth Table. An example table would be
Truth Table Example
A B Y
F F T
F T F
T F F
T T T

In this table 'A' and 'B' are the input signals and 'Y' is the output signal. The usual convention is that inputs are listed in the left hand columns and outputs in the right hand columns -- intermediate signals, if there are any, go between the input and output columns. Since we will only be discussing gates that have a single output and no intermediate signals, the right-most column is the output and all other columns are inputs. This example table is sufficient to define a particular Boolean gate; if we were to describe this gate in words, we might say something like, "The output is TRUE if and only if both inputs are the same." We might then give this gate the name "Equality Gate" with the symbol "EQ" and write expressions such as
Y = A EQ B
and say that Y is TRUE if and only if A equals B.
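Since a truth table is just an exhaustive list, it maps naturally onto a pair of nested loops. Here is a minimal C sketch that prints the table above; the function name eq_gate is our own invention for this hypothetical gate.

    #include <stdbool.h>
    #include <stdio.h>

    /* The "Equality Gate": TRUE if and only if both inputs are the same. */
    static bool eq_gate(bool a, bool b)
    {
        return a == b;
    }

    int main(void)
    {
        printf("A B Y\n");
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                printf("%c %c %c\n", a ? 'T' : 'F', b ? 'T' : 'F',
                       eq_gate(a, b) ? 'T' : 'F');
        return 0;
    }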
For simple gates (and even moderately complex logic functions) truth tables are the preferred way of describing and defining them because they are unambiguous. For instance, what if we just said that we wanted a gate whose output was TRUE if the two inputs were equal? This is ambiguous because we haven't actually said what the output should be when they aren't equal. Similarly, if we say that we want a gate whose output is TRUE only when the inputs are equal, we haven't actually said what the output should be when they ARE equal; we've only ruled out the output being TRUE when they aren't equal. We could certainly argue that in both of these cases "it's obvious" -- and, in these simple cases, it would be a very strong argument. But in general it may not be obvious, and we end up relying on the person reading the description happening to interpret it the same way we did when we wrote it -- this is a recipe for miscommunication and potential disaster. By using a truth table there is absolutely no room for misinterpretation (provided that we use a "proper" truth table, meaning that all possible input combinations are included).
The truth table above has exactly one row for each possible combination of inputs and is therefore known as an "explicit" or a "fully-enumerated" truth table. But the number of rows needed for such a table grows exponentially with the number of inputs. For instance, a device with just six inputs would require sixty-four rows. Not only would this take up a lot of space, but it makes the truth table much more difficult to comprehend for most humans. So we devise shortcuts to enable us to describe multiple rows from the explicit table in a single row of what is known as a "condensed" truth table. There are several conventions that are used, but here we will only use one: the "don't care". Consider the following table:
Condensed Truth Table
A B Y
0 X 0
X 0 0
1 1 1

An 'X' in a given cell means that the row applies for ANY value of that input. So the first row of the above table says that if A is 0, the output is 0 regardless of what value B might have. Similarly, the second row says that if B is 0, the output is 0 regardless of what value A might have. One potential problem with a condensed table arises when multiple rows apply to the same set of input values. For instance, which row governs the behavior when both A and B are 0? Since both the first and second rows apply, the table is "over-defined". However, as long as all rows that apply to a given set of inputs produce the same output, the table is consistent and still valid. If they are not consistent, then the table is simply not a valid truth table. Note that a fully-enumerated truth table can never be invalid -- it might be "wrong", meaning that it doesn't represent the logic function we wanted it to, but it will always represent a valid Boolean function.
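The consistency rule is easy to check mechanically: expand every full input combination and make sure all matching condensed rows agree on the output. The sketch below does this for the table above; the row representation and the matches helper are our own illustrative choices, not any standard API.

    #include <stdio.h>

    /* One condensed-table row: input patterns ('0', '1', or 'X' for
       "don't care") plus the output claimed for matching inputs.    */
    struct row { char a, b, y; };

    static int matches(char pat, int val)
    {
        return pat == 'X' || pat == (val ? '1' : '0');
    }

    int main(void)
    {
        struct row rows[] = { {'0','X','0'}, {'X','0','0'}, {'1','1','1'} };
        int nrows = sizeof rows / sizeof rows[0];

        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++) {
                int out = -1, consistent = 1;
                for (int i = 0; i < nrows; i++)
                    if (matches(rows[i].a, a) && matches(rows[i].b, b)) {
                        int y = (rows[i].y == '1');
                        if (out == -1)     out = y;        /* first match */
                        else if (out != y) consistent = 0; /* rows clash  */
                    }
                printf("%d %d -> %s\n", a, b,
                       !consistent ? "INCONSISTENT" :
                       out == -1   ? "undefined"    :
                       out         ? "1" : "0");
            }
        return 0;
    }

Running it shows that A = 0, B = 0 matches both of the first two rows, but since both claim an output of 0 the table is consistent.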

The Three Fundamental Boolean Operators

Each of the three fundamental Boolean operators, NOT, OR, and AND, has a schematic symbol (there's actually more than one, but the ones shown here are by far the most common) and a truth table that defines it; in this section we provide these plus the more common notations used to express them in engineering, programming, and math/logic contexts. Keep in mind that the syntax used by a particular programming language is specified by that language and may be completely different from what is shown here, which is the dominant notation in the C-based languages. Also, programming languages deal with variables that can take on more than just two values, so many have multiple operators that deal with this situation in different ways. As this is well beyond the scope of this treatment, we will not distinguish between these various methods; just remember that while the programming operators given represent the same logical operation, they are NOT equivalent in the details of how that operation is applied.

The NOT Gate (Logical Negation)

NOT Gate
A Y
F T
T F

Common ways of expressing NOT
Engineering
Y = Ā = A' = NOT(A) = NOT A = -A
Programming
Y = !A = ~A
Math and Formal Logic
Y = ¬A

Also known as logical negation, this unary operator merely produces the opposite value of the operand. In common English, we might say, "Y is true if A is NOT true."
Notice that, particularly in engineering contexts, we can specify logical negation in multiple ways. In engineering the three most common ways are with an overbar, with an apostrophe after the operand (known as a "postfix operator"), or with a function-like syntax. With the function-like syntax, the name might be NOT(), INV(), or NEG() for "not", "inverse", or "negation" respectively. You may also see a negative sign used before the operand (known as a "prefix operator"), but this is becoming increasingly uncommon. The overbar is easy to write manually, but it is difficult to type into a text document or a computer program, thus the postfix apostrophe is widely used.
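The earlier caveat about programming operators matters here: in C, '!' treats its operand as a single logical value, while '~' inverts every bit of an integer. A minimal demonstration:

    #include <stdio.h>

    int main(void)
    {
        int a = 1;                  /* "TRUE" stored in an ordinary int */
        printf("!a = %d\n", !a);    /* logical NOT: prints 0            */
        printf("~a = %d\n", ~a);    /* bitwise NOT: prints -2, not 0!   */
        return 0;
    }

Both are "NOT" in spirit, but only '!' implements the single-bit logical negation defined by the truth table above.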

The OR Gate (Logical Disjunction)

OR Gate
A B Y
F F F
F T T
T F T
T T T

Common ways of expressing OR
Engineering
Y = A + B = OR(A, B) = A OR B
Programming
Y = A | B = A || B
Math and Formal Logic
Y = A ∨ B

Also known as logical disjunction, this binary operator produces a TRUE output if either of its operands is TRUE or, put another way, it produces a FALSE output if and only if both of its operands are FALSE. In common English we might say, "Y is true if A is true OR B is true."
When we speak of signals that are OR'ed together, we often talk about the result being the "sum" of those signals. This stems from the use of the '+' symbol for the OR operator, and this terminology is widely and formally accepted. However, it is generally considered poor form to say that we are "adding" these signals together, though you will hear it from time to time.
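One detail of the programming notations is worth a demonstration: in C, '||' short-circuits (it skips the right operand when the left one is already TRUE), while '|' always evaluates both. The helper name beeps below is just an invented side effect so the evaluation order is visible.

    #include <stdio.h>

    static int beeps(void)          /* side effect reveals evaluation */
    {
        puts("  right operand evaluated");
        return 1;
    }

    int main(void)
    {
        puts("1 || beeps():");
        if (1 || beeps())           /* right operand skipped          */
            puts("  result TRUE");
        puts("1 | beeps():");
        if (1 | beeps())            /* both operands evaluated        */
            puts("  result TRUE");
        return 0;
    }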

The AND Gate (Logical Conjunction)

AND Gate
A B Y
F F F
F T F
T F F
T T T

Common ways of expressing AND
Engineering
Y = A·B = AB = (A)(B) = AND(A, B) = A AND B
Programming
Y = A & B = A && B
Math and Formal Logic
Y = A ∧ B

Also known as logical conjunction, this binary operator produces a TRUE output if and only if BOTH of its operands are TRUE or, put another way, it produces a FALSE output if either of its operands is FALSE. In common English we might say, "Y is true if A is true AND B is true."
When we speak of signals that are AND'ed together, we often talk about the result being the "product" of those signals. This stems from the various notations for multiplication being used for the AND operator, and this terminology is widely and formally accepted. However, it is generally considered poor form to say that we are "multiplying" these signals together, though you will hear it from time to time.
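Here the difference between C's two AND operators is even more dramatic than for OR: on multi-bit values, '&' can yield FALSE even when both operands are logically TRUE, because it operates on bit positions rather than logical values. A minimal sketch:

    #include <stdio.h>

    int main(void)
    {
        int a = 1, b = 2;                 /* both nonzero, so both "TRUE" */
        printf("a && b = %d\n", a && b);  /* logical AND: prints 1        */
        printf("a & b  = %d\n", a & b);   /* bitwise AND: prints 0, since */
                                          /* no bit is set in both values */
        return 0;
    }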

Operator Precedence and Associativity

Operator precedence (also known as "order of operations") is simply a set of rules that dictate which operator is executed first when we would otherwise have a choice between two different operators (or, more correctly, two operators of different precedence). Operator associativity, on the other hand, is simply a rule that dictates which operator is executed first when we would otherwise have a choice between two operators that are the same (or, more correctly, two operators of the same precedence). Since a given expression, if evaluated according to the precedence and associativity rules, might not implement the desired logic function, we can override those rules through the use of parentheses (which, in actuality, are simply another operator with the highest precedence).
Before discussing precedence and associativity rules for Boolean algebra, let's examine them for the normal algebra and arithmetic that we are comfortable with. We know, in the absence of parentheses, that multiplication and division take precedence over addition and subtraction; hence multiplication and division are said to have "higher precedence" or be "higher order" than addition or subtraction. We also know that if we have multiple multiplication and/or division operators, we execute them left to right, and the same for multiple addition and/or subtraction operators. This is known as "left associativity".
Let's examine how these rules are applied to the following expression.
A + B * C / D - E + F * G / H * I - J * K + L
Since multiplication and division are the highest precedence operators present, we first split the expression into groups that contain only these two operators. We'll use square brackets for this purpose (we will remove these later).
A + [B * C / D] - E + [F * G / H * I] - [J * K] + L
Within each group, we have "factors" that are all operands for either a multiplication or division operator. We evaluate these operators according to the associativity, which for multiplication and division is left-to-right (left associative).
A + [((B * C) / D)] - E + [(((F * G) / H) * I)] - [(J * K)] + L
At this point we are left with "terms" that are all operands for either an addition or subtraction operator. We could have removed the square brackets at this point, but they are useful in visually identifying terms that are made up of multiple factors. We now evaluate the addition and subtraction operators according to their associativity, which is also left associative, and remove the square brackets at this time.
(((((A + ((B * C) / D)) - E) + (((F * G) / H) * I)) - (J * K)) + L)
This is known as a "fully-parenthesized" expression, whose significance is that it is independent of the precedence and associativity rules. The reason is simple: those rules dictate which operator to evaluate when we have a choice, but a fully-parenthesized expression gives us no such choices.
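We can let a compiler confirm this: since the parentheses merely spell out the choices the precedence and associativity rules would have made anyway, the bare and fully-parenthesized forms must evaluate identically. The variable values below are arbitrary.

    #include <assert.h>

    int main(void)
    {
        double A = 1, B = 2, C = 3, D = 4, E = 5, F = 6,
               G = 7, H = 8, I = 9, J = 10, K = 11, L = 12;

        /* Identical operation order, so the comparison is exact. */
        assert(A + B * C / D - E + F * G / H * I - J * K + L ==
               (((((A + ((B * C) / D)) - E) + (((F * G) / H) * I))
                 - (J * K)) + L));
        return 0;
    }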
Shifting our attention to Boolean algebra, we discover that there are no universal precedence and associativity rules. This is particularly true when we start adding more operators (which are nothing but shortcut notations for certain combinations of the three fundamental ones). Programming languages, by their nature, HAVE to have well-defined precedence and associativity rules, but they may vary significantly from one language to another. For this reason, it is best to write Boolean expressions with a liberal dose of parentheses.
Having said that, most programming languages have converged on a fairly consistent convention, which is itself a reasonably natural, albeit somewhat arbitrary, outgrowth of the operator notation that is used. In normal arithmetic, the unary minus sign (the "negative sign") has higher precedence than either multiplication or addition. Since we use the multiplication symbol for AND and the addition symbol for OR, this leads to the Boolean precedence of {NOT, AND, OR} (in order from highest to lowest). As is the case with most binary operators, AND and OR are both left associative. The associativity of NOT depends on the notation used. If a prefix operator is used, such as the '~', the '!', the '¬', or even the non-function-like 'NOT', then the operator is right associative, but if a postfix operator is used, such as the trailing apostrophe, the operator is left associative. This makes sense with just a couple of examples.

Right associative NOT (prefix):
NOT NOT A = NOT (NOT A)
~~A = ~(~A)
¬¬A = ¬(¬A)

If these were left associative, we would have NOT NOT A being the same as (NOT NOT) A, which is nonsensical since an operator operating on an operator is undefined (though, yes, we could define it if we wanted to).
Left associative NOT (postfix):
A'' = (A')'
Similar to the prefix operator, if this postfix operator weren't left associative we would have A'' = A('') which would be undefined.
So let's consider the following example by fully parenthesizing it.
Y = A AND NOT B OR NOT A AND B = A B' + A' B 
We start with the highest precedence operator, the NOT, and get
Y = A AND (NOT B) OR (NOT A) AND B = A (B') + (A') B 
We then move to the next highest, the AND, and get
Y = (A AND (NOT B)) OR ((NOT A) AND B) = (A (B')) + ((A') B) 
And finally we only have one remaining operator, which we will parenthesize just for completeness to get
Y = ((A AND (NOT B)) OR ((NOT A) AND B)) = ((A (B')) + ((A') B))
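Again we can check mechanically that the bare expression and its fully-parenthesized form agree, this time by brute force over all four input combinations (entirely practical, thanks to the two-valued signals):

    #include <assert.h>
    #include <stdbool.h>

    int main(void)
    {
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++) {
                bool A = a, B = b;
                /* NOT binds tighter than AND, which binds tighter
                   than OR, so these must parse the same way.      */
                assert((A && !B || !A && B) ==
                       ((A && (!B)) || ((!A) && B)));
            }
        return 0;
    }

(Some compilers will warn about the unparenthesized mix of '&&' and '||' for exactly the readability reasons discussed above.)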
One of the common mistakes people make is failing to recognize that, under these very common rules of precedence,

A B' ≠ (A B)'

This WOULD be true if all three operators were deemed to be of the same precedence and left associative, which would be unusual but not unheard of. Instead,

A B' = (A)(B')
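A four-row table makes the mistake obvious; the two columns below differ whenever A is 0, confirming that A B' and (A B)' are different functions.

    #include <stdio.h>

    int main(void)
    {
        puts("A B  A&&!B  !(A&&B)");
        for (int A = 0; A <= 1; A++)
            for (int B = 0; B <= 1; B++)
                printf("%d %d    %d      %d\n",
                       A, B, A && !B, !(A && B));
        return 0;
    }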




Conclusion

This has been a pretty lightweight article that has hopefully prepared you for a number of articles that will explore many aspects of digital logic and Boolean Algebra at a much deeper level -- in fact at a level deeper than most courses go. This is useful because to truly master digital logic design, it is necessary to be fully conversant with the concepts and subtleties upon which it is based and to also be aware of the common misconceptions and fallacies that frequently trap less well-prepared designers.

Upcoming Articles

  • Boolean Expressions and Logic Schematics
  • Evaluating Boolean Logic via Truth Tables
  • Boolean Identities
  • Boolean Duality
  • Bubble Logic
  • The Sixteen Two-input Logic Gates -- Including Implication and Inhibition
  • Universal Logic Gates -- More than just NAND and NOR
  • Canonical Forms -- SOP and POS   
  • CMOS Implementations of Standard Logic Gates
