Generalising Logic Gates

Logic circuits traditionally consist of a directed acyclic graph. Each node can take a value of 0 or 1. The nodes can have 0, 1, or 2 inputs. 0-input nodes are the “input” for the whole system. 1-input nodes are always NOT gates: they output 0 if their input is 1 and vice versa. 2-input nodes are either AND or OR gates, which work like you would expect. Some of the nodes are designated output nodes.

Let’s generalise. We can define an $(m, n)$ gate as a function $f : \{0, 1\}^m \to \{0, 1\}^n$. In this case a NOT gate is a $(1, 1)$ gate; both AND and OR gates are $(2, 1)$ gates. Consider all of the possible $(m, n)$ gates: there are $2^m$ possible inputs, and each one of them has one of $2^n$ possible outputs. This means there are $(2^n)^{2^m} = 2^{n 2^m}$ possible gates. Keeping track of the number of gates is important, so we will thoroughly convince ourselves of this.

For $(0, 1)$ there are $2^{1 \cdot 2^0} = 2$ possible gates. These correspond to gates with no inputs and one output, so the two gates are a node which always outputs 0 and a node which always outputs 1. If we allow $n$ to vary, we have $2^n$ possible $(0, n)$ gates. This makes sense, as a $(0, n)$ gate just consists of a node which always outputs the same string of $n$ bits. An $(m, n)$ gate can then be considered as a lookup table with $2^m$ locations, each containing $n$ bits of information. In general there are $n 2^m$ bits of information required to specify an $(m, n)$ gate.
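
To make the lookup-table picture concrete, here is a minimal Python sketch (the names `make_gate` and `count_gates` are mine, purely for illustration): a gate is stored as a table with $2^m$ entries of $n$ bits, and the counting argument above falls straight out of it.

```python
# A minimal sketch of the lookup-table view of an (m, n) gate; the names
# make_gate and count_gates are illustrative, not from the text.

def make_gate(m, n, table):
    """Wrap a dict {m-bit input tuple: n-bit output tuple} as a callable gate."""
    assert len(table) == 2 ** m
    def gate(*bits):
        assert len(bits) == m
        return table[bits]
    return gate

# Example: the (2, 1) AND gate as a lookup table with 2^2 = 4 locations,
# each holding 1 bit of information.
AND = make_gate(2, 1, {
    (0, 0): (0,),
    (0, 1): (0,),
    (1, 0): (0,),
    (1, 1): (1,),
})
print(AND(1, 1))  # (1,)

# Each of the 2^m locations independently holds one of 2^n values,
# so there are (2^n)^(2^m) = 2^(n * 2^m) possible (m, n) gates.
def count_gates(m, n):
    return 2 ** (n * 2 ** m)

print(count_gates(0, 1), count_gates(2, 1), count_gates(2, 2))  # 2 16 256
```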

Example 1

Now let’s consider all possible $(2, 1)$ gates. There are $2^{1 \cdot 2^2} = 16$ of them; each row of the table below lists the outputs $f(x_1, x_2)$ of one gate:

 number   f(0,0)   f(0,1)   f(1,0)   f(1,1)
    0        0        0        0        0
    1        1        0        0        0
    2        0        1        0        0
    3        1        1        0        0
    4        0        0        1        0
    5        1        0        1        0
    6        0        1        1        0
    7        1        1        1        0
    8        0        0        0        1
    9        1        0        0        1
   10        0        1        0        1
   11        1        1        0        1
   12        0        0        1        1
   13        1        0        1        1
   14        0        1        1        1
   15        1        1        1        1

So what are the numbers labelling each gate?

Consider the input as a set of numbers $x_1, x_2, \ldots, x_m \in \{0, 1\}$. We can define the input uniquely using $X = \sum_{i=1}^{m} x_i \, 2^{m-i}$. In binary this is just the concatenation $x_1 x_2 \ldots x_m$. This means there are $2^m$ possible inputs. We will index these with $X$.

We then take $y_j$ as the $j$th output when the input $X$ is fed into the gate. We define $Y$ similarly to above as $Y = \sum_{j=1}^{n} y_j \, 2^{n-j}$. We then let $Y_X$ be the output number corresponding to the input $X$.

Each $Y_X$ has $n$ digits, so we can define $C = \sum_{X=0}^{2^m - 1} Y_X \, 2^{nX}$. This is equivalent to concatenating $Y_{2^m - 1} \ldots Y_1 Y_0$ in binary, which also makes clear that $C$ has $n 2^m$ digits, so the maximum value of $C$ is $2^{n 2^m} - 1$.

Each $C$ thus uniquely defines an $(m, n)$ gate, therefore the three values $(m, n, C)$ with $C < 2^{n 2^m}$ define a function $f : \{0, 1\}^m \to \{0, 1\}^n$. This is a slight generalisation of the notion of a boolean function, as boolean functions are generally defined as functions $\{0, 1\}^m \to \{0, 1\}$. We will call $C$ the identifier of the gate.
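
As a sanity check on this definition, here is a small Python sketch (the names `identifier`, `int_to_bits` and `bits_to_int` are my own) that computes $C$ for a gate given as a function, using the conventions above.

```python
# A sketch of computing the identifier C of an (m, n) gate given as a Python
# function, following the conventions above: the input number X reads the bits
# x_1 ... x_m as binary (x_1 most significant), Y_X likewise for the outputs,
# and C concatenates Y_(2^m - 1) ... Y_1 Y_0.

def int_to_bits(value, k):
    """Binary digits b_1 ... b_k of value, most significant first."""
    return tuple((value >> (k - 1 - i)) & 1 for i in range(k))

def bits_to_int(bits):
    """Inverse of int_to_bits."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

def identifier(gate, m, n):
    """C = sum over inputs X of Y_X * 2^(n * X)."""
    C = 0
    for X in range(2 ** m):
        Y = bits_to_int(gate(*int_to_bits(X, m)))
        C |= Y << (n * X)
    return C

# Sanity checks against the table of (2, 1) gates above.
print(identifier(lambda a, b: (a & b,), 2, 1))        # 8  (AND)
print(identifier(lambda a, b: (a ^ b,), 2, 1))        # 6  (XOR)
print(identifier(lambda a, b: (1 - (a | b),), 2, 1))  # 1  (NOR)
```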

Let’s look at the above table more closely. We see some familiar gates: 1 is NOR, 6 is XOR, 7 is NAND, 8 is AND, 9 is XNOR, and 14 is OR. In fact for any gate $C$, the gate $15 - C$ is the result of applying a NOT function to the output.

We also have some input-asymmetric gates. 2, 4, 11, and 13 are all of the form $x_1$ [GATE] (NOT $x_2$), where [GATE] is an AND, OR, NAND or NOR gate.

The remaining gates are also of note. 0 and 15 are both constant. 3, 5, 10, and 12 all depend on only one of the inputs. These gates could be constructed from a circuit containing a single $(0, 1)$ or $(1, 1)$ gate. These illustrate a way in which gates can be trivial: they can be independent of one or more of the inputs.
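
This is easy to verify by brute force. The following sketch (the helper `output` is my own) scans all 16 identifiers and reports which gates ignore an input.

```python
# Brute-force check of the claims above: which of the 16 (2, 1) gates are
# constant, and which depend on only one input? Bit X of the identifier C is
# the output for input number X (the binary number x_1 x_2), as defined above.

def output(C, x1, x2):
    X = 2 * x1 + x2
    return (C >> X) & 1

for C in range(16):
    ignores_x1 = all(output(C, 0, x2) == output(C, 1, x2) for x2 in (0, 1))
    ignores_x2 = all(output(C, x1, 0) == output(C, x1, 1) for x1 in (0, 1))
    if ignores_x1 and ignores_x2:
        print(C, "constant")             # 0 and 15
    elif ignores_x1 or ignores_x2:
        print(C, "uses only one input")  # 3, 5, 10 and 12
```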

There are other ways gates can be trivial, but these rely on having multiple output nodes. The first is duplicate output nodes: imagine the set of $(2, 2)$ gates for which $y_1 = y_2$ for all inputs. There are 16 of them, each corresponding to a $(2, 1)$ gate.

Another is separability. We will define an $(m, n)$ gate as separable if we can split the inputs into two or more subsets, each of which can only affect a separate subset of the outputs. A case of this would be the $(2, 2)$ gate where $y_1 = x_1$ and $y_2 = x_2$. Another example would be the $(2, 2)$ gate where $y_1 = \text{NOT } x_1$ and $y_2 = x_2$.

Gates can also be equivalent to each other up to permutation of the inputs. We see this in the pairs 2 and 4, and 11 and 13, above.

Example 2

A half-adder is a $(2, 2)$ gate. We can work through and find its identifier. First we need a table of inputs and outputs; we will take the carry term first and the sum second, so each entry below reads carry, sum.

            x_2 = 0    x_2 = 1
 x_1 = 0     0, 0       0, 1
 x_1 = 1     0, 1       1, 0

The binary representation of $C$ can be easily read off as 10 01 01 00, i.e. $C = 148$. Therefore the half-adder is identified as $(2, 2, 148)$.

A full-adder is a $(3, 2)$ gate, taking the carry-in as its third input $x_3$:

 x_3 = 0:
            x_2 = 0    x_2 = 1
 x_1 = 0     0, 0       0, 1
 x_1 = 1     0, 1       1, 0

 x_3 = 1:
            x_2 = 0    x_2 = 1
 x_1 = 0     0, 1       1, 0
 x_1 = 1     1, 0       1, 1

The binary number is here read off as 11 10 10 01 10 01 01 00, i.e. $C = 59796$. So the full adder can be expressed as $(3, 2, 59796)$. $C$ gets big fast as gates gain inputs.
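
Both identifiers can be checked mechanically. Here is a self-contained sketch, assuming the same bit-ordering conventions as above (function names are mine):

```python
# A self-contained check of the two identifiers read off above, assuming the
# same conventions: inputs and outputs are read as binary numbers with the
# first bit most significant, and C concatenates Y_(2^m - 1) ... Y_0.

def identifier(gate, m, n):
    C = 0
    for X in range(2 ** m):
        bits = tuple((X >> (m - 1 - i)) & 1 for i in range(m))
        Y = 0
        for y in gate(*bits):
            Y = (Y << 1) | y
        C |= Y << (n * X)
    return C

def half_adder(a, b):
    return (a & b, a ^ b)               # (carry, sum)

def full_adder(a, b, c):
    total = a + b + c
    return (total >> 1, total & 1)      # (carry, sum)

print(identifier(half_adder, 2, 2))  # 148   = 0b10010100
print(identifier(full_adder, 3, 2))  # 59796 = 0b1110100110010100
```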

Constructing Gates from Simpler Gates

We might want to construct larger gates from simpler ones. This is how computers are built, and in fact we can build a half-adder from just two “standard” logic gates, an AND and an XOR. A full-adder requires 5 “standard” logic gates. It may be possible to build a full-adder with even fewer logic gates if we are allowed to use all 15, but I have not checked and I do not know a good way to check yet.
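
For concreteness, here is a sketch of those two constructions in Python (the gate and function names are mine); the full-adder uses two XORs, two ANDs and one OR, for five gates in total.

```python
# A sketch of the constructions described above: a half-adder from one AND and
# one XOR, and a full-adder from two XORs, two ANDs and one OR (five gates).

def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def half_adder(a, b):
    return AND(a, b), XOR(a, b)                    # carry, sum

def full_adder(a, b, cin):
    partial = XOR(a, b)
    carry = OR(AND(a, b), AND(cin, partial))
    return carry, XOR(partial, cin)                # carry, sum

# Check both against the truth tables in Example 2.
for a in (0, 1):
    for b in (0, 1):
        assert half_adder(a, b) == ((a + b) >> 1, (a + b) & 1)
        for cin in (0, 1):
            t = a + b + cin
            assert full_adder(a, b, cin) == (t >> 1, t & 1)
print("constructions match the truth tables")
```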

We can use some tools from information theory to lower-bound how many smaller gates we ought to need in general to construct a bigger one. We will consider the space complexity as the amount of information required to specify a gate of a certain size, and compare this to the space complexity of a network of gates.

The space complexity of an $(m, n)$ gate is $n 2^m$ bits. This grows as $O(2^m)$, exponentially in the number of inputs.

When specifying a gate network, we need to specify the gates, which grows linearly in the number of gates $N$: there are only finitely many types of standard gate, so this takes $O(N)$ bits. We must also consider the information required to specify the connectivity of the network. This is an acyclic directed graph. The number of vertices is $O(N)$, and the number of edges is also $O(N)$, since each gate has at most two inputs. The space complexity of this thus grows as the space complexity of a sparse directed graph, which is $O(E \log V)$. Here this ends up being $O(N \log N)$. So for a network of standard gates to implement an arbitrary $(m, n)$ gate, we should generically expect to need $N \log N \gtrsim n 2^m$, i.e. roughly exponentially many standard gates.
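
A rough numerical illustration of this comparison, with constants that are my own assumptions rather than anything from the text (4 bits per gate type, about two edges per gate, each edge named by roughly $\log_2 N$ bits):

```python
import math

# A rough numerical illustration of the lower bound; the constants here
# (4 bits per gate type, ~2 edges per gate, ~log2(N) bits per edge endpoint)
# are assumptions for the sake of the sketch.

def gate_bits(m, n):
    """Bits needed to specify an arbitrary (m, n) gate directly."""
    return n * 2 ** m

def network_bits(N):
    """Rough bits needed to describe a network of N standard (2, 1) gates."""
    return 4 * N + 2 * N * max(1, math.ceil(math.log2(N)))

for m, n in [(2, 1), (4, 4), (8, 8), (16, 16)]:
    # Smallest N whose description is even long enough to encode an arbitrary
    # (m, n) gate -- a necessary condition for generic gates, not a construction.
    N = 1
    while network_bits(N) < gate_bits(m, n):
        N += 1
    print(f"({m},{n}) gate: {gate_bits(m, n)} bits, so generically N >= {N}")
```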

Binary Addition

But this is not always the case: consider a gate which adds together two $k$-digit binary numbers. This will be a $(2k, k + 1)$ gate. But to construct this from 1 half-adder and $k - 1$ full-adders requires only $2 + 5(k - 1) = 5k - 3$ separate standard logic gates. Our value of $N$ here does not satisfy $N \log N \gtrsim n 2^m$, as $5k - 3$ grows linearly in $k$ while $n 2^m = (k + 1) 2^{2k}$ grows exponentially. Something is clearly up here. Addition appears to be some sort of special process which is unusually simple.
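
Here is a sketch of the ripple-carry construction, to make the gap concrete (the encoding choices, such as little-endian bit order, are mine):

```python
# A sketch of the ripple-carry adder described above. The loop applies
# full-adder logic at every position; with the first carry fixed at 0 the
# first stage reduces to a half-adder, so the construction costs
# 2 + 5(k-1) = 5k - 3 standard gates, while an arbitrary (2k, k+1) gate
# needs (k+1) * 2^(2k) bits just to write down.

def ripple_add(a_bits, b_bits):
    """Add two little-endian k-bit numbers; returns k+1 little-endian bits."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)               # sum bit (two XORs)
        carry = (a & b) | (carry & (a ^ b))     # carry out (two ANDs, one OR)
    return out + [carry]

def to_bits(x, k):
    return [(x >> i) & 1 for i in range(k)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))

k = 4
for a in range(2 ** k):
    for b in range(2 ** k):
        assert from_bits(ripple_add(to_bits(a, k), to_bits(b, k))) == a + b

print("standard gates used:", 5 * k - 3)                                       # 17
print("bits to specify an arbitrary (2k, k+1) gate:", (k + 1) * 2 ** (2 * k))  # 1280
```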

In this framework, this appears to us as addition being much easier than expected to construct as a network. Being easier than expected to construct as a network also makes it faster to compute, since the number of gates corresponds to the number of operations needed to evaluate the process.