Logical matrix

Source: Wikipedia, the free encyclopedia.

A logical matrix, binary matrix, relation matrix, Boolean matrix, or (0, 1)-matrix is a matrix with entries from the Boolean domain B = {0, 1}. Such a matrix can be used to represent a binary relation between a pair of finite sets. It is an important tool in combinatorial mathematics and theoretical computer science.

Matrix representation of a relation

If R is a binary relation between the finite indexed sets X and Y (so R ⊆ X × Y), then R can be represented by the logical matrix M whose row and column indices index the elements of X and Y, respectively, such that the entries of M are defined by

$$M_{i,j} = \begin{cases} 1 & (x_i, y_j) \in R, \\ 0 & (x_i, y_j) \notin R. \end{cases}$$

In order to designate the row and column numbers of the matrix, the sets X and Y are indexed with positive integers: i ranges from 1 to the cardinality (size) of X, and j ranges from 1 to the cardinality of Y. See the entry on indexed sets for more detail.

Example

The binary relation R on the set {1, 2, 3, 4} is defined so that aRb holds if and only if a divides b evenly, with no remainder. For example, 2R4 holds because 2 divides 4 without leaving a remainder, but 3R4 does not hold because when 3 divides 4, there is a remainder of 1. The following set is the set of pairs for which the relation R holds.

{(1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (2, 4), (3, 3), (4, 4)}.

The corresponding representation as a logical matrix is

$$\begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},$$

which includes a diagonal of ones, since each number divides itself.
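To make the construction concrete, here is a minimal Python sketch (not part of the original article) that builds this matrix directly from the divisibility condition:

```python
# Build the logical matrix of the divisibility relation on {1, 2, 3, 4}.
X = [1, 2, 3, 4]

# M[i][j] = 1 exactly when X[i] divides X[j] with no remainder.
M = [[1 if b % a == 0 else 0 for b in X] for a in X]

for row in M:
    print(row)
# Expected output:
# [1, 1, 1, 1]
# [0, 1, 0, 1]
# [0, 0, 1, 0]
# [0, 0, 0, 1]
```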

Other examples

Some properties

Figure: multiplication of two logical matrices using Boolean algebra.

The matrix representation of the equality relation on a finite set is the identity matrix I, that is, the matrix whose entries on the diagonal are all 1, while the others are all 0. More generally, if relation R satisfies I ⊆ R, then R is a reflexive relation.
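As a small illustration of this criterion, a Python sketch (the helper name is_reflexive is hypothetical) that tests whether the identity matrix fits inside R entrywise:

```python
def is_reflexive(M):
    """Return True if I <= M entrywise, i.e. every diagonal entry
    of the square logical matrix M is 1."""
    return all(M[i][i] == 1 for i in range(len(M)))

# The divisibility matrix from the example above is reflexive:
print(is_reflexive([[1, 1, 1, 1],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0],
                    [0, 0, 0, 1]]))   # True
```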

If the Boolean domain is viewed as a semiring, where addition corresponds to logical OR and multiplication to logical AND, the matrix representation of the composition of two relations is equal to the matrix product of the matrix representations of these relations. This product can be computed in expected time O(n²).[2]
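A sketch of the Boolean matrix product in Python, using the straightforward triple loop rather than the expected-time algorithm cited in note [2]; entry (i, j) is 1 exactly when some index k has A[i][k] = B[k][j] = 1, which mirrors composition of relations:

```python
def bool_matmul(A, B):
    """Boolean matrix product: entry (i, j) is the OR over k of A[i][k] AND B[k][j].
    A is m x p, B is p x n; the result is m x n."""
    m, p, n = len(A), len(B), len(B[0])
    return [[int(any(A[i][k] and B[k][j] for k in range(p)))
             for j in range(n)]
            for i in range(m)]

# Composition of two small relations on {1, 2}: R relates 1 to 2, S relates 2 to 1,
# so the composite relates 1 to 1.
R = [[0, 1],
     [0, 0]]
S = [[0, 0],
     [1, 0]]
print(bool_matmul(R, S))   # [[1, 0], [0, 0]]
```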

Frequently, operations on binary matrices are defined in terms of modular arithmetic mod 2, that is, the elements are treated as elements of the Galois field GF(2) = ℤ₂. They arise in a variety of representations and have a number of more restricted special forms. They are applied, for example, in XOR-satisfiability.
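For contrast, a sketch of the same kind of product computed over GF(2), where addition is XOR (reduction mod 2) rather than logical OR; the helper name gf2_matmul is illustrative only:

```python
def gf2_matmul(A, B):
    """Matrix product over GF(2): multiplication is AND, addition is XOR
    (equivalently, ordinary arithmetic reduced mod 2)."""
    m, p, n = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(p)) % 2
             for j in range(n)]
            for i in range(m)]

A = [[1, 1],
     [0, 1]]
print(gf2_matmul(A, A))   # [[1, 0], [0, 1]]: A is its own inverse over GF(2)
```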

The number of distinct m-by-n binary matrices is equal to 2^{mn}, and is thus finite.

Lattice

Let n and m be given and let U denote the set of all logical m × n matrices. Then U has a partial order given by

$$\forall A, B \in U, \quad A \le B \iff \forall i, j \; A_{ij} \le B_{ij}.$$

In fact, U forms a Boolean algebra with the operations and & or between two matrices applied component-wise. The complement of a logical matrix is obtained by swapping all zeros and ones for their opposite.
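A brief sketch of these componentwise operations, assuming matrices are nested Python lists of 0/1 entries; the helper names are illustrative, not standard library functions:

```python
def join(A, B):        # componentwise OR (least upper bound in U)
    return [[a | b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def meet(A, B):        # componentwise AND (greatest lower bound in U)
    return [[a & b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def complement(A):     # swap every 0 and 1
    return [[1 - a for a in row] for row in A]

def leq(A, B):         # the partial order: A <= B iff A_ij <= B_ij for all i, j
    return all(a <= b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

A = [[1, 0], [0, 1]]
B = [[1, 1], [0, 1]]
print(leq(A, B))           # True
print(join(A, B))          # [[1, 1], [0, 1]]
print(meet(A, B))          # [[1, 0], [0, 1]]
print(complement(A))       # [[0, 1], [1, 0]]
```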

Every logical matrix A = (A_{ij}) has a transpose A^T = (A_{ji}). Suppose A is a logical matrix with no columns or rows identically zero. Then the matrix product A A^T, using Boolean arithmetic, contains the m × m identity matrix, and the product A^T A contains the n × n identity.
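A quick numerical check of this statement on one sample matrix (a sketch, not a proof), reusing a naive Boolean product:

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def bool_matmul(A, B):
    """Naive Boolean matrix product: OR of ANDs along the shared index."""
    m, p, n = len(A), len(B), len(B[0])
    return [[int(any(A[i][k] and B[k][j] for k in range(p)))
             for j in range(n)]
            for i in range(m)]

# A 2 x 3 logical matrix with no zero row and no zero column.
A = [[1, 0, 1],
     [0, 1, 0]]

AAT = bool_matmul(A, transpose(A))   # 2 x 2, contains the 2 x 2 identity
ATA = bool_matmul(transpose(A), A)   # 3 x 3, contains the 3 x 3 identity
print(all(AAT[i][i] == 1 for i in range(2)),
      all(ATA[i][i] == 1 for i in range(3)))   # True True
```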

As a mathematical structure, the Boolean algebra U forms a lattice ordered by inclusion; additionally it is a multiplicative lattice due to matrix multiplication.

Every logical matrix in U corresponds to a binary relation. These listed operations on U, and ordering, correspond to a calculus of relations, where the matrix multiplication represents composition of relations.[3]

Logical vectors

Group-like structures

                        Totality^α  Associativity  Identity  Divisibility^β  Commutativity
Partial magma           Unneeded    Unneeded       Unneeded  Unneeded        Unneeded
Semigroupoid            Unneeded    Required       Unneeded  Unneeded        Unneeded
Small category          Unneeded    Required       Required  Unneeded        Unneeded
Groupoid                Unneeded    Required       Required  Required        Unneeded
Magma                   Required    Unneeded       Unneeded  Unneeded        Unneeded
Quasigroup              Required    Unneeded       Unneeded  Required        Unneeded
Unital magma            Required    Unneeded       Required  Unneeded        Unneeded
Loop                    Required    Unneeded       Required  Required        Unneeded
Semigroup               Required    Required       Unneeded  Unneeded        Unneeded
Associative quasigroup  Required    Required       Unneeded  Required        Unneeded
Monoid                  Required    Required       Required  Unneeded        Unneeded
Commutative monoid      Required    Required       Required  Unneeded        Required
Group                   Required    Required       Required  Required        Unneeded
Abelian group           Required    Required       Required  Required        Required

^α The closure axiom, used by many sources and defined differently, is equivalent to totality.
^β Here, divisibility refers specifically to the quasigroup axioms.

If m or n equals one, then the m × n logical matrix (m_{ij}) is a logical vector or bit string. If m = 1, the vector is a row vector, and if n = 1, it is a column vector. In either case the index equaling 1 is dropped from denotation of the vector.

Suppose u and v are two logical vectors. The outer product u v^T of these vectors is a logical matrix; the corresponding relation is called a rectangular relation. A reordering of the rows and columns of such a matrix can assemble all the ones into a rectangular part of the matrix.[4]

Let h be the vector of all ones. Then if v is an arbitrary logical vector, the relation R = v h^T has constant rows determined by v. In the calculus of relations such an R is called a vector.[4] A particular instance is the universal relation h h^T.
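A sketch of such vector relations in Python; the names h and v follow the text, and the helper outer is hypothetical:

```python
def outer(u, v):
    """Logical outer product: entry (i, j) is u[i] AND v[j]."""
    return [[ui & vj for vj in v] for ui in u]

h = [1, 1, 1, 1]        # the vector of all ones
v = [1, 0, 1, 0]        # an arbitrary logical vector of the same length

R = outer(v, h)         # R = v h^T: rows are constant, determined by v
print(R)                # [[1, 1, 1, 1], [0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0]]

print(outer(h, h))      # h h^T: the universal relation, all entries 1
```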

For a given relation R, a maximal rectangular relation contained in R is called a concept in R. Relations may be studied by decomposing into concepts, and then noting the induced concept lattice.

Consider the table of group-like structures, where "unneeded" can be denoted 0, and "required" denoted by 1, forming a logical matrix R. To calculate elements of R R^T, it is necessary to use the logical inner product of pairs of logical vectors in rows of this matrix. If this inner product is 0, then the rows are orthogonal. In fact, small category is orthogonal to quasigroup, and groupoid is orthogonal to magma. Consequently there are zeros in R R^T, and it fails to be a universal relation.
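A sketch of that computation, taking the relevant rows of R from the table above (column order: totality, associativity, identity, divisibility, commutativity); the helper inner is illustrative only:

```python
def inner(u, v):
    """Logical inner product: OR of componentwise ANDs."""
    return int(any(a and b for a, b in zip(u, v)))

rows = {
    "small category": [0, 1, 1, 0, 0],
    "quasigroup":     [1, 0, 0, 1, 0],
    "groupoid":       [0, 1, 1, 1, 0],
    "magma":          [1, 0, 0, 0, 0],
}

# Orthogonal pairs of rows give zeros in R R^T,
# so R R^T is not the universal relation.
print(inner(rows["small category"], rows["quasigroup"]))   # 0: orthogonal
print(inner(rows["groupoid"], rows["magma"]))              # 0: orthogonal
```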

Row and column sums

Adding up all the ones in a logical matrix may be accomplished in two ways: first summing the rows or first summing the columns. When the row sums are added, the sum is the same as when the column sums are added. In incidence geometry, the matrix is interpreted as an incidence matrix with the rows corresponding to "points" and the columns as "blocks" (generalizing lines made of points). A row sum is called its point degree, and a column sum is the block degree. The sum of point degrees equals the sum of block degrees.[5]
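A short sketch verifying the equality of the two summation orders on the divisibility matrix from the example above:

```python
M = [[1, 1, 1, 1],
     [0, 1, 0, 1],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]

row_sums = [sum(row) for row in M]          # point degrees
col_sums = [sum(col) for col in zip(*M)]    # block degrees

print(row_sums, col_sums)                   # [4, 2, 1, 1] [1, 2, 2, 3]
print(sum(row_sums) == sum(col_sums))       # True: both totals equal 8
```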

An early problem in the area was "to find necessary and sufficient conditions for the existence of an incidence structure with given point degrees and block degrees; or in matrix language, for the existence of a (0, 1)-matrix of type v × b with given row and column sums".[5] This problem is solved by the Gale–Ryser theorem.

See also

Notes

  1. ^ Petersen, Kjeld (February 8, 2013). "Binmatrix". Retrieved August 11, 2017.
  2. ^ Patrick E. O'Neil and Elizabeth J. O'Neil (1973). "A Fast Expected Time Algorithm for Boolean Matrix Multiplication and Transitive Closure", Information and Control 22(2): 132–138. The computation relies on addition being idempotent, cf. p. 134 (bottom).
  3. ^ Irving Copilowish (December 1948). "Matrix development of the calculus of relations", Journal of Symbolic Logic 13(4): 193–203.
  4. ^ .
  5. ^ .

References

External links