First-order logic
First-order logic—also called predicate logic, predicate calculus, or quantificational logic—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables.
A theory about a topic, such as set theory, a theory for groups,^{[3]} or a formal theory of arithmetic, is usually a first-order logic together with a specified domain of discourse (over which the quantified variables range), finitely many functions from that domain to itself, finitely many predicates defined on that domain, and a set of axioms believed to hold about them. "Theory" is sometimes understood in a more formal sense as just a set of sentences in first-order logic.
The term "first-order" distinguishes first-order logic from higher-order logic, in which there are predicates having predicates or functions as arguments, or in which quantification over predicates, functions, or both, is permitted.^{[4]}^{: 56 } In first-order theories, predicates are often associated with sets. In interpreted higher-order theories, predicates may be interpreted as sets of sets.
There are many deductive systems for first-order logic which are both sound, i.e., all provable statements are true in all models, and complete, i.e., all statements which are true in all models are provable.
First-order logic is the standard for the formalization of mathematics into axioms, and is studied in the foundations of mathematics.
The foundations of first-order logic were developed independently by Gottlob Frege and Charles Sanders Peirce.^{[5]} For a history of first-order logic and how it came to dominate formal logic, see José Ferreirós (2001).
Introduction
While propositional logic deals with simple declarative propositions, first-order logic additionally covers predicates and quantification. A predicate evaluates to true or false for an entity or entities in the domain of discourse.
Consider the two sentences "Socrates is a philosopher" and "Plato is a philosopher". In propositional logic, these sentences themselves are viewed as the individuals of study, and might be denoted, for example, by variables such as p and q. They are not viewed as an application of a predicate, such as "is a philosopher", to any particular objects in the domain of discourse; instead, they are viewed as purely an utterance which is either true or false.^{[6]} However, in first-order logic, these two sentences may be framed as statements that a certain individual or non-logical object has a property. In this example, both sentences happen to have the common form "x is a philosopher" for some individual x: in the first sentence the value of the variable x is "Socrates", and in the second sentence it is "Plato". Due to the ability to speak about non-logical individuals along with the original logical connectives, first-order logic includes propositional logic.^{[7]}
The truth of a formula such as "x is a philosopher" depends on which object is denoted by x and on the interpretation of the predicate "is a philosopher". Consequently, "x is a philosopher" alone does not have a definite truth value of true or false, and is akin to a sentence fragment.^{[8]} Relationships between predicates can be stated using logical connectives. For example, the first-order formula "if x is a philosopher, then x is a scholar" is a conditional statement with "x is a philosopher" as its hypothesis, and "x is a scholar" as its conclusion, which again needs specification of x in order to have a definite truth value.
Quantifiers can be applied to variables in a formula. The variable x in the previous formula can be universally quantified, for instance, with the first-order sentence "For every x, if x is a philosopher, then x is a scholar". The universal quantifier "for every" in this sentence expresses the idea that the claim "if x is a philosopher, then x is a scholar" holds for all choices of x.
The negation of the sentence "For every x, if x is a philosopher, then x is a scholar" is logically equivalent to the sentence "There exists x such that x is a philosopher and x is not a scholar". The existential quantifier "there exists" expresses the idea that the claim "x is a philosopher and x is not a scholar" holds for some choice of x.
The predicates "is a philosopher" and "is a scholar" each take a single variable. In general, predicates can take several variables. In the first-order sentence "Socrates is the teacher of Plato", the predicate "is the teacher of" takes two variables.
An interpretation (or model) of a first-order formula specifies what each predicate means, and the entities that can instantiate the variables. These entities form the domain of discourse or universe, which is usually required to be a non-empty set. For example, consider the sentence "There exists x such that x is a philosopher." This sentence is seen as being true in an interpretation such that the domain of discourse consists of all human beings, and that the predicate "is a philosopher" is understood as "was the author of the Republic." It is true, as witnessed by Plato in that text.^{[clarification needed]}
There are two key parts of first-order logic. The syntax determines which finite sequences of symbols are well-formed expressions in first-order logic, while the semantics determines the meanings behind these expressions.
Syntax
Unlike natural languages, such as English, the language of first-order logic is completely formal, so that it can be mechanically determined whether a given expression is well formed. There are two key types of well-formed expressions: terms, which intuitively represent objects, and formulas, which intuitively express statements that can be true or false. The terms and formulas of first-order logic are strings of symbols, where all the symbols together form the alphabet of the language.
Alphabet
As with all formal languages, the nature of the symbols themselves is outside the scope of formal logic; they are often regarded simply as letters and punctuation symbols.
It is common to divide the symbols of the alphabet into logical symbols, which always have the same meaning, and non-logical symbols, whose meaning varies by interpretation.^{[9]} For example, the logical symbol ∧ always represents "and"; it is never interpreted as "or", which is represented by the logical symbol ∨. However, a non-logical predicate symbol such as Phil(x) could be interpreted to mean "x is a philosopher", "x is a man named Philip", or any other unary predicate depending on the interpretation at hand.
Logical symbols
Logical symbols are a set of characters that vary by author, but usually include the following:^{[10]}
 Quantifier symbols: ∀ for universal quantification, and ∃ for existential quantification
 Logical connectives: ∧ for conjunction, ∨ for disjunction, → for implication, ↔ for biconditional, ¬ for negation. Some authors^{[11]} use Cpq instead of → and Epq instead of ↔, especially in contexts where → is used for other purposes. Moreover, the horseshoe ⊃ may replace →;^{[8]} the triple bar ≡ may replace ↔; a tilde (~), Np, or Fp may replace ¬; a double bar ‖,^{[12]} or Apq may replace ∨; and an ampersand &, Kpq, or the middle dot ⋅ may replace ∧, especially if these symbols are not available for technical reasons. (The aforementioned symbols Cpq, Epq, Np, Apq, and Kpq are used in Polish notation.)
 Parentheses, brackets, and other punctuation symbols. The choice of such symbols varies depending on context.
 An infinite set of variables, often denoted by lowercase letters at the end of the alphabet x, y, z, ... . Subscripts are often used to distinguish variables: x_{0}, x_{1}, x_{2}, ... .
 An equality symbol (sometimes, identity symbol) = (see § Equality and its axioms below).
Not all of these symbols are required in first-order logic. Either one of the quantifiers along with negation, conjunction (or disjunction), variables, brackets, and equality suffices.
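Because only a small core of symbols is required, the omitted ones can be reintroduced as abbreviations. For instance, taking ¬, ∧, and ∀ as primitive, the classical equivalences below define ∃, ∨, and → (a standard choice of definitions, shown here for illustration):
 ∃x φ ≡ ¬∀x ¬φ
 φ ∨ ψ ≡ ¬(¬φ ∧ ¬ψ)
 φ → ψ ≡ ¬(φ ∧ ¬ψ)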
Other logical symbols include the following:
 Truth constants: T, V, or ⊤ for "true" and F, O, or ⊥ for "false" (V and O are from Polish notation). Without any such logical operators of valence 0, these two constants can only be expressed using quantifiers.
 Additional logical connectives such as the Sheffer stroke, Dpq (NAND), and exclusive or, Jpq.
Nonlogical symbols
Non-logical symbols represent predicates (relations), functions and constants. It used to be standard practice to use a fixed, infinite set of non-logical symbols for all purposes:
 For every integer n ≥ 0, there is a collection of n-ary (or n-place) predicate symbols. Because they represent relations between n elements, they are also called relation symbols. For each arity n, there is an infinite supply of them:
 P^{n}_{0}, P^{n}_{1}, P^{n}_{2}, P^{n}_{3}, ...
 For every integer n ≥ 0, there are infinitely many n-ary function symbols:
 f^{ n}_{0}, f^{ n}_{1}, f^{ n}_{2}, f^{ n}_{3}, ...
When the arity of a predicate symbol or function symbol is clear from context, the superscript n is often omitted.
In this traditional approach, there is only one language of first-order logic.^{[13]} This approach is still common, especially in philosophically oriented books.
A more recent practice is to use different non-logical symbols according to the application one has in mind. Therefore, it has become necessary to name the set of all non-logical symbols used in a particular application. This choice is made via a signature.^{[14]}
Typical signatures in mathematics are {1, ×} or just {×} for groups, or {0, 1, +, ×, <} for ordered fields.
Though signatures might in some cases imply how non-logical symbols are to be interpreted, interpretation of the non-logical symbols in the signature is separate (and not necessarily fixed). Signatures concern syntax rather than semantics.
In this approach, every non-logical symbol is of one of the following types:
 A predicate symbol (or relation symbol) with some valence (or arity, number of arguments) greater than or equal to 0. These are often denoted by uppercase letters such as P, Q and R. Examples:
 In P(x), P is a predicate symbol of valence 1. One possible interpretation is "x is a man".
 In Q(x,y), Q is a predicate symbol of valence 2. Possible interpretations include "x is greater than y" and "x is the father of y".
 Relations of valence 0 can be identified with propositional variables, which can stand for any statement. One possible interpretation of R is "Socrates is a man".
 A function symbol, with some valence greater than or equal to 0. These are often denoted by lowercase roman letters such as f, g and h. Examples:
 f(x) may be interpreted as "the father of x". In arithmetic, it may stand for "−x". In set theory, it may stand for "the power set of x".
 In arithmetic, g(x,y) may stand for "x+y". In set theory, it may stand for "the union of x and y".
 Function symbols of valence 0 are called constant symbols, and are often denoted by lowercase letters at the beginning of the alphabet such as a, b and c. The symbol a may stand for Socrates. In arithmetic, it may stand for 0. In set theory, it may stand for the empty set.
The traditional approach can be recovered in the modern approach, by simply specifying the "custom" signature to consist of the traditional sequences of non-logical symbols.
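As an illustration, a signature in the modern approach can be modeled as a small data structure listing each non-logical symbol with its arity. A Python sketch follows; the symbol names (Phil, teacher_of, father_of, socrates, plato) are illustrative assumptions, not part of any standard vocabulary:

```python
# A signature lists the non-logical symbols together with their arities.
signature = {
    "predicates": {"Phil": 1, "teacher_of": 2},  # relation symbols with arity
    "functions": {"father_of": 1},               # function symbols with arity
    "constants": {"socrates", "plato"},          # arity-0 function symbols
}

def arity(signature, symbol):
    """Return the arity of a predicate or function symbol, or None if the
    symbol is not in the signature. Constants are functions of arity 0."""
    if symbol in signature["predicates"]:
        return signature["predicates"][symbol]
    if symbol in signature["functions"]:
        return signature["functions"][symbol]
    if symbol in signature["constants"]:
        return 0
    return None

print(arity(signature, "teacher_of"))  # 2
print(arity(signature, "socrates"))    # 0
```

Keeping the three kinds of symbols separate mirrors the typing rules below: predicate symbols may only head atomic formulas, while function and constant symbols may only occur inside terms.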
Formation rules
BNF grammar


<index> ::= ""
          | <index> "'"
<variable> ::= "x" <index>
<constant> ::= "c" <index>
<unary function> ::= "f1" <index>
<binary function> ::= "f2" <index>
<ternary function> ::= "f3" <index>
<unary predicate> ::= "p1" <index>
<binary predicate> ::= "p2" <index>
<ternary predicate> ::= "p3" <index>
<term> ::= <variable>
         | <constant>
         | <unary function> "(" <term> ")"
         | <binary function> "(" <term> "," <term> ")"
         | <ternary function> "(" <term> "," <term> "," <term> ")"
<atomic formula> ::= "TRUE"
                   | "FALSE"
                   | <term> "=" <term>
                   | <unary predicate> "(" <term> ")"
                   | <binary predicate> "(" <term> "," <term> ")"
                   | <ternary predicate> "(" <term> "," <term> "," <term> ")"
<formula> ::= <atomic formula>
            | "¬" <formula>
            | <formula> "∧" <formula>
            | <formula> "∨" <formula>
            | <formula> "⇒" <formula>
            | <formula> "⇔" <formula>
            | "(" <formula> ")"
            | "∀" <variable> <formula>
            | "∃" <variable> <formula>

The above context-free grammar in Backus–Naur form defines the language of syntactically valid first-order formulas with function symbols and predicate symbols up to arity 3. For higher arities, it needs to be adapted accordingly.^{[15]}^{[citation needed]} 
The example formula ∀x ∃x' (¬ x=c ⇒ f2(x,x')=c') describes multiplicative inverses when f2, c, and c' are interpreted as multiplication, zero, and one, respectively.

The formation rules define the terms and formulas of first-order logic.^{[16]} When terms and formulas are represented as strings of symbols, these rules can be used to write a formal grammar for terms and formulas. These rules are generally context-free (each production has a single symbol on the left side), except that the set of symbols may be allowed to be infinite and there may be many start symbols, for example the variables in the case of terms.
Terms
The set of terms is inductively defined by the following rules:
 Variables. Any variable symbol is a term.
 Functions. If f is an nary function symbol, and t_{1}, ..., t_{n} are terms, then f(t_{1},...,t_{n}) is a term. In particular, symbols denoting individual constants are nullary function symbols, and thus are terms.
Only expressions which can be obtained by finitely many applications of rules 1 and 2 are terms. For example, no expression involving a predicate symbol is a term.
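Rules 1 and 2 translate directly into a recursive well-formedness check. A minimal Python sketch, assuming a fixed signature of hypothetical function symbols (plus, father_of) and constants (socrates, zero); terms are modeled as strings (variables or constants) and nested tuples (function applications):

```python
FUNCTIONS = {"father_of": 1, "plus": 2}  # function symbols with their arities
CONSTANTS = {"socrates", "zero"}         # nullary function symbols

def is_term(t, variables):
    """Rule 1: a variable or constant symbol is a term.
    Rule 2: an n-ary function symbol applied to n terms is a term.
    Nothing else (for example, a predicate application) is a term."""
    if isinstance(t, str):
        return t in variables or t in CONSTANTS
    if isinstance(t, tuple) and t and t[0] in FUNCTIONS:
        f, *args = t
        return len(args) == FUNCTIONS[f] and all(is_term(a, variables) for a in args)
    return False

print(is_term(("plus", "x", ("father_of", "socrates")), {"x"}))  # True
print(is_term(("plus", "x"), {"x"}))  # False: wrong arity for plus
```

The check is structural induction in code form: the base case is rule 1, the recursive case is rule 2, and the final `return False` enforces that only finitely many rule applications produce terms.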
Formulas
The set of formulas (also called well-formed formulas or WFFs) is inductively defined by the following rules:
 Predicate symbols. If P is an n-ary predicate symbol and t_{1}, ..., t_{n} are terms, then P(t_{1},...,t_{n}) is a formula.
 Equality. If the equality symbol is considered part of logic, and t_{1} and t_{2} are terms, then t_{1} = t_{2} is a formula.
 Negation. If φ is a formula, then ¬φ is a formula.
 Binary connectives. If φ and ψ are formulas, then (φ → ψ) is a formula. Similar rules apply to other binary logical connectives.
 Quantifiers. If φ is a formula and x is a variable, then ∀x φ (for all x, φ holds) and ∃x φ (there exists x such that φ holds) are formulas.
Only expressions which can be obtained by finitely many applications of rules 1–5 are formulas. The formulas obtained from the first two rules are said to be atomic formulas.
For example, ∀x ∀y (P(f(x)) → ¬(P(x) → Q(f(y), x, z))) is a formula, if f is a unary function symbol, P a unary predicate symbol, and Q a ternary predicate symbol. However, ∀x x → is not a formula, although it is a string of symbols from the alphabet.
The role of the parentheses in the definition is to ensure that any formula can only be obtained in one way—by following the inductive definition (i.e., there is a unique parse tree for each formula). This property is known as unique readability of formulas. There are many conventions for where parentheses are used in formulas. For example, some authors use colons or full stops instead of parentheses, or change the places in which parentheses are inserted. Each author's particular definition must be accompanied by a proof of unique readability.
Notational conventions
For convenience, conventions have been developed about the precedence of the logical operators, to avoid the need to write parentheses in some cases. These rules are similar to the order of operations in arithmetic. A common convention is:
 ¬ is evaluated first
 ∧ and ∨ are evaluated next
 Quantifiers are evaluated next
 → is evaluated last.
Moreover, extra punctuation not required by the definition may be inserted to make formulas easier to read. Thus the formula ¬∀x P(x) → ∃x ¬P(x) might be written as (¬[∀x P(x)]) → ∃x [¬P(x)].
In some fields, it is common to use infix notation for binary relations and functions, instead of the prefix notation defined above. For example, in arithmetic, one typically writes "2 + 2 = 4" instead of "=(+(2,2),4)". It is common to regard formulas in infix notation as abbreviations for the corresponding formulas in prefix notation, cf. also term structure vs. representation.
The definitions above use infix notation for binary connectives such as →. A less common convention is Polish notation, in which one writes →, ∧, and so on in front of their arguments rather than between them. This convention is advantageous in that it allows all punctuation symbols to be discarded. As such, Polish notation is compact and elegant, but rarely used in practice because it is hard for humans to read. In Polish notation, the formula ∀x ∀y (P(f(x)) → ¬(P(x) → Q(f(y), x, z))) becomes "∀x∀y→Pfx¬→PxQfyxz".
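The translation into Polish notation is a simple preorder traversal of the formula's parse tree. A Python sketch, with formulas modeled as nested tuples (an illustrative encoding, not a standard one):

```python
def term_pn(t):
    """Render a term in Polish notation: function symbol first, no parentheses."""
    if isinstance(t, str):
        return t
    return t[0] + "".join(term_pn(a) for a in t[1:])

def formula_pn(phi):
    """Render a formula in Polish notation. Formulas are tuples such as
    ("forall", "x", body), ("not", body), ("->", lhs, rhs), ("P", term, ...)."""
    op = phi[0]
    if op == "forall":
        return "∀" + phi[1] + formula_pn(phi[2])
    if op == "not":
        return "¬" + formula_pn(phi[1])
    if op == "->":
        return "→" + formula_pn(phi[1]) + formula_pn(phi[2])
    # otherwise an atomic formula: predicate symbol applied to terms
    return op + "".join(term_pn(t) for t in phi[1:])

# ∀x ∀y (P(f(x)) → ¬(P(x) → Q(f(y), x, z)))
phi = ("forall", "x", ("forall", "y",
       ("->", ("P", ("f", "x")),
              ("not", ("->", ("P", "x"), ("Q", ("f", "y"), "x", "z"))))))
print(formula_pn(phi))  # ∀x∀y→Pfx¬→PxQfyxz
```

Because every symbol has a fixed arity, the output needs no parentheses at all, which is exactly the property that makes Polish notation unambiguous.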
Free and bound variables
In a formula, a variable may occur free or bound (or both). One formalization of this notion is due to Quine: first the concept of a variable occurrence is defined, then whether a variable occurrence is free or bound, then whether a variable symbol overall is free or bound. In order to distinguish different occurrences of the identical symbol x, each occurrence of a variable symbol x in a formula φ is identified with the initial substring of φ up to the point at which said instance of the symbol x appears.^{[8]}^{: 297 } Then, an occurrence of x is said to be bound if that occurrence of x lies within the scope of at least one of either ∃x or ∀x. Finally, x is bound in φ if all occurrences of x in φ are bound.^{[8]}^{: 142–143 }
Intuitively, a variable symbol is free in a formula if at no point is it quantified:^{[8]}^{: 142–143 } in ∀y P(x, y), the sole occurrence of variable x is free while that of y is bound. The free and bound variable occurrences in a formula are defined inductively as follows.
 Atomic formulas
 If φ is an atomic formula, then x occurs free in φ if and only if x occurs in φ. Moreover, there are no bound variables in any atomic formula.
 Negation
 x occurs free in ¬φ if and only if x occurs free in φ. x occurs bound in ¬φ if and only if x occurs bound in φ.
 Binary connectives
 x occurs free in (φ → ψ) if and only if x occurs free in either φ or ψ. x occurs bound in (φ → ψ) if and only if x occurs bound in either φ or ψ. The same rule applies to any other binary connective in place of →.
 Quantifiers
 x occurs free in ∀y φ, if and only if x occurs free in φ and x is a different symbol from y. Also, x occurs bound in ∀y φ, if and only if x is y or x occurs bound in φ. The same rule holds with ∃ in place of ∀.
For example, in ∀x ∀y (P(x) → Q(x,f(x),z)), x and y occur only bound,^{[19]} z occurs only free, and w is neither because it does not occur in the formula.
Free and bound variables of a formula need not be disjoint sets: in the formula P(x) → ∀x Q(x), the first occurrence of x, as argument of P, is free while the second one, as argument of Q, is bound.
A formula in first-order logic with no free variable occurrences is called a first-order sentence. These are the formulas that will have well-defined truth values under an interpretation. For example, whether a formula such as Phil(x) is true must depend on what x represents. But the sentence ∃x Phil(x) will be either true or false in a given interpretation.
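The inductive definition of free occurrences translates directly into a recursive function. A Python sketch, with formulas modeled as nested tuples and every bare string inside a term treated as a variable (both conventions are illustrative assumptions):

```python
def term_vars(t):
    """Variables of a term; in this sketch every bare string is a variable
    and constants are written as arity-0 applications such as ("zero",)."""
    if isinstance(t, str):
        return {t}
    return {v for a in t[1:] for v in term_vars(a)}

def free_vars(phi):
    """Free variable symbols of a formula, following the inductive rules above."""
    op = phi[0]
    if op in ("forall", "exists"):        # a quantifier binds its own variable
        return free_vars(phi[2]) - {phi[1]}
    if op == "not":
        return free_vars(phi[1])
    if op in ("and", "or", "->", "<->"):  # binary connectives: take the union
        return free_vars(phi[1]) | free_vars(phi[2])
    return {v for t in phi[1:] for v in term_vars(t)}  # atomic formula

# P(x) -> forall x Q(x): the occurrence of x in P is free, the one in Q bound,
# so x is still a free variable of the whole formula.
phi = ("->", ("P", "x"), ("forall", "x", ("Q", "x")))
print(free_vars(phi))                             # {'x'}
print(free_vars(("exists", "x", ("Phil", "x"))))  # set(): a sentence
```

A formula is a sentence exactly when this function returns the empty set.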
Example: ordered abelian groups
In mathematics, the language of ordered abelian groups has one constant symbol 0, one unary function symbol −, one binary function symbol +, and one binary relation symbol ≤. Then:
 The expressions +(x, y) and +(x, +(y, −(z))) are terms. These are usually written as x + y and x + y − z.
 The expressions +(x, y) = 0 and ≤(+(x, +(y, −(z))), +(x, y)) are atomic formulas. These are usually written as x + y = 0 and x + y − z ≤ x + y.
 The expression ∀x ∀y [≤(+(x, y), z) → ∀x ∀y +(x, y) = 0] is a formula, which is usually written as ∀x ∀y (x + y ≤ z → ∀x ∀y x + y = 0). This formula has one free variable, z.
The axioms for ordered abelian groups can be expressed as a set of sentences in the language. For example, the axiom stating that the group is commutative is usually written ∀x ∀y (x + y = y + x).
Semantics
An interpretation of a first-order language assigns a denotation to each non-logical symbol (predicate symbol, function symbol, or constant symbol) in that language. It also determines a domain of discourse that specifies the range of the quantifiers. The result is that each term is assigned an object that it represents, each predicate is assigned a property of objects, and each sentence is assigned a truth value. In this way, an interpretation provides semantic meaning to the terms, predicates, and formulas of the language.
First-order structures
The most common way of specifying an interpretation (especially in mathematics) is to specify a structure (also called a model; see below). The structure consists of a domain of discourse D and an interpretation function I mapping non-logical symbols to predicates, functions, and constants.
The domain of discourse D is a non-empty set of "objects" of some kind. Intuitively, given an interpretation, a first-order formula becomes a statement about these objects; for example, ∃x P(x) states the existence of some object in D for which the predicate P is true (or, more precisely, for which the predicate assigned to the predicate symbol P by the interpretation is true). For example, one can take D to be the set of integers.
Non-logical symbols are interpreted as follows:
 The interpretation of an n-ary function symbol is a function from D^{n} to D. For example, if the domain of discourse is the set of integers, a function symbol f of arity 2 can be interpreted as the function that gives the sum of its arguments. In other words, the symbol f is associated with the function I(f) which, in this interpretation, is addition.
 The interpretation of a constant symbol (a function symbol of arity 0) is a function from D^{0} (a set whose only member is the empty tuple) to D, which can be simply identified with an object in D. For example, an interpretation may assign the value I(c) = 10 to the constant symbol c.
 The interpretation of an n-ary predicate symbol is a set of n-tuples of elements of D, giving the arguments for which the predicate is true. For example, an interpretation of a binary predicate symbol P may be the set of pairs of integers such that the first one is less than the second. According to this interpretation, the predicate P would be true if its first argument is less than its second argument. Equivalently, predicate symbols may be assigned Boolean-valued functions from D^{n} to {true, false}.
Evaluation of truth values
A formula evaluates to true or false given an interpretation and a variable assignment μ that associates an element of the domain of discourse with each variable. The reason that a variable assignment is required is to give meanings to formulas with free variables, such as y = x. The truth value of this formula changes depending on the values that x and y denote.
First, the variable assignment μ can be extended to all terms of the language, with the result that each term maps to a single element of the domain of discourse. The following rules are used to make this assignment:
 Variables. Each variable x evaluates to μ(x).
 Functions. Given terms t_{1}, ..., t_{n} that have been evaluated to elements d_{1}, ..., d_{n} of the domain of discourse, and an n-ary function symbol f, the term f(t_{1}, ..., t_{n}) evaluates to (I(f))(d_{1}, ..., d_{n}).
Next, each formula is assigned a truth value. The inductive definition used to make this assignment is called the T-schema.
 Atomic formulas (1). A formula P(t_{1}, ..., t_{n}) is associated the value true or false depending on whether ⟨v_{1}, ..., v_{n}⟩ ∈ I(P), where v_{1}, ..., v_{n} are the evaluations of the terms t_{1}, ..., t_{n} and I(P) is the interpretation of P, which by assumption is a subset of D^{n}.
 Atomic formulas (2). A formula t_{1} = t_{2} is assigned true if t_{1} and t_{2} evaluate to the same object of the domain of discourse (see the section on equality below).
 Logical connectives. A formula in the form ¬φ, φ → ψ, etc. is evaluated according to the truth table for the connective in question, as in propositional logic.
 Existential quantifiers. A formula ∃x φ(x) is true according to M and μ if there exists an evaluation μ′ of the variables that differs from μ at most regarding the evaluation of x and such that φ is true according to the interpretation M and the variable assignment μ′. This formal definition captures the idea that ∃x φ(x) is true if and only if there is a way to choose a value for x such that φ(x) is satisfied.
 Universal quantifiers. A formula ∀x φ(x) is true according to M and μ if φ(x) is true for every pair composed by the interpretation M and some variable assignment μ′ that differs from μ at most on the value of x. This captures the idea that ∀x φ(x) is true if every possible choice of a value for x causes φ(x) to be true.
If a formula does not contain free variables, and so is a sentence, then the initial variable assignment does not affect its truth value. In other words, a sentence is true according to M and μ if and only if it is true according to M and every other variable assignment μ′.
There is a second common approach to defining truth values that does not rely on variable assignment functions. Instead, given an interpretation M, one first adds to the signature a collection of constant symbols, one for each element of the domain of discourse in M; say that for each d in the domain the constant symbol c_{d} is fixed. The interpretation is extended so that each new constant symbol is assigned to its corresponding element of the domain. One now defines truth for quantified formulas syntactically, as follows:
 Existential quantifiers (alternate). A formula ∃x φ(x) is true according to M if there is some d in the domain of discourse such that φ(c_{d}) holds. Here φ(c_{d}) is the result of substituting c_{d} for every free occurrence of x in φ.
 Universal quantifiers (alternate). A formula ∀x φ(x) is true according to M if, for every d in the domain of discourse, φ(c_{d}) is true according to M.
This alternate approach gives exactly the same truth values to all sentences as the approach via variable assignments.
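For finite structures, the T-schema can be implemented directly as a recursive evaluator. A Python sketch; the nested-tuple encoding of formulas and the example structure (domain {0, 1, 2, 3}, P as "is even", plus as addition modulo 4) are illustrative assumptions:

```python
def ev(t, I, mu):
    """Evaluate a term: a variable via the assignment mu, a constant via I,
    and a function application recursively via the interpreted function."""
    if isinstance(t, str):
        return mu[t] if t in mu else I[t]
    return I[t[0]](*(ev(a, I, mu) for a in t[1:]))

def holds(M, phi, mu):
    """Tarskian truth: does the structure M = (domain, I) satisfy phi under
    the variable assignment mu? Sketch covering ¬, ∧, →, =, ∀, ∃."""
    domain, I = M
    op = phi[0]
    if op == "forall":   # true under every x-variant of mu
        return all(holds(M, phi[2], {**mu, phi[1]: d}) for d in domain)
    if op == "exists":   # true under some x-variant of mu
        return any(holds(M, phi[2], {**mu, phi[1]: d}) for d in domain)
    if op == "not":
        return not holds(M, phi[1], mu)
    if op == "and":
        return holds(M, phi[1], mu) and holds(M, phi[2], mu)
    if op == "->":
        return (not holds(M, phi[1], mu)) or holds(M, phi[2], mu)
    if op == "=":
        return ev(phi[1], I, mu) == ev(phi[2], I, mu)
    return tuple(ev(t, I, mu) for t in phi[1:]) in I[op]  # predicate symbol

# The sentence  ∀x (P(x) → ∃y plus(x,y) = x)  is true here: y = 0 always works.
M = ({0, 1, 2, 3},
     {"P": {(0,), (2,)}, "plus": lambda a, b: (a + b) % 4})
phi = ("forall", "x", ("->", ("P", "x"),
                       ("exists", "y", ("=", ("plus", "x", "y"), "x"))))
print(holds(M, phi, {}))  # True
```

The quantifier clauses are the assignment-variant definition verbatim: they range over every μ′ that differs from μ at most on the quantified variable.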
Validity, satisfiability, and logical consequence
If a sentence φ evaluates to true under a given interpretation M, one says that M satisfies φ; this is denoted^{[20]} M ⊨ φ. A sentence is satisfiable if there is some interpretation under which it is true. This is a bit different from the symbol ⊨ from model theory, where M ⊨ φ denotes satisfiability in a model, i.e. "there is a suitable assignment of values in M's domain to variable symbols of φ".^{[21]}
Satisfiability of formulas with free variables is more complicated, because an interpretation on its own does not determine the truth value of such a formula. The most common convention is that a formula φ with free variables x_{1}, ..., x_{n} is said to be satisfied by an interpretation if the formula φ remains true regardless which individuals from the domain of discourse are assigned to its free variables x_{1}, ..., x_{n}. This has the same effect as saying that a formula φ is satisfied if and only if its universal closure ∀x_{1} ... ∀x_{n} φ is satisfied.
A formula is logically valid (or simply valid) if it is true in every interpretation.^{[22]} These formulas play a role similar to tautologies in propositional logic.
A formula φ is a logical consequence of a formula ψ if every interpretation that makes ψ true also makes φ true. In this case one says that φ is logically implied by ψ.
Algebraizations
An alternate approach to the semantics of first-order logic proceeds via abstract algebra. This approach generalizes the Lindenbaum–Tarski algebras of propositional logic. There are three ways of eliminating quantified variables from first-order logic that do not involve replacing quantifiers with other variable-binding term operators:
 Cylindric algebra, by Alfred Tarski and colleagues;
 Polyadic algebra, by Paul Halmos;
 Predicate functor logic, mainly due to Willard Quine.
These algebras are all lattices that properly extend the two-element Boolean algebra.
Tarski and Givant (1987) showed that the fragment of first-order logic that has no atomic sentence lying in the scope of more than three quantifiers has the same expressive power as relation algebra.
First-order theories, models, and elementary classes
A first-order theory of a particular signature is a set of axioms, which are sentences consisting of symbols from that signature. The set of axioms is often finite or recursively enumerable, in which case the theory is called effective.
A first-order structure that satisfies all sentences in a given theory is said to be a model of the theory. An elementary class is the set of all structures satisfying a particular theory. These classes are a main subject of study in model theory.
Many theories have an intended interpretation, a certain model that is kept in mind when studying the theory. For example, the intended interpretation of Peano arithmetic consists of the usual natural numbers with their usual operations.
A theory is consistent if it is not possible to prove a contradiction from the axioms of the theory. A theory is complete if, for every formula in its signature, either that formula or its negation is a logical consequence of the axioms of the theory.
Empty domains
The definition above requires that the domain of discourse of any interpretation must be non-empty. There are settings, such as inclusive logic, where empty domains are permitted.
There are several difficulties with empty domains, however:
 Many common rules of inference are valid only when the domain of discourse is required to be non-empty. One example is the rule stating that φ ∨ ∃x ψ implies ∃x (φ ∨ ψ) when x is not a free variable in φ. This rule, which is used to put formulas into prenex normal form, is sound in non-empty domains, but unsound if the empty domain is permitted.
 The definition of truth in an interpretation that uses a variable assignment function cannot work with empty domains, because there are no variable assignment functions whose range is empty. (Similarly, one cannot assign interpretations to constant symbols.) This truth definition requires that one must select a variable assignment function (μ above) before truth values for even atomic formulas can be defined. Then the truth value of a sentence is defined to be its truth value under any variable assignment, and it is proved that this truth value does not depend on which assignment is chosen. This technique does not work if there are no assignment functions at all; it must be changed to accommodate empty domains.
Thus, when the empty domain is permitted, it must often be treated as a special case. Most authors, however, simply exclude the empty domain by definition.
Deductive systems
A deductive system is used to demonstrate, on a purely syntactic basis, that one formula is a logical consequence of another formula. There are many such systems for first-order logic, including Hilbert-style deductive systems, natural deduction, the sequent calculus, the tableaux method, and resolution.
A deductive system is sound if any formula that can be derived in the system is logically valid. Conversely, a deductive system is complete if every logically valid formula is derivable. All of the systems discussed in this article are both sound and complete. They also share the property that it is possible to effectively verify that a purportedly valid deduction is actually a deduction; such deduction systems are called effective.
A key property of deductive systems is that they are purely syntactic, so that derivations can be verified without considering any interpretation. Thus, a sound argument is correct in every possible interpretation of the language, regardless of whether that interpretation is about mathematics, economics, or some other area.
In general, logical consequence in first-order logic is only semidecidable: if a sentence A logically implies a sentence B, then this can be discovered (for example, by searching for a proof until one is found, using some effective, sound, and complete proof system). However, if A does not logically imply B, this does not mean that A logically implies the negation of B.
Rules of inference
A rule of inference states that, given a particular formula (or set of formulas) with a certain property as a hypothesis, another specific formula (or set of formulas) can be derived as a conclusion. The rule is sound (or truthpreserving) if it preserves validity in the sense that whenever any interpretation satisfies the hypothesis, that interpretation also satisfies the conclusion.
For example, one common rule of inference is the rule of substitution. If t is a term and φ is a formula possibly containing the variable x, then φ[t/x] is the result of replacing all free instances of x by t in φ. The substitution rule states that for any φ and any term t, one can conclude φ[t/x] from φ provided that no free variable of t becomes bound during the substitution process. (If some free variable of t becomes bound, then to substitute t for x it is first necessary to change the bound variables of φ to differ from the free variables of t.)
To see why the restriction on bound variables is necessary, consider the logically valid formula φ given by ∃x (x = y), in the signature of (0,1,+,×,=) of arithmetic. If t is the term "x + 1", the formula φ[t/y] is ∃x (x = x + 1), which will be false in many interpretations. The problem is that the free variable x of t became bound during the substitution. The intended replacement can be obtained by renaming the bound variable x of φ to something else, say z, so that the formula after substitution is ∃z (z = x + 1), which is again logically valid.
The substitution rule demonstrates several common aspects of rules of inference. It is entirely syntactical; one can tell whether it was correctly applied without appeal to any interpretation. It has (syntactically defined) limitations on when it can be applied, which must be respected to preserve the correctness of derivations. Moreover, as is often the case, these limitations are necessary because of interactions between free and bound variables that occur during syntactic manipulations of the formulas involved in the inference rule.
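Capture-avoiding substitution, including the renaming step just described, can be sketched in Python. Formulas are modeled as nested tuples and constants as arity-0 applications such as ("one",); this encoding is an illustrative assumption. The example substitutes the term x + 1 for y in ∃x (x = y):

```python
def term_vars(u):
    """Variables of a term; bare strings are variables, and constants are
    arity-0 applications such as ("one",), which contribute nothing."""
    if isinstance(u, str):
        return {u}
    return {v for a in u[1:] for v in term_vars(a)}

def all_vars(phi):
    """Every variable symbol occurring (free or bound) in a formula."""
    op = phi[0]
    if op in ("forall", "exists"):
        return {phi[1]} | all_vars(phi[2])
    if op == "not":
        return all_vars(phi[1])
    if op in ("and", "or", "->"):
        return all_vars(phi[1]) | all_vars(phi[2])
    return {v for u in phi[1:] for v in term_vars(u)}

def term_subst(u, x, t):
    """Replace the variable x by the term t inside a term."""
    if isinstance(u, str):
        return t if u == x else u
    return (u[0],) + tuple(term_subst(a, x, t) for a in u[1:])

def subst(phi, x, t):
    """phi[t/x]: replace free occurrences of x by t, first renaming any
    bound variable that would capture a free variable of t."""
    op = phi[0]
    if op in ("forall", "exists"):
        y, body = phi[1], phi[2]
        if y == x:                 # x is bound here: nothing free below
            return phi
        if y in term_vars(t):      # y would capture a variable of t
            z = y
            while z in term_vars(t) or z in all_vars(body):
                z += "'"           # pick a fresh variant, e.g. x'
            body = subst(body, y, z)
            y = z
        return (op, y, subst(body, x, t))
    if op == "not":
        return (op, subst(phi[1], x, t))
    if op in ("and", "or", "->"):
        return (op, subst(phi[1], x, t), subst(phi[2], x, t))
    return (op,) + tuple(term_subst(a, x, t) for a in phi[1:])

# Substituting the term x + 1 for y in ∃x (x = y): the bound x is renamed
# so that the free x of the term is not captured.
print(subst(("exists", "x", ("=", "x", "y")), "y", ("plus", "x", ("one",))))
# ('exists', "x'", ('=', "x'", ('plus', 'x', ('one',))))
```

The sketch renames with primes rather than choosing a wholly new letter; any fresh symbol not occurring in t or in the body works equally well.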
Hilbert-style systems and natural deduction
A deduction in a Hilbert-style deductive system is a list of formulas, each of which is a logical axiom, a hypothesis that has been assumed for the derivation at hand, or follows from previous formulas via a rule of inference. The logical axioms consist of several axiom schemas of logically valid formulas; these encompass a significant amount of propositional logic. The rules of inference enable the manipulation of quantifiers. Typical Hilbert-style systems have a small number of rules of inference, along with several infinite schemas of logical axioms. It is common to have only modus ponens and universal generalization as rules of inference.
Natural deduction systems resemble Hilbert-style systems in that a deduction is a finite list of formulas. However, natural deduction systems have no logical axioms; they compensate by adding additional rules of inference that can be used to manipulate the logical connectives in formulas in the proof.
Sequent calculus
The sequent calculus was developed to study the properties of natural deduction systems.^{[25]} Instead of working with one formula at a time, it uses sequents, which are expressions of the form:
 A_{1}, ..., A_{n} ⊢ B_{1}, ..., B_{k},
where A_{1}, ..., A_{n}, B_{1}, ..., B_{k} are formulas and the turnstile symbol ⊢ is used as punctuation to separate the two halves. Intuitively, a sequent expresses the idea that (A_{1} ∧ ... ∧ A_{n}) implies (B_{1} ∨ ... ∨ B_{k}).
Tableaux method
Unlike the methods just described, the derivations in the tableaux method are not lists of formulas. Instead, a derivation is a tree of formulas. To show that a formula A is provable, the tableaux method attempts to demonstrate that the negation of A is unsatisfiable. The tree of the derivation has ¬A at its root; the tree branches in a way that reflects the structure of the formula. For example, to show that C ∨ D is unsatisfiable requires showing that C and D are each unsatisfiable; this corresponds to a branching point in the tree with parent C ∨ D and children C and D.
Resolution
The resolution rule is a single rule of inference that, together with unification, is sound and complete for first-order logic. As with the tableaux method, a formula is proved by showing that the negation of the formula is unsatisfiable.
The resolution method works only with formulas that are disjunctions of literals (atomic formulas and their negations); arbitrary formulas must first be converted to this form through skolemization.
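At the propositional level, the saturation loop can be sketched as follows; the clause encoding is an illustrative assumption, and the unification needed for the first-order case is omitted:

```python
# Toy propositional resolution prover: a clause is a frozenset of
# literals, where a literal is an atom "p" or its negation ("not", "p").
# Unsatisfiability is detected by deriving the empty clause.

def negate(lit):
    return lit[1] if isinstance(lit, tuple) else ("not", lit)

def resolve(c1, c2):
    """All resolvents of two clauses on complementary literals."""
    return [(c1 - {lit}) | (c2 - {negate(lit)})
            for lit in c1 if negate(lit) in c2]

def unsatisfiable(clauses):
    """Saturate under resolution; True iff the empty clause appears."""
    known = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1 in known:
            for c2 in known:
                for r in resolve(c1, c2):
                    if not r:
                        return True          # empty clause: contradiction
                    new.add(frozenset(r))
        if new <= known:
            return False                     # saturated without contradiction
        known |= new
```

For example, the clause set {p ∨ q, ¬p, ¬q} resolves first to {q} and then to the empty clause, so it is reported unsatisfiable.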
Provable identities
Many identities can be proved, which establish equivalences between particular formulas. These identities allow for rearranging formulas by moving quantifiers across other connectives and are useful for putting formulas in prenex normal form. Some provable identities include:
 ¬∀x P(x) ⇔ ∃x ¬P(x)
 ¬∃x P(x) ⇔ ∀x ¬P(x)
 P ∧ ∃x Q(x) ⇔ ∃x (P ∧ Q(x)) (where x must not occur free in P)
 P ∨ ∀x Q(x) ⇔ ∀x (P ∨ Q(x)) (where x must not occur free in P)
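As a worked example, identities of this kind suffice to drive a formula into prenex form; the particular formula below is chosen for illustration:

```latex
\begin{aligned}
\forall x\,P(x) \to \exists y\,Q(y)
  &\equiv \neg\forall x\,P(x) \lor \exists y\,Q(y) \\
  &\equiv \exists x\,\neg P(x) \lor \exists y\,Q(y) \\
  &\equiv \exists x\,\bigl(\neg P(x) \lor \exists y\,Q(y)\bigr)
      \quad (x\text{ not free in } \exists y\,Q(y)) \\
  &\equiv \exists x\,\exists y\,\bigl(\neg P(x) \lor Q(y)\bigr)
      \quad (y\text{ not free in } \neg P(x))
\end{aligned}
```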
Equality and its axioms
There are several different conventions for using equality (or identity) in first-order logic. The most common convention, known as first-order logic with equality, includes the equality symbol as a primitive logical symbol which is always interpreted as the real equality relation between members of the domain of discourse, so that two terms are "equal" only when they denote the same member. This approach also adds certain axioms about equality to the deductive system employed. These equality axioms are:^{[26]}^{: 198–200 }
 Reflexivity. For each variable x, x = x.
 Substitution for functions. For all variables x and y, and any function symbol f,
 x = y → f(..., x, ...) = f(..., y, ...).
 Substitution for formulas. For any variables x and y and any formula φ(z) with a free variable z:
 x = y → (φ(x) → φ(y)).
These are axiom schemas, each of which specifies an infinite set of axioms. The third schema is known as Leibniz's law, "the principle of substitutivity", "the indiscernibility of identicals", or "the replacement property". The second schema, involving the function symbol f, is (equivalent to) a special case of the third schema, using the formula:
 φ(z): f(..., x, ...) = f(..., z, ...)
Then
 x = y → (f(..., x, ...) = f(..., x, ...) → f(..., x, ...) = f(..., y, ...)).
Since x = y is given, and f(..., x, ...) = f(..., x, ...) is true by reflexivity, we obtain f(..., x, ...) = f(..., y, ...).
Many other properties of equality are consequences of the axioms above, for example:
 Symmetry. If x = y then y = x.^{[27]}
 Transitivity. If x = y and y = z then x = z.^{[28]}
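As a sketch of how such consequences are derived, symmetry follows from reflexivity and the substitution schema by choosing φ(z) to be the formula z = x:

```latex
\begin{aligned}
&\varphi(z) := (z = x)\\
&x = y \to (\varphi(x) \to \varphi(y)),
  \quad\text{i.e.}\quad x = y \to (x = x \to y = x)\\
&\text{since } x = x \text{ holds by reflexivity, it follows that } x = y \to y = x.
\end{aligned}
```

Transitivity follows similarly, taking φ(w) := (x = w) and applying the substitution schema to the premise y = z.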
First-order logic without equality
An alternate approach considers the equality relation to be a nonlogical symbol. This convention is known as first-order logic without equality. If an equality relation is included in the signature, the axioms of equality must now be added to the theories under consideration, if desired, instead of being considered rules of logic. The main difference between this method and first-order logic with equality is that an interpretation may now interpret two distinct individuals as "equal" (although, by Leibniz's law, these will satisfy exactly the same formulas under any interpretation). That is, the equality relation may now be interpreted by an arbitrary equivalence relation on the domain of discourse that is congruent with respect to the functions and relations of the interpretation.
When this second convention is followed, the term normal model is used to refer to an interpretation where no distinct individuals a and b satisfy a = b. In first-order logic with equality, only normal models are considered, and so there is no term for a model other than a normal model. When first-order logic without equality is studied, it is necessary to amend the statements of results such as the Löwenheim–Skolem theorem so that only normal models are considered.
First-order logic without equality is often employed in the context of second-order arithmetic and other higher-order theories of arithmetic, where the equality relation between sets of natural numbers is usually omitted.
Defining equality within a theory
If a theory has a binary formula A(x,y) which satisfies reflexivity and Leibniz's law, the theory is said to have equality, or to be a theory with equality. The theory may not have all instances of the above schemas as axioms, but rather as derivable theorems. For example, in theories with no function symbols and a finite number of relations, it is possible to define equality in terms of the relations, by defining the two terms s and t to be equal if any relation is unchanged by changing s to t in any argument.
Some theories allow other ad hoc definitions of equality:
 In the theory of partial orders with one relation symbol ≤, one could define s = t to be an abbreviation for s ≤ t ∧ t ≤ s.
 In set theory with one relation ∈, one may define s = t to be an abbreviation for ∀x (s ∈ x ↔ t ∈ x) ∧ ∀x (x ∈ s ↔ x ∈ t). This definition of equality then automatically satisfies the axioms for equality. In this case, one should replace the usual axiom of extensionality, which can be stated as ∀x ∀y (∀z (z ∈ x ↔ z ∈ y) → x = y), with an alternative formulation ∀x ∀y (∀z (z ∈ x ↔ z ∈ y) → ∀z (x ∈ z ↔ y ∈ z)), which says that if sets x and y have the same elements, then they also belong to the same sets.
Metalogical properties
One motivation for the use of first-order logic, rather than higher-order logic, is that first-order logic has many metalogical properties that stronger logics do not have. These results concern general properties of first-order logic itself, rather than properties of individual theories. They provide fundamental tools for the construction of models of first-order theories.
Completeness and undecidability
Gödel's completeness theorem establishes that there are sound, complete, and effective deductive systems for first-order logic, and thus the first-order logical consequence relation is captured by finite provability. Unlike propositional logic, however, first-order logic is undecidable (although semidecidable), provided that the language has at least one predicate of arity at least 2 (other than equality): there is no decision procedure that determines whether arbitrary formulas are logically valid. This result was established independently by Alonzo Church and Alan Turing, giving a negative answer to the Entscheidungsproblem posed by David Hilbert and Wilhelm Ackermann in 1928.
There are systems weaker than full first-order logic for which the logical consequence relation is decidable. These include propositional logic and monadic predicate logic, which is first-order logic restricted to unary predicate symbols and no function symbols. Decidable subsets of first-order logic are also studied in the framework of description logics.
The Löwenheim–Skolem theorem
The Löwenheim–Skolem theorem shows that if a first-order theory of cardinality λ has an infinite model, then it has models of every infinite cardinality greater than or equal to λ. One of the earliest results in model theory, it implies that it is not possible to characterize countability or uncountability in a first-order language with a countable signature. That is, there is no first-order formula φ(x) such that an arbitrary structure M satisfies φ if and only if the domain of discourse of M is countable (or, in the second case, uncountable).
The Löwenheim–Skolem theorem implies that infinite structures cannot be categorically axiomatized in first-order logic. For example, there is no first-order theory whose only model is the real line: any first-order theory with an infinite model also has a model of cardinality larger than the continuum. Since the real line is infinite, any theory satisfied by the real line is also satisfied by some nonstandard models.
The compactness theorem
The compactness theorem states that a set of first-order sentences has a model if and only if every finite subset of it has a model.^{[29]} This implies that if a formula is a logical consequence of an infinite set of first-order axioms, then it is a logical consequence of some finite number of those axioms. This theorem was proved first by Kurt Gödel as a consequence of the completeness theorem, but many additional proofs have been obtained over time. It is a central tool in model theory, providing a fundamental method for constructing models.
The compactness theorem has a limiting effect on which collections of first-order structures are elementary classes. For example, the compactness theorem implies that any theory that has arbitrarily large finite models has an infinite model. Thus, the class of all finite graphs is not an elementary class (the same holds for many other algebraic structures).
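The standard argument for this consequence uses, for each n, a sentence asserting the existence of at least n elements:

```latex
\lambda_n \;:=\; \exists x_1 \cdots \exists x_n \bigwedge_{1 \le i < j \le n} x_i \neq x_j .
```

If a theory T has arbitrarily large finite models, then every finite subset of T ∪ {λ_n : n ≥ 2} has a model, so by compactness the whole set has a model, which is necessarily an infinite model of T.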
There are also more subtle limitations of first-order logic that are implied by the compactness theorem. For example, in computer science, many situations can be modeled as a directed graph of states (nodes) and connections (directed edges). Validating such a system may require showing that no "bad" state can be reached from any "good" state, that is, that the good and bad states lie in different connected components of the graph. However, the compactness theorem can be used to show that connectedness is not expressible in first-order logic: there is no first-order formula φ(x,y) in the language of graphs that expresses the idea that there is a path from x to y.
Lindström's theorem
Per Lindström showed that the metalogical properties just discussed actually characterize first-order logic in the sense that no stronger logic can also have those properties (Ebbinghaus and Flum 1994, Chapter XIII). Lindström defined a class of abstract logical systems, and a rigorous definition of the relative strength of a member of this class. He established two theorems for systems of this type:
 A logical system satisfying Lindström's definition that contains first-order logic and satisfies both the Löwenheim–Skolem theorem and the compactness theorem must be equivalent to first-order logic.
 A logical system satisfying Lindström's definition that has a semidecidable logical consequence relation and satisfies the Löwenheim–Skolem theorem must be equivalent to first-order logic.
Limitations
Although first-order logic is sufficient for formalizing much of mathematics and is commonly used in computer science and other fields, it has certain limitations. These include limitations on its expressiveness and limitations of the fragments of natural languages that it can describe.
For instance, first-order logic is undecidable, meaning a sound, complete and terminating decision algorithm for provability is impossible. This has led to the study of interesting decidable fragments, such as C_{2}: first-order logic with two variables and the counting quantifiers ∃^{≥n} and ∃^{≤n}.
Expressiveness
The Löwenheim–Skolem theorem shows that if a first-order theory has any infinite model, then it has infinite models of every cardinality. In particular, no first-order theory with an infinite model can be categorical. Thus there is no first-order theory whose only model has the set of natural numbers as its domain, or whose only model has the set of real numbers as its domain. Many extensions of first-order logic, including infinitary logics and higher-order logics, are more expressive in the sense that they do permit categorical axiomatizations of the natural numbers or of the real numbers. This expressiveness comes at a metalogical cost, however: by Lindström's theorem, the compactness theorem and the downward Löwenheim–Skolem theorem cannot hold in any logic stronger than first-order.
Formalizing natural languages
First-order logic is able to formalize many simple quantifier constructions in natural language, such as "every person who lives in Perth lives in Australia". Hence, first-order logic is used as a basis for knowledge representation languages.
Still, there are complicated features of natural language that cannot be expressed in first-order logic. "Any logical system which is appropriate as an instrument for the analysis of natural language needs a much richer structure than first-order predicate logic".^{[31]}
Type  Example  Comment 

 Quantification over properties  If John is self-satisfied, then there is at least one thing he has in common with Peter.  Example requires a quantifier over predicates, which cannot be implemented in single-sorted first-order logic: Zj → ∃X(Xj ∧ Xp). 
 Quantification over properties  Santa Claus has all the attributes of a sadist.  Example requires quantifiers over predicates, which cannot be implemented in single-sorted first-order logic: ∀X(∀x(Sx → Xx) → Xs). 
 Predicate adverbial  John is walking quickly.  Example cannot be analysed as Wj ∧ Qj; predicate adverbials are not the same kind of thing as second-order predicates such as colour. 
 Relative adjective  Jumbo is a small elephant.  Example cannot be analysed as Sj ∧ Ej; relative adjectives are not the same kind of thing as second-order predicates such as colour. 
Predicate adverbial modifier  John is walking very quickly.  — 
Relative adjective modifier  Jumbo is terribly small.  An expression such as "terribly", when applied to a relative adjective such as "small", results in a new composite relative adjective "terribly small". 
Prepositions  Mary is sitting next to John.  The preposition "next to" when applied to "John" results in the predicate adverbial "next to John". 
Restrictions, extensions, and variations
There are many variations of first-order logic. Some of these are inessential in the sense that they merely change notation without affecting the semantics. Others change the expressive power more significantly, by extending the semantics through additional quantifiers or other new logical symbols. For example, infinitary logics permit formulas of infinite size, and modal logics add symbols for possibility and necessity.
Restricted languages
First-order logic can be studied in languages with fewer logical symbols than were described above:
 Because ∀x φ can be expressed as ¬∃x ¬φ, and ∃x φ can be expressed as ¬∀x ¬φ, either of the two quantifiers ∀ and ∃ can be dropped.
 Since φ ∨ ψ can be expressed as ¬(¬φ ∧ ¬ψ), and φ ∧ ψ can be expressed as ¬(¬φ ∨ ¬ψ), either ∨ or ∧ can be dropped. In other words, it is sufficient to have ¬ and ∨, or ¬ and ∧, as the only logical connectives.
 Similarly, it is sufficient to have only ¬ and → as logical connectives, or to have only the Sheffer stroke (NAND) or the Peirce arrow (NOR) operator.
 It is possible to entirely avoid function symbols and constant symbols, rewriting them via predicate symbols in an appropriate way. For example, instead of using a constant symbol 0 one may use a predicate 0(x) (interpreted as x = 0) and replace every predicate such as P(0, y) with ∃x (0(x) ∧ P(x, y)). A function such as f(x_{1}, ..., x_{n}) will similarly be replaced by a predicate F(x_{1}, ..., x_{n}, y) interpreted as y = f(x_{1}, ..., x_{n}). This change requires adding additional axioms to the theory at hand, so that interpretations of the predicate symbols used have the correct semantics.^{[32]}
Restrictions such as these are useful as a technique to reduce the number of inference rules or axiom schemas in deductive systems, which leads to shorter proofs of metalogical results. The cost of the restrictions is that it becomes more difficult to express natural-language statements in the formal system at hand, because the logical connectives used in the natural-language statements must be replaced by their (longer) definitions in terms of the restricted collection of logical connectives. Similarly, derivations in the limited systems may be longer than derivations in systems that include additional connectives. There is thus a trade-off between the ease of working within the formal system and the ease of proving results about the formal system.
It is also possible to restrict the arities of function symbols and predicate symbols, in sufficiently expressive theories. One can in principle dispense entirely with functions of arity greater than 2 and predicates of arity greater than 1 in theories that include a pairing function. This is a function of arity 2 that takes pairs of elements of the domain and returns an ordered pair containing them. It is also sufficient to have two predicate symbols of arity 2 that define projection functions from an ordered pair to its components. In either case it is necessary that the natural axioms for a pairing function and its projections are satisfied.
Many-sorted logic
Ordinary first-order interpretations have a single domain of discourse over which all quantifiers range. Many-sorted first-order logic allows variables to have different sorts, which have different domains. This is also called typed first-order logic, with the sorts called types (as in data type), but it is not the same as first-order type theory. Many-sorted first-order logic is often used in the study of second-order arithmetic.^{[33]}
When there are only finitely many sorts in a theory, many-sorted first-order logic can be reduced to single-sorted first-order logic.^{[34]}^{: 296–299 } One introduces into the single-sorted theory a unary predicate symbol for each sort in the many-sorted theory and adds an axiom saying that these unary predicates partition the domain of discourse. For example, if there are two sorts, one adds predicate symbols P_{1} and P_{2} and the axiom:
 ∀x (P_{1}(x) ∨ P_{2}(x)).
Then the elements satisfying P_{1} are thought of as elements of the first sort, and elements satisfying P_{2} as elements of the second sort. One can quantify over each sort by using the corresponding predicate symbol to limit the range of quantification. For example, to say there is an element of the first sort satisfying formula φ(x), one writes:
 ∃x (P_{1}(x) ∧ φ(x)).
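This reduction can be sketched as a purely syntactic transformation; the tuple encoding of formulas and the sort and predicate names below are illustrative assumptions:

```python
# Relativize many-sorted quantifiers to unary sort predicates:
# ("forall", x, s, body) becomes ("forall", x, (P_s(x) -> body)) and
# ("exists", x, s, body) becomes ("exists", x, (P_s(x) and body)).

def relativize(formula, sort_pred):
    """Rewrite a many-sorted formula into a single-sorted one, using
    the mapping sort_pred from sort names to unary predicate names."""
    if not isinstance(formula, tuple):
        return formula                       # a variable: leave unchanged
    op = formula[0]
    if op == "forall":
        _, var, sort, body = formula
        return ("forall", var,
                ("implies", (sort_pred[sort], var), relativize(body, sort_pred)))
    if op == "exists":
        _, var, sort, body = formula
        return ("exists", var,
                ("and", (sort_pred[sort], var), relativize(body, sort_pred)))
    # connectives and atomic formulas: recurse into the arguments
    return (op,) + tuple(relativize(a, sort_pred) for a in formula[1:])

# "Every point lies on some line", with hypothetical sorts point/line:
f = ("forall", "x", "point", ("exists", "y", "line", ("On", "x", "y")))
g = relativize(f, {"point": "P1", "line": "P2"})
```

Here `g` is the single-sorted formula ∀x (P1(x) → ∃y (P2(y) ∧ On(x, y))), matching the scheme described above.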
Additional quantifiers
Additional quantifiers can be added to first-order logic.
 Sometimes it is useful to say that "P(x) holds for exactly one x", which can be expressed as ∃!x P(x). This notation, called uniqueness quantification, may be taken to abbreviate a formula such as ∃x (P(x) ∧ ∀y (P(y) → (x = y))).
 First-order logic with extra quantifiers has new quantifiers Qx, ..., with meanings such as "there are many x such that ...". Also see branching quantifiers and the plural quantifiers of George Boolos and others.
 Bounded quantifiers are often used in the study of set theory or arithmetic.
Infinitary logics
Infinitary logic allows infinitely long sentences. For example, one may allow a conjunction or disjunction of infinitely many formulas, or quantification over infinitely many variables. Infinitely long sentences arise in areas of mathematics including topology and model theory.
Infinitary logic generalizes first-order logic to allow formulas of infinite length. The most common way in which formulas can become infinite is through infinite conjunctions and disjunctions. However, it is also possible to admit generalized signatures in which function and relation symbols are allowed to have infinite arities, or in which quantifiers can bind infinitely many variables. Because an infinite formula cannot be represented by a finite string, it is necessary to choose some other representation of formulas; the usual representation in this context is a tree. Thus, formulas are, essentially, identified with their parse trees, rather than with the strings being parsed.
The most commonly studied infinitary logics are denoted L_{αβ}, where α and β are each either cardinal numbers or the symbol ∞. In this notation, ordinary first-order logic is L_{ωω}. In the logic L_{∞ω}, arbitrary conjunctions or disjunctions are allowed when building formulas, and there is an unlimited supply of variables. More generally, the logic that permits conjunctions or disjunctions with fewer than κ constituents is known as L_{κω}. For example, L_{ω1ω} permits countable conjunctions and disjunctions.
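A standard illustration of the extra expressive power: in a signature with a constant 0 and a unary successor symbol S, the L_{ω1ω} sentence

```latex
\forall x \; \bigvee_{n \in \mathbb{N}} \; x = \underbrace{S S \cdots S}_{n}\, 0
```

says that every element is some numeral. Together with the usual successor axioms this rules out the nonstandard elements that, by the compactness theorem, no first-order theory of arithmetic can exclude.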
The set of free variables in a formula of L_{κω} can have any cardinality strictly less than κ, yet only finitely many of them can be in the scope of any quantifier when a formula appears as a subformula of another.^{[35]} In other infinitary logics, a subformula may be in the scope of infinitely many quantifiers. For example, in L_{κ∞}, a single universal or existential quantifier may bind arbitrarily many variables simultaneously. Similarly, the logic L_{κλ} permits simultaneous quantification over fewer than λ variables, as well as conjunctions and disjunctions of size less than κ.
Nonclassical and modal logics
 Intuitionistic first-order logic uses intuitionistic rather than classical reasoning; for example, ¬¬φ need not be equivalent to φ, and ¬∀x φ is in general not equivalent to ∃x ¬φ.
 First-order modal logic allows one to describe other possible worlds as well as this contingently true world which we inhabit. In some versions, the set of possible worlds varies depending on which possible world one inhabits. Modal logic has extra modal operators with meanings which can be characterized informally as, for example, "it is necessary that φ" (true in all possible worlds) and "it is possible that φ" (true in some possible world). With standard first-order logic we have a single domain, and each predicate is assigned one extension. With first-order modal logic we have a domain function that assigns each possible world its own domain, so that each predicate gets an extension only relative to these possible worlds. This allows us to model cases where, for example, Alex is a philosopher, but might have been a mathematician, and might not have existed at all. In the first possible world P(a) is true, in the second P(a) is false, and in the third possible world there is no a in the domain at all.
 First-order fuzzy logics are first-order extensions of propositional fuzzy logics rather than classical propositional calculus.
Fixpoint logic
Fixpoint logic extends first-order logic by adding the closure under the least fixed points of positive operators.^{[36]}
Higher-order logics
The characteristic feature of first-order logic is that individuals can be quantified, but not predicates. Thus
 ∃a (Phil(a))
is a legal first-order formula, but
 ∃Phil (Phil(Socrates))
is not, in most formalizations of first-order logic. Second-order logic extends first-order logic by adding the latter type of quantification. Other higher-order logics allow quantification over even higher types than second-order logic permits. These higher types include relations between relations, functions from relations to relations between relations, and other higher-type objects. Thus the "first" in first-order logic describes the type of objects that can be quantified.
Unlike first-order logic, for which only one semantics is studied, there are several possible semantics for second-order logic. The most commonly employed semantics for second-order and higher-order logic is known as full semantics. The combination of additional quantifiers and the full semantics for these quantifiers makes higher-order logic stronger than first-order logic. In particular, the (semantic) logical consequence relation for second-order and higher-order logic is not semidecidable; there is no effective deduction system for second-order logic that is sound and complete under full semantics.
Second-order logic with full semantics is more expressive than first-order logic. For example, it is possible to create axiom systems in second-order logic that uniquely characterize the natural numbers and the real line. The cost of this expressiveness is that second-order and higher-order logics have fewer attractive metalogical properties than first-order logic. For example, the Löwenheim–Skolem theorem and compactness theorem of first-order logic become false when generalized to higher-order logics with full semantics.
Automated theorem proving and formal methods
The related area of automated proof verification uses computer programs to check that human-created proofs are correct. Unlike complicated automated theorem provers, verification systems may be small enough that their correctness can be checked both by hand and through automated software verification.
Some proof verifiers, such as Metamath, insist on having a complete derivation as input. Others, such as Mizar and Isabelle, will take a well-formatted proof sketch and fill in the missing pieces by doing simple proof searches or applying known decision procedures.
Automated theorem provers are also used to implement formal verification in computer science. In this setting, theorem provers are used to verify the correctness of programs and of hardware such as processors with respect to a formal specification.
For the problem of model checking, efficient algorithms are known to decide whether an input finite structure satisfies a first-order formula, in addition to computational complexity bounds: see Model checking § First-order logic.
See also
 ACL2 — A Computational Logic for Applicative Common Lisp
 Aristotelian logic
 Equiconsistency
 Ehrenfeucht–Fraïssé game
 Extension by definitions
 Extension (predicate logic)
 Herbrandization
 List of logic symbols
Notes
 ^ Hodgson, Dr. J. P. E., "First Order Logic" (archive.org), Saint Joseph's University, Philadelphia, 1995.
 ^ Hughes, G. E., & Cresswell, M. J., A New Introduction to Modal Logic (London: Routledge, 1996), p.161.
 ^ ^{a} ^{b} A. Tarski, Undecidable Theories (1953), p.77. Studies in Logic and the Foundation of Mathematics, NorthHolland
 ^ Mendelson, Elliott (1964). Introduction to Mathematical Logic. Van Nostrand Reinhold. p. 56.
 ^ Eric M. Hammer: Semantics for Existential Graphs, Journal of Philosophical Logic, Volume 27, Issue 5 (October 1998), page 489: "Development of first-order logic independently of Frege, anticipating prenex and Skolem normal forms"
 ^ H. Friedman, "Adventures in Foundations of Mathematics 1: Logical Reasoning", Ross Program 2022, lecture notes. Accessed 28 July 2023.
 ^ Goertzel, B., Geisweiller, N., Coelho, L., Janičić, P., & Pennachin, C., RealWorld Reasoning: Toward Scalable, Uncertain Spatiotemporal, Contextual and Causal Inference (Amsterdam & Paris: Atlantis Press, 2011), pp. 2930.
 ^ ^{a} ^{b} ^{c} ^{d} ^{e} W. V. O. Quine, Mathematical Logic (1981). Harvard University Press, 0674554515.
 ^ "Predicate Logic  Brilliant Math & Science Wiki". brilliant.org. Retrieved 20200820.
 ^ "Introduction to Symbolic Logic: Lecture 2". cstlcla.semo.edu. Retrieved 20210104.
 ^ More precisely, there is only one language of each variant of one-sorted first-order logic: with or without equality, with or without functions, with or without propositional variables, ....
 ^ The word language is sometimes used as a synonym for signature, but this can be confusing because "language" can also refer to the set of formulas.
 ^ Eberhard Bergmann and Helga Noll (1977). Mathematische Logik mit InformatikAnwendungen. Heidelberger Taschenbücher, Sammlung Informatik (in German). Vol. 187. Heidelberg: Springer. pp. 300–302.
 ^ Smullyan, R. M., First-Order Logic (New York: Dover Publications, 1968), p. 5.
 ^ G. Takeuti, Proof Theory (1989), p. 6.
 ^ Some authors who use the term "wellformed formula" use "formula" to mean any string of symbols from the alphabet. However, most authors in mathematical logic use "formula" to mean "wellformed formula" and have no term for nonwellformed formulas. In every context, it is only the wellformed formulas that are of interest.
 ^ y occurs bound by rule 4, although it doesn't appear in any atomic subformula
 ^ It seems that symbol was introduced by Kleene, see footnote 30 in Dover's 2002 reprint of his book Mathematical Logic, John Wiley and Sons, 1967.
 ^ F. R. Drake, Set theory: An introduction to large cardinals (1974)
 ^ Rogers, R. L., Mathematical Logic and Formalized Theories: A Survey of Basic Concepts and Results (Amsterdam/London: NorthHolland Publishing Company, 1971), p. 39.
 ^ Brink, C., Kahl, W., & Schmidt, G., eds., Relational Methods in Computer Science (Berlin / Heidelberg: Springer, 1997), pp. 32–33.
 ^ Anon., Mathematical Reviews (Providence: American Mathematical Society, 2006), p. 803.
 ^ Shankar, N., Owre, S., Rushby, J. M., & StringerCalvert, D. W. J., PVS Prover Guide 7.1 (Menlo Park: SRI International, August 2020).
 ^ Fitting, M., FirstOrder Logic and Automated Theorem Proving (Berlin/Heidelberg: Springer, 1990), pp. 198–200.
 ^ Use formula substitution with φ(z) being z=x, so, φ(x) is x=x which implies φ(y): y=x, then use reflexivity.
 ^ Hodel, R. E., An Introduction to Mathematical Logic (Mineola NY: Dover, 1995), p. 199.
 ^ Horrocks, Ian (2010). "Description Logic: A Formal Foundation for Languages and Tools" (PDF). Slide 22. Archived (PDF) from the original on 20150906.
 ^ Gamut 1991, p. 75.
 ^ Left-totality can be expressed by an axiom ∀x ∃y R(x, y); right-uniqueness by ∀x ∀y ∀z (R(x, y) ∧ R(x, z) → y = z), provided the equality symbol is admitted. Both also apply to constant replacements (for n = 0).
 ^ Uzquiano, Gabriel (October 17, 2018). "Quantifiers and Quantification". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy (Winter 2018 ed.). See in particular section 3.2, ManySorted Quantification.
 ^ Enderton, H. A Mathematical Introduction to Logic, second edition. Academic Press, 2001, pp.296–299.
 ^ Some authors only admit formulas with finitely many free variables in L_{κω}, and more generally only formulas with < λ free variables in L_{κλ}.
 ^ "15815 Automated Theorem Proving". www.cs.cmu.edu. Retrieved 20240110.
 ^ Avigad, et al. (2007) discuss the process of formally verifying a proof of the prime number theorem. The formalized proof required approximately 30,000 lines of input to the Isabelle proof verifier.
References
 Andrews, Peter B. (2002); An Introduction to Mathematical Logic and Type Theory: To Truth Through Proof, 2nd ed., Berlin: Kluwer Academic Publishers. Available from Springer.
 Avigad, Jeremy; Donnelly, Kevin; Gray, David; and Raff, Paul (2007); "A formally verified proof of the prime number theorem", ACM Transactions on Computational Logic, vol. 9 no. 1
 Monk, J. Donald (1976). Mathematical Logic. New York, NY: Springer New York. ISBN 9781468494549.
 Barwise, Jon; and Etchemendy, John (2000); Language Proof and Logic, Stanford, CA: CSLI Publications (Distributed by the University of Chicago Press)
 Bocheński, Józef Maria (2007); A Précis of Mathematical Logic, Dordrecht, NL: D. Reidel, translated from the French and German editions by Otto Bird
 Ferreirós, José (2001); The Road to Modern Logic — An Interpretation, Bulletin of Symbolic Logic, Volume 7, Issue 4, 2001, pp. 441–484, JSTOR 2687794
 Hilbert, David; and Ackermann, Wilhelm (1950); Principles of Mathematical Logic, Chelsea (English translation of Grundzüge der theoretischen Logik, 1928 German first edition)
 Hodges, Wilfrid (2001); "Classical Logic I: FirstOrder Logic", in Goble, Lou (ed.); The Blackwell Guide to Philosophical Logic, Blackwell
 Tarski, Alfred and Givant, Steven (1987); A Formalization of Set Theory without Variables. Vol.41 of American Mathematical Society colloquium publications, Providence RI: American Mathematical Society,
External links
 "Predicate calculus", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
 Stanford Encyclopedia of Philosophy: Shapiro, Stewart; "Classical Logic". Covers syntax, model theory, and metatheory for first-order logic in the natural deduction style.
 Magnus, P. D.; forall x: an introduction to formal logic. Covers formal semantics and proof theory for first-order logic.
 Metamath: an ongoing online project to reconstruct mathematics as a huge first-order theory, using first-order logic and the axiomatic set theory ZFC. Principia Mathematica modernized.
 Podnieks, Karl; Introduction to mathematical logic
 Cambridge Mathematical Tripos notes (typeset by John Fremlin). These notes cover part of a past Cambridge Mathematical Tripos course taught to undergraduate students (usually) within their third year. The course is entitled "Logic, Computation and Set Theory" and covers Ordinals and cardinals, Posets and Zorn's Lemma, Propositional logic, Predicate logic, Set theory and Consistency issues related to ZFC and other set theories.
 Tree Proof Generator can validate or invalidate formulas of first-order logic through the semantic tableaux method.