Descriptive complexity theory

Descriptive complexity is a branch of computational complexity theory and of finite model theory that characterizes complexity classes by the type of logic needed to express the languages in them. For example, PH, the union of all complexity classes in the polynomial hierarchy, is precisely the class of languages expressible by statements of second-order logic. This connection between complexity and the logic of finite structures allows results to be transferred easily from one area to the other, facilitating new proof methods and providing additional evidence that the main complexity classes are somehow "natural" and not tied to the specific abstract machines used to define them.

Specifically, each logical system produces a set of queries expressible in it. The queries – when restricted to finite structures – correspond to the computational problems of traditional complexity theory.

The first main result of descriptive complexity was Fagin's theorem, shown by Ronald Fagin in 1974. It established that NP is precisely the set of languages expressible by sentences of existential second-order logic; that is, second-order logic excluding universal quantification over relations, functions, and subsets. Many other classes were later characterized in such a manner.

The setting

When we use the logic formalism to describe a computational problem, the input is a finite structure, and the elements of that structure are the domain of discourse. Usually the input is either a string (of bits or over an alphabet) whose positions are the elements of the logical structure, or a graph whose vertices are the elements of the logical structure. The length of the input is measured by the size of the respective structure. Whatever the structure is, we can assume that there are relations that can be tested, for example "E(x, y) is true iff there is an edge from x to y" (in case the structure is a graph), or "P(n) is true iff the nth letter of the string is 1". These relations are the predicates for the first-order logic system. We also have constants, which are special elements of the respective structure; for example, if we want to check reachability in a graph, we have to choose two constants s (start) and t (terminal).
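
As a small illustration of this setting (using the edge predicate E and the constants s and t just described; the sentence itself is only an illustrative example), the first-order sentence

$$ \exists x \, \big( E(s, x) \wedge E(x, t) \big) $$

expresses the query "there is a path of length two from s to t"; the quantifier ranges over the vertices of the input graph.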

In descriptive complexity theory we often assume that there is a total order over the elements and that we can check equality between elements. This lets us consider elements as numbers: the element x represents the number n iff there are n elements y with y < x. Thanks to this we may also have the primitive predicate "bit", where BIT(x, k) is true iff the kth bit of the binary expansion of x is 1. (Addition and multiplication can be replaced by ternary relations such that plus(x, y, z) is true iff x + y = z and times(x, y, z) is true iff x × y = z.)
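
For instance, once addition is available as the ternary relation plus described above, simple arithmetic properties of the numbers represented by elements become first-order expressible; a minimal sketch, with the predicate name "even" chosen purely for illustration:

$$ \mathrm{even}(x) \;:\Longleftrightarrow\; \exists y \, \mathrm{plus}(y, y, x) $$

that is, x represents an even number iff it is the sum of some element with itself.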

Overview of characterisations of complexity classes

If we restrict ourselves to ordered structures with a successor relation and basic arithmetical predicates, then we get the following characterisations:

  • First-order logic defines the class AC0, the languages recognized by polynomial-size circuits of bounded depth, which equals the languages recognized by a concurrent random access machine in constant time.[1]
  • First-order logic augmented with symmetric or deterministic transitive closure operators yields L, the problems solvable in logarithmic space.[2]
  • First-order logic with a transitive closure operator yields NL, the problems solvable in nondeterministic logarithmic space.[3]
  • First-order logic with a least fixed point operator gives P, the problems solvable in deterministic polynomial time.[3]
  • Existential second-order logic yields NP.[3]
  • Universal second-order logic (excluding existential second-order quantification) yields co-NP.[4]
  • Second-order logic corresponds to the polynomial hierarchy PH.[3]
  • Second-order logic with a transitive closure (commutative or not) yields PSPACE, the problems solvable in polynomial space.[5]
  • Second-order logic with a least fixed point operator gives EXPTIME, the problems solvable in exponential time.[6]
  • HO, higher-order logic, is equal to ELEMENTARY.[7]

Sub-polynomial time

FO without any operators

In circuit complexity, first-order logic with arbitrary predicates can be shown to be equal to AC0, the first class in the AC hierarchy. Indeed, there is a natural translation from FO's symbols to nodes of circuits, with the quantifiers ∃ and ∀ becoming OR and AND gates of size n. First-order logic in a signature with arithmetical predicates characterises the restriction of the AC0 family of circuits to those constructible in alternating logarithmic time.[8] First-order logic in a signature with only the order relation corresponds to the set of star-free languages.[9][10]
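
To illustrate this translation into circuits (a concrete illustrative sentence over string structures with the successor relation S and the letter predicate P), the first-order sentence

$$ \exists x \, \exists y \, \big( S(x, y) \wedge P(x) \wedge P(y) \big) $$

states that the input contains two consecutive 1s; each existential quantifier unwinds into an OR gate ranging over the n positions and the quantifier-free part into constant-size subcircuits, giving a circuit of constant depth and size polynomial in n.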

Transitive closure logic

First-order logic gains substantially in expressive power when it is augmented with an operator that computes the transitive closure of binary relations. The resulting transitive closure logic is known to characterise non-deterministic logarithmic space (NL) on ordered structures. This was used by Immerman to show that NL is closed under complement (i.e. that NL = co-NL).[11]
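
For example, with the constants s and t from the setting above, the transitive-closure formula

$$ [\mathrm{TC}_{x,y}\, E(x, y)](s, t) $$

expresses s–t reachability in a directed graph, a canonical NL-complete problem; the notation follows the usual convention that TC binds the variables x and y in the formula it is applied to.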

When restricting the transitive closure operator to deterministic transitive closure, the resulting logic exactly characterises logarithmic space on ordered structures.

Second-order Krom formulae

On structures which have a successor function, NL can also be characterised by second-order Krom formulae.

SO-Krom is the set of boolean queries definable with second-order formulae in conjunctive normal form such that the first-order quantifiers are universal and the quantifier-free part of the formula is in Krom form, i.e. it is a conjunction of disjunctions (clauses) in which each clause contains at most two literals. Every second-order Krom formula is equivalent to an existential second-order Krom formula.
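
Restated schematically (in the existential form mentioned above), such a formula has the shape

$$ \exists R_1 \cdots \exists R_k \; \forall x_1 \cdots \forall x_m \; \bigwedge_i C_i $$

where each clause C_i is a disjunction of at most two literals over the quantified relations R_1, …, R_k and the input predicates.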

SO-Krom characterises NL on structures with a successor function.[12]

Polynomial time

On ordered structures, first-order least fixed-point logic captures PTIME:

First-order least fixed-point logic

FO[LFP] is the extension of first-order logic by a least fixed-point operator, which expresses the fixed point of a monotone expression. This augments first-order logic with the ability to express recursion. The Immerman–Vardi theorem, proved independently by Immerman and Vardi, states that FO[LFP] characterises PTIME on ordered structures.[13][14]
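
As an example of the recursion this operator provides, reachability from s to t (a PTIME property) can be expressed by the FO[LFP] sentence

$$ [\mathrm{LFP}_{R,x,y}\; \big( x = y \;\vee\; \exists z \, ( E(x, z) \wedge R(z, y) ) \big)](s, t) $$

the relation variable R occurs only positively, so the formula is monotone and its least fixed point is the reflexive–transitive closure of the edge relation.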

As of 2022, it is still open whether there is a natural logic characterising PTIME on unordered structures.

The Abiteboul–Vianu theorem states that FO[LFP] = FO[PFP] on all structures if and only if FO[LFP] = FO[PFP] on ordered structures, hence if and only if P = PSPACE. This result has been extended to other fixpoints.[15]

Second-order Horn formulae

In the presence of a successor function, PTIME can also be characterised by second-order Horn formulae.

SO-Horn is the set of boolean queries definable with SO formulae in conjunctive normal form such that the first-order quantifiers are all universal and the quantifier-free part of the formula is in Horn form, which means that it is a conjunction of disjunctions (clauses) in which, in each clause, all literals except possibly one are negated.

This class is equal to P on structures with a successor function.[16]

Those formulae can be transformed to prenex formulas in existential second-order Horn logic.[12]
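
Schematically, such a prenex existential second-order Horn formula has the shape

$$ \exists P_1 \cdots \exists P_k \; \forall x_1 \cdots \forall x_m \; \bigwedge_i C_i $$

where each clause C_i contains at most one non-negated literal.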

Non-deterministic polynomial time

Fagin's theorem

Ronald Fagin's 1974 proof that the complexity class NP was characterised exactly by those classes of structures axiomatizable in existential second-order logic was the starting point of descriptive complexity theory.[4][17]
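
A standard illustration of Fagin's theorem is graph 3-colourability (with colour predicates named R, G, B here for illustration): the existential second-order sentence

$$ \exists R \, \exists G \, \exists B \; \Big[ \forall x \, \big( R(x) \vee G(x) \vee B(x) \big) \;\wedge\; \forall x \, \forall y \, \Big( E(x, y) \rightarrow \neg \big( ( R(x) \wedge R(y) ) \vee ( G(x) \wedge G(y) ) \vee ( B(x) \wedge B(y) ) \big) \Big) \Big] $$

holds in a graph exactly when it can be properly coloured with three colours, an NP-complete property.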

Since the complement of an existential formula is a universal formula, it follows immediately that co-NP is characterized by universal second-order logic.[4]

SO, unrestricted second-order logic, is equal to the polynomial hierarchy PH. More precisely, we have the following generalisation of Fagin's theorem: the set of formulae in prenex normal form where existential and universal quantifiers of second order alternate k times characterises the kth level of the polynomial hierarchy.[18]
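
For instance, the second level Σ^p_2 of the hierarchy corresponds to prenex sentences of the form

$$ \exists R_1 \cdots \exists R_k \; \forall S_1 \cdots \forall S_m \; \psi $$

with ψ first-order; adding further alternating blocks of second-order quantifiers, always beginning with an existential block, climbs the hierarchy one level per alternation.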

Unlike most other characterisations of complexity classes, Fagin's theorem and its generalisation do not presuppose a total ordering on the structures. This is because existential second-order logic is itself sufficiently expressive to refer to the possible total orders on a structure using second-order variables.[19]

Beyond NP

Partial fixed point is PSPACE

The class of all problems computable in polynomial space, PSPACE, can be characterised by augmenting first-order logic with a more expressive partial fixed-point operator.

Partial fixed-point logic, FO[PFP], is the extension of first-order logic with a partial fixed-point operator, which expresses the fixed-point of a formula if there is one and returns 'false' otherwise.
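
A sketch of the standard stage semantics: for a formula φ(R, x̄) defining the operator, the stages are

$$ R^0 = \emptyset, \qquad R^{i+1} = \{ \bar{x} : \varphi(R^i, \bar{x}) \} $$

since there are only finitely many relations over a finite structure, this sequence either becomes constant, in which case the partial fixed point is that relation, or it cycles, in which case the operator returns the empty relation ("false").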

Partial fixed-point logic characterises PSPACE on ordered structures.[20]

Transitive closure is PSPACE

Second-order logic can be extended by a transitive closure operator in the same way as first-order logic, resulting in SO[TC]. The TC operator can now also take second-order variables as argument. SO[TC] characterises PSPACE. Since ordering can be referenced in second-order logic, this characterisation does not presuppose ordered structures.[21]

Elementary functions

The time complexity class ELEMENTARY of elementary functions can be characterised by HO, the complexity class of structures that can be recognized by formulas of higher-order logic. Higher-order logic is an extension of first-order logic and second-order logic with higher-order quantifiers. There is a relation between the ith order and non-deterministic algorithms whose running time is bounded by a tower of i − 2 exponentials.[22]

Definition

We define higher-order variables. A variable of order i > 1 has an arity k and represents any set of k-tuples of elements of order i − 1. They are usually written in upper-case and with a natural number as exponent to indicate the order. Higher-order logic is the set of first-order formulae where we add quantification over higher-order variables, hence we will use the terms defined in the FO article without defining them again.

HO^i is the set of formulae with variables of order at most i. HO^i_j is the subset of formulae of the form φ = ∃X̄_1 ∀X̄_2 … Q X̄_j ψ, where Q is a quantifier, Q X̄ means that X̄ is a tuple of variables of order i with the same quantification, and ψ is a formula with variables of order at most i − 1. So HO^i_j is the set of formulae with j alternations of quantifiers of order i, beginning with ∃, followed by a formula of order i − 1.

Using the standard notation of tetration, exp_2^0(x) = x and exp_2^{i+1}(x) = 2^{exp_2^i(x)}; that is, exp_2^i(x) is a tower of i twos topped by x.

Normal form

Every formula of order i ≥ 2 is equivalent to a formula in prenex normal form, where we first write quantification over variables of order i and then a formula of order i − 1 in normal form.

Relation to complexity classes

HO is equal to the class ELEMENTARY of elementary functions. To be more precise, HO^i_1 = NTIME(exp_2^{i−2}(n^{O(1)})), meaning a tower of (i − 2) twos ending with n^c, where c is a constant. A special case of this is that HO^2_1 = ∃SO = NTIME(n^{O(1)}) = NP, which is exactly Fagin's theorem. Using oracle machines in the polynomial hierarchy, HO^i_j = NTIME(exp_2^{i−2}(n^{O(1)}))^{Σ^P_{j−1}}.

Notes

  1. ^ Immerman 1999, p. 86
  2. ^ Grädel, Erich; Schalthöfer, Svenja (2019). Choiceless Logarithmic Space. Leibniz International Proceedings in Informatics (LIPIcs). Vol. 138. pp. 31:1–31:15. doi:10.4230/LIPICS.MFCS.2019.31. ISBN 9783959771177.
  3. ^ a b c d Immerman 1999, p. 242
  4. ^ a b c Fagin, Ron (1974). "Generalized first-order spectra and polynomial-time recognizable sets". In Karp, Richard (ed.). Complexity of Computation. pp. 43–73.
  5. ^ Immerman 1999, p. 243
  6. ^ Abiteboul, Serge; Vardi, Moshe Y.; Vianu, Victor (1997-01-15). "Fixpoint logics, relational machines, and computational complexity". Journal of the ACM. 44 (1): 30–56. doi:10.1145/256292.256295. ISSN 0004-5411. S2CID 11338470.
  7. ^ Hella, Lauri; Turull-Torres, José María (2006). "Computing queries with higher-order logics". Theoretical Computer Science. Essex, UK: Elsevier Science Publishers Ltd. 355 (2): 197–214. doi:10.1016/j.tcs.2006.01.009. ISSN 0304-3975.
  8. ^ Immerman 1999, p. 86
  9. ^ McNaughton, Robert (1971). Counter-free Automata. M.I.T. Press. ISBN 0-262-13076-9. OCLC 651199926.
  10. ^ Immerman 1999, p. 22
  11. ^ Immerman, Neil (1988). "Nondeterministic Space is Closed under Complementation". SIAM Journal on Computing. 17 (5): 935–938. doi:10.1137/0217058. ISSN 0097-5397.
  12. ^ a b Immerman 1999, p. 153–4
  13. ^ Immerman, Neil (1986). "Relational queries computable in polynomial time". Information and Control. 68 (1–3): 86–104. doi:10.1016/s0019-9958(86)80029-8.
  14. ^ Vardi, Moshe Y. (1982). "The Complexity of Relational Query Languages (Extended Abstract)". Proceedings of the fourteenth annual ACM symposium on Theory of computing - STOC '82. Proceedings of the Fourteenth Annual ACM Symposium on Theory of Computing. STOC '82. New York, NY, USA: ACM. pp. 137–146. CiteSeerX 10.1.1.331.6045. doi:10.1145/800070.802186. ISBN 978-0897910705. S2CID 7869248.
  15. ^ Abiteboul, Serge; Vardi, Moshe Y.; Vianu, Victor (1997). "Fixpoint logics, relational machines, and computational complexity". Journal of the ACM. 44 (1): 30–56. ISSN 0004-5411.
  16. ^ Grädel, Erich (1992-07-13). "Capturing complexity classes by fragments of second-order logic". Theoretical Computer Science. 101 (1): 35–57. doi:10.1016/0304-3975(92)90149-A. ISSN 0304-3975.
  17. ^ Immerman 1999, p. 115
  18. ^ Immerman 1999, p. 121
  19. ^ Immerman 1999, p. 181
  20. ^ Abiteboul, S.; Vianu, V. (1989). "Fixpoint extensions of first-order logic and datalog-like languages". [1989] Proceedings. Fourth Annual Symposium on Logic in Computer Science. IEEE Comput. Soc. Press: 71–79. doi:10.1109/lics.1989.39160. ISBN 0-8186-1954-6. S2CID 206437693.
  21. ^ Harel, D.; Peleg, D. (1984-01-01). "On static logics, dynamic logics, and complexity classes". Information and Control. 60 (1): 86–102. doi:10.1016/S0019-9958(84)80023-6. ISSN 0019-9958.
  22. ^ Hella, Lauri; Turull-Torres, José María (2006). "Computing queries with higher-order logics". Theoretical Computer Science. Essex, UK: Elsevier Science Publishers Ltd. 355 (2): 197–214. doi:10.1016/j.tcs.2006.01.009. ISSN 0304-3975.

References

  • Immerman, Neil (1999). Descriptive Complexity. Graduate Texts in Computer Science. New York: Springer-Verlag.
