TL;DR: No, but it is P-complete.

## Why does it matter?

Some systems we use every day, such as Rust and JavaScript, are Turing complete. Others, such as HTML, CSS, and regular expressions, are not.

Turing completeness has pros and cons. If a system is Turing complete, it is as powerful as possible; any algorithm can be written in it. On the other hand, it can run for an arbitrarily long time or infinitely loop, and there is no way for an algorithm to determine if it will.

At Oso, we build Polar, a programming language purpose-built for defining authorization permissions. I believe it would be a bad thing for Polar to be Turing complete. Our system is read-heavy, and we want each query to run in less than 5 milliseconds — queries to our databases need to make access control decisions within user-facing applications, so low latency is a requirement. Turing completeness would allow our users to shoot themselves in the foot, performance-wise, in ways we could neither automatically detect nor mitigate.

## What does it mean?

Turing completeness is defined in terms of **decision problems** — a Turing-complete language can solve every decidable problem. A decision problem is anything where the input is a string and the output is a boolean; for example, “does this string have at least three occurrences of the letter ‘A’?” Decision problems come in many levels of difficulty, each of which has a name. There are four levels that most systems of computation fall into, known as the Chomsky hierarchy:

- **Decidable**: You can write an algorithm for it in your favorite programming language. An example of a problem that is decidable but not context-sensitive is: “do these two regular expressions match the same set of strings?”
- **Context-sensitive**: You can write an algorithm for it whose memory usage stays within a constant factor of the input length. An example of a problem that is context-sensitive but not context-free is: “is the string of the form ‘A’ repeated n times, followed by ‘B’ repeated n times, followed by ‘C’ repeated n times?”
- **Context-free**: You can write a “context-free grammar” for it — a context-free grammar is a lot like the input to tools such as Yacc used to parse programming languages. An example of a problem that is context-free but not regular is: “is this string of parentheses correctly matched?”
- **Regular**: You can write a regular expression for it. An example of a problem that is regular is: “is this a string of alternating ‘A’s and ‘B’s?”

These difficulty levels are useful because of what they let us determine about Polar. If we can prove that Polar can solve all decidable problems, then we know Polar is Turing complete (this is the definition of Turing completeness). If we can prove that everything Polar solves is context-free, then we know Polar is not Turing complete, since context-free is strictly less powerful than decidable. And if we can prove that everything Polar solves is regular, then not only would Polar not be Turing complete; there would also be tricks we could use to make it very performant.

The four “difficulty levels” in the Chomsky hierarchy each have many equivalent definitions, so there are many tools for reasoning about Polar. Decidable problems are the set of all problems you can solve in Rust; equivalently, they’re all the problems you can solve with a Turing machine or with the lambda calculus (the claim that such models capture everything computable is known as the Church-Turing thesis). Regular problems are the set of all problems you can solve with regular expressions; equivalently, they’re all the problems you can solve with a deterministic finite automaton or with a nondeterministic finite automaton.

## Decision problems in Polar

How can Polar solve a decision problem? The input to a decision problem is a string, but Polar can’t do anything interesting with its built-in string type — so how does it make sense to talk about Polar as potentially being Turing complete?

The inputs to a Polar program in Oso Cloud are a set of facts plus (focusing on the /authorize endpoint) an actor, an action, and a resource. Any information passed along in the actor/action/resource could equivalently be supplied as a fact, so the decision problem is no harder or easier if we make the policy ignore the particular values of actor/action/resource, answer solely based on the facts in the database, and observe the boolean response to an arbitrary authorize request.

Arbitrary-length strings can be encoded as facts where each character of the string is represented by one fact, and together the facts form a sort of linked list. For the sake of simplicity, we’ll work with binary strings, where every character is a boolean.

We’ll use an arity-3 fact, *bit*, for each bit of the string and an arity-1 fact, *end*, to denote the end of the string. For each *bit* fact, the first argument is an integer “address” of the bit, the second argument is the bit’s boolean value, and the third argument is the integer address of the following bit. The *end* fact’s argument is the integer address pointed to by the last bit. For example, the string “1101” is represented by the following five facts:
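Sketched in Polar-like syntax (the exact fact syntax in Oso Cloud may differ):

```polar
bit(0, true, 1)   # address 0 holds '1'; the next bit is at address 1
bit(1, true, 2)   # address 1 holds '1'
bit(2, false, 3)  # address 2 holds '0'
bit(3, true, 4)   # address 3 holds '1'
end(4)            # address 4 marks the end of the string
```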

Using this encoding of strings, the following is an example of a Polar program that checks whether a string contains at least two ‘1’s.
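Here is a sketch of such a policy (Polar-like pseudocode; the names q0, q1, and q2 are my own). Each predicate qN(addr) can be read as “reading the string from addr onward, having already seen N ones, leads to acceptance”:

```polar
# q0: no '1's seen yet; q1: one '1' seen; q2: at least two '1's seen.
q0(addr) if bit(addr, false, next) and q0(next);
q0(addr) if bit(addr, true, next) and q1(next);
q1(addr) if bit(addr, false, next) and q1(next);
q1(addr) if bit(addr, true, next) and q2(next);
q2(addr) if bit(addr, false, next) and q2(next);
q2(addr) if bit(addr, true, next) and q2(next);
q2(addr) if end(addr);  # accept: at least two '1's were seen

# Start reading at address 0, with zero '1's seen so far.
allow(_actor, _action, _resource) if q0(0);
```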

## Polar solves all regular problems

The above policy is an example of Polar solving a regular problem (whether a string contains at least two ‘1’s). To get a lower bound on what Polar is capable of, I’ll show that Polar can solve **any** regular problem.

Any regular problem can be expressed as a **deterministic finite automaton** (DFA). To show that Polar can solve any regular problem, I’ll show that any deterministic finite automaton can be expressed in Polar.

A DFA that operates over binary strings is defined by four things:

- A finite set of states
- A transition function that takes a state and a bit as input and gives a state as output
- A start state
- A set of accept states (a subset of the set of all states)

A DFA runs by scanning the input string from start to finish one bit at a time. It starts in the start state, then each time it scans a bit it uses the transition function to update to a new state. Once the full string has been scanned, if the DFA’s state is one of the accept states, it returns true; otherwise it returns false.

Here is an example of a DFA that solves the aforementioned problem of whether a string contains at least two ‘1’s. The start state is q0, and the only accept state (denoted by a double-circle) is q2. The transition function is denoted by arrows.
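For reference, the same transition function in table form (q2 is the accept state):

```
current state | on '0' | on '1'
q0            | q0     | q1
q1            | q1     | q2
q2 (accept)   | q2     | q2
```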

To do so, I will show how to translate an arbitrary DFA into a Polar policy.

Each state in the DFA is represented by an arity-1 predicate which takes as an argument the address of the position we’re currently at in the string. For each case (input_state, input_bit)→output_state of the transition function, we generate a rule of the form:
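In Polar-like pseudocode, a transition case (input_state, input_bit) → output_state becomes a rule like:

```polar
input_state(addr) if bit(addr, input_bit, next) and output_state(next);
```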

To hook this all up to the authorize endpoint, we add a rule connecting allow to the start state:
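Assuming the string’s first bit lives at address 0, this rule is (sketch):

```polar
allow(_actor, _action, _resource) if start_state(0);
```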

Finally, to return true when we end in an accept state, for all accept states we add the rule:
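For each accept state, the rule looks like (sketch):

```polar
accept_state(addr) if end(addr);
```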

The Polar policy presented in the previous section encodes the DFA diagrammed above.

This proves that Polar can solve any regular problem — but is that the limit of what Polar can solve? In other words, are all problems solved by Polar regular? No. Polar can solve the problem of checking whether a string is a palindrome, and this problem is not regular (this can be shown by the pumping lemma). The following is a Polar program that checks whether a string is a palindrome:
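A sketch of such a program (Polar-like pseudocode; pal is my own name, with pal(i, j) meaning “the substring from address i up to address j is a palindrome”):

```polar
# The empty substring is a palindrome.
pal(a, a);
# A single bit is a palindrome.
pal(i, j) if bit(i, _, j);
# A palindrome wrapped in a matching pair of bits is a palindrome.
pal(i, j) if bit(i, false, i2) and pal(i2, j2) and bit(j2, false, j);
pal(i, j) if bit(i, true, i2) and pal(i2, j2) and bit(j2, true, j);

# The whole string, from address 0 to the end marker, must be a palindrome.
allow(_actor, _action, _resource) if pal(0, last) and end(last);
```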

The palindrome problem is an example of a problem that is context-free but not regular. I will now show that **any** context-free problem can be solved by Polar.

Just like how any regular problem can be solved by a DFA, any context-free problem can be solved by a context-free grammar. A context-free grammar over binary strings is defined by three things:

- A set of “nonterminals”
- A set of productions, each mapping a nonterminal to a string whose characters are nonterminals or bits. Each nonterminal may appear in more than one production.
- A start nonterminal

The following is a context-free grammar for palindromes. The start nonterminal is S:
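One such grammar, using → for productions and ε for the empty string:

```
S → ε
S → 0
S → 1
S → 0 S 0
S → 1 S 1
```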

To express an arbitrary context-free grammar as a Polar program, we represent each nonterminal as a predicate that accepts two integer arguments representing the start and end of the substring they must match. Each element of the mapping from nonterminals to strings is represented as a rule where the head is the nonterminal being mapped, and the body consists of each character’s Polar representation (bit if the character is a bit; the predicate associated with the nonterminal if it’s a nonterminal) anded together where each call’s last argument is the next call’s first argument. For example, the mapping A -> 0B1C where nonterminals A, B, and C are represented by predicates ntA, ntB, and ntC respectively, translates to the following in Polar:
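Sketched in Polar-like pseudocode (the variable names are mine):

```polar
ntA(start, end) if
    bit(start, false, p1) and   # the literal bit '0'
    ntB(p1, p2) and             # nonterminal B
    bit(p2, true, p3) and       # the literal bit '1'
    ntC(p3, end);               # nonterminal C
```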

The Polar program presented in the previous section translates the above palindrome grammar into Polar using approximately this scheme.

## Polar is not Turing complete

The last two sections established a lower bound on the difficulty of problems Polar can solve: it can solve all regular problems and all context-free problems (since all regular problems are also context-free, the former follows from the latter). If Polar could solve all decidable problems, then it would be Turing-complete. If that were the case, nothing would stop our users from unwittingly writing Polar policies that are unoptimizable and arbitrarily slow; we would need our users to think about how Polar evaluates as they write their policies, and we would need to supply them with some sort of methodology for doing so.

Fortunately, as I will show in this section, Polar is not Turing-complete. Even better, there exists a total algorithm (one that always returns an answer, never infinitely loops, and never raises an exception) for evaluating an arbitrary Polar program. The existence of this algorithm proves that Polar is not Turing complete: if Polar were Turing complete, no total algorithm for evaluating it could exist, as such an algorithm would solve the halting problem.

The following is pseudocode of a total algorithm that evaluates an arbitrary Polar program:
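As a concrete illustration, here is a runnable Python sketch of such a loop for a simplified Datalog-like setting. It assumes range-restricted rules (every variable in a rule's head also appears in its body); the head-wildcard case is discussed below.

```python
def is_var(term):
    # Convention for this sketch: uppercase identifiers are variables,
    # everything else is a constant.
    return isinstance(term, str) and term[:1].isupper()

def apply(facts, rule):
    """Return every fact derivable by one application of `rule` to `facts`.

    A rule is (head, body): head is an atom (predicate, args) and body is a
    list of atoms; args are constants or uppercase variable names.
    """
    head, body = rule
    results = set()

    def match(atoms, binding):
        if not atoms:
            # Every body atom matched; emit the instantiated head.
            pred, args = head
            results.add((pred, tuple(binding.get(a, a) for a in args)))
            return
        pred, args = atoms[0]
        for fact in facts:
            if fact[0] != pred or len(fact[1]) != len(args):
                continue
            b = dict(binding)
            ok = True
            for pattern, value in zip(args, fact[1]):
                if is_var(pattern):
                    # Bind the variable, or fail on a conflicting binding.
                    if b.setdefault(pattern, value) != value:
                        ok = False
                        break
                elif pattern != value:
                    ok = False
                    break
            if ok:
                match(atoms[1:], b)

    match(body, {})
    return results

def evaluate(policy, facts, query):
    """Forward chaining: apply every rule until the fact set stops growing,
    then answer whether `query` was derived. Always terminates, because the
    fact set only grows and its size is bounded."""
    facts = set(facts)
    while True:
        new_facts = set().union(*(apply(facts, rule) for rule in policy)) - facts
        if not new_facts:
            return query in facts
        facts |= new_facts
```

For example, with rules for `ancestor` defined in terms of `parent` facts, `evaluate` keeps deriving new `ancestor` facts each iteration until none are left to add.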

A key component of this algorithm is the *apply* function: given a set of facts and a rule, it produces the set of all facts obtainable by applying the rule to those facts.
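For instance (a hypothetical rule and facts, in Polar-like pseudocode):

```polar
# Rule:
grandparent(a, c) if parent(a, b) and parent(b, c);

# Facts:
parent("alice", "bob")
parent("bob", "carol")

# One application of the rule produces:
grandparent("alice", "carol")
```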

One complication arises when a rule head contains wildcards. At any given time there are finitely many unique values (fact arguments) in play, so, as an overapproximation, *apply* instantiates each wildcard with every value that appears in the policy, the fact set, or the query. Since Polar cannot construct new values at runtime (it lacks string concatenation, integer addition, boolean negation, etc.), this set is exhaustive: it contains every value the wildcard could ever need to be instantiated with.

This algorithm is an instance of a technique called **forward chaining**.

How do we know this algorithm never infinitely loops? Every iteration of the loop must add at least one new fact to facts for the algorithm to continue running; otherwise new_facts is empty and the loop stops. If we take *N* to be the number of unique fact argument values that appear in the policy, the input fact set, and the query; *M* to be the number of unique predicates that appear as rule heads in the policy; and *K* to be the maximum arity of any rule head in the policy, then the cardinality of the facts set can never exceed *MN^K*. Since every iteration of the loop (except the last one) adds at least one new fact to facts, the loop runs for at most *MN^K + 1* iterations before terminating. Since *K* is limited to be no greater than 5 in Polar, this algorithm runs in polynomial time.

## Polar is P-complete

A problem is P-complete if it can be solved in polynomial time and every problem solvable in polynomial time can be reduced to it. The previous section presented a polynomial-time algorithm for evaluating an arbitrary Polar policy, so all that’s left to show is that any polynomial-time problem reduces to a Polar policy. For that, it is sufficient to show that Polar can solve some P-complete problem: any other polynomial-time problem can be reduced to that problem, and so, by extension, can be solved by Polar.

The following problem is known to be P-complete: “given a context-free grammar and a string, is the string accepted by the grammar?” In an earlier section I showed that an arbitrary context-free grammar can be expressed as a Polar program that decides whether a given string is accepted by that grammar. Given the existence of a universal policy, it follows that there exists a single Polar policy that takes both a context-free grammar and a string as input and decides whether the grammar accepts the string. Therefore **Polar can solve any problem that can be solved in polynomial time**.