A regular expression denial of service (ReDoS)[1] is an algorithmic complexity attack that produces a denial of service by providing a regular expression and/or an input that takes a long time to evaluate. The attack exploits the fact that many[2] regular expression implementations have super-linear worst-case complexity: on certain regex-input pairs, the time taken can grow polynomially or exponentially with the input size. An attacker can thus cause a program to spend substantial time by providing a specially crafted regular expression and/or input. The program will then slow down or become unresponsive.[3][4]
Regular expression ("regex") matching can be done by building a finite-state automaton. A regex can be converted straightforwardly to a nondeterministic finite automaton (NFA), in which for each state and input symbol there may be several possible next states. After building the automaton, several possibilities exist:

- the engine may convert it to a deterministic finite automaton (DFA) and run the input through the result;
- the engine may try the possible paths one by one until a match is found or all have been tried and failed ("backtracking");
- the engine may consider all possible paths through the nondeterministic automaton in parallel;
- the engine may convert the NFA to a DFA lazily, i.e. on the fly, during the match.
Of the above algorithms, the first two are problematic. The first is problematic because a deterministic automaton could have up to 2^m states, where m is the number of states of the nondeterministic automaton, so the construction of the DFA can take exponential time and space. The second is problematic because a nondeterministic automaton could have an exponential number of paths of length n, so walking through an input of length n can also take exponential time.
Note that for non-pathological regular expressions, the problematic algorithms are usually fast, and in practice one can expect them to "compile" a regex in O(m) time and match it in O(n) time; by contrast, simulation of an NFA and lazy computation of the DFA have O(m^2 n) worst-case complexity. Regex denial of service occurs when these expectations are applied to a regex provided by the user, and malicious regular expressions provided by the user trigger the worst-case complexity of the regex matcher.
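The third approach, simulating all paths in parallel, can be sketched in a few lines of Python. The NFA below is hand-built for (a|aa)*c, a pattern that reappears later in this article as a worst case for backtracking engines; the state numbering and the `nfa_match` helper are illustrative only, not taken from any real engine:

```python
# Parallel state-set simulation of an NFA: O(n*m) overall, never exponential,
# because duplicate paths collapse into a single set of live states.

def nfa_match(transitions, start, accepting, text):
    """transitions maps (state, symbol) -> set of possible next states."""
    current = {start}
    for symbol in text:
        current = set().union(*(transitions.get((s, symbol), set()) for s in current))
        if not current:      # no live states left: the match has already failed
            return False
    return bool(current & accepting)

# Hand-built NFA for (a|aa)*c: from state 0, an 'a' may either complete the
# one-letter branch (stay at 0) or begin the two-letter branch (go to 1).
trans = {
    (0, "a"): {0, 1},
    (1, "a"): {0},
    (0, "c"): {2},   # state 2 is accepting
}

print(nfa_match(trans, start=0, accepting={2}, text="a" * 50 + "c"))  # True
print(nfa_match(trans, start=0, accepting={2}, text="a" * 50))        # False
```

However many distinct paths exist through the NFA, each input symbol is processed against at most m live states, which is what keeps this approach linear in the input length.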
While regex algorithms can be written in an efficient way, most regex engines in existence extend the regex languages with additional constructs that cannot always be solved efficiently. Such extended patterns essentially force regex implementations in most programming languages to use backtracking.
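The best-known such construct is the backreference, which requires a repeat of previously captured text; the language it describes is no longer regular, so no finite automaton can recognize it, and the engine must search over candidate captures. A small illustration with Python's re module (the pattern is chosen only for this example):

```python
import re

# \1 must match exactly the text captured by the first group, so this pattern
# matches strings of the form a^n b a^n -- a non-regular language that no
# DFA/NFA construction can recognize; the engine must try candidate splits.
pattern = re.compile(r"^(a+)b\1$")

print(bool(pattern.match("aaabaaa")))  # True: group 1 = "aaa", repeated after b
print(bool(pattern.match("aaabaa")))   # False: the repeat after b is too short
```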
The most severe type of problem happens with backtracking regular expression matches, where some patterns have a runtime that is exponential in the length of the input string.[8] For strings of n characters, the runtime is O(2^n). This happens when the regular expression has three properties:

- the regular expression applies repetition (+, *) to a subexpression;
- the subexpression can match the same input in several different ways, or the subexpression can match an input string that is a prefix of a longer possible match;
- and after the repeated subexpression, there is an expression that matches something the subexpression does not match.

The second condition is best explained with two examples:
- In (a|a)+$, repetition is applied to the subexpression a|a, which can match a in two ways on each side of the alternation.
- In (a+)*$, repetition is applied to the subexpression a+, which can match a or aa, etc.

In both of these examples we used $ to match the end of the string, satisfying the third condition, but it is also possible to use another character for this. For example, (a|aa)*c has the same problematic structure.
All three of the above regular expressions will exhibit exponential runtime when applied to strings of the form a...a!. For example, if one attempts to match aaaaaaaaaaaaaaaaaaaaaaaa! on a backtracking expression engine, it will take a significantly long time to complete, and the runtime will approximately double for each extra a before the !.
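This doubling can be observed directly with Python's backtracking re module, using the (a+)*$ pattern from above; the input sizes are kept deliberately small so that the demonstration itself finishes quickly:

```python
import re
import time

# (a+)*$ can never match "aa...a!", but before failing, a backtracking engine
# tries every way of splitting the run of a's between iterations of the group.
evil = re.compile(r"(a+)*$")

def time_failure(n):
    start = time.perf_counter()
    assert evil.match("a" * n + "!") is None   # always fails; the cost is the point
    return time.perf_counter() - start

for n in (10, 12, 14, 16):
    print(f"n={n}: {time_failure(n):.5f}s")   # grows roughly 4x per extra two a's
```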
It is also possible to have backtracking which is polynomial time, O(n^x), instead of exponential. An example of such a pattern is a*b?a*x, when the input is an arbitrarily long sequence of "a"s.

So-called "evil" or vulnerable regexes have been found in online regular expression repositories. Note that it is enough to find a vulnerable subexpression in order to attack the full regex:
^([a-zA-Z0-9])(([\-.]|[_]+)?([a-zA-Z0-9]+))*(@){1}[a-z0-9]+[.]{1}(([a-z]{2,3})|([a-z]{2,3}[.]{1}[a-z]{2,3}))$ (the vulnerable subexpression is (([\-.]|[_]+)?([a-zA-Z0-9]+))*)

^(([a-z])+.)+[A-Z]([a-z])+$ (the vulnerable subexpression is (([a-z])+.)+)
These two examples are also susceptible to the input aaaaaaaaaaaaaaaaaaaaaaaa!.
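The polynomial case behaves the same way, only more slowly: matching a*b?a*x against a run of n a's makes a backtracking engine try on the order of n^2 boundary positions before giving up. A small sketch with Python's re module, with sizes chosen so the demonstration stays fast:

```python
import re
import time

# With no 'x' in the input, every split point between the two a* runs is tried
# before the match fails -- on the order of n^2 attempts for a backtracker.
poly = re.compile(r"a*b?a*x")

def time_failure(n):
    start = time.perf_counter()
    assert poly.match("a" * n) is None   # no 'x' anywhere, so this always fails
    return time.perf_counter() - start

for n in (1000, 2000, 4000):
    print(f"n={n}: {time_failure(n):.5f}s")
```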
If the regex itself is affected by user input, such as a web service permitting clients to provide a search pattern, then an attacker can inject a malicious regex to consume the server's resources. Therefore, in most cases, regular expression denial of service can be avoided by removing the possibility for the user to execute arbitrary patterns on the server. In this case, web applications and databases are the main vulnerable applications. Alternatively, a malicious page could hang the user's web browser or cause it to use arbitrary amounts of memory.
However, if a vulnerable regex exists on the server-side already, then an attacker may instead be able to provide an input that triggers its worst-case behavior. In this case, e-mail scanners and intrusion detection systems could also be vulnerable.
In the case of a web application, the programmer may use the same regular expression to validate input on both the client and the server side of the system. An attacker could inspect the client code, looking for evil regular expressions, and send crafted input directly to the web server in order to hang it.[9]
ReDoS can be mitigated without changes to the regular expression engine, simply by setting a time limit for the execution of regular expressions when untrusted input is involved.[10]
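With Python's built-in re, which has no timeout parameter, one way to impose such a limit is to run the match in a separate worker process that can be killed. The sketch below assumes a Unix "fork" start method, and match_with_timeout is a name invented for this example; some engines expose timeouts directly, e.g. the matchTimeout argument of .NET's Regex:

```python
import multiprocessing
import re

# "fork" keeps the sketch self-contained on Unix; Windows would need the
# spawn-safe layout described in the multiprocessing documentation.
ctx = multiprocessing.get_context("fork")

def _match(pattern, text, conn):
    # Runs in a child process so the parent can abandon it on timeout.
    conn.send(re.match(pattern, text) is not None)

def match_with_timeout(pattern, text, seconds):
    """Return True/False, or None if the time limit was exceeded."""
    parent_conn, child_conn = ctx.Pipe(duplex=False)
    proc = ctx.Process(target=_match, args=(pattern, text, child_conn))
    proc.start()
    proc.join(seconds)
    if proc.is_alive():          # still backtracking: kill the worker
        proc.terminate()
        proc.join()
        return None
    return parent_conn.recv()

if __name__ == "__main__":
    print(match_with_timeout(r"(a+)*$", "a" * 10, 5.0))        # True
    print(match_with_timeout(r"(a+)*$", "a" * 64 + "!", 0.5))  # None (killed at the limit)
```

Killing a process is a blunt instrument, but unlike an in-process alarm it works even while the engine is inside a long uninterruptible matching call.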
ReDoS can be avoided entirely by using a non-vulnerable regular expression implementation. After Cloudflare's web application firewall (WAF) was brought down by a PCRE ReDoS in 2019, the company rewrote its WAF to use the non-backtracking Rust regex library, which uses an algorithm similar to RE2.[11][12]
Vulnerable regular expressions can be detected programmatically by a linter.[13] Methods range from pure static analysis[14][15] to fuzzing.[16] In most cases, the problematic regular expressions can be rewritten as "non-evil" patterns. For example, (.*a)+ can be rewritten to ([^a]*a)+. Possessive matching and atomic grouping, which disable backtracking for parts of the expression,[17] can also be used to "pacify" vulnerable parts.[18][19]
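The effect of such a rewrite can be checked directly: (.*a)+ and ([^a]*a)+ accept the same strings (anything ending in a, ignoring the newline subtlety of .), but the rewritten pattern leaves the engine only one way to carve the input into repetitions, so failures are recognized quickly. A small sketch with Python's re module:

```python
import re
import time

evil = re.compile(r"(.*a)+")     # many ways to split the input between repetitions
safe = re.compile(r"([^a]*a)+")  # each repetition ends at the next 'a': one way only

# Both patterns accept exactly the strings ending in 'a'...
for s in ("a", "bba", "abca", "b", "ab"):
    assert bool(evil.fullmatch(s)) == bool(safe.fullmatch(s))

# ...but on a failing input the rewrite stays fast even for large n,
t0 = time.perf_counter()
assert safe.fullmatch("a" * 10000 + "!") is None
print(f"safe, n=10000: {time.perf_counter() - t0:.5f}s")

# while the original must be kept tiny to terminate in reasonable time.
t0 = time.perf_counter()
assert evil.fullmatch("a" * 18 + "!") is None
print(f"evil, n=18:    {time.perf_counter() - t0:.5f}s")
```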