Formalizing NIST standards

Thomas C. Hales
University of Pittsburgh

Based on Snowden documents, the New York Times reported that the NSA worked to subvert NIST cryptographic standards. This article discusses general weaknesses in the NIST standard 800-90A for pseudo-random number generation. Among other defects, the standard exhibits a pervasive sloppiness in its use of mathematics. This sloppiness, in turn, prevents serious mathematical analysis of the standard and promotes careless implementation in code. We propose formal verification methods as a remedy.

Levels of Mathematical Rigor

We may categorize mathematical argument as informal, rigorous, or formal.

Informal mathematics is the vulgar product of the popular press. Informally, a function is continuous if it can be sketched without lifting a pencil from a sheet of paper; chaos is the unpredictable effect of a butterfly flapping its wings in Brazil; and a pseudo-random number is one that is paradoxically deterministic yet effectively random. Informal mathematics is not wrong so much as it is unsuitable for careful science and engineering.

A more rigorous approach to mathematics became necessary in the final decades of the nineteenth century to resolve paradoxes in Cantor’s set theory and other troubles. For example, disputes about continuity were resolved by clarifying definitions, eventually refining a single intuitive notion of continuity into a family of related notions: continuity, uniform continuity, and so forth. Most professional mathematical publications now adhere to widely accepted standards of mathematical rigor, enforced through the diligence of human referees.
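
To make the example concrete, the refined notions differ only in the placement of a quantifier. These are the standard textbook definitions, written here in LaTeX notation:

    % f : D -> R is continuous at a point x0 of D:
    \forall \varepsilon > 0 \;\exists \delta > 0 \;\forall x \in D :\;
        |x - x_0| < \delta \;\Rightarrow\; |f(x) - f(x_0)| < \varepsilon

    % f is uniformly continuous on D (a single delta must serve every point at once):
    \forall \varepsilon > 0 \;\exists \delta > 0 \;\forall x, y \in D :\;
        |x - y| < \delta \;\Rightarrow\; |f(x) - f(y)| < \varepsilon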

Formal mathematics is yet a higher standard. English and other natural languages are abandoned and replaced with languages whose syntax and semantics are designed with mathematical precision. The system specifies every admissible rule of logic and every mathematical axiom. Quality is enforced by a computer, which exhaustively checks every logical step of a proof.
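
For a glimpse of what this looks like in practice, here is a toy theorem in the Lean proof assistant (our own example; any of the systems mentioned below would serve equally well). The file is accepted only if the kernel can check every step:

    -- A trivial formally verified theorem in Lean 4.
    -- The proof term `Nat.add_comm a b` must have exactly the type stated in
    -- the theorem; otherwise the proof checker rejects the file.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b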

Formal verification becomes appropriate in proofs whose complexity surpasses the capabilities of checking by hand. (A wiki page catalogues numerous long mathematical theorems that might benefit from formalization.) Formal methods are well-suited for many computer-assisted mathematical proofs. In fact, at the formal level, the line between algorithms implemented as computer code and mathematical reasoning is erased: a single software system handles the formal verification of both.

Formal methods have been under development for decades, and in recent years the verification of complex software systems, hardware, and intricate theorems has become a reality. Already in 1989, it was possible to formally specify and verify a simple computer system from high-level language to microprocessor. As recent examples, we mention the full verification of a C compiler and complex mathematical theorems such as the Feit-Thompson theorem and the Four-Color theorem.

Formal Verification in Cryptography

Formal verification of computer code is advisable when human life or large economic interests are at stake: aircraft control systems, widely adopted cryptographic standards, or nuclear reactor controllers. Formal verification reduces the software defect rate to a level that is scarcely attainable by other means.

For several reasons, cryptography calls out for formal verification. The field is highly mathematical. Many key algorithms can be implemented as small blocks of code. A tiny defect can potentially defeat the entire algorithm. Adversaries actively seek out bugs to exploit. Cryptography safeguards large financial interests and fundamental human freedoms.

Various formal tools have been constructed especially for application to cryptography. The pi-calculus has been adapted to cryptographic protocols. Projects in the Isabelle proof assistant include protocol verification through inductive definitions and game analysis. In the Coq proof assistant, there have been successful formal verifications of cryptographic primitives and code-based cryptographic proofs. Significantly, formal methods have started to enter the standardization process.

Both the working group on the Theoretical Foundations of Security Analysis and Design and the IEEE Computer Security Foundations Symposium (CSF 2013) promote formal methods in cryptography.

In truth, our imperfect knowledge prevents the comprehensive verification of cryptographic systems. We are stymied by notorious problems like P versus NP and the existence of one-way functions. We lack definitive lower bounds on the computational complexity of concrete problems such as factoring of integers. Research into security reductions is ongoing. There is no comprehensive security model. For example, the Dolev-Yao model works at a high level of abstraction, assuming that cryptographic primitives function perfectly, while other models operate at various levels of detail.

Nevertheless, we can work with these limitations, implementing a collection of interrelated formal proofs grounded in current technological capabilities, and move forward from there.

The informality of NIST standards

Earlier critiques of the NIST standard 800-90A for pseudo-random number generation have focused on specific defects. Here, we argue that mathematical weaknesses run throughout the standard. Amid the accusations that the NSA has undermined NIST cryptographic standards, we remind NIST that one of the most effective ways to subvert a cryptographic standard is to muddle the math.

The first requirement of a standard is to set a tone and level of discourse that reflect the current state of the relevant technology. By choosing to present an informal standard, avoiding both rigor and formal mathematics, NIST has produced a standard that is out of step with the technology of the time.

Some definitions in the NIST standard are merely informal. For example, the NIST standard defines [1] pseudo-randomness as “deterministic yet also effectively random” (NIST 800-90A, p. 7). A mathematically rigorous definition of pseudo-random generators requires much more care, referencing rigorous notions of measure, probability, and complexity theory. Properly formulated definitions are given by Luby, Yao, and Blum–Micali. As it is manifestly impossible to base rigorous or formal mathematical proofs on something as vague as “deterministic yet effectively random,” the early pages of the NIST standard effectively ward off careful mathematical analysis.
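
For comparison, here is one standard rigorous formulation, paraphrased in the spirit of Blum–Micali and Yao rather than quoted from those papers. A deterministic polynomial-time computable function G from n-bit strings to l(n)-bit strings, with l(n) > n, is a cryptographically secure pseudo-random generator if for every probabilistic polynomial-time distinguisher D,

    \bigl|\, \Pr[\, D(G(U_n)) = 1 \,] - \Pr[\, D(U_{l(n)}) = 1 \,] \,\bigr| \;\le\; \mathrm{negl}(n),

where U_k denotes the uniform distribution on k-bit strings and negl(n) vanishes faster than any inverse polynomial. Definitions of this precision are what make security proofs possible in the first place.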

The lack of rigor continues throughout the document. Algorithms are described with English-text pseudo-code. With more care, NIST might have provided a formal specification and a reference implementation in executable code in a language with precise semantics. Overall, the standard gives few mathematical arguments, and these do not inspire confidence. The document slips into convenient inaccuracies: heuristics rather than proofs, fixed-point arithmetic, and Shannon entropy rather than min-entropy. (See NIST 800-90A, Appendix C.2.) In fact, the standard is imprecise to such a degree that competing definitions of entropy are largely irrelevant.
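
The distinction that the standard blurs is easy to state: for a distribution assigning probability p_i to outcome i, Shannon entropy is -sum_i p_i log2 p_i, while min-entropy is -log2 max_i p_i, and the latter, which governs an adversary's best single guess, can be far smaller. The following sketch (our own illustration, not taken from the standard) makes the gap concrete:

    import math

    def shannon_entropy(probs):
        """Shannon entropy in bits: -sum(p * log2(p))."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    def min_entropy(probs):
        """Min-entropy in bits: -log2 of the probability of the likeliest outcome."""
        return -math.log2(max(probs))

    # A skewed 256-outcome distribution: one outcome has probability 1/2,
    # the remaining 255 outcomes share the other half uniformly.
    probs = [0.5] + [0.5 / 255] * 255

    print(f"Shannon entropy: {shannon_entropy(probs):.2f} bits")  # about 5.0 bits
    print(f"Min-entropy:     {min_entropy(probs):.2f} bits")      # exactly 1.0 bit

An adversary who guesses the likeliest outcome succeeds half the time, so the security-relevant figure for this source is 1 bit, not 5.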

An example of NIST reasoning

This section goes into detail about a particular mathematical argument that appears in the NIST standard. [2] For our purposes, the topic of discussion matters less than the nature of the NIST committee’s mathematical thought. Does the committee reason as a mathematician would, in an unbroken chain of logic, or is it satisfied with a lesser standard?

The context is the following. Let E be an elliptic curve over a finite field F_p, given in affine coordinates by a polynomial equation y^2 = f(x). The pseudo-random generator extracts bits from the x-coordinates of a sequence of points P_1, P_2, … on the elliptic curve. The construction of the sequence of points does not concern us here. The issue is this: if points are sampled uniformly at random from E(F_p), then their x-coordinates are not uniformly distributed in the finite field; in fact, the x-coordinates obviously belong to the subset of the finite field on which f(x) is a square. Research estimates are needed to determine how serious an issue this is. Aggressive truncation of bits from the binary representation of x might improve pseudo-randomness but would make the algorithm less efficient. [3]
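
The bias is easy to observe numerically. The following sketch uses a toy curve over a small prime chosen by us purely for illustration (it is not one of the curves in the standard) and counts how many field elements actually occur as x-coordinates:

    # Count the x in F_p that occur as the x-coordinate of a point on
    # y^2 = x^3 + a*x + b over F_p. Toy parameters for illustration only.
    p, a, b = 10007, 2, 3

    def is_square_mod_p(n, p):
        """Euler's criterion: for prime p, n is a square mod p iff n^((p-1)/2) is 0 or 1 mod p."""
        return pow(n % p, (p - 1) // 2, p) <= 1

    xs_on_curve = sum(1 for x in range(p) if is_square_mod_p(x**3 + a*x + b, p))
    print(f"{xs_on_curve} of {p} field elements occur as x-coordinates "
          f"({xs_on_curve / p:.1%})")  # roughly half of F_p, not all of it

Roughly half of the field never appears, which is the non-uniformity that truncation of low-order bits is meant to mitigate.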

NIST quotes the research of El Mahassni and Shparlinski as “an additional reason that argues against increasing truncation.” There are numerous gaps in NIST’s reasoning.

  • A bound on discrepancy (defined just after this list) is not the same as uniform distribution.
  • Uniform distribution is not the same as cryptographically secure pseudo-randomness.
  • The sets {P_i} of points used in real-world implementations have cardinalities far too small to be relevant to the given asymptotic estimates.
  • The research does not support the specific NIST rule that “the recommended number of bits discarded from each x-coordinate will be 16 or 17” and does not convincingly “argue against increasing truncation.”
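
For reference, the quantity bounded in that research is the discrepancy; in one dimension (Drmota and Tichy treat the general case) the discrepancy of a finite sequence x_1, ..., x_N of points in [0,1) is

    D_N \;=\; \sup_{0 \le \alpha < \beta \le 1}
        \left| \frac{\#\{\, n \le N : x_n \in [\alpha, \beta) \,\}}{N} - (\beta - \alpha) \right|.

A discrepancy bound says only that the empirical distribution is close to uniform; it says nothing about whether an efficient adversary can predict the next output, which is what cryptographic pseudo-randomness requires.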

Nevertheless, NIST uses the research to make the inflated claim that “certain guarantees can be made about the uniform distribution of the resulting truncated quantities” (NIST 800-90A). This is proof by intimidation.

Assurances

The NIST standard 800-90A states that “a user of a DRBG for cryptographic purposes requires assurance that the generator actually produces (pseudo) random and unpredictable bits. The user needs assurance that the design of the generator, its implementation and its use to support cryptographic services are adequate to protect the user’s information” (NIST 800-90A). We agree.

What assurances does NIST actually provide about the generator? The document contains no mathematical proofs of pseudo-randomness and no supporting citations. Indeed, careful proofs would be impossible because, as we have noted, the definitions are informal rather than rigorous. Instead, the user of the standard must rely on NIST’s authoritative claim that “the design of each DRBG mechanism in this Recommendation has received an evaluation of its security properties prior to its selection for inclusion in this Recommendation.” That one sentence is the extent of NIST’s assurance of design. That’s it! It seems that for NIST, assurance means to comfort with soothing words. To a mathematician, this attitude is exasperating. There is no mathematical statement of what those security properties are, and no discussion of the methods that were used to reach the conclusion. We are not told who made the evaluation or what the results of the evaluation were.

Based on the Memorandum of Understanding between NIST and NSA from 1989, quoted in Schneier (p. 601), we might wonder whether NIST’s part in the evaluation was limited. According to the memo, “The NIST will … recognize the NSA-certified rating of evaluated trusted systems under the Trusted Computer Security Evaluation Criteria Program without requiring additional evaluation” (emphasis added).

Here is the NIST assurance for HMAC_DRBG (deterministic random bit generation based on hash message authentication codes). It states, “In general, even relatively weak hash functions seem to be quite strong when used in the HMAC construction. On the other hand, there is not a reduction proof from the hash function’s collision resistance properties to the security of the DRBG” (NIST 800-90A, Appendix E.2). Note the informal tone of the discussion, the reassurance that weakness is strength, the brevity, and the absence of mathematical theorems.
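
For readers unfamiliar with the construction under discussion, the heart of HMAC_DRBG is short. The following is our own simplified Python rendering of the generate-and-update loop; it omits instantiation details, reseed counters, additional input handling, and all error checking, and it is a sketch for reading, not a conforming implementation of SP 800-90A:

    import hmac
    import hashlib

    def hmac_drbg_update(key, v, provided_data=b""):
        """Update step: derive a fresh (key, V) pair from the current state."""
        key = hmac.new(key, v + b"\x00" + provided_data, hashlib.sha256).digest()
        v = hmac.new(key, v, hashlib.sha256).digest()
        if provided_data:
            key = hmac.new(key, v + b"\x01" + provided_data, hashlib.sha256).digest()
            v = hmac.new(key, v, hashlib.sha256).digest()
        return key, v

    def hmac_drbg_generate(key, v, n_bytes):
        """Generate step: iterate V = HMAC(key, V) and concatenate the outputs."""
        out = b""
        while len(out) < n_bytes:
            v = hmac.new(key, v, hashlib.sha256).digest()
            out += v
        key, v = hmac_drbg_update(key, v)  # refresh the state after producing output
        return out[:n_bytes], key, v

    # Toy instantiation from a fixed seed (for illustration only; never for real use).
    seed = b"entropy input || nonce || personalization string"
    key, v = hmac_drbg_update(b"\x00" * 32, b"\x01" * 32, seed)
    output, key, v = hmac_drbg_generate(key, v, 48)
    print(output.hex())

Even in this compressed form, the open question raised in the quoted paragraph is visible: the construction’s security rests on properties of HMAC that are assumed rather than proved.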

Cryptographic standards derive their security from the underlying mathematics. We can place our trust in mathematics, but not in assurances such as these.

Conclusions

According to NIST aspirations, “NIST works to publish the strongest cryptographic standards possible.” Our analysis shows that, judged by professional mathematical standards, NIST falls very far short of this goal. Indeed, the current NIST standard was written in a pre-Snowden era of unverified assurances.

NIST sets the standard both by its choice of algorithms and by its attitude towards rigor. Overall, its generally careless tone will facilitate vulnerable implementations of the standard.

Better approaches to standardization are available. In fact, a number of formal verification projects have been completed (such as a formal verification of a C compiler mentioned above) that dwarf what we specifically ask NIST to do. Please adopt verification technologies in widespread use! Improvement in the formal specification of NIST standards is the first critical step in a larger process of formal verification along the entire chain, including the underlying mathematical concepts, cryptographic primitives, protocols and algorithms, and end implementations in computer code.

End Notes

1. Here is the full definition from NIST: “A process (or data produced by a process) is said to be pseudorandom when the outcome is deterministic, yet also effectively random, as long as the internal action of the process is hidden from observation. For cryptographic purposes, ‘effectively’ means ‘within the limits of intended cryptographic strength.’” Speaking of the data, we may ask with Knuth, “is 2 a random number?”
2. We pick the most extensive mathematical argument in the document. It is telling that this argument is used to justify weakening the standard for the sake of efficiency.
3. In light of the much-discussed back door to the elliptic curve algorithm, the NSA had a secret interest in persuading users not to discard many bits from x; aggressive truncation would make the back door more difficult to use.

References

M. Abadi and C. Fournet, Mobile values, new names, and secure communication, In POPL ’01: Proceedings of the 28th ACM SIGPLAN-SIGACT symposium on the principles of programming languages (2001), 104–115.

M. Abadi and A. D. Gordon, A calculus for cryptographic protocols: The spi calculus, Information and Computation 148 (January 1999), 1–70.

NIST 800-90A, E. Barker and J. Kelsey, Recommendation for random number generation using deterministic random bit generators, NIST Special Publication 800-90A (2012).

W. R. Bevier, W. A. Hunt Jr., J Strother Moore, and W. D. Young, An approach to systems verification, Journal of Automated Reasoning 5 (1989), 411–428 and 422–423.

M. Blum and S. Micali, How to generate cryptographically strong sequences of pseudo-random bits, SIAM Journal on Computing 13 (1984), 850–864.

D. Dolev and A. C. Yao, On the security of public-key protocols, IEEE Transactions on Information Theory 29, no. 2 (March 1983), 198–208.

M. Drmota and R. F. Tichy, Sequences, discrepancies and applications, Lecture Notes in Mathematics, Springer, 1997.

G. Gonthier et al., A machine-checked proof of the odd order theorem, Lecture Notes in Computer Science 7998 (2013), 163–179.

G. Gonthier, Formal proof — the four colour theorem, Notices of the AMS 55 (December 2008), no. 11, 1382–1393.

J. Harrison, Formal verification at Intel, Proceedings. 18th Annual IEEE Symposium on Logic in Computer Science (2003), 45–54.

M. Luby, Pseudorandomness and cryptographic applications, Princeton University Press, 1996.

E. El Mahassni and I. Shparlinski, On the uniformity of distribution of congruential generators over elliptic curves, in Sequences and their Applications, Discrete Mathematics and Theoretical Computer Science (2002), 257–264.

R. Milner, Communicating and mobile systems: the pi-calculus, Cambridge University Press, 1999.

A. Yao, Theory and applications of trapdoor functions, FOCS (1982), 384–400.


Comments

Hi Prof. Hales,

I noticed another problem with the argument you criticize in “An example of NIST reasoning.”

Even if it is true that a sufficiently large interval contains approximately the expected number of x-coordinates of points on the elliptic curve (as you point out, NIST’s argument does not show this), the conclusion that would follow from NIST’s argument is essentially the opposite of what they claim.

In particular, the set of elements of F_p whose s low-order bits are fixed to a given value is an arithmetic progression of size approximately 2^{log(p) – s}. Hence, if there are approximately the expected number of x-coordinates in each sufficiently large interval, then we get a guarantee that approximately the same number of points on the elliptic curve correspond to each possible s-bit output, provided that s is sufficiently small. I have written up a more detailed explanation of the problem on my blog (http://bendlund.wordpress.com/2013/12/23/nists-truncation-argument/).
