27 Formal Proof Projects

Freek Wiedijk maintains a list of 100 well-known mathematical theorems and tracks which of them have been formalized. As of May 2014, all but 11 have been formalized. There is also a wiki page listing 27 notoriously long mathematical proofs. Only two of them have been formalized: the Feit-Thompson odd order theorem and the four-color theorem. A third, the Kepler conjecture, has been partially formalized. Merging the two lists, we obtain the following suggestions for formal proof projects.

  1. Insolvability of General Higher Degree Equations (Abel-Ruffini)
  2. Green’s theorem
  3. The undecidability of the Continuum Hypothesis
  4. Fermat’s Last theorem
  5. Puiseux’s Theorem
  6. The Isoperimetric Inequality
  7. The Hermite-Lindemann Transcendence Theorem
  8. The Law of Large Numbers
  9. Dissection of Cubes (Littlewood)
  10. Classification of simple complex Lie algebras
  11. Constructibility of the 65537-gon
  12. Lasker-Noether theorem
  13. Resolution of singularities in characteristic zero
  14. Resolution of singularities for 3-folds in characteristic at least 7 (Abhyankar)
  15. Discrete series representations of Lie groups (Harish-Chandra)
  16. Burnside’s bounded exponent problem (negative solution by Novikov-Adian)
  17. Grothendieck’s FGA and SGA
  18. Ramanujan conjecture and the Weil conjectures
  19. Eisenstein series (Langlands)
  20. Arthur-Selberg trace formula
  21. Almgren’s regularity theorem
  22. Lafforgue’s theorem
  23. Poincaré conjecture
  24. Thurston’s geometrization conjecture
  25. Classification of finite simple groups
  26. Robertson-Seymour theorem
  27. Strong perfect graph theorem

Formalizing NIST standards

Thomas C. Hales
University of Pittsburgh

Based on the Snowden documents, the New York Times reported that the NSA worked to subvert NIST cryptographic standards. This article discusses general weaknesses in the NIST standard 800-90A for pseudo-random number generation. Among its other defects, the standard suffers from a pervasive sloppiness in its use of mathematics. This, in turn, prevents serious mathematical analysis of the standard and promotes careless implementation in code. We propose formal verification methods as a remedy.

Levels of Mathematical Rigor

We may categorize mathematical argument as informal, rigorous, or formal.

Informal mathematics is the vulgar product of the popular press. Informally, a function is continuous if it can be sketched without lifting a pencil from a sheet of paper; chaos is the unpredictable effect of a butterfly flapping its wings in Brazil; and a pseudo-random number is one that is paradoxically deterministic yet effectively random. Informal mathematics is not wrong so much as it is unsuitable for careful science and engineering.

A more rigorous approach to mathematics became necessary in the final decades of the nineteenth century to resolve paradoxes in Cantor’s set theory and other troubles. For example, disputes about continuity were resolved by clarifying definitions, eventually refining a single intuitive notion of continuity into a family of related notions: continuity, uniform continuity, and so forth. Most professional mathematical publications now adhere to widely accepted standards of mathematical rigor, enforced through the diligence of human referees.
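
To illustrate how a single intuitive notion splits under scrutiny, here is a standard textbook formulation of the two most familiar refinements, which differ only in the placement of a quantifier: a function f : A \to \mathbb{R} is continuous on A if

\forall a \in A\ \forall \varepsilon > 0\ \exists \delta > 0\ \forall x \in A : |x - a| < \delta \Rightarrow |f(x) - f(a)| < \varepsilon,

and uniformly continuous on A if

\forall \varepsilon > 0\ \exists \delta > 0\ \forall a \in A\ \forall x \in A : |x - a| < \delta \Rightarrow |f(x) - f(a)| < \varepsilon.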

Formal mathematics is yet a higher standard. English and other natural languages are abandoned and replaced with languages whose syntax and semantics are designed with mathematical precision. The system specifies every admissible rule of logic and every mathematical axiom. Quality is enforced by a computer, which exhaustively checks every logical step of a proof.

Formal verification becomes appropriate for proofs whose complexity surpasses what can reliably be checked by hand. (A wiki page catalogues numerous long mathematical theorems that might benefit from formalization.) Formal methods are well-suited to many computer-assisted mathematical proofs. In fact, at the formal level the line between algorithms implemented as computer code and mathematical reasoning is erased. A single software system handles the formal verification of both.

Formal methods have been under development for decades, and in recent years the verification of complex software systems, hardware, and intricate theorems has become a reality. Already in 1989, it was possible to formally specify and verify a simple computer system from high-level language to microprocessor. As recent examples, we mention the full verification of a C compiler and complex mathematical theorems such as the Feit-Thompson theorem and the Four-Color theorem.

Formal Verification in Cryptography

Formal verification of computer code is advisable when human life or large economic interests are at stake: aircraft control systems, widely adopted cryptographic standards, or nuclear reactor controllers. Formal verification reduces the software defect rate to a level that is scarcely attainable by other means.

For several reasons, cryptography calls out for formal verification. The field is highly mathematical. Many key algorithms can be implemented as small blocks of code. A tiny defect can potentially defeat the entire algorithm. Adversaries actively seek out bugs to exploit. Cryptography safeguards large financial interests and fundamental human freedoms.

Various formal tools have been constructed especially for application to cryptography. The pi-calculus has been adapted to cryptographic protocols. Projects in the Isabelle proof assistant include protocol verification through inductive definitions and game analysis. In the Coq proof assistant, there have been successful formal verifications of cryptographic primitives and code-based cryptographic proofs. Significantly, formal methods have started to enter the standardization process.

The working group on the Theoretical Foundations of Security Analysis and Design and the Computer Security Foundations Symposium of the IEEE (CSF 2013) promote formal methods in cryptography.

In truth, our imperfect knowledge prevents the comprehensive verification of cryptographic systems. We are stymied by notorious problems like P versus NP and the existence of one-way functions. We lack definitive lower bounds on the computational complexity of concrete problems such as factoring of integers. Research into security reductions is ongoing. There is no comprehensive security model. For example, the Dolev-Yao model works at a high level of abstraction, assuming that cryptographic primitives function perfectly, while other models operate at various levels of detail.

Nevertheless, we can work with these limitations, implementing a collection of interrelated formal proofs grounded in current technological capabilities, and move forward from there.

The informality of NIST standards

Earlier critiques of the NIST standard 800-90A for pseudo-random number generation have focused on specific defects. Here, we argue that mathematical weaknesses run throughout the standard. Amid the accusations that the NSA has undermined NIST cryptographic standards, we remind NIST that one of the most effective ways to subvert a cryptographic standard is to muddle the math.

The first requirement of a standard is to set a tone and level of discourse that reflect the current state of the relevant technology. By choosing to present an informal standard, avoiding both rigor and formal mathematics, NIST has produced a standard that is out of step with the technology of the time.

Some definitions in the NIST standard are merely informal. For example, the NIST standard defines[1] pseudo-randomness as “deterministic yet also effectively random” (NIST 800-90A, p. 7). A mathematically rigorous definition of pseudo-random generators requires much more care, drawing on precise notions from measure theory, probability, and complexity theory. Properly formulated definitions are given in Luby, Yao, and Blum-Micali. As it is manifestly impossible to base rigorous or formal mathematical proofs on something as vague as “deterministic yet effectively random,” the early pages of the NIST standard effectively ward off careful mathematical analysis.
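
For comparison, here is a minimal rigorous formulation in the style of Blum-Micali and Yao (the details vary from author to author; this is only a sketch of the standard definition): a deterministic polynomial-time computable function G : \{0,1\}^n \to \{0,1\}^{\ell(n)}, with \ell(n) > n, is a pseudo-random generator if for every probabilistic polynomial-time distinguisher D, the advantage

|\Pr[D(G(U_n)) = 1] - \Pr[D(U_{\ell(n)}) = 1]|

is a negligible function of n, where U_k denotes the uniform distribution on \{0,1\}^k.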

The lack of rigor continues throughout the document. Algorithms are described with English-text pseudo-code. With more care, NIST might have provided a formal specification and a reference implementation in executable code in a language with precise semantics. Overall, the standard gives few mathematical arguments, and these do not inspire confidence. The document slips into convenient inaccuracies: heuristics rather than proofs, fixed-point arithmetic, and Shannon entropy rather than min-entropy. (See NIST 800-90A, Appendix C.2.) In fact, the standard is imprecise to such a degree that competing definitions of entropy are largely irrelevant.
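
For readers unfamiliar with the distinction, min-entropy measures how well an adversary can guess the most likely output, and Shannon entropy can greatly overstate that difficulty. Here is a minimal sketch in OCaml (the language used for the generator later in this collection), applied to a purely hypothetical distribution:

let log2 x = log x /. log 2.0

(* Shannon entropy of a finite distribution, in bits. *)
let shannon_entropy ps =
  List.fold_left (fun acc p -> if p > 0.0 then acc -. p *. log2 p else acc) 0.0 ps

(* Min-entropy: determined entirely by the most probable outcome. *)
let min_entropy ps =
  -. log2 (List.fold_left max 0.0 ps)

(* A hypothetical distribution: one output occurs half the time, and the other
   half of the probability is spread uniformly over 512 alternatives.  The
   Shannon entropy is 5.5 bits, yet an adversary guesses the output correctly
   half the time, so the min-entropy is only 1 bit. *)
let () =
  let ps = 0.5 :: List.init 512 (fun _ -> 0.5 /. 512.0) in
  Printf.printf "Shannon: %.2f bits, min-entropy: %.2f bits\n"
    (shannon_entropy ps) (min_entropy ps)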

An example of NIST reasoning

This section goes into detail about a particular mathematical argument that appears in the NIST standard.[2] For our purposes, the topic of discussion matters less than the nature of the NIST committee’s mathematical thought. Does the committee reason as a mathematician does, in an unbroken chain of logic, or is it satisfied by a lesser standard?

The context is the following. Let E be an elliptic curve over a finite field \mathbb{F}_p, given in affine coordinates by a polynomial equation y^2 = f(x). The pseudo-random generator extracts bits from the x coordinates of a sequence of points P_1, P_2, … on the elliptic curve. The construction of the sequence of points does not concern us here. The issue is this: if points are sampled uniformly at random from E(\mathbb{F}_p), then their x coordinates are not uniformly distributed in the finite field; in fact, the x coordinates obviously belong to the subset of the finite field on which f(x) is a square. Research estimates are needed to determine how serious an issue this is. Aggressive truncation of bits from the binary representation of x might improve pseudo-randomness but would make the algorithm less efficient.[3]
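
The point about x coordinates can be seen directly by counting. The following OCaml sketch uses a deliberately tiny prime and hypothetical curve coefficients (nothing like the NIST parameters); it counts the field elements x for which f(x) = x^3 + a*x + b is a square mod p, which is roughly half the field:

let p = 1009                           (* small prime, for illustration only *)
let a, b = 3, 7                        (* hypothetical curve coefficients *)

let powmod x k =                       (* x^k mod p by repeated squaring *)
  let rec go x k =
    if k = 0 then 1
    else
      let h = go x (k / 2) in
      let h2 = h * h mod p in
      if k land 1 = 1 then h2 * x mod p else h2
  in
  go (((x mod p) + p) mod p) k

let is_square t =                      (* Euler's criterion *)
  let t = ((t mod p) + p) mod p in
  t = 0 || powmod t ((p - 1) / 2) = 1

let () =
  let count = ref 0 in
  for x = 0 to p - 1 do
    if is_square ((x * x mod p * x + a * x + b) mod p) then incr count
  done;
  Printf.printf "%d of %d x-values occur as x-coordinates\n" !count p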

NIST quotes the research of El Mahassni and Shparlinski as “an additional reason that argues against increasing truncation.” There are numerous gaps in NIST reasoning.

  • A bound on discrepancy is not the same as uniform distribution.
  • Uniform distribution is not the same as cryptographically secure pseudo-randomness.
  • The sets {Pi} of points used in real-world implementations have cardinalities far too small to be relevant to the given asymptotic estimates.
  • The research does not support the specific NIST rule that “the recommended number of bits discarded from each x-coordinate will be 16 or 17” and does not convincingly “argue against increasing truncation.”

Nevertheless, NIST uses the research to make the inflated claim that “certain guarantees can be made about the uniform distribution of the resulting truncated quantities” (NIST 800-90A). This is proof by intimidation.

Assurances

The NIST standard 800-90A states that “a user of a DRBG for cryptographic purposes requires assurance that the generator actually produces (pseudo) random and unpredictable bits. The user needs assurance that the design of the generator, its implementation and its use to support cryptographic services are adequate to protect the user’s information” (NIST 800-90A). We agree.

What assurances does NIST actually provide about the generator? The document contains no mathematical proofs of pseudo-randomness and no supporting citations. Indeed, careful proofs would be impossible, because as we have noted, definitions are more informal than rigorous. Instead, the user of the standard must rely on NIST’s authoritative claim that “the design of each DRBG mechanism in this Recommendation has received an evaluation of its security properties prior to its selection for inclusion in this Recommendation.” That one sentence is the extent of NIST assurance of design. That’s it! It seems that for NIST, assurance means to comfort with soothing words. To a mathematician, this attitude is exasperating. There is no mathematical statement of what those security properties are, and no discussion of the methods that were used to reach the conclusion. We are not told who made the evaluation or what the results of the evaluation were.

Based on the Memorandum of Understanding between NIST and NSA from 1989, quoted in Schneier (p. 601), we might wonder whether NIST’s part in the evaluation was limited. According to the memo, “The NIST will … recognize the NSA-certified rating of evaluated trusted systems under the Trusted Computer Security Evaluation Criteria Program without requiring additional evaluation” (emphasis added).

Here is the NIST assurance of HMAC_DRBG (deterministic random bit generation based on hash message authentication codes). It states, “In general, even relatively weak hash functions seem to be quite strong when used in the HMAC construction. On the other hand, there is not a reduction proof from the hash function’s collision resistance properties to the security of the DRBG” (NIST 800-90A Appendix E.2). Note the informal tone of the discussion, the reassurance that a weakness is strength, the brevity, and the absence of mathematical theorems.

Cryptographic standards derive their security through the underlying mathematics. We can place our trust in mathematics but not in assurances such as these.

Conclusions

According to NIST aspirations, “NIST works to publish the strongest cryptographic standards possible.” Our analysis shows that judged by professional mathematical standards, NIST is very far from its goal. Indeed, the current NIST standard was written in a pre-Snowden era of unverified assurances.

NIST sets the standard both by its choice of algorithms and by its attitude towards rigor. Overall, its general careless tone will facilitate vulnerable implementations of the standard.

Better approaches to standardization are available. In fact, a number of formal verification projects have been completed (such as a formal verification of a C compiler mentioned above) that dwarf what we specifically ask NIST to do. Please adopt verification technologies in widespread use! Improvement in the formal specification of NIST standards is the first critical step in a larger process of formal verification along the entire chain, including the underlying mathematical concepts, cryptographic primitives, protocols and algorithms, and end implementations in computer code.

End Notes

1. [Here is the full definition from NIST: “A process (or data produced by a process) is said to be pseudorandom when the outcome is deterministic, yet also effectively random, as long as the internal action of the process is hidden from observation. For cryptographic purposes, ‘effectively’ means ‘within the limits of intended cryptographic strength.’” Speaking of the data, we may ask with Knuth, “is 2 a random number?”] (return to text)
2. [We pick the most extensive mathematical argument in the document. It is telling that this argument is used to justify weakening the standard for the sake of efficiency.] (return to text)
3. [In light of the much discussed back door to the elliptic curve algorithm, NSA had a secret interest in persuading users not to discard many bits from x; aggressive truncation would make the back door more difficult to use.] (return to text)

References

M. Abadi and C. Fournet, Mobile values, new names, and secure communication, In POPL ’01: Proceedings of the 28th ACM SIGPLAN-SIGACT symposium on the principles of programming languages (2001), 104–115.

M. Abadi and A. D. Gordon, A calculus for cryptographic protocols: The spi calculus, Information and Computation 148 (January 1999), 1–70.

NIST 800-90A, E. Barker and J. Kelsey, Recommendation for random number generation using deterministic random bit generators, NIST Special Publication 800-90A (2012).

W. R. Bevier, W. A. Hunt Jr., J Strother Moore, and W. D. Young, An approach to systems verification, Journal of Automated Reasoning 5 (1989), 411–428 and 422–423.

M. Blum and S. Micali, How to generate cryptographically strong sequences of pseudo-random bits, SIAM Journal on Computing 13 (1984), 850–864.

D. Dolev and A. C. Yao, On the security of public-key protocols, IEEE Transactions on Information Theory 29 (March 1983), 198–208.

M. Drmota and R. F. Tichy, Sequences, discrepancies and applications, Lecture Notes in Mathematics, Springer, 1997.

G. Gonthier et al., A machine-checked proof of the odd order theorem, Lecture Notes in Computer Science 7998 (2013), 163–179.

G. Gonthier, Formal proof — the four colour theorem, Notices of the AMS 55 (December 2008), no. 11, 1382–1393.

J. Harrison, Formal verification at Intel, Proceedings. 18th Annual IEEE Symposium on Logic in Computer Science (2003), 45–54.

M. Luby, Pseudorandomness and cryptographic applications, Princeton University Press, 1996.

E. El Mahassni and I. Shparlinski, On the uniformity of distribution of congruential generators over elliptic curves, Discrete Mathematics and Theoretical Computer Science (2002), 257–264, Sequences and their Applications.

R. Milner, Communicating and mobile systems: the pi-calculus, Cambridge University Press, 1999.

A. Yao, Theory and applications of trapdoor functions, FOCS (1982), 384–400.


The NSA back door to NIST

Thomas C. Hales (University of Pittsburgh)

(This article will be published in the Notices of the American Mathematical Society.)

Use once. Die once. — activist saying about insecure communication

This article gives a brief mathematical description of the NIST standard for cryptographically secure pseudo-random number generation by elliptic curves, the back door to the algorithm discovered by Ferguson and Shumow, and finally the design of the back door based on the Diffie-Hellman key exchange algorithm.

NIST (the National Institute of Standards and Technology) of the U.S. Department of Commerce derives its mandate from the U.S. Constitution, through the congressional power to “fix the standard of weights and measures.” In brief, NIST establishes the basic standards of science and commerce. Whatever NIST says about cryptography becomes implemented in cryptographic applications throughout U.S. government agencies. Its influence leads to the widespread use of its standards in industry and the broad adoption of its standards internationally.

Through the Snowden disclosures, the NIST standard for pseudo-random number generation has fallen into disrepute. Here I describe the back door to the NIST standard for pseudo-random number generation in elementary and mathematically precise terms. The NIST standard offers three methods for pseudo-random number generation [NIST]. My remarks are limited to the third of the three methods, which is based on elliptic curves.

Random number generators can either be truly random (obtaining their values from randomness in the physical world such as a quantum mechanical process) or pseudo-random (obtaining their values from a deterministic algorithm, yet displaying a semblance of randomness). The significance of random number generation within the theory of algorithms can be gauged by Knuth’s multivolume book, The Art of Computer Programming. It devotes a massive 193 pages (half of volume two) to the subject! Among pseudo-random number generators, a subclass is cryptographically secure, intended for use in cryptographic applications such as key generation, one-way hash functions, signature schemes, private key cryptosystems, and zero knowledge interactive proofs [Luby].

Elliptic curves as pseudo-random number generators

The NIST standard gives a list of explicit mathematical data (E,p,n,f,P,Q) to be used for pseudo-random number generation [NIST]. Here E is an elliptic curve defined over a finite field \mathbb{F}_p of prime order p. The group E(\mathbb{F}_p) has order n, which is prime for all of the curves that occur in the NIST standard. The elements of the group E(\mathbb{F}_p) consist of the set of points on an affine curve, together with a point at infinity which serves as the identity element of the group. The affine curve is defined by an equation y^2 = f(x) for some explicit cubic polynomial f in \mathbb{F}_p[x]. Finally, P and Q are given points on the affine curve.
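
To fix ideas, the data could be packaged as a record along the following lines. This is a hypothetical OCaml sketch: the field names are mine, and plain int stands in for an arbitrary-precision integer type, since the NIST primes are far too large for machine words.

type point = Infinity | Affine of int * int   (* the identity, or a point (x, y) *)

type domain_parameters = {
  p    : int;       (* prime order of the field F_p                *)
  a    : int;       (* coefficients of f(x) = x^3 + a*x + b        *)
  b    : int;
  n    : int;       (* prime order of the group E(F_p)             *)
  pt_p : point;     (* the point P of the standard                 *)
  pt_q : point;     (* the point Q of the standard                 *)
}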

NIST gives a few sets of data and in each case the prime number p is large. (The smallest is greater than 10^{77}.) No explanation is given of the particular choices (E,p,n,f,P,Q). We are told to use these data and not to question why. The standard stipulates that “one of the following NIST approved curves with associated points shall be used in applications requiring certification under FIPS-140 [U.S. government computer security accreditation].”

When A is any point other than the identity in E(\mathbb{F}_p), we may evaluate the coordinate function x at A to obtain x(A) \in \mathbb{F}_p. By further lifting \mathbb{F}_p to a set of representatives in \mathbb{Z}, we obtain a function by composition x1 : E(\mathbb{F}_p)\setminus\{0\} \to \mathbb{F}_p \to \mathbb{Z}. Write (n, A) \mapsto n * A for the \mathbb{Z}-module action of \mathbb{Z} on E. (We write repeated combination of the group element A with itself multiplicatively, as n * A, rather than exponentially, as A^n.)
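
In the code below, these operations are assumed to be supplied by supporting elliptic-curve code that is not shown here. Their signatures would be roughly as follows (the capitalized names P and Q of the prose would be ordinary lowercase value names in real OCaml source):

(* Primitives assumed by the code below (signatures only; implementations
   are elliptic-curve arithmetic not shown here):
     ( * )        : int -> point -> point    the Z-module action n * A
     x1           : point -> int             x-coordinate, lifted to Z
     extract_bits : int -> bool list         drop the 16 most significant bits *)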

The pseudo-random bit generator is initialized with a random integer seed s, obtained by some different process such as a separate random number generator. What is important for us is that the number s represents the hidden internal state of the algorithm. The hidden state must be kept secret for the pseudo-randomness to be effective. (Once the state is disclosed, a pseudo-random sequence becomes predictable and useless for many cryptographic purposes.)

The essence of the pseudo-random bit generator can be written in the Objective Caml language as follows. In the syntax of this language, each phrase (let x = a in …) defines the value of x to be a. The last line of the block of code gives the output of the function.

let pseudo_random s =
  let r  = x1 (s * P) in          (* apply the state s to the point P  *)
  let s' = x1 (r * P) in          (* new secret internal state         *)
  let t  = x1 (r * Q) in
  let b  = extract_bits t in      (* drop the 16 high bits of t        *)
  (s', b)                         (* return (new state, output bits)   *)

That is, we successively apply the integer s or r to the point P or the point Q, take the x1 coordinate of the resulting point, and then extract some bits from the number t. The integer s' becomes the new secret internal state to be fed into the next iteration of the function. The output b is passed to the consumer of pseudo-random bits. This output may become publicly known. The function extract_bits operates by converting t to a list of bits, discarding the 16 most significant bits (for reasons that do not matter to this discussion), and giving the remaining bits as output. According to the NIST standard, iterating this function, updating the internal state at each iteration, yields a cryptographically secure stream b … of pseudo-random bits.
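
For example, the iteration could be driven as follows (a sketch; seed stands for the separately generated random seed mentioned above):

(* Produce k successive output blocks, threading the secret state through
   each call and never revealing it. *)
let rec stream state k =
  if k = 0 then []
  else
    let state', b = pseudo_random state in
    b :: stream state' (k - 1)

(* let output_blocks = stream seed 10 *)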

The back door

This algorithm is fatally flawed, as Ferguson and Shumow pointed out [Shumow-Ferguson]. Since P and Q are non-identity elements of a cyclic group of prime order, each is a multiple of the other. Write P = e * Q, for some integer e. We show that once we have e in hand, it is a simple matter to determine the secret internal state s of the pseudo-random bit generator by observing the output b, and thus to compromise the entire system.

The function extract_bits discards 16 bits. Given the output b, we take the 2^{16} possible preimages t of b under extract_bits (a small number). For each t, the coordinate x is known, and, solving a quadratic, there are at most two possibilities for the coordinate y of a point A on the elliptic curve such that t = x1 (A). One such A is r * Q. For each A, we compute e * A. One of the small number of possibilities for e * A is

e * (r * Q) = r * (e * Q) = r * P.     (1)

Finally, s' = x1 (r * P). In short, the internal state s' can be narrowed down to a small number of possibilities by an examination of the pseudo-random output bitstream. Shumow and Ferguson state that in experiments, “32 bytes of output was sufficient to uniquely identify the internal state of the PRNG [pseudo-random number generator].”
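
The recovery step can be sketched in the same style as the generator above. The helper candidate_points is an assumption of the sketch, not part of the standard: candidate_points b is taken to enumerate the points A on the curve with extract_bits (x1 A) = b, obtained by trying the 2^{16} discarded prefixes and solving the quadratic for y, as just described.

(* Given the back door e and one observed output block b, list the small
   number of candidates for the next internal state s'.  The correct state
   appears in the list because one candidate point is r * Q, and
   x1 (e * (r * Q)) = x1 (r * P) = s'. *)
let candidate_states e b =
  List.map (fun a -> x1 (e * a)) (candidate_points b)

(* One way to discard the wrong candidates: keep only the states whose
   predicted next output matches the next observed block b_next. *)
let narrow e b b_next =
  List.filter (fun s -> snd (pseudo_random s) = b_next) (candidate_states e b)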

The back door to the algorithm is the number e such that P = e * Q. To use the back door, one must know the value e. The NIST standard does not disclose e (of course!), and extensive cryptographic experience suggests that it is hard to compute e from the coordinates of P and Q (unless you happen to own a quantum computer). This is the problem of discrete logarithms. But starting with e, there is no difficulty in creating a pair P and Q. The back door is universal: a single number e gives back door access to the internal state of the algorithm for all users worldwide.

It is a matter of public fact that the NSA was tightly involved in the writing of the standard. Indeed, NIST is required by law to consult with NSA in creating its standard. According to the New York Times, “classified NSA memos appear to confirm that the fatal weakness, discovered by two Microsoft cryptographers in 2007, was engineered by the agency” [NYT]. The news article goes on to say that “eventually, NSA became the sole editor” and then pushed aggressively to make this the standard for the 163 member countries of the International Organization for Standardization. Further historical and social context appears in [Wired]. NSA had facile access to the crown jewel e and motive to seize it. Draw your own conclusions.

Observations

1. The back door to this algorithm is extremely elementary from a mathematical perspective. We wrote the essential algorithm in six lines of computer code, even if more supporting code is needed to make it industrial strength. The algorithm could be explained to undergraduate math majors, or to sufficiently advanced high-school students. The story also has the spy-agency intrigue to make a good math club talk or a special lecture in an elementary abstract algebra course. We essentially just need to understand that an elliptic curve is an abelian group whose elements (other than the identity element) are determined by two numbers x and y, that y is the root of a quadratic when x is given, and that every non-identity element of a cyclic group of prime order is a generator. Easy stuff.

2. Without prior knowledge of the back door, how difficult would it be to rediscover its possible existence? An analysis of the argument shows that the required level of creativity is that of an undergraduate homework problem. We must think to write the element P as a multiple of the generator Q in a cyclic group of prime order. A student learns this in the first weeks of undergraduate algebra.

The rest of the process of inverting the pseudo-random number generator is determined by the definition of the function itself: simply take each step defining the function and reverse the steps, asking for the preimage of the function at each step of its definition, working from the output back to the secret state s’. Once the question of inverting the function is asked, it is easy to do the group theory, even if it is computationally difficult to write e explicitly.

One-way functions are a standard tool in the cryptographer’s bag. Every professional who has been trained to analyze cryptographic algorithms knows to ask the question of invertibility. It is unsettling that NIST and others do not seem to have asked this basic question.

Diffie-Hellman key exchange

In what follows, let us assume that someone, whom we will call the Spy, has access to the back door e. How is it possible for the Spy and the end user (the User) of the NIST algorithm to come into possession of the same shared secret (the internal state of the pseudo-random number generator), when all communication between them is public? Information flows from the Spy to the User through the published NIST standard, and from the User back to the Spy through the public output of the pseudo-random generator. The back door must have a remarkable cryptographic design to permit a secret to pass across these public channels, yet prevent the secret from becoming known to a third party.

As we now explain, the design of the back door to NIST is based on a well-known algorithm in cryptography called the Diffie-Hellman key exchange [Diffie-Hellman]. This is an algorithm to share a secret between two parties, when there is a possibility that the channel of communication is being monitored. In the current context, the Spy has full knowledge of the Diffie-Hellman key exchange for what it is. However, the User participates in the exchange innocently and unwittingly, by blindly following the rules of the NIST protocol.

The Diffie-Hellman key exchange requires a group, which we will take to be a cyclic group E of order n (to preserve notation). The group E, its order n, and a generator Q are made public. To share a secret, the first party (the Spy), picks a random number e, which is kept secret, and publishes P = e * Q to the world. The second party (the User) picks a random number r, which is kept secret, and publishes r * Q. Then by Equation (1), the Spy who knows e and r *Q, and the User who knows r and e * Q can both compute (r e) * Q = r * P, which is the shared secret. (In our context, the shared secret determines the internal state s’ of the pseudo-random number generator.) If E is a group in which the public knowledge E, n, Q, P = e * Q, r * Q does not allow the easy computation of (r e) * Q, then the shared secret is protected from public disclosure by the difficulty of the computation. In this way, the only two who learn the internal state of the pseudo-random number generator are the Spy and the User.
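
For concreteness, here is the same exchange written out in a toy multiplicative group: a classical Diffie-Hellman instance with deliberately tiny, illustrative numbers. The NIST setting uses an elliptic-curve group written additively, but the structure is identical.

let p = 101                          (* toy prime; real parameters are enormous        *)
let g = 2                            (* a generator of the multiplicative group mod p  *)

let rec powmod b k =                 (* b^k mod p by repeated squaring *)
  if k = 0 then 1
  else
    let h = powmod b (k / 2) in
    let h2 = h * h mod p in
    if k land 1 = 1 then h2 * b mod p else h2

let () =
  let e = 37 in                      (* the Spy's secret exponent              *)
  let r = 58 in                      (* the User's secret                      *)
  let pub_spy  = powmod g e in       (* published, analogous to P = e * Q      *)
  let pub_user = powmod g r in       (* published, analogous to r * Q          *)
  (* Both sides compute the same shared secret g^(e r) from public data
     plus their own private exponent. *)
  assert (powmod pub_user e = powmod pub_spy r)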

What we have described here is not an imaginary scenario: NIST documents do in fact publish the data E, n, Q, and P, needed to initiate the Diffie-Hellman exchange. A user, when making public the output from the pseudo-random number generator, does in fact complete the exchange. Diffie-Hellman is Diffie-Hellman, whether it has been advertised as such or not.

To say that the Diffie-Hellman key exchange algorithm is well-known is a vast understatement. This algorithm is a significant lesson in virtually every first course in cryptography everywhere in the world. Building on Merkle, the Diffie-Hellman paper, by starting the entire field of public key cryptography, is one of the most influential papers in cryptography ever written.

What is the significance of all this? It is no secret that the NSA employs some of the world’s keenest cryptographic minds. They all know Diffie-Hellman. In my opinion, an algorithm that has been designed by NSA with a clear mathematical structure giving them exclusive back door access is no accident, particularly in light of the Snowden documents. This is a work of experts.

References

[NIST] E. Barker and J. Kelsey, Recommendation for random number generation using deterministic random bit generators. NIST Special Publication 800-90A (2012), http://csrc.nist.gov/publications/nistpubs/800-90A/SP800-90A.pdf.

[Diffie-Hellman] W. Diffie and M. Hellman, New directions in cryptography. IEEE Transactions on Information Theory 22 (1976), 644–654.

[Luby] M. Luby, Pseudorandomness and cryptographic applications, Princeton University Press, 1996.

[NYT] N. Perlroth, J. Larson, and S. Shane. N.S.A. able to foil basic safeguards of privacy on web, September 5, 2013, New York Times, http://www.nytimes.com/2013/09/06/us/nsa-foils-much-internet-encryption.html.

[Shumow-Ferguson] D. Shumow and N. Ferguson, On the possibility of a back door in the NIST SP800-90 dual EC PRNG, http://rump2007.cr.yp.to/15-shumow.pdf, 2007.

[Wired] K. Zetter, How a crypto `backdoor’ pitted the tech world against the NSA, Wired (Sept 24, 2013), http://www.wired.com/threatlevel/2013/09/nsa-backdoor/
