When we think about cryptographic keys we tend to think about closely guarded secrets. Keys are the only thing that separates the attacker from your encrypted data. Some keys really are treated with the appropriate level of respect. Those of you in the payments industry, or those who have deployed a PKI, know all too well the importance of auditing key management processes – some of which are no less than full-blown ceremonies.

But I’m focused here on all of the other keys: the billions of keys that are created on the fly, automatically, every second. The ones used in SSL, SSH, file and disk encryption and a thousand other applications. How are they created and who is responsible for making sure that they are good enough to do their job? How do we make sure that the generation of these keys isn’t taken for granted?

When I talk about keys being ‘good enough’, what I mean is, are they truly random? When keys are less than perfectly random they start to become predictable, and predictability is the enemy of all cryptography – it makes the attacker’s job a lot easier.

So, where do these random numbers come from? In almost all cases they are software-generated. The trouble is that software only does what it’s programmed to do; it doesn’t do random things.
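To make that concrete, here is a minimal Python sketch (my own illustration, not from the article): two pseudo-random generators seeded with the same value produce exactly the same "random" stream, every time.

```python
import random

# Two PRNGs seeded identically produce identical "random" output --
# software only does what it is programmed to do.
a = random.Random(42)
b = random.Random(42)

print([a.randint(0, 9) for _ in range(5)])
print([b.randint(0, 9) for _ in range(5)])  # exactly the same list
```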

Ironically, erratic behavior in software is normally called a bug. To produce behavior that is actually random, the software normally scavenges randomness (more properly called entropy) from wherever it can, ideally by sampling some aspect of its physical environment. Entropy can come from many sources, some better (more random) than others. Everything from user mouse clicks to video signals to timing jitter in the hardware can yield entropy.
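As a rough illustration of the last of those sources, the sketch below (a hedged Python example of my own; the article names no specific technique) samples timing jitter between successive clock reads and conditions the raw samples with a hash. A real entropy source would need health testing and careful entropy estimation before any of this could be trusted.

```python
import time
import hashlib

def sample_timing_jitter(n_samples: int = 4096) -> bytes:
    """Collect raw timing jitter between successive clock reads.
    Illustrative only -- not a vetted entropy source."""
    samples = bytearray()
    for _ in range(n_samples):
        t0 = time.perf_counter_ns()
        t1 = time.perf_counter_ns()
        # Keep only the low byte of the delta, where the jitter lives.
        samples.append((t1 - t0) & 0xFF)
    return bytes(samples)

# Condition the raw samples into a fixed-size seed by hashing.
# Hashing adds no entropy; it only concentrates what was collected.
seed = hashlib.sha256(sample_timing_jitter()).digest()
```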

The trouble is that capturing entropy and converting it into statistically random numbers (equal numbers of independent ones and zeros) is not easy. For this reason, few software developers write their own random number generators. Instead they use shared services provided by the operating system – for example, one of the most widely used is /dev/urandom in Linux. Of course this now means that all applications on the same host compete for the single supply of shared randomness. It now becomes the operating system's job to gather sufficient entropy to meet that need.
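In practice, consuming that shared service looks something like this (a Python sketch for illustration; os.urandom is the portable wrapper over the kernel's generator):

```python
import os

# Ask the operating system's shared randomness service for key material.
key = os.urandom(32)  # 256 bits, e.g. enough for an AES-256 key

# Equivalent on Linux: read the device file directly.
with open("/dev/urandom", "rb") as rng:
    key_too = rng.read(32)
```

Every process on the host draws from this same well, which is exactly why the question of who keeps the well filled matters.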

What becomes clear is that random number generation spans the entire stack from hardware to OS to application, and very often with a virtualization layer spliced in between. The various layers are often ‘owned’ by different people or teams. They are designed and often deployed independently, which raises the question, “Who owns the job of making sure that random numbers are done right?”

The hardware guys have no idea what applications will run on any given box or how much entropy each will require. The OS has no idea how many random numbers will be required or how to prioritize individual applications (you'd like to believe the crypto apps get the best random numbers). And the applications have no idea whether they are getting what they asked for, nor any way to raise the alarm if they aren't.
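About the closest an application can get to peering down the stack is the Linux kernel's own entropy estimate, exposed under /proc – a sanity check at best, not a guarantee (and on recent kernels the figure is effectively pinned once the pool initializes):

```python
# Read the Linux kernel's estimate of available entropy, in bits.
# Treat this as a rough sanity check, not a quality guarantee.
def kernel_entropy_estimate() -> int:
    with open("/proc/sys/kernel/random/entropy_avail") as f:
        return int(f.read().strip())

if __name__ == "__main__":
    print(kernel_entropy_estimate())
```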

The reality is that each successive layer in the stack assumes that everything below it (the hardware, the OS etc.) is doing its job in creating, capturing and processing entropy. Worse still, the measurements for assessing the quality of entropy and randomness are notoriously unreliable, so in practice there's no easy way to find out whether the various other layers are doing their job. The end result is that the application makes keys but no one can attest to their quality – either in real-time or retrospectively.
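To see why those measurements mislead, consider a toy frequency test in the spirit of the NIST statistical suite (my own sketch; the article cites no specific test). A completely predictable byte stream – a simple counter – sails through it, because statistical balance says nothing about predictability:

```python
import math

def monobit_pass(data: bytes) -> bool:
    """Frequency (monobit) check: are ones and zeros balanced?"""
    n = len(data) * 8
    ones = sum(bin(b).count("1") for b in data)
    s_obs = abs(2 * ones - n) / math.sqrt(n)
    p_value = math.erfc(s_obs / math.sqrt(2))
    return p_value >= 0.01

# A trivially predictable stream (an incrementing counter) still passes:
counter = bytes(i & 0xFF for i in range(100_000))
print(monobit_pass(counter))  # True -- balanced, yet worthless as a key
```

A key drawn from that counter would pass the test and still be guessable in moments.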

It would be nice to think that the security team will save the day. After all, it is their job to take a holistic view. But is that realistic? How many security teams know the specifics of how individual applications are designed and what randomness services are employed? How can they possibly know how commercial software or security appliances work at that level of detail? Could a CISO ever answer the question of how many VMs are running at any point in time, never mind what proportion of them are satisfying the entropy demands of their crypto apps? How many organizations have a policy about such apparently mundane tasks as generating random numbers?

Actually, some really do. They might require product security certifications such as FIPS 140, which includes RNG requirements, and a subset of these invest in dedicated devices such as hardware security modules (HSMs). But now we are in the territory of those special, regulated applications I mentioned at the beginning.

If we return to the mainstream – the millions of SSL stacks whirring away across the datacenter, the SSH keys generated on almost every system, the corporate web of VPNs – we need a generic solution, a solution that deals with random number generation and entropy on a grand scale. It will soon be hard to find an application that doesn't need random numbers, and most will need crypto-strength randomness. Entropy sourcing and random number generation shouldn't be left to individual boxes and VMs to do the best they can. It should be independent of the platform and environment.

Poor random number generation is a basic hygiene issue and it should be addressed through a utility, as a standard of due care.

Whitewood is a subsidiary of Allied Minds Federal Innovations, the division of Allied Minds dedicated to commercializing U.S. federal intellectual property. Allied Minds is an innovative U.S. science and technology development and commercialization company. Operating since 2006, Allied Minds forms, funds, manages and builds products and businesses based on innovative technologies developed at leading U.S. universities and federal research institutions. Allied Minds serves as a diversified holding company that supports its businesses and product development with capital, central management and shared services. More information about the Boston-based company can be found at www.alliedminds.com.
