ROCA and the Role of Key Generation

By Richard Moulds

It’s been a busy week for crypto vulnerability stories. First there was the Key Reinstallation AttaCK (KRACK), which showed how a WiFi man-in-the-middle could trick WPA2 handshakes into reusing encryption keys that are already known to the attacker. KRACK is scary because it points to a longstanding flaw in the WPA2 standard rather than an isolated implementation error – which means it could affect virtually every WiFi-connected device. But perhaps even more worrying is the ROCA vulnerability, which has even wider ramifications and might yet serve as a dress rehearsal for the arrival of quantum computers.

The ‘Return of Coppersmith’s Attack’ (ROCA) makes it possible for attackers to calculate the private key of an RSA keypair purely from the public key (which is, of course, public – typically in the form of a certificate). The fundamental tenet of RSA asymmetric crypto is that determining the prime factors of the public key is an immensely expensive task, way beyond practical computing reach. The problem, it turns out, is that corners were cut in the process of generating some of those keypairs, such that the factors can be found in just a few minutes for 1024-bit keys and a few weeks for 2048-bit keys (3072- and 4096-bit keys are still thought to be safe).

One of the big bottlenecks in generating RSA keypairs is that it takes a lot of CPU effort (and therefore time) to find large random numbers that are prime. To overcome this, particularly in low-power devices, algorithms have evolved to accelerate the process of checking whether a number is prime or not. It’s one of these accelerator algorithms (called Fast Prime) that has been found to have a vulnerability that results in keys that can be easily factored. Fast Prime was used by Infineon and installed as a firmware library in many of their crypto hardware devices, such as smart cards and Trusted Platform Module (TPM) chips. It’s an unfortunate irony that the organizations that will be impacted most by ROCA are the ones that consciously tried to do key generation in the most secure way, i.e., in dedicated hardware. Anyone generating keys in software (for example, with OpenSSL) is fine.
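
Generating an RSA prime means drawing random odd candidates and testing each one for primality; most candidates fail, which is where the CPU time goes. Here's a minimal sketch of the standard safe approach – Miller-Rabin trials over random candidates, not Infineon's flawed Fast Prime shortcut, whose internals aren't reproduced here:

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # 'a' witnesses that n is composite
    return True

def random_prime(bits: int) -> int:
    # Keep drawing random odd candidates (top bit forced so the
    # result has exactly 'bits' bits) until one passes the test.
    while True:
        c = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(c):
            return c
```

Near 1024 bits only about one odd candidate in 350 is prime, so the loop above runs hundreds of times per prime – exactly the cost that Fast Prime tried to shortcut.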

The good news is that the vulnerability is easy to detect. But the bad news is that the vulnerability is easy to detect. Normally, when key generation goes wrong, for example if there is insufficient entropy to generate keys that are actually random, it can be hard for an opportunistic attacker to exploit the weakness, since vulnerable keys are indistinguishable from secure keys. The attacker has to spend a lot of effort just to find vulnerable systems before they even start to exploit them. In the case of ROCA it takes only milliseconds to determine if the certificate is weak. That’s a big deal because weakness is something the attacker can, and will, scan for.
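
The reason detection is so fast: vulnerable moduli have a telltale structure – modulo each of various small primes, they land only on residues generated by 65537. Here's a simplified, illustrative version of that fingerprint check; the prime set below is an assumption for demonstration, not the researchers' exact parameters:

```python
def in_subgroup(n: int, p: int, g: int = 65537) -> bool:
    # Enumerate the residues {g^0, g^1, ...} mod p, then test membership.
    seen, x = set(), 1
    while x not in seen:
        seen.add(x)
        x = (x * g) % p
    return n % p in seen

def looks_vulnerable(n: int, primes=(11, 13, 17, 19, 37)) -> bool:
    # Illustrative prime set: a modulus that passes for every small prime
    # is suspicious; a single miss proves the key was not generated by
    # the flawed routine.
    return all(in_subgroup(n, p) for p in primes)
```

Each extra prime shrinks the false-positive rate, which is how the real detector can scan a certificate in milliseconds with negligible chance of an accidental match.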

The ROCA vulnerability is different from other scare stories and offers several important lessons:

  1. You don’t always get what you pay for – The victims in this story weren’t trying to cut corners or caught tripping over their own bugs. Instead they were trying to go the extra mile by investing in crypto hardware. It’s a double whammy: these organizations put this protection in place presumably because they have something worth protecting, which is now at risk – a risk compounded by the fact that the vulnerability can be easily spotted and targeted.
  2. Keys are a single point of failure – Low-level tasks like entropy gathering and key generation are often taken for granted but are actually single points of failure that can bring down the whole crypto house of cards. They go unmonitored and unmanaged, but when they hit, they hit big. It’s time to pay attention to your keys: knowing where they come from is as important as controlling how they are stored and managed.
  3. Embedded components have massive but unknown footprints – Infineon chips are sold to equipment vendors. End-user organizations don’t buy them or even know they have them, which means you might have to test every certificate for vulnerability – and guess what, you have way more certificates than you think, and they’re in places you might not expect or have access to.
  4. Remediation is easier said than done – Although it’s easy to test for the vulnerability (assuming you know how to find embedded certificates in systems you’ve never looked at before), updating firmware in proprietary devices and embedded systems will likely be a frustrating and expensive task. The situation is exacerbated by the fact that the devices and systems in question are designed to present a higher security posture and so have extra controls in place to prevent just the sort of changes that you need to make.
  5. The impact goes way beyond data theft; the infrastructure is at risk – The ability to find private keys opens the potential to fake signatures and credentials, not just decrypt data. If you can fake code-signing signatures you can corrupt the infrastructure itself – bad news for the IoT: Stuxnet for the masses.
  6. What value certifications? – It’s interesting to note that the affected Infineon chips were proudly marketed as FIPS 140 and Common Criteria certified, which raises obvious questions about the value of those particular certifications, and certification schemes in general. We all know that certification schemes can often lag the actual market threat, but in this case the vulnerabilities hit mature and supposedly tightly scrutinized functions, and yet they went undetected. When vendors routinely push labs to speed up the certification process and standards bodies entertain the idea of self-certification, are we missing the point?
  7. There’s a right way to announce vulnerabilities – It’s tempting for anyone that discovers a vulnerability to immediately spill the beans and claim their 15 minutes of glory. But there’s a more responsible approach, and in this case the researchers seem to have got it right.
    1. Inform the vendors, and only the vendors
    2. Give them a deadline with enough time to create a patch
    3. Only go public about the vulnerability once a patch exists (or if the vendors have ignored the issue, as a last resort)
    4. Give end users the tools to assess the risk and a realistic window to deploy the update
    5. Hold off from explaining the details of the exploit until (most) end users have had a chance to fix the issue

Looking further to the future, the ROCA vulnerability will feel eerily familiar to anyone tracking the threat posed by quantum computers. In a scenario sometimes called the ‘cryptapocalypse’, quantum computers are expected to be able to execute Shor’s algorithm, something that regular computers (thankfully) can’t do. Shor’s algorithm enables a private key to be calculated from a public key – sound familiar? Painful as it will be, the ROCA vulnerability is nowhere near as far-reaching as the quantum threat, but it serves to illustrate the issue. By way of contrast, the quantum threat is thought to impact all common asymmetric algorithms, not just RSA. It also applies to every device or application and isn’t limited to specific chips or implementations. Worse still, quantum resistance might not be achieved through a simple software upgrade. The good news is that the quantum threat hasn’t materialized, yet.
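
To see why ‘private key from public key’ is game over, here's a toy illustration with a modulus small enough to factor by trial division – exactly the step Shor's algorithm would make feasible at real key sizes (the numbers below are arbitrary demo values, nothing more):

```python
from math import isqrt

p, q, e = 1009, 1013, 65537   # toy primes; real keys use ~1024-bit primes
n = p * q                     # the "public" part of the keypair

def factor(n: int):
    # Classical trial division: instant here, hopeless at 2048 bits.
    for f in range(3, isqrt(n) + 1, 2):
        if n % f == 0:
            return f, n // f

fp, fq = factor(n)
d = pow(e, -1, (fp - 1) * (fq - 1))   # the recovered private exponent
```

With d in hand the attacker can decrypt or sign as the key owner; ROCA reaches the same result for affected keys via Coppersmith's method rather than brute factoring.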

Anyway, back to the here and now: what the ROCA vulnerability shows us is that crypto isn’t just about the algorithms. It really is all about the keys and, in this case, how they are generated. Key generation represents a single point of failure, and the failure is likely to be absolute: once a key is broken the game is up and trust is lost. As crypto becomes ubiquitous in securing the internet, clouds, mobile and the IoT, we can’t just take key generation for granted. We’ve historically judged crypto strength by how long the keys are. Maybe it’s time to start asking how good the keys are and not just how many bits they contain. Anything less than true randomness is a risk, and the cost of failure can be immense.

By Richard Moulds

We’re thrilled that one of our lead collaborators, Ray Newell of Los Alamos National Laboratory, received the 2016 Richard P. Feynman Innovation award for his leadership in the development and commercialization of Whitewood’s Entropy Engine.

I was excited to join Ray and his family at the annual awards ceremony at Los Alamos where the Deputy Director and CTO of the lab led the celebration.

Ray leads Los Alamos National Laboratory’s Quantum Communications team and played a pivotal role in helping us turn groundbreaking work in quantum entropy and random number generation into a commercially available cybersecurity product for mainstream enterprises.

The collaboration between Los Alamos and Whitewood is also a fine example of the merits of the technology transfer model and how it can be successful through collaboration, innovation and perseverance.

Entropy made easy


By Richard Moulds

We all love services. Throughout history we’ve consumed services when we either can’t do or prefer not to do something ourselves. When you think about it, it’s quite surprising how little we actually do for ourselves!

That same attitude is now well established in the world of corporate IT, where the ‘as-a-service’ model has a heck of a lot going for it. Cloud services are probably cheaper, more flexible and more reliable than doing it yourself. In short, services are just easier.

Unfortunately, ‘easy’ is not a word that springs to mind when we think about crypto and key management. My colleagues and I at Whitewood are trying to change that with our new entropy-as-a-service offering at

It’s probably true that most security pros never give entropy a second thought. There’s a general awareness that entropy is what makes random numbers random but few have the time to worry about where it comes from, how it’s used and what’s the difference between something working and something not working – something safe and something not. 

But there’s a growing sense that entropy and randomness are topics that deserve our attention and even some action. NIST is working on a new set of standards, and the SANS Institute, which produces an annual prediction of the most dangerous attacks for the coming year, included weak random number generation in its list of the top seven threats (I wrote about the SANS prediction here).

Recognizing the threats associated with entropy and random numbers is one thing – but doing something about them is quite another. It’s a poorly documented topic and hard to know where you would even start. Random number generators are buried in the depths of the operating system, there are virtually no tools to reliably measure the quality of the random numbers they generate, and no alarm bells go off when something goes wrong.

The very nature of random numbers means that fixing randomness and entropy starvation is not something that can be done reactively. If we could simply generate lots of keys and throw away the ones that aren’t very random life would be good, but sadly it doesn’t work that way. When it comes to improving random numbers we have to be proactive.

But as we all know, being proactive is tricky when there are always so many other things that we need to react to. Proactive measures have to be easy, otherwise they never happen. How many of us would proactively take the flu shot if it meant a week of special diets and rigorous exercise?

That brings me back to entropy-as-a-service. Wouldn’t it be nice if your applications, and particularly your crypto applications (SSL/TLS, SSH, encryption, payments, PKI, DRM, blockchain to name just a few), could get access to pure quantum entropy all the time? It would be even better if the quality of that entropy was independent of the machines those apps were running on, and better still if it didn’t require plugging in new hardware or changing a line of code. Well now, that capability exists, and best of all, it’s free!

Try it out yourself! Head over to, download our netRandom client and start streaming your own quantum entropy for free. The received entropy is fed directly into the Linux entropy pool where it’s used to rapidly re-seed existing OS-based random number generators such as /dev/random and /dev/urandom (don’t worry, we’ll have the same thing for Windows very soon).
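
If you're curious about the pool the client feeds, Linux exposes the kernel's running entropy estimate through procfs. A small sketch to read it (note that kernels from around 5.6 onward report a fixed value, so what you see will vary by system):

```python
def entropy_available(path: str = "/proc/sys/kernel/random/entropy_avail"):
    """Return the kernel's current entropy estimate in bits,
    or None if the interface isn't available (e.g. non-Linux)."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return None
```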

One of the nice things about entropy is that it’s always additive; you don’t have to rely on any single entropy source. Network-delivered entropy from Whitewood acts as a supplementary source to be combined with whatever local entropy you already have. It spreads your risk, boosts quality and brings consistency across VMs, containers, devices – whatever and wherever.
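
That additive property is easy to realize in practice: hash the sources together, and the combined seed is unpredictable as long as any one input is. A minimal sketch, with os.urandom standing in for both the local pool and a network-delivered source:

```python
import hashlib
import os

def combine_entropy(*sources: bytes, out_len: int = 32) -> bytes:
    # Length-prefix each source so distinct input splits can't collide
    # by concatenation, then hash everything down to a fixed-size seed.
    h = hashlib.sha256()
    for s in sources:
        h.update(len(s).to_bytes(4, "big"))
        h.update(s)
    return h.digest()[:out_len]

local = os.urandom(32)     # whatever local entropy you already have
network = os.urandom(32)   # stand-in for a network-delivered source
seed = combine_entropy(local, network)
```

The design point: mixing can only help. Even if the network source were somehow compromised, the output is no weaker than the local source alone, and vice versa.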

For the first time you’ll be able to track the total entropy you consumed, how demand changed over time and measure the randomness of the entropy you received, all from your personal admin page on

At the risk of overusing my flu shot analogy, think of the quantum entropy streamed from as inoculating your existing systems against making weak keys. Like any proactive measure, our new entropy-as-a-service is focused on peace of mind, instilling the confidence that your crypto and non-crypto applications alike have access to true random numbers whenever they need them. It’s as easy as that.

Random – Ransom

By Richard Moulds

For some of us the RSA conference seems like a long time ago but amid all of the hype, some interesting points stood out. One particular session that jumps to mind was a keynote by the SANS Institute with the irresistible title of “The Seven Most Dangerous New Attack Techniques.”

Not surprisingly the Internet of Things (IoT) and ransomware were high on their list. Ransomware is already one of the most successful forms of attack, the modern equivalent of paying protection money to the mob. Attackers love it because it’s easy and effective. They don’t have the hassle (and risk) of actually having to steal anything and even better, most victims (apparently two-thirds) quietly pay up. What’s even more appealing is that bitcoin is the preferred method of payment which keeps everything wonderfully anonymous, not to mention safe. No bags of cash changing hands in the dead of night.

The SANS list of predictions gets interesting when they take the logical next step and conflate the two threats: ransomware applied to the IoT. It’s probably safe to assume that the one-third of current ransomware victims that don’t pay up are the ones that had the foresight or good fortune to keep a spare copy of their data safe, a copy that would remain unlocked.

I’m sure these folks quite rightly feel like they dodged a bullet. But they might not be so lucky when ransomware hits their IoT. People don’t keep a spare car just in case they can’t start their regular car in the morning, or maintain a spare building in case the elevators stop working, or build a spare power grid, or implant a spare pacemaker – you get the picture. Keeping backups of ‘things’ is much more expensive than keeping backups of data. I think SANS called it right; ransomware in the IoT is likely to be a big deal.

Another of their “seven deadly attacks” is weak random number generators, a subject close to my heart. Johannes Ullrich explained the concern: if computers can’t generate random numbers that are truly random, how can they be trusted to make good keys for crypto? If your keys start to become predictable, even only a little bit predictable, then your crypto becomes weaker and your data easier to steal.

Like ransomware, an attack using weakened random numbers is potentially very attractive. In this case it’s attractive because weak random numbers are essentially undetectable. A computer with a weak random number generator is indistinguishable from one with a true random number generator. This means that an attack on random numbers is no smash and grab; it’s an attack that keeps on giving – the perfect backdoor.
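
Here's a toy demonstration of why this is ‘the attack that keeps on giving’: if a device seeds its generator from something the attacker can bound – say, a coarse boot timestamp – the ‘secret’ key falls to a simple seed search. (Python's random module plays the part of the weak generator, and the timestamp is a made-up value; real key generation should use a CSPRNG.)

```python
import random

def weak_keygen(seed: int) -> bytes:
    # A device that seeds its PRNG with a low-resolution timestamp.
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(16))

secret_key = weak_keygen(1_700_000_123)   # victim generates a "random" key

def recover(key: bytes, start: int, stop: int):
    # Attacker replays every plausible seed until the key matches.
    for s in range(start, stop):
        if weak_keygen(s) == key:
            return s
    return None

found = recover(secret_key, 1_700_000_000, 1_700_001_000)
```

A 128-bit key should take 2^128 guesses; here the effective search space is the seed window – a thousand guesses – and nothing about the key itself betrays the weakness.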

A weakness with random number generation is already scary enough on computers, but just like with ransomware, the threat gets dramatically amplified in the context of the IoT. If you think that sounds far-fetched, check out my post from last month, when Siemens building controllers were spotted using the same keys for their SSL connections due to low randomness.

OK, now for the irony. Have you ever wondered where the ransomware attackers get their random numbers?

The whole premise of ransomware is that it’s infeasible to crack the attacker’s encryption. The only way to get your data back is to pay. But if you can guess their key you can dodge their fee (sorry, I couldn’t resist the alliteration). Ransomware is an interesting example of where both the good guys and the bad guys are using the same tools, in this case crypto. The algorithms are not the issue, it’s the keys that count. The question is, who pays the most attention to making sure their keys are truly random, us or them? I’ve got a sneaking suspicion it might not be us.

If you want to read more about the SANS seven deadly threats this ZDNet article is a good start.

Weak encryption keys in the IoT


Last Christmas, while everyone was asking for their favorite IoT device (think Alexa), Siemens was busy patching a bug. Of course, the industrial IoT isn’t about Echos and Dots. It’s all about much more mundane devices; gray boxes that are easily taken for granted, quietly doing their thing – hopefully securely. Well, it turns out that many of them aren’t. Researchers at the University of Pennsylvania have identified dozens of vendors that provide IoT ‘things’ and other networking gear that unfortunately use weak keys to encrypt the data they share. Siemens, to their credit, actually did something to fix it. The research started a few years ago and was recently repeated; it found serious issues with entropy generation.

Cast your mind back to those high school physics lessons and recall that entropy is the measure of randomness and that randomness is what we desperately need when we make crypto keys. When we think about SSL (yes, I know I should call it TLS but I just can’t shake the habit) it’s easy to focus on the pain associated with buying and managing the certificates and picking the right algorithms and key lengths. But how many of us think about randomness? Without good random numbers you can’t make good keys. Anything less than true randomness introduces risk. The trouble is that almost no-one gives much thought to where their random numbers come from and that’s what tripped up Siemens, and many others.

Almost all random numbers come from the operating system. The problem is that software can’t generate true random numbers. When software does something random, we call it a bug, not a feature! The best that software can do is make ‘pseudo-random numbers’. The good news is that these numbers can actually be much more random than their name would imply. But this depends on the OS having a sufficient source of high-quality entropy to ‘seed’ the random number generation process. Fortunately, entropy is everywhere, it’s all around us; the big question is how do operating systems and applications get their digital hands on it? Well, it turns out that this is not as easy or common as we would all like to believe.
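
The failure mode behind stories like the Siemens one is easy to reproduce in miniature: two devices that boot with the same predictable state mint identical keys. A toy sketch, with random.Random standing in for a keygen routine seeded from a starved entropy pool:

```python
import random

def device_keygen(boot_entropy: int) -> int:
    # Derive a 64-bit "key" from whatever seed material the device
    # gathered at boot; with no real entropy, that seed is predictable.
    rng = random.Random(boot_entropy)
    return rng.getrandbits(64)

key_a = device_keygen(0)   # device A, fresh out of the box
key_b = device_keygen(0)   # device B, same firmware, same empty pool
# key_a == key_b: both devices end up sharing one "unique" key
```

This is exactly why headless embedded boxes are the worst case: no keyboard, no disk activity, no boot-to-boot variation – so identical seeds, and identical SSL keys, across an entire product line.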

Believe it or not there are war stories of ingenious IT security folk using lava lamps and cameras to capture entropy. That’s fine (unless someone turns off the light) but it’s hard to scale to the IoT and it hardly fits the virtualized philosophy of the cloud where you know virtually nothing about the hardware your stuff actually runs on.

If you stand back you can see the dilemma. We’re doing more and more crypto and need longer and longer keys and yet we run our applications in environments where there is less and less entropy. Not surprisingly, the problem of entropy starvation is starting to be exposed. Now, in the case of Siemens, we’re only talking about air conditioning controllers but does anyone really think that the boxes that control the elevators, our mass transit systems, our power grids and one day our driver-less cars are really any smarter?

There’s much talk about IoT security. Chip vendors, OS providers and application developers are all doing their best, but entropy is one of those issues that has implications up and down the entire stack. As new IoT (and cloud) security standards get written let’s hope that entropy doesn’t fall through the cracks.

Here’s a link to the full article if you’re interested:

Whitewood netRandom Demonstration

This video introduces the role of entropy and random number generation, particularly in the context of cryptographic security. It demonstrates the random number generation capabilities of Linux (/dev/random and /dev/urandom) and shows how the netRandom product from Whitewood can be deployed to improve entropy generation capabilities across corporate data centers and cloud services.

For further information about the netRandom product, click here.

The video is presented by Richard Moulds, Vice President of Strategy at Whitewood.

Whitewood is a subsidiary of Allied Minds Federal Innovations, the division of Allied Minds dedicated to commercializing U.S. federal intellectual property. Allied Minds is an innovative U.S. science and technology development and commercialization company. Operating since 2006, Allied Minds forms, funds, manages and builds products and businesses based on innovative technologies developed at leading U.S. universities and federal research institutions. Allied Minds serves as a diversified holding company that supports its businesses and product development with capital, central management and shared services. More information about the Boston-based company can be found at