Entropy made easy

By Richard Moulds

We all love services. Throughout history we’ve consumed services when we either can’t do or prefer not to do something ourselves. When you think about it, it’s quite surprising how little we actually do for ourselves!

That same attitude is now well established in the world of corporate IT, where the ‘as-a-service’ model has a heck of a lot going for it. Cloud services are probably cheaper, more flexible and more reliable than doing it yourself. In short, services are just easier.

Unfortunately, ‘easy’ is not a word that springs to mind when we think about crypto and key management. My colleagues and I at Whitewood are trying to change that with our new entropy-as-a-service offering at getnetrandom.com.

It’s probably true that most security pros never give entropy a second thought. There’s a general awareness that entropy is what makes random numbers random, but few have the time to worry about where it comes from, how it’s used, or what separates a generator that’s working, and safe, from one that isn’t.

But there’s a growing sense that entropy and randomness are topics that deserve our attention and even some action. NIST is working on a new set of standards, and the SANS Institute, which produces an annual prediction of the most dangerous attacks for the coming year, included weak random number generation in its list of the top seven threats (I wrote about the SANS prediction here).

Recognizing the threats associated with entropy and random numbers is one thing – doing something about them is quite another. It’s a poorly documented topic, and it’s hard to know where you would even start. Random number generators are buried in the depths of the operating system, there are virtually no tools to reliably measure the quality of the random numbers they generate, and no alarm bells go off when something goes wrong.
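To make that concrete: there’s no standard alarm, but on Linux you can at least poll the kernel’s own entropy estimate and complain when it runs low. Here’s a minimal sketch in Python (my own illustration with a hypothetical threshold – and bear in mind the estimate is just the kernel’s bookkeeping, not a true measure of output quality):

    # Crude entropy 'alarm': poll the Linux kernel's entropy estimate.
    import time

    THRESHOLD_BITS = 256  # hypothetical alarm level for this sketch

    def entropy_avail():
        # The kernel's running estimate of entropy in its pool, in bits.
        with open("/proc/sys/kernel/random/entropy_avail") as f:
            return int(f.read())

    while True:
        bits = entropy_avail()
        if bits < THRESHOLD_BITS:
            print("WARNING: kernel entropy estimate is low:", bits, "bits")
        time.sleep(5)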

The very nature of random numbers means that fixing randomness and entropy starvation is not something that can be done reactively. If we could simply generate lots of keys and throw away the ones that aren’t very random, life would be good, but sadly it doesn’t work that way. When it comes to improving random numbers we have to be proactive.

But as we all know, being proactive is tricky when there are always so many other things that we need to react to. Proactive measures have to be easy, otherwise they never happen. How many of us would proactively take the flu shot if it meant a week of special diets and rigorous exercise?

That brings me back to entropy-as-a-service. Wouldn’t it be nice if your applications, and particularly your crypto applications (SSL/TLS, SSH, encryption, payments, PKI, DRM, blockchain to name just a few) could get access to pure quantum entropy all the time? It would be even better if the quality of that entropy was independent of the machines those apps were running on, and better still if it didn’t require plugging in new hardware or changing a line of code. Well now, that capability exists, and best of all, it’s free!

Try it out yourself! Head over to getnetrandom.com, download our netRandom client and start streaming your own quantum entropy for free. The received entropy is fed directly into the Linux entropy pool where it’s used to rapidly re-seed existing OS-based random number generators such as /dev/random and /dev/urandom (don’t worry, we’ll have the same thing for Windows very soon).
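If you’re curious how entropy gets into the kernel pool in the first place, Linux exposes an ioctl (RNDADDENTROPY) that lets a privileged process contribute bytes and credit the pool. The sketch below is my own Python illustration of that general mechanism – it is not Whitewood’s actual client code, and it needs root:

    # Sketch: contributing entropy to the Linux kernel pool via the
    # RNDADDENTROPY ioctl. Illustrative only -- not the netRandom client.
    import fcntl
    import os
    import struct

    RNDADDENTROPY = 0x40085203  # _IOW('R', 0x03, int[2]) on common platforms

    def add_entropy(raw, entropy_bits):
        # struct rand_pool_info { int entropy_count; int buf_size; __u32 buf[]; }
        buf = struct.pack("ii", entropy_bits, len(raw)) + raw
        with open("/dev/random", "wb") as f:
            fcntl.ioctl(f, RNDADDENTROPY, buf)

    # Pretend these 64 bytes arrived from a network entropy service:
    add_entropy(os.urandom(64), entropy_bits=512)

The kernel mixes the supplied bytes into its pool and credits the stated number of bits, which is what allows /dev/random and /dev/urandom to be re-seeded quickly.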

One of the nice things about entropy is that it’s always additive; you don’t have to rely on any single entropy source. Network-delivered entropy from Whitewood acts as a supplementary source to be combined with whatever local entropy you already have. It spreads your risk, boosts quality and brings consistency across VMs, containers, devices – whatever and wherever.

For the first time you’ll be able to track the total entropy you’ve consumed, see how demand changes over time, and measure the randomness of the entropy you’ve received, all from your personal admin page on getnetrandom.com.

At the risk of overusing my flu shot analogy, think of the quantum entropy streamed from getnetrandom.com as inoculating your existing systems against making weak keys. Like any proactive measure, our new entropy-as-a-service is focused on peace of mind, instilling the confidence that your crypto and non-crypto applications alike all have access to true random numbers whenever they need them. It’s as easy as that.

Random – Ransom

By Richard Moulds

For some of us the RSA conference seems like a long time ago but amid all of the hype, some interesting points stood out. One particular session that jumps to mind was a keynote by the SANS Institute with the irresistible title of “The Seven Most Dangerous New Attack Techniques.”

Not surprisingly the Internet of Things (IoT) and ransomware were high on their list. Ransomware is already one of the most successful forms of attack, the modern equivalent of paying protection money to the mob. Attackers love it because it’s easy and effective. They don’t have the hassle (and risk) of actually having to steal anything and even better, most victims (apparently two-thirds) quietly pay up. What’s even more appealing is that bitcoin is the preferred method of payment which keeps everything wonderfully anonymous, not to mention safe. No bags of cash changing hands in the dead of night.

The SANS list of predictions gets interesting when they take the logical next step and combine the two threats: ransomware applied to the IoT. It’s probably safe to assume that the one-third of current ransomware victims who don’t pay up are the ones that had the foresight or good fortune to keep a spare copy of their data safe, a copy that would remain unlocked.

I’m sure these folks quite rightly feel like they dodged a bullet. But they might not be so lucky when ransomware hits their IoT. People don’t keep a spare car just in case they can’t start their regular car in the morning, or maintain a spare building in case the elevators stop working, or build a spare power grid, or implant a spare pacemaker – you get the picture. Keeping backups of ‘things’ is much more expensive than keeping backups of data. I think SANS called it right; ransomware in the IoT is likely to be a big deal.

Another of their “seven deadly attacks” is weak random number generators, a subject close to my heart. Johannes Ullrich explained the concern: if computers can’t generate truly random numbers, how can they be trusted to make good keys for crypto? If your keys start to become predictable, even only a little bit predictable, then your crypto becomes weaker and your data easier to steal.

Like ransomware, an attack using weakened random numbers is potentially very attractive. In this case it’s attractive because weak random numbers are essentially undetectable. A computer with a weak random number generator is indistinguishable from one with a true random number generator. This means that an attack on random numbers is no smash and grab; it’s an attack that keeps on giving – the perfect backdoor.

A weakness in random number generation is already scary enough on computers, but just like with ransomware the threat gets dramatically amplified in the context of the IoT. If you think that sounds far-fetched, check out my post from last month, when Siemens building controllers were spotted using the same keys for their SSL connections due to low randomness (http://whitewoodsecurity.com/weak-encryption-keys-iot/).

OK, now for the irony. Have you ever wondered where the ransomware attackers get their random numbers?

The whole premise of ransomware is that it’s infeasible to crack the attacker’s encryption. The only way to get your data back is to pay. But if you can guess their key you can dodge their fee (sorry, I couldn’t resist the alliteration). Ransomware is an interesting example of where both the good guys and the bad guys are using the same tools, in this case crypto. The algorithms are not the issue, it’s the keys that count. The question is, who pays the most attention to making sure their keys are truly random, us or them? I’ve got a sneaking suspicion it might not be us.

If you want to read more about the SANS seven deadly threats this ZDNet article is a good start.

Weak encryption keys in the IoT

Last Christmas, while everyone was asking for their favorite IoT device (think Alexa), Siemens was busy patching a bug. Of course, the industrial IoT isn’t about Echos and Dots. It’s all about much more mundane devices; gray boxes that are easily taken for granted, quietly doing their thing – hopefully securely. Well, it turns out that many of them aren’t. Researchers at the University of Pennsylvania have identified dozens of vendors of IoT ‘things’ and other networking gear that unfortunately use weak keys to encrypt the data they share. Siemens, to their credit, actually did something to fix it. The research started a few years ago and was recently repeated; it found serious issues with entropy generation.

Cast your mind back to those high school physics lessons and recall that entropy is the measure of randomness, and that randomness is what we desperately need when we make crypto keys. When we think about SSL (yes, I know I should call it TLS but I just can’t shake the habit) it’s easy to focus on the pain associated with buying and managing the certificates and picking the right algorithms and key lengths. But how many of us think about randomness? Without good random numbers you can’t make good keys. Anything less than true randomness introduces risk. The trouble is that almost no one gives much thought to where their random numbers come from, and that’s what tripped up Siemens, and many others.

Almost all random numbers come from the operating system. The problem is that software can’t generate true random numbers. When software does something random we call it a bug, not a feature! The best that software can do is make ‘pseudo-random numbers’. The good news is that these numbers can actually be much more random than their name would imply. But this depends on the OS having a sufficient source of high-quality entropy to ‘seed’ the random number generation process. Fortunately, entropy is everywhere, it’s all around us; the big question is how do operating systems and applications get their digital hands on it? Well, it turns out that this is not as easy or common as we would all like to believe.
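To see why the seed matters so much, here’s a toy illustration in Python. The random module isn’t a cryptographic generator and no real key generator should work this way, but it shows the failure mode: two ‘devices’ that start from the same weak seed mint identical keys.

    # Toy illustration: a PRNG is deterministic -- same seed, same output.
    import random

    device_a = random.Random(42)  # both 'devices' scrape up the same weak seed
    device_b = random.Random(42)

    key_a = device_a.getrandbits(128)
    key_b = device_b.getrandbits(128)

    print(key_a == key_b)  # True: identical 'keys' on supposedly distinct devices

That, in essence, is how fleets of devices end up sharing SSL keys: they boot into near-identical states and harvest near-identical seeds.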

Believe it or not there are war stories of ingenious IT security folk using lava lamps and cameras to capture entropy. That’s fine (unless someone turns off the light) but it’s hard to scale to the IoT and it hardly fits the virtualized philosophy of the cloud where you know virtually nothing about the hardware your stuff actually runs on.

If you stand back you can see the dilemma. We’re doing more and more crypto and need longer and longer keys and yet we run our applications in environments where there is less and less entropy. Not surprisingly, the problem of entropy starvation is starting to be exposed. Now, in the case of Siemens, we’re only talking about air conditioning controllers but does anyone really think that the boxes that control the elevators, our mass transit systems, our power grids and one day our driver-less cars are really any smarter?

There’s much talk about IoT security. Chip vendors, OS providers and application developers are all doing their best, but entropy is one of those issues that has implications up and down the entire stack. As new IoT (and cloud) security standards get written let’s hope that entropy doesn’t fall through the cracks.

Here’s a link to the full article if you’re interested: https://threatpost.com/siemens-patches-insufficient-entropy-vulnerability-in-ics-systems/122699/

Whitewood netRandom Demonstration

This video introduces the role of entropy and random number generation, particularly in the context of cryptographic security. It demonstrates the random number generation capabilities of Linux (/dev/random and /dev/urandom) and shows how the netRandom product from Whitewood can be deployed to improve entropy generation capabilities across corporate data centers and cloud services.

For further information about the netRandom product, click here.

The video is presented by Richard Moulds, Vice President of Strategy at Whitewood.

Key generation – who’s really responsible?

When we think about cryptographic keys we tend to think about closely guarded secrets. Keys are the only thing that separates the attacker from your encrypted data. Some keys really are treated with the appropriate level of respect. Those of you in the payments industry, or those that have deployed a PKI, know all too well about the importance of auditing key management processes – in some cases with full-blown key ceremonies.

But I’m focused here on all of the other keys: the billions of keys that are created on the fly, automatically, every second. The ones used in SSL, SSH, file and disk encryption and a thousand other applications. How are they created and who is responsible for making sure that they are good enough to do their job? How do we make sure that the generation of these keys isn’t taken for granted?

When I talk about keys being ‘good enough’, what I mean is, are they truly random? When keys are less than perfectly random they start to become predictable, and predictability is the enemy of all cryptography – it makes the attacker’s job a lot easier.

So, where do these random numbers come from? In almost all cases they are software-generated. The trouble is that software only does what it’s programmed to do; it doesn’t do random things.

Ironically, erratic behavior is normally called a bug. To trigger behavior that is actually random, the software normally scavenges randomness (more properly called entropy) from wherever it can, ideally by sampling some aspect of its physical environment. Entropy can come from many sources, some better (more random) than others. Everything from user mouse clicks to video signals to timing jitter in the hardware can yield entropy.
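As a flavor of what scavenging looks like, here’s a toy jitter sampler in Python – my own sketch, nowhere near the careful collection and accounting a real operating system does:

    # Toy entropy scavenger: sample scheduler/timing jitter, then condition
    # the raw samples with a hash. Real systems are far more careful.
    import hashlib
    import time

    def jitter_entropy(samples=1024):
        raw = bytearray()
        for _ in range(samples):
            t0 = time.perf_counter_ns()
            time.sleep(0)  # yield; rescheduling time varies unpredictably
            raw += (time.perf_counter_ns() - t0).to_bytes(8, "little")
        return hashlib.sha256(bytes(raw)).digest()

    print(jitter_entropy().hex())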

The trouble is that capturing entropy and converting it into statistically random numbers (equal numbers of independent ones and zeros) is not easy. For this reason, few software developers write their own random number generators. Instead they use shared services provided by the operating system – for example, one of the most widely used is /dev/urandom in Linux. Of course this now means that all applications on the same host compete for the single supply of shared randomness. It becomes the operating system’s job to gather sufficient entropy to meet those needs.
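In practice, ‘using the shared service’ looks as simple as this (Python shown for illustration; every language has an equivalent wrapper around the OS source):

    import os
    import secrets

    aes_key = os.urandom(32)                   # 256 bits from the OS CSPRNG
    session_token = secrets.token_urlsafe(16)  # same underlying source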

What becomes clear is that random number generation spans the entire stack from hardware to OS to application, and very often with a virtualization layer spliced in between. The various layers are often ‘owned’ by different people or teams. They are designed and often deployed independently, which raises the question, “Who owns the job of making sure that random numbers are done right?”

The hardware guys have no idea what applications will run on any given box or how much entropy each will require. The OS has no idea how many random numbers will be required or how to prioritize individual applications (you’d like to believe the crypto apps get the best random numbers). And the applications have no idea if they are getting what they asked for or have the ability to raise alarms if they don’t.

The reality is that each successive layer in the stack assumes that everything below it (the hardware, the OS, etc.) is doing its job in creating, capturing and processing entropy. Worse still, the measurements for assessing the quality of entropy and randomness are notoriously unreliable, so in practice there’s no easy way to find out if the various other layers are doing their job. The end result is that the application makes keys but no one can attest to their quality – either in real time or retrospectively.

It would be nice to think that the security team will save the day. After all, it is their job to take a holistic view. But is that realistic? How many security teams know the specifics of how individual applications are designed and what randomness services are employed? How can they possibly know how commercial software or security appliances work at that level of detail? Could a CISO ever answer the question of how many VMs are running at any point in time, never mind what proportion of them are satisfying the entropy demands of their crypto apps? How many organizations have a policy about such apparently mundane tasks as generating random numbers?

Actually, some really do. They might require product security certifications such as FIPS 140, which includes RNG requirements, and a subset of these invest in dedicated devices such as hardware security modules (HSMs). But now we are in the territory of those special, regulated applications I mentioned at the beginning.

If we return to the mainstream – the millions of SSL stacks whirring away across the datacenter, the SSH keys generated on almost every system, the corporate web of VPNs – we need a generic solution, a solution that deals with random number generation and entropy on a grand scale. It will soon be hard to find an application that doesn’t need random numbers and most will need crypto strength randomness. Entropy sourcing and random number generation shouldn’t be left to individual boxes and VMs to do the best they can. It should be independent of the platform and environment.

Poor random number generation is a basic hygiene issue and it should be addressed through a utility, as a standard of due care.

Crypto – good, bad or something in-between?

When we think about cryptography, we tend to think in black and white. Cryptography feels definitive, binary, clear-cut. Data is either encrypted or it isn’t, readable or not – useful or useless. Digital signatures either validate or they don’t – messages are authentic or not. And digital certificates can be authenticated or they can’t. You get my point: there’s no middle ground. Data isn’t half encrypted – there are no shades of grey.

Regulators jumped on this apparent clarity. Things that are true or false can be easily written into requirements and laws and are easily audited. Think about data breach disclosure laws – the obligation to tell people when you lose their data. In almost all cases only one exemption is allowed: if the data is encrypted then you are off the hook. The presumption is that even though the data was lost, it’s useless to anyone who finds it or stole it, so there’s no problem.

But few things in life are so cut and dried. Other than life and death, most things hover somewhere between good and bad, or working and broken. In the world of IT security, we see shades of grey everywhere: malware might be spotted, firewalls might stop attacks, logs might get examined and alarms might be responded to. Almost all security tools are situational, far too mushy to be embodied in laws and regulation – instead they land in the vague world of best practice. The bad news is that crypto is no different.

But hang on – there really aren’t that many choices with crypto, right? We all know the standard list of algorithms (AES, RSA, ECC etc.). It’s a pretty short list – who can name more than five? Anyone that strays from the list is asking for trouble, and it only changes once every 10 years or so – it really isn’t that hard to track. Then there’s key size, but again there’s not much room for choice. Key sizes normally come in convenient powers of two, and there are only a few viable options when you factor in performance impact and the guidance from bodies such as NIST. So where are the shades of grey? It’s all about the keys.

In reality there is far more scope for disastrous decisions when it comes to key management. This is when crypto actually does become binary. If the attacker gets the key, the game is up. Security evaporates in an instant – no half measures. Sure, some keys might only yield a single message, but some are used for years, protect enormous volumes of data or guard intellectual property that can bring down a whole company if ever exposed.

Key management is more about processes and people than algorithms and standards, and this is where the shades of grey creep in. Attackers can steal keys, control keys, guess keys or calculate keys. Each of these threats deserves a blog all to itself, but at a high level I’ll make a grand statement: stealing and controlling keys gets harder over time, while guessing and calculating keys gets easier over time. Let me explain. We ought to be able to agree that our enormous investment in security tools and education every year really should be making it harder to steal keys (if not, why are we bothering?). But, on the other hand, calculating keys must inevitably get easier over time, if only because computers get faster and attackers get smarter – that’s why we periodically increase key length. This is also the reason for the paranoia about quantum computers, which, it turns out, are likely to be very good at calculating keys.

All of these threats to keys reduce the effective security of crypto systems. Just because you’re using AES with 256-bit keys doesn’t mean you have 256 bits of “effective security.” If those keys aren’t perfectly random, or if they have been exposed, then your effective security could be way lower and, what’s even more important, you would never know. Just because your car has four wheels doesn’t make it safe to drive!
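A quick back-of-the-envelope makes the point. The numbers here are hypothetical, but if a starved RNG only fed 64 bits of real entropy into a nominally 256-bit key, the attacker’s search space collapses accordingly:

    # Effective security is bounded by the entropy that actually went into
    # the key, not by the key's length. Hypothetical numbers.
    nominal_bits = 256   # advertised key length, e.g. AES-256
    entropy_bits = 64    # what a starved RNG might really have supplied

    shortfall = nominal_bits - entropy_bits
    print("Brute-force search shrinks by a factor of 2^%d (about %.1e)"
          % (shortfall, 2.0 ** shortfall))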

The current debate about the use of encryption by terrorists is another good example of how law enforcement agencies would really like the ability to reduce the effective security of encryption – without changing the algorithm or key length. The challenge of course is to guarantee that the backdoor is perfectly discriminating when it comes to who can exploit the weaker effective security.

Backdoors or not, we’re already starting to see a steady trickle of attacks against keys, and it can only get worse. For years there has been plenty of unencrypted data for attackers to steal, so why bother trying to crack the encrypted stuff? Going forward, even the low-hanging fruit will be encrypted. Sometime soon attackers will be faced with a simple decision: go and get a proper job or try to figure out how to crack crypto – my guess is they will choose the latter. The big question is whether they will be successful. The days of taking crypto for granted are over. Key management is an essential organizational process, and the tools to generate and protect keys represent critical security infrastructure and should be treated as such.

Whitewood is a subsidiary of Allied Minds Federal Innovations, the division of Allied Minds dedicated to commercializing U.S. federal intellectual property. Allied Minds is an innovative U.S. science and technology development and commercialization company. Operating since 2006, Allied Minds forms, funds, manages and builds products and businesses based on innovative technologies developed at leading U.S. universities and federal research institutions. Allied Minds serves as a diversified holding company that supports its businesses and product development with capital, central management and shared services. More information about the Boston-based company can be found at www.alliedminds.com.