As well as analysing applications in Java and .NET, Cryptosense Analyzer can also check the cryptographic security of PKCS#11 implementations in HSMs and elsewhere. We recently added a few improvements requested by our users.
Including passwords or cryptographic key material in source code is a major security risk for a number of reasons. In the worst case, if the code is public, everyone can read the key. Even if not, access to the code is often easier for an attacker to achieve than direct compromise of the application – the entire development team becomes part of the attack surface. Having obtained the keys, the attacker may no longer need to compromise the application at all, and the breach can go completely undetected since there is nothing in the logs when encrypted data is decrypted offline.
Hardcoding keys is also a problem for key rollover and for cryptographic agility. So we’re convinced we need to get rid of them – but how can we check for them at scale, across hundreds or thousands of applications?
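To make the anti-pattern concrete, here is a minimal sketch contrasting a key embedded in source with one loaded at run time. The key value is an all-zero placeholder and the `APP_AES_KEY` environment variable name is an assumption for this example, not a real convention.

```java
import javax.crypto.spec.SecretKeySpec;
import java.util.Base64;

public class KeyLoading {
    // Anti-pattern: key material embedded in source, readable by anyone with
    // access to the repository (all-zero placeholder key, for illustration).
    static final byte[] HARDCODED_KEY =
        Base64.getDecoder().decode("AAAAAAAAAAAAAAAAAAAAAA==");

    // Preferable: fetch the key from outside the code base at run time,
    // e.g. an environment variable populated by a secrets manager.
    // APP_AES_KEY is a hypothetical variable name for this sketch.
    static SecretKeySpec loadKey() {
        String b64 = System.getenv("APP_AES_KEY");
        if (b64 == null) {
            throw new IllegalStateException("APP_AES_KEY not set");
        }
        return new SecretKeySpec(Base64.getDecoder().decode(b64), "AES");
    }
}
```

Moving the key out of the source also means it can be rotated without a rebuild, which is exactly the rollover property the hardcoded version lacks.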
A naive approach would be to review the source code and search for cryptographic calls. However, this is both time-consuming and error-prone: what will the parameters be when this function is actually called? Which provider will respond to the call? Are we sure this isn’t dead code, or an unused part of a library? On top of this, a lot of the crypto in large applications is called by third-party intermediate libraries and application-framework components for which source code might not even be available.
A recent NIST paper on preparing for the advent of quantum computers recommends that users of cryptography aim to achieve ‘crypto agility’ as soon as possible. The idea was further expanded by Gartner in a recent research note, and now crops up regularly. It’s sometimes described as ‘crypto-agnosticism’ – but what does it mean, and how does one achieve it?
Designers of cryptographic protocols have talked for a long time about the idea of algorithm agility. The principle is simple: a protocol is designed in such a way that its function is, to some extent, independent of the choice of cryptographic algorithm, thereby allowing you to switch algorithms. You might even have a set of recommended cipher suites that can be negotiated in each protocol exchange – this is how TLS works, for example. There are now whole RFCs dedicated to the topic.
Crypto agility extends this idea from network protocols to all of the cryptography in use in an organisation. This means that an organisation can only become crypto-agile when the security team knows all of the algorithms, key lengths, crypto libraries and protocols in use in their applications and infrastructure, and has a plan that would allow them to change if necessary.
Interactive Application Security Testing (or as we recently argued it should be known, Instrumented Application Security Testing) works by adding hooks into a running application and analyzing its behaviour to look for security flaws. It has many advantages and it’s the technique we use to test crypto security in our Analyzer tool.
Modern versions of IAST (like ours) can detect flaws even when the application is executing standard functional tests – there is no need to simulate attacks. This enables these tools to be deployed early in the development lifecycle and integrated into CI toolchains. However, there’s one key feature that doesn’t figure on most IAST checklists: coverage checking.
Suppose I get a nice green report from my IAST tool saying there are no vulnerabilities in my app – how do I know that all the parts of my code where there might be issues were actually exercised during testing?
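The core of a coverage check can be sketched very simply: the tool knows which crypto call sites it has instrumented, records which were hit during the test run, and reports the difference. The call-site names below are hypothetical; a real tool would derive them from the instrumentation itself.

```java
import java.util.HashSet;
import java.util.Set;

public class CryptoCoverage {
    // A clean "no vulnerabilities" report is only meaningful for the call
    // sites that were actually exercised; the rest are blind spots.
    static Set<String> untested(Set<String> instrumented, Set<String> exercised) {
        Set<String> gap = new HashSet<>(instrumented);
        gap.removeAll(exercised);
        return gap;
    }

    public static void main(String[] args) {
        Set<String> instrumented = Set.of("Login.encrypt", "Token.sign", "Backup.encrypt");
        Set<String> exercised = Set.of("Login.encrypt", "Token.sign");
        // Backup.encrypt was never hit, so nothing can be concluded about it.
        System.out.println(untested(instrumented, exercised));
    }
}
```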
When we started testing the cryptography in Java applications using our Analyzer software, one of the first results we found was the use of a 512-bit RSA key for signature verification.
At first this looks rather alarming, since 512-bit RSA keys are now easily breakable by brute-force factorisation.
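The size check itself is straightforward: inspect the bit length of the RSA modulus. The sketch below applies a 2048-bit threshold (the common minimum recommendation today, an assumption of this example rather than a universal rule), and deliberately generates a 512-bit key to show that such keys can still be produced by a standard JDK.

```java
import java.security.KeyPairGenerator;
import java.security.interfaces.RSAPublicKey;

public class RsaKeySize {
    // Flag RSA keys whose modulus is shorter than 2048 bits.
    static boolean tooWeak(RSAPublicKey key) {
        return key.getModulus().bitLength() < 2048;
    }

    public static void main(String[] args) throws Exception {
        // 512-bit keys can still be generated for demonstration purposes,
        // even though they are factorable in practice.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(512);
        RSAPublicKey pub = (RSAPublicKey) kpg.generateKeyPair().getPublic();
        System.out.println(tooWeak(pub)); // prints true
    }
}
```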
There are several kinds of tool for testing applications for security vulnerabilities. Static Application Security Testing (SAST) looks at source code or compiled binaries and searches for patterns that suggest an issue. Dynamic Application Security Testing (DAST) tools test running code by sending inputs (typically to endpoints in a web application) and observing evidence of vulnerabilities.
At Cryptosense, we wanted to build a tool that would effectively identify and help fix vulnerabilities related to cryptography – something no other tool does a good job of. We quickly realised that we would have to be able to test code written by our users (to check the way it calls cryptographic libraries), the cryptographic libraries themselves (to look for known issues), framework components and dependencies (which often use cryptography in insecure ways), and some kinds of behaviour that can only be observed at run-time (like key values loaded from keystores, passwords in configuration files, or random number generators).
Neither SAST nor DAST allows you to do all this: SAST does not see run-time aspects, and DAST only sees the cryptography from the exterior of the application.
That’s why we built an IAST (Interactive Application Security Testing) tool, i.e. we instrument an application while it’s running to see all the cryptographic operations, and analyse these to detect crypto vulnerabilities.
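As a toy illustration of what such instrumentation observes, the sketch below routes `Cipher` creation through a recording wrapper. A real IAST tool hooks the library itself inside the running application rather than asking callers to use a facade like this; the class and its trace list are hypothetical.

```java
import javax.crypto.Cipher;
import java.util.ArrayList;
import java.util.List;

public class CryptoAudit {
    // Trace of cryptographic operations seen at run time.
    static final List<String> observed = new ArrayList<>();

    static Cipher getCipher(String transformation) throws Exception {
        observed.add(transformation);  // record the call for later analysis
        return Cipher.getInstance(transformation);
    }

    public static void main(String[] args) throws Exception {
        getCipher("AES/ECB/PKCS5Padding");
        // An analyser inspecting the trace can now flag the use of ECB mode,
        // with the real parameters rather than what the source code suggests.
        System.out.println(observed);
    }
}
```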
Hardware Security Modules (HSMs) are generally viewed as expensive and painful to maintain. It’s not surprising that a lot of HSM users are looking for a cloud-based solution that would allow them to hand over maintenance to a third party and move to an opex instead of capex model (i.e. rent the HSM instead of buying it).
At the same time, companies looking to migrate their more complex, business-critical applications are finding that Cloud Service Provider (CSP) key management APIs (e.g. AWS KMS, GCP KMS, and Azure Key Vault, as covered in an earlier post) often don’t offer the cryptographic flexibility they need to migrate securely and in compliance.
Responding to these market forces, a new wave of cloud-hosted HSMs is arriving. Equipped with standard APIs like PKCS#11, they offer the promise of flexible crypto services while keeping keys secure from cloud application compromise.
Continuous Integration (CI) is an increasingly widely adopted software engineering practice. A best practice for CI is to make the build self-testing, and recently this has started to include security testing. Cryptosense Analyzer, our tool for testing crypto security in applications, now integrates into CI.
In a 2014 article, “Why does cryptographic software fail?”, Lazar et al. took the most recent 269 CVEs marked as “cryptographic issues” and classified the site of each failure. While 17% of the failures were in crypto libraries, 83% were in the way applications use those libraries. Up until now, Cryptosense Analyzer for Java applications only addressed the 83%. Today that changes: we’ve added provider vulnerability testing.