Some standards come with compliance criteria built in – you can’t say you’ve implemented the standard until your code can pass the tests. With PKCS#11, a 407-page standard specifying the most widely used API in cryptographic hardware, there are no such tests. So how can a would-be PKCS#11 user discriminate between a good implementation of the API and a bad one? And how can a manufacturer find compliance bugs and then demonstrate the quality of their product?
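To make “compliance bug” concrete: the standard states, for instance, that the value of a key created with its sensitive attribute set must never be readable afterwards. Here is a rough sketch in OCaml of what checking that rule looks like (the TOKEN signature and function names below are simplified stand-ins invented for illustration, not real PKCS#11 bindings; only the CKR_* return codes follow the standard’s naming):

    (* Simplified stand-ins for the relevant PKCS#11 notions; the real
       API is in C and far richer. *)
    type rv = CKR_OK | CKR_ATTRIBUTE_SENSITIVE

    module type TOKEN = sig
      type key
      val generate_key : sensitive:bool -> key
      val get_value : key -> (string, rv) result
    end

    (* A conforming token must refuse to reveal a sensitive key's value;
       returning the key bytes is a compliance bug. *)
    let complies_on_sensitive_keys (module T : TOKEN) =
      let k = T.generate_key ~sensitive:true in
      match T.get_value k with
      | Error CKR_ATTRIBUTE_SENSITIVE -> true   (* conforming behaviour *)
      | Ok _ | Error CKR_OK -> false            (* compliance bug *)

Run enough well-chosen calls like this against a device and you have both a way to find such bugs and a way to demonstrate their absence.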
Cryptosense at the ICFP ML Workshop
As part of what some are describing as a fantastic programme, Thomas will present his work on well-typed smart fuzzing at the ICFP ML Workshop in Gothenburg in September. The smart fuzzing algorithm is a key part of the first phase of the Cryptosense test methodology. Read more about it in the post below, or if you’re attending the ICFP workshops, why not ask for a demo – both Thomas and Romain will be there.
Well-Typed Smart API Fuzzing
Since I joined Cryptosense in March, I’ve been working on a new implementation of the testing framework that we use to reverse-engineer cryptographic APIs. Last Friday, I gave a talk at the 7th Analysis of Security APIs workshop in Vienna, where I explained some of the main ideas behind this work. Here’s a high-level summary of my presentation.
When I arrived at Cryptosense, I could see that a huge investment had already been made in advancing the state of the art in the automatic analysis of APIs such as PKCS#11. The challenge was to generalise this tooling so that it could test other crypto APIs in a scalable way, without reproducing all of that effort.
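To give a flavour of the “well-typed” part of the approach, here is a deliberately simplified OCaml sketch (not the production code; all names are invented for illustration): the argument types of the API under test are reflected as a GADT, so the fuzzer can only ever construct type-correct calls.

    (* Argument types of the API under test, reflected as a GADT. *)
    type _ ty =
      | Int  : int ty
      | Bool : bool ty
      | Pair : 'a ty * 'b ty -> ('a * 'b) ty

    (* One function of the API, packaged with a description of its
       argument type. *)
    type ('a, 'b) fn = { name : string; arg : 'a ty; call : 'a -> 'b }

    (* Generate a random value of any representable argument type. *)
    let rec gen : type a. a ty -> a = function
      | Int -> Random.int 1000
      | Bool -> Random.bool ()
      | Pair (a, b) -> (gen a, gen b)

    (* Fuzzing a function is then just: build a well-typed argument
       and make the call. *)
    let fuzz f = f.call (gen f.arg)

    let () =
      Random.self_init ();
      let add = { name = "add"; arg = Pair (Int, Int);
                  call = (fun (x, y) -> x + y) } in
      Printf.printf "%s returned %d\n" add.name (fuzz add)

A realistic model also has to track state (handles returned by one call feed into the next) and record every call and return value for later analysis, but the principle carries over: because the fuzzer works from a typed description of the API, supporting a new crypto API means writing a new description rather than a new tool.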