This really isn’t the warning you want to get when signing into a Wi-Fi portal.
Pursuant to my last post on cryptography and pixie dust, it’s helpful to read through Matthew Green’s highly accessible article “How to ‘backdoor’ an encryption app.” You’ll find that companies have a host of ways to enable third-party surveillance, ranging from overt deception, to retaining access to communications metadata, to compromising their product’s security when required by authorities. In effect, there are lots of ways that data custodians can undermine their promises to consumers, and it’s pretty rare that the public ever learns that the methods used to secure their communications have either been broken or were never effective in the first place.
CNet recently revealed that Google is encrypting some of their subscribers’ Google Drive data. Data has always been secured in transit, but Google is now testing encryption of data at rest. This means that, without the private key, someone who gained access to your data on Google’s Drive servers would see only reams of ciphertext. At issue, however, is that ‘encryption’ is only a significant barrier if the third party storing your data cannot decrypt the data when a government-backed actor comes knocking.
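To make the “reams of ciphertext” point concrete, here is a minimal Python sketch of what encryption at rest does: without the key, the stored blob reveals nothing readable. This is a toy stream cipher built from HMAC-SHA256 in counter mode, written for illustration only; all names are my own invention, it provides no authentication, and a real deployment would use a vetted authenticated cipher (such as AES-GCM) from an audited library rather than anything homebrewed.

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by running HMAC-SHA256 over (nonce || counter)."""
    out = b""
    counter = 0
    while len(out) < length:
        block = nonce + counter.to_bytes(8, "big")
        out += hmac.new(key, block, hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_at_rest(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with the keystream; prepend the random nonce."""
    nonce = secrets.token_bytes(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt_at_rest(key: bytes, blob: bytes) -> bytes:
    """Split off the nonce, regenerate the keystream, and XOR it back out."""
    nonce, ciphertext = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))
```

An attacker who copies the blob off the server gets the nonce and ciphertext but, absent the key, no plaintext. The whole question in the Google Drive case is simply who holds that key: the user, or the custodian.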
Encryption has become something like pixie dust, insofar as companies far and wide assure their end-users and subscribers that data is armoured in cryptographic shells. Don’t worry! You’re safe with us! Unfortunately, detailed audits of commercial encrypted products often reveal firms offering more snake oil than genuine protection. Just consider some of the following studies and reports that are, generally, damning:
- N. Vratonjic, J. Freudiger, V. Bindschaedler, J-P. Hubaux. (2011). “The Inconvenient Truth about Web Certificates,” The Workshop on Economics of Information Security (WEIS), Fairfax, Virginia, USA. Available at: http://infoscience.epfl.ch/record/165676
- A. Arnbak and N. van Eijk. (2012). “Certificate Authority Collapse: Regulating Systemic Vulnerabilities in the HTTPS Value Chain,” SSRN. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2031409
- A. Belenko and D. Sklyarov. (2012). “‘Secure Password Managers’ and ‘Military-Grade Encryption’ on Smartphones: Oh, Really?” Elcomsoft Co. Ltd. Available at: http://www.elcomsoft.com/WP/BH-EU-2012-WP.pdf
- A. Kingsley-Hughes. (2010). “Encryption busted on NIST-certified Kingston, SanDisk and Verbatim USB flash drives,” ZDNet. Available at: http://www.zdnet.com/blog/hardware/encryption-busted-on-nist-certified-kingston-sandisk-and-verbatim-usb-flash-drives/6655
- S. Thomas. (2013). “DecryptoCat,” TobTu. Available at: http://tobtu.com/decryptocat-old.php
- For a general overview of Skype insecurity, see: C. Parsons. (2012). “Some Literature on Skype Security,” Quirks in Tech. Available at: http://quirksintech.ca/post/28281569850/some-literature-on-skype-security
As noted in Bruce Schneier’s (still) excellent analysis of cryptographic snake oil, there are at least nine warning signs that the company you’re dealing with isn’t providing a working cryptographic solution:
- You come across a lot of “pseudo-mathematical gobbledygook” that isn’t backed by reviewed, third-party analyses of the cryptographic underpinnings.
- The company states that ‘new mathematics’ are used to secure your information.
- The cryptographic process is proprietary and neither you nor anyone else can examine how data is secured.
- Weird claims are made about the nature of the product, such that the claims or terms used could easily fit within the latest episode of a sci-fi show you’re watching.
- Excessive key lengths are trumpeted as proof of cryptographic security.
- The company claims your data is secure because one-time pads are used.
- Claims are made that cannot be backed up in fact.
- Security proofs involve twists of linguistic logic, and lack demonstrations of mathematical logic.
- The product is somehow secure because it hasn’t been ‘cracked’. (Yet.)
Unfortunately, people have been conditioned by Hollywood and other media to believe that as soon as something is ‘encrypted’ only super-duper hackers can subsequently ‘penetrate the codes and extract the meta-details to derive a data-intuition of the content’ (or some such similar garbage). When you’re dealing with crappy ‘encryption’ - like storing private keys in plain text, or transmitting passphrases across the Internet in the clear - the product is just providing consumers a false sense of security. You don’t need to be a hacker to ‘defeat’ particularly poor implementations of data encryption; you often just need to know how to read a file system.
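The “just read the file system” failure mode is easy to demonstrate. The sketch below imagines a hypothetical product (function names are mine, and this resembles no specific vendor) whose advertised ‘encryption’ is nothing more than Base64 encoding. The stored blob looks vaguely like ciphertext to a casual observer, but anyone who can read the file reverses it with a single standard-library call - no key, no brute force, no hacking.

```python
import base64

def snake_oil_protect(passphrase: str) -> bytes:
    """What some products ship as 'encryption': plain Base64 encoding.

    The output superficially resembles ciphertext, which is exactly
    why this pattern gives users a false sense of security.
    """
    return base64.b64encode(passphrase.encode("utf-8"))

def trivially_recover(blob: bytes) -> str:
    """One decode call recovers the 'protected' secret verbatim."""
    return base64.b64decode(blob).decode("utf-8")

stored = snake_oil_protect("hunter2")     # what lands on disk
recovered = trivially_recover(stored)     # what an attacker reads back
print(stored)      # b'aHVudGVyMg=='
print(recovered)   # hunter2
```

This is precisely the kind of product the audits listed above keep finding: encoding or obfuscation dressed up in the language of encryption.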
Presently, however, there aren’t clear ways for consumers to know whether a product is genuinely capable of securing their data in transit or at rest. Nor is there a clear way to get bad products off the market or to generally improve product security, save for media shaming and/or the development of better cryptographic libraries that non-cryptographers (read: developers) can easily use when building products. There will always be flaws and errors, however, and most consumers will never know that something has gone terribly awry until it’s far, far too late. So, despite being a well-known problem, it lacks a productive solution. And that has to change.
The studies above were chosen simply because they’re sitting on my computer right now, or because I’ve referenced or written about them previously. If you spend a few minutes trawling Google Scholar using the search term ‘encryption broken’ you’ll come across even more analyses of encryption ‘solutions’ that have been defeated. ↩