
Quirks in Tech

An informal space where I think about the oddities of technology, politics, and privacy. Also some other stuff.


Backdooring an 'Encrypted' Application ∞

Pursuant to my last post on cryptography and pixie dust, it’s helpful to read through Matt Green’s highly accessible article “How to ‘backdoor’ an encryption app.” You’ll find that companies have a host of ways of enabling third-party surveillance, ranging from overt deception, to retaining access to communications metadata, to compromising their product’s security when required by authorities. In effect, there are lots of ways that data custodians can undermine their promises to consumers, and it’s pretty rare that the public ever learns that the method(s) used to secure their communications have either been broken or are generally ineffective.
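
To make one of those techniques concrete: a client can quietly wrap each message key for an extra, provider-held recipient, and everything still ‘works’ from the user’s perspective. Here’s a minimal sketch in Python (using the third-party ‘cryptography’ package; names like encrypt_for and escrow are my own inventions for illustration, not any real product’s code):

    # Hybrid encryption with a silent extra recipient -- a sketch of one
    # backdoor technique, not any actual vendor's implementation.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    def encrypt_for(message: bytes, recipient_pubs):
        key = Fernet.generate_key()               # fresh per-message key
        ciphertext = Fernet(key).encrypt(message)
        oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)
        wrapped = [pub.encrypt(key, oaep) for pub in recipient_pubs]
        return ciphertext, wrapped                # one wrapped key per recipient

    alice = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    escrow = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # The UI says "encrypted for Alice", but the provider's escrow key is
    # quietly appended to the recipient list.
    ct, keys = encrypt_for(b"meet at noon", [alice.public_key(),
                                             escrow.public_key()])

Nothing visible to the sender or recipient betrays the second wrapped key, which is why claims of end-to-end security need third-party review of what the client actually transmits.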

Jul 18, 2013

Pixie Dust and Data Encryption

CNet recently revealed that Google is encrypting some of their subscribers’ Google Drive data. Data has always been secured in transit, but Google is now testing encrypting data at rest. This means that, without the private key, someone who got access to your data on Google’s Drive servers would just get reams of ciphertext. At issue, however, is that ‘encryption’ is only a significant barrier if the third party storing your data cannot decrypt that data when a government-backed actor comes knocking.
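
To illustrate what encryption at rest does (and doesn’t) buy you, here’s a minimal sketch in Python using the third-party ‘cryptography’ package; the single provider-held key is my simplification for illustration, not Google’s actual key-management scheme:

    from cryptography.fernet import Fernet

    provider_key = Fernet.generate_key()   # held by the provider, not the user

    # What lands on disk is ciphertext: an intruder who copies the stored
    # blob without the key gets unreadable bytes.
    stored_blob = Fernet(provider_key).encrypt(b"contents of tax_return.pdf")
    print(stored_blob[:40])

    # The rub: whoever holds provider_key -- including the provider itself,
    # answering a government order -- recovers the plaintext at will.
    print(Fernet(provider_key).decrypt(stored_blob))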

Encryption has become something like pixie dust, insofar as companies far and wide assure their end-users and subscribers that data is armoured in cryptographic shells. Don’t worry! You’re safe with us! Unfortunately, detailed audits of commercial encrypted products often reveal firms offering more snake oil than genuine protection. Just consider some of the following studies and reports that are, generally, damning[1]:

As noted in Bruce Schneier’s (still) excellent analysis of cryptographic snake oil, there are at least nine warning signs that the company you’re dealing with isn’t providing a working cryptographic solution:

  1. You come across a lot of “pseudo-mathematical gobbledygook” that isn’t backed by referenced, reviewed third-party analyses of the cryptographic underpinnings.
  2. The company states that ‘new mathematics’ are used to secure your information.
  3. The cryptographic process is proprietary and neither you nor anyone else can examine how data is secured.
  4. Weird claims are made about the nature of the product, such that the claims or terms used could easily fit within the latest episode of a sci-fi show you’re watching.
  5. Excessive key lengths are trumpeted as demonstrated proof of cryptographic security.
  6. The company claims your data is secure because one-time pads are used. (See the sketch after this list for why that claim so often collapses in practice.)
  7. Claims are made that cannot be backed up in fact.
  8. Security proofs involve twists of linguistic logic, and lack demonstrations of mathematical logic.
  9. The product is somehow secure because it hasn’t been ‘cracked’. (Yet.)
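
To make warning sign six concrete, here’s a minimal sketch (plain Python, invented messages) of what goes wrong when a product’s ‘one-time pad’ is quietly reused. XORing two ciphertexts produced with the same pad cancels the pad entirely, leaking the XOR of the two plaintexts:

    import os

    def xor(data: bytes, key: bytes) -> bytes:
        return bytes(d ^ k for d, k in zip(data, key))

    pad = os.urandom(32)                 # genuinely random, but reused below

    c1 = xor(b"attack at dawn", pad)
    c2 = xor(b"attack at dusk", pad)     # second use makes it a 'two-time pad'

    leak = xor(c1, c2)                   # equals plaintext1 XOR plaintext2
    print(leak)                          # zero bytes wherever the messages agree

A real one-time pad demands truly random key material as long as all the traffic it protects, used exactly once, which is why products trumpeting ‘one-time pad’ security almost never actually deliver it.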

Unfortunately, people have been conditioned by Hollywood and other media to believe that as soon as something is ‘encrypted’ only super-duper hackers can subsequently ‘penetrate the codes and extract the meta-details to derive a data-intuition of the content’ (or some such similar garbage). When you’re dealing with crappy ‘encryption’ - like storing private keys in plain text, or transmitting passphrases across the Internet in the clear - the product is just providing consumers a false sense of security. You don’t need to be a hacker to ‘defeat’ particularly poor implementations of data encryption; you often just need to know how to read a file system.
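
As a purely hypothetical illustration of that point - the app name, path, and file format below are invented, and Python’s ‘cryptography’ package stands in for whatever cipher such a product might use:

    from pathlib import Path
    from cryptography.fernet import Fernet

    # An imaginary 'SecureApp' that writes its key, in plain text, next to
    # the data it encrypts. 'Defeating' it is just reading the file system.
    conf = Path.home() / ".secureapp" / "settings.conf"
    for line in conf.read_text().splitlines():
        if line.startswith("encryption_key="):
            key = line.split("=", 1)[1].encode()
            vault = (conf.parent / "vault.bin").read_bytes()
            print(Fernet(key).decrypt(vault))    # no hacking required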

Presently, however, there aren’t clear ways for consumers to know if a product is genuinely capable of securing their data in transit or at rest. There isn’t a clear solution to getting bad products off the market or generally improving product security, save for media shaming and/or the development of better cryptographic libraries that non-cryptographers (read: developers) can easily use when developing products. However, there are always going to be flaws and errors, and most consumers are never going to know that something has gone terribly awry until it’s far, far too late. So, despite there being a well-known problem, there isn’t a productive solution. And that has to change.


  1. The studies were chosen simply because they’re sitting on my computer now and/or I’ve referenced or written about them previously. If you spend a few minutes trawling Google Scholar using the search term ‘encryption broken’ you’re going to come across even more analyses of encryption ‘solutions’ that have been defeated.  ↩

Jul 18, 2013

Cellular Security Called Into Question. Again.

Worries about spectrum scarcity have prompted telecommunications providers to offer their subscribers femtocells, which are small, low-powered cellular base stations. These stations are often linked into subscribers’ existing 802.11 wireless or wired networks, and are used to relieve stress placed upon commercial cellular towers whilst simultaneously expanding cellular coverage. Questions have recently been raised about the security of these low-powered stations:

Ritter and his colleague, Doug DePerry, demonstrated for Reuters how they can eavesdrop on text messages, photos and phone calls made with an Android phone and an iPhone by using a Verizon femtocell that they had previously hacked.

They said that with a little more work, they could have weaponized it for stealth attacks by packaging all equipment needed for a surveillance operation into a backpack that could be dropped near a target they wanted to monitor.

While Verizon has issued a patch for its femtocells, there isn’t any reason to believe additional vulnerabilities won’t be found. Because the stations sit in the hands of end-users, rather than under the physical control providers retain over commercially deployed cellular towers, third-party security researchers and attackers can persistently test the cells until flaws are found. The consequence of this deployment strategy is that attackers will continue to find vulnerabilities that (further) weaken the security associated with cellular communications. Unfortunately, countering attackers will significantly depend on security researchers finding the same exploit(s) and reporting it/them to the affected companies, and the likelihood of researchers and attackers independently finding the same flaws diminishes as more and more vulnerabilities surface in these devices.

In countries such as Canada, researchers must often first receive permission from the companies selling the femtocells before conducting their research: if there are any ‘digital locks’ around the technology, then researchers cannot legally investigate the code without prior corporate approval. Such restrictions don’t mean that researchers won’t conduct research, but they do mean that researchers’ discoveries will go unreported and thus unpatched. As a result, consumers will largely remain reliant on the companies responsible for the security deficits in the first place to identify and correct those deficits, absent the public pressure that results from researchers disclosing vulnerabilities.

In light of the high economic costs of such identification and patching processes, I’m less than confident that femtocell providers are going to invest oodles of cash just to potentially, as opposed to necessarily, identify and fix vulnerabilities. The net effect is that, at least in Canada, telecommunications providers can be assured that the public will remain relatively unconcerned about the security of providers’ products: security perceptions will be managed by preventing consumers from learning about prospective harms associated with telecommunications equipment. I guess this is just another area of research where Canadians will have to point to the US and say, “The same thing is likely happening here. But we’ll never know for sure.”

Jul 16, 2013

How to Dispel the Confusion Around iMessage Security | Technology, Thoughts & Trinkets ∞

There’s a lot of confusion about the actual versus rhetorical security integrated with Apple’s iMessage product. In the linked article, I’ve tried to suggest how Canadians can use our federal privacy laws to figure out whether Apple or the company’s critics are right about Apple’s security posture.

Jul 11, 2013
