Showing 4 posts tagged research
On January 20, 2014 the Citizen Lab, along with leading Canadian academics and civil liberties groups, asked Canadian telecommunications companies to reveal the extent to which they disclose information to state authorities. This post summarizes and analyzes the companies' responses, and argues that those responses do little to clarify their disclosure policies. We conclude by outlining the next steps in this research project.
The most recent posting about our ongoing research into how, why, and how often Canadian ISPs disclose information to state agencies.
While such research is done in a number of countries, Canada seems to be a hotbed of boredom studies. James Danckert, an associate professor of psychology at the University of Waterloo, in Canada, recently conducted a study to compare the physiological effects of boredom and sadness.
To induce sadness in the lab, he used video clips from the 1979 tear-jerker, “The Champ,” a widely accepted practice among psychologists.
But finding a clip to induce boredom was a trickier task. Dr. Danckert first tried a YouTube video of a man mowing a lawn, but subjects found it funny, not boring. A clip of parliamentary proceedings was too risky. “There’s always the off chance you get someone who is interested in that,” he says.
– Rachel Emma Silverman, “Interesting Fact: There’s a Yawning Need for Boring Professors”

I found the third paragraph particularly amusing as someone who often finds watching parliament interesting. I guess I’d be one of the ‘problem’ participants!
In today’s era of hyperbolic security warnings, one of the easiest things people can do to ‘protect’ themselves online is to choose passwords that are extremely hard to crack, store them in a centralized password manager, and then remember only the single master password that unlocks the rest. I’ve used a password manager for some time and there are real security benefits: specifically, if a single service I’ve registered with is hacked, my entire online life isn’t compromised, just that one service.
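That per-service benefit only holds if every account gets its own long, random password. As a rough sketch of how a manager might generate one (the function name and alphabet are my own choices, not any particular product's implementation):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate an independent high-entropy password for one service."""
    # secrets draws from the OS CSPRNG, unlike the random module,
    # so each character is unpredictable to an attacker.
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

password = generate_password()
```

With roughly 70 symbols per position, a 20-character password like this has far more entropy than anything a person could memorize per site, which is exactly why offloading the memorization to a manager is attractive.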
Password manager companies recognize the first concern that most people have surrounding their services: how do the managers protect the sensitive information they’re entrusted with? The standard response from vendors tends to reference ‘strong security models’ and the use of cryptography. Perhaps unsurprisingly, it is now quite apparent that the standard responses really can’t be trusted.
In a recent paper (.pdf), researchers interrogated the security status of password managers. What they found is, quite frankly, shocking and shameful. They also demonstrate the incredible need for third-party vetting of stated security capabilities.
The abstract for the paper is below but you should really just go read the whole paper (.pdf). It’s worth your time and if you’re not a math person you can largely skim over the hard math: the authors have provided a convenient series of tables and special notes that indicate the core deficiencies in various managers’ security stance. Don’t use a password manager that is clearly incompetently designed and, perhaps in the future, you will be more skeptical of the claims companies make around security.
In this paper we will analyze applications designed to facilitate storing and management of passwords on mobile platforms, such as Apple iOS and BlackBerry. We will specifically focus our attention on the security of data at rest. We will show that many password keeper apps fail to provide claimed level of protection.
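For contrast, the kind of data-at-rest protection the researchers test for can be sketched as a key-derivation step: the master password is stretched into an encryption key for the vault rather than being stored directly or hashed with a fast, unsalted function. A minimal illustration using PBKDF2 from the Python standard library (the function name, salt handling, and iteration count are my own illustrative choices, not taken from the paper):

```python
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes,
                     iterations: int = 600_000) -> bytes:
    """Stretch a master password into a 32-byte symmetric key.

    A well-designed manager encrypts the vault with a key like this;
    the salt and high iteration count make offline guessing expensive.
    Storing the raw password, or a fast unsalted hash of it, is the
    sort of flaw the paper documents.
    """
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(),
                               salt, iterations, dklen=32)

salt = os.urandom(16)  # stored alongside the vault; it need not be secret
key = derive_vault_key("correct horse battery staple", salt)
```

The point of the paper is that several shipping apps skipped this kind of basic, well-understood construction entirely, which no vendor marketing copy would have revealed.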
A really interesting paper on social authentication has just been released that looks at how facial identification ‘works’ to secure social networks from unauthorized access to profiles and records. The authors note that users of social networks are most concerned with keeping their interactions private from those who know them. Specifically, from the abstract:
Most people want privacy only from those close to them; if you’re having an affair then you want your partner to not find out but you don’t care if someone in Mongolia learns about it. And if your partner finds out and becomes your ex, then you don’t want them to be able to cause havoc on your account. Celebrities are similar, except that everyone is their friend (and potentially their enemy).
Moreover, a targeted effort to identify a user’s friends on a social network, and examine their photos, will let an attacker defeat the social authentication mechanism. While many users would consider this a design flaw, Facebook, which uses this system, doesn’t necessarily agree because:
[Facebook] told us that the social captcha mechanism was used to solve the problem of large-scale phishing attacks. They knew it was not very effective against friends, and especially not against a jilted former lover. For that, they maintain that the local police and courts are an effective solution. They also claim that although small-scale face recognition is doable, their scraping protection prevents it being used at large scales.
What Facebook is doing isn’t wrong: the company simply has a particular attacker-type in mind for social authentication and has deployed a defence mechanism to combat that attacker. Most users, however, are unlikely to realize that the company has a different attack scenario in mind than they do, leading to anger and concern when a defence built for wide-scale attacks fails to protect against targeted attackers. While I don’t see this as a security or policy failure, it does suggest that companies would be well advised to explain to their users how different security inconveniences map onto different hack/attack scenarios. Beyond educating users about what they can expect from the various defence mechanisms, it might raise awareness of the different kinds of attackers that companies have to defend against. In an ideal world, this might serve as a starting point in educating users to become more critical of the security models that are imposed upon them by corporations, governments, and other parties they deal with.