
A report from the Enigma conference

February 14, 2018

This article was contributed by Christian Folini

The 2018 USENIX Enigma conference was held for the third time in January. Among many interesting talks, three presentations dealing with human security behaviors stood out. This article covers the key message of those talks: humans are social in their security behaviors, and their decision to adopt a good security practice is hardly ever made in isolation.

Security conferences tend to be dominated by security researchers demonstrating their latest exploits. The talks are attack-oriented, they keep a narrow focus, and usually they close with a dark outlook. The security industry has been doing security conferences like this for twenty years and seems to prefer this format. Yet, if you are tired of this style, the annual USENIX Enigma conference is a welcome change of pace. Most of the talks are defense-oriented, they have a horizon going far beyond technology alone, and they are generally focused on successful solutions.

The talks are brief: only twenty minutes long, with ten additional minutes of Q&A; the conference is clearly trying to emulate TED talks in terms of quality and focus. All speakers go through three separate training rounds where they present their talks to each other remotely in order to polish them. Gender and racial diversity play a big role at Enigma, and the conference appeared to have reached something close to gender parity this year; the fact that there were regular queues in front of the women's restrooms was a constant source of comments.

I was attracted to Enigma when I saw the video [YouTube] of Rob Joyce's talk from the first edition of the conference in 2016. Joyce was the director of the US National Security Agency's (NSA) Tailored Access Operations department; his presentation was about how to protect against nation-state attackers like him. The talk impressed me with his forceful insistence on basic security practices. So I attended in 2017 and returned for this year's Enigma conference in Santa Clara.

Social influence

It was the research into user behavior that stuck with me this time. A perfect example of a high-quality Enigma talk on this topic was the presentation by Sauvik Das from Georgia Tech. He talked about his research into user behavior based on several user-interface tests at Facebook and the adoption rate of various security practices. It is an important topic: the industry builds many security systems and companies introduce them into their services, but the majority of users ignore them, which leaves their accounts and their data at risk.

First, Das tried to measure social influence on security behavior. He measured the adoption of three Facebook security features: (1) login notifications that inform a user when a login has been performed, perhaps with stolen credentials; (2) login approval, which is Facebook's flavor of two-factor authentication via mobile-phone text messages; and (3) trusted contacts who can help with account recovery by identifying a user who has been locked out of their account.

He looked at data from 750,000 Facebook users who had adopted one of these features and 750,000 users who had not started to use any of them. Measuring the social influence was fairly simple: how big was the exposure level among Facebook friends for a given feature? Did 1% of friends use it? 5%? 10%? And so on. At each exposure level, he compared the adoption rate and came to the conclusion that there is a strong correlation and thus social influence. But there was also a surprising finding: early adopters can have a negative effect on the adoption rate of a security feature. If they come across as paranoid or nutty geeks, then their use of a feature will stigmatize it and hinder the further adoption among their friends. But this does not negate the overall conclusion: the more exposure users get to a feature via their friends, the more likely they are to adopt it.
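Das's bucketing approach can be sketched roughly as follows; the data, field names, and bucket size here are invented for illustration and are not from the actual study:

```python
# Hypothetical sketch of the exposure-level analysis described above:
# group users by the share of their friends already using a feature,
# then compare the adoption rate across those exposure buckets.
from collections import defaultdict

def adoption_by_exposure(users, bucket_size=0.05):
    """users: list of dicts with 'friend_share' (fraction of friends
    using the feature) and 'adopted' (bool). Returns the adoption
    rate per exposure bucket."""
    counts = defaultdict(lambda: [0, 0])  # bucket -> [adopters, total]
    for u in users:
        bucket = round(u["friend_share"] // bucket_size * bucket_size, 2)
        counts[bucket][0] += u["adopted"]  # bool counts as 0 or 1
        counts[bucket][1] += 1
    return {b: adopters / total
            for b, (adopters, total) in sorted(counts.items())}

# Tiny invented sample: low-exposure users adopt less often than
# high-exposure users, mirroring the correlation Das reported.
sample = [
    {"friend_share": 0.01, "adopted": False},
    {"friend_share": 0.02, "adopted": True},
    {"friend_share": 0.11, "adopted": True},
    {"friend_share": 0.12, "adopted": True},
]
print(adoption_by_exposure(sample))  # → {0.0: 0.5, 0.1: 1.0}
```

The real study would of course control for confounders before inferring influence from such a correlation; this sketch only shows the shape of the comparison.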

Having laid this groundwork, Das moved on to a series of experiments in which he tried to influence the adoption rate among 50,000 Facebook users by varying their exposure to social influence. He presented them with an announcement introducing the security features, varied its wording, and looked at changes in the adoption rate. One group of people got the plain announcement; other groups were informed about the absolute number of friends using a feature, while still others got a percentage of friends.

There was a positive effect with all of the announcements; any form of social information improved the adoption rate, even if the increase was not dramatic. The best-performing variant cited the raw number of friends using a particular feature: it grew the click-through rate of the announcement by around 30% and the adoption rate by about 10%. A 10% increase is not all that dramatic, but put it in perspective with the engineering cost of a feature and the relative simplicity of sharing a bit of information about adoption among a user's peers, and the additional adoption from a single announcement has a reasonable return on investment in terms of improved security.

Taking a step back, Das named some characteristics that help users with their security practices. Security systems need to be observable so that people can actually see and emulate the good security practices of their friends. In addition, security should allow people to act as stewards. This is based on the interesting observation that people do not care too much about their own security but they are very concerned with the security of their loved ones. Designing security systems that allow people to act on these concerns is therefore desirable.

Staff security

Das's presentation was part of a series of talks that covered security behavior at the conference. Another example was Masha Sedova's talk that looked into the ways companies can improve the practices of their staff. It would seem that awareness training sessions don't go very far and their effect is quickly lost. Otherwise, the last smoker would have quit by now. But the motivation to quit smoking — or use strong passwords — just is not there, no matter the awareness. Sedova had the numbers to back this up. She came to the conclusion that knowledge is not enough. It also takes motivation, the ability to adopt a better behavior, and then a trigger to actually start.

The harder something is to do, the more motivation it takes to change a behavior, start a new habit, and so on. In practice, this means that simplifying the steps needed to adopt a secure behavior is essential. The secure way to perform a task should always be the easiest way to perform it; adoption will then follow.

Where this is not possible, you need motivation. Yet motivation does not always come by itself. She named two typical situations that tend to build motivation. One is a scheduled audit of a team or a project, where people grow more and more motivated the closer the effective date gets; after the audit, the motivation disappears rapidly. The second is a security incident, which immediately brings a big motivation to change security practices. Yet this motivation does not stick; it slowly degrades as time goes on.

How can a company influence this motivation? It definitely pays to work with these events; use them to your advantage when trying to introduce new practices. Sedova quickly ruled out money and punishments as effective means of motivating staff, with few exceptions. For the majority of people, you need positive reinforcement to create intrinsic motivation. Here she named public recognition of good performers, competition in company events, altruism and feedback, and access to special events or exclusive swag as useful elements of a motivation program within a company. Obviously, this is all much more complicated than a simple technical solution that makes it easy to use a system securely.

Account recovery

Grzegorz Milka illustrated the problem of offering technical solutions that are not adopted, using observations of account takeovers and recovery at Google. Accounts at Google are constantly under attack; the company goes to great lengths to protect account holders and to simplify recovery. His talk immediately made headlines in various tech outlets with the observation that 10% of Google accounts have two-factor authentication enabled. Some were stunned that the number was so low, while others were astonished that it was so high. Whatever your perspective, what I found more interesting was the general impression that Google has put a variety of methods into practice to prevent an account from being hijacked.

It's a risk-based, in-depth strategy that is complex and tries to keep up with the evolution of the account-hijacking business. Devices are fingerprinted, the location of a login is taken into consideration, and the known usage patterns of an account are part of the equation too. These observations do not stop once an account is recovered; user behavior is constantly monitored, and abnormal behavior can trigger a lockout to prevent data loss or spreading to the contacts of a user via phishing scams.
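A risk-based evaluation of this kind can be illustrated with a toy sketch; the signals, weights, and threshold below are entirely invented and do not reflect Google's actual system:

```python
# Toy illustration of risk-based login evaluation: combine several
# weighted signals into a score and challenge the login (e.g. demand
# a second factor) when the score crosses a threshold.
def login_risk(known_device, usual_country, usual_hours):
    score = 0
    if not known_device:
        score += 40   # unrecognized device fingerprint
    if not usual_country:
        score += 35   # login from an unexpected location
    if not usual_hours:
        score += 15   # outside the account's normal usage pattern
    return score

def challenge_required(score, threshold=50):
    """Invented threshold: any combination of two strong signals
    triggers an extra verification step."""
    return score >= threshold

# New device from a new country: challenge. Known device at an odd
# hour: let it through.
print(challenge_required(login_risk(False, False, True)))  # → True
print(challenge_required(login_risk(True, True, False)))   # → False
```

The point of the additive-score design is that no single weak signal blocks a legitimate user, while several anomalies together raise enough suspicion to justify friction.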

The example he noted was a login from a new device, then a look at the address book, and then an attempt to erase all contacts. This resembles an account takeover where the attacker copies the address book for future attacks and immediately tries to make sure the account owner can no longer warn their contacts. Google monitors the behavior of the user to try to detect such malicious activity. This is necessarily far more complicated than simply providing two-factor authentication.

If you have a simple technical solution to a security problem, that is great. But if humans are not adopting it, then things get complex quickly. Simple technical solutions will no longer do the job and you need insight into social factors in order to understand human behavior and build your systems accordingly.

[I would like to thank the Swiss Cyber Storm conference for paying for my trip to USENIX Enigma.]


Index entries for this article
Security: Conferences
Security: User behavior
GuestArticles: Folini, Christian
Conference: USENIX Enigma/2018



"Otherwise, the last smoker would have quit by now"

Posted Feb 15, 2018 16:03 UTC (Thu) by Herve5 (guest, #115399) [Link] (5 responses)

I *loved* this remark :-)
That too is part of LWN coolness...

"Otherwise, the last smoker would have quit by now"

Posted Feb 15, 2018 16:39 UTC (Thu) by dune73 (guest, #17225) [Link]

Thank you. I thought the analogy was quite striking.

dune73 / Christian Folini

"Otherwise, the last smoker would have quit by now"

Posted Feb 16, 2018 0:09 UTC (Fri) by shemminger (subscriber, #5739) [Link] (2 responses)

Like many trainings, the message doesn't match the action.
Often the problem is that a site or corporate resource thinks too highly of itself. I don't need a strong password for trivial sites, but my bank won't let me use a long password or many punctuation characters.

"Otherwise, the last smoker would have quit by now"

Posted Feb 16, 2018 8:37 UTC (Fri) by mpr22 (subscriber, #60784) [Link] (1 responses)

My bank asks me to choose a five-digit passcode and a limited-length non-dictionary word, and never asks me to type the whole word. (They also require me to use a 2FA gadget with my chip-and-pin card for some actions, and now that I have one so that next time I move house I can update my address without physically going into a branch, the set of gadget-requiring actions is larger.)

My credit card issuer, which happens to be a tentacle of my bank, asks me to set a six-digit passcode and a limited-length non-dictionary word, and again, never asks me to type the whole word.

"Otherwise, the last smoker would have quit by now"

Posted Mar 3, 2018 17:20 UTC (Sat) by nix (subscriber, #2304) [Link]

They also require me to use a 2FA gadget with my chip-and-pin card for some actions
So does mine. The gadget's battery recently ran out, and they required me to generate a 2FA token with it to prove that I owned it before they'd send me a new one. This did not seem terribly well thought out.

(Thankfully this is an old-school bank that still has things like local branches, so it was easy to pop into one of those and get it changed.)

"Otherwise, the last smoker would have quit by now"

Posted Feb 16, 2018 15:06 UTC (Fri) by robbe (guest, #16131) [Link]

To be fair, this was Mrs Sedova’s idea, as can be seen in her slides.

A report from the Enigma conference

Posted Feb 16, 2018 15:10 UTC (Fri) by robbe (guest, #16131) [Link] (3 responses)

> He looked at data from 750,000 Facebook users…
Did these users consent to take part in his study, or are we again studying unwitting cattle?

A report from the Enigma conference

Posted Feb 16, 2018 15:40 UTC (Fri) by excors (subscriber, #95769) [Link] (2 responses)

They consented to Facebook's Terms of Service, which says "By using or accessing Facebook Services, you agree that we can collect and use such content and information in accordance with the Data Policy as amended from time to time", and the Data Policy says "We conduct surveys and research, test features in development, and analyze the information we have to evaluate and improve products and services, develop new products or features, and conduct audits and troubleshooting activities". The behaviour described in this article sounds like conducting research and testing features and analyzing information with the goal of improving Facebook's services (maybe not directly but as a byproduct of the research).

The Data Policy also says "We transfer information to [...] other partners who globally support our business, such as [...] conducting academic research and surveys. These partners must adhere to strict confidentiality obligations in a way that is consistent with this Data Policy and the agreements we enter into with them", so it seems fine that Facebook worked with a university researcher on this.

A report from the Enigma conference

Posted Feb 18, 2018 1:01 UTC (Sun) by pabs (subscriber, #43278) [Link] (1 responses)

> They consented to Facebook's Terms of Service

It is extremely unlikely those have ever had informed consent. Even the new people who gave informed consent were probably coerced by the FOMO.

A report from the Enigma conference

Posted Feb 18, 2018 1:02 UTC (Sun) by pabs (subscriber, #43278) [Link]

s/new/few/

Two-factor authentication requires even more information

Posted Feb 16, 2018 18:10 UTC (Fri) by NAR (subscriber, #1313) [Link] (1 responses)

I think Google or Facebook tend to know a lot more about their users than those users would prefer, so giving them even more information (a phone number) might not be that tempting...

Two-factor authentication requires even more information

Posted Feb 19, 2018 4:24 UTC (Mon) by dune73 (guest, #17225) [Link]

This was named as one of the reasons people chose not to use 2FA in the presentation.

A report from the Enigma conference

Posted Feb 19, 2018 18:07 UTC (Mon) by emptysquare (guest, #101937) [Link] (1 responses)

This is a fascinating article, well-reported, sounds like a great conference. I'm glad LWN went to cover this conference, it's much better reading and more informative than the usual security news.

A report from the Enigma conference

Posted Feb 19, 2018 18:18 UTC (Mon) by jake (editor, #205) [Link]

> This is a fascinating article

Glad you liked it ...

> I'm glad LWN went to cover this conference

We can't really take any credit for that part, though. Thanks are due to Christian for going (and writing it up for us) and the Swiss Cyber Storm conference that helped with his travel costs.

jake

A report from the Enigma conference

Posted Feb 24, 2018 14:15 UTC (Sat) by oldtomas (guest, #72579) [Link]

Very good writeup on a fascinating subject, thanks for that.

Security is a social phenomenon, who'd think that? Yes, at the bottom of all that it's a question of trust. Trust your distro, trust your hardware vendor (HAH), trust the theoreticians who know much more than you...

We geeks get so worked up about the technical aspects of security that we tend to forget that technology is just an instrument to help us in making transparent *what* or *who* it is we decide to trust.

One thing which was somewhat appalling for me was this perspective of the "big platforms" on this problem: users as a somewhat dumb mass you've got to nudge so they do what you think is the right thing (and for that you experiment with them, as if they were cell cultures). This, for me, is dystopia, and is the reason I try (at some cost) to avoid the Facebooks and Googles of this world.

A report from the Enigma conference

Posted Mar 3, 2018 17:17 UTC (Sat) by nix (subscriber, #2304) [Link] (13 responses)

2FA is not all it's cut out to be. I used to love it. I encouraged my parents to turn on Google 2FA using a landline phone as their authenticator in addition to a U2F token, so an attacker taking over their horribly insecure Windows desktops couldn't steal all their email and the like. They did this at my house because that's where we were at the time, and because I cared more about their security than they did, so stewardship as noted in the article seemed like a good idea -- but when they went back home (to an area entirely without mobile phone coverage), they couldn't switch off phone authentication or switch to their phone because it insisted on calling my phone with a security code to do so (which I couldn't pass on to them before token expiry because I was, of course, *on the phone*).

They also can't change phone to their landline, whether they're near my phone or theirs, even if we were to add a mobile number as an authenticator, because doing so requires receiving a phone call to one of the existing phones on the account *and* to the new phone within a few tens of seconds, which given that you have to travel for ten minutes to get mobile phone coverage where they live and that the two landline phones are hundreds of miles apart is never going to work.

My parents now hate 2FA, because its sole purpose seems to be to lock them out of their accounts through half-thought-out mis-security that assumes that everyone has a mobile phone and never leaves areas with good coverage. Eventually their single remaining hardware U2F authenticator will break or get lost (adding a new one requires, you got it, a phone call to one of the phones) and they'll be unable to use their Google accounts at all. The total lack of human support of any sort at Google means that this is unfixable. I find myself a lot less admiring of 2FA myself, as well, at least as long as companies continue to insist that phones are in some way privileged so that a lot of auth requests require the phone rather than or as well as some other 2FA device, and that once you have a phone you necessarily have access to it at all times and will never lose access to it or be unable to receive calls on it.

This stuff probably also goes disastrously wrong for people who use a landline phone as a 2FA authenticator and who move house without porting the number (which in the UK is essentially everyone who moves house and has a landline, since landline number porting here is very much in its infancy).

(Some of this may be old info: we tried fixing this several years ago, then stopped, because if for any reason Google decided that it *needed* phone authentication rather than U2F to let them into their account -- and it seemed to insist on phone authentication for so many things! -- they'd be stuck, locked out of their account forever. So things might be better now and we might be able to fix this clusterfuck, but we don't dare try to find out.

This is, I note, the sort of messup someone who is actually related to multiple Google employees can get into: not even they can help, because they work in the wrong part of the organization and new tech companies other than Amazon have nothing remotely resembling tech support that normal humans can contact. I can't imagine how bad the messes might be that random members of the public can get into.)

A report from the Enigma conference

Posted Mar 5, 2018 19:04 UTC (Mon) by mathstuf (subscriber, #69389) [Link] (12 responses)

I agree that if 2FA were tied to phones, it'd be crap. Luckily, you can usually extract the secret key using a barcode scanner to get the URL. This is what I do to store the secret on an encrypted USB key that I use to get my codes; my phone is actually really "dumb" for these codes.

I don't know what I'd recommend for tech-unsavvy users like my family if they didn't have reasonable coverage and service to their typical locations. Even then I'd recommend some device that is less prone to mistakes than a cellphone for these codes. At least a moderately ciphered note in a wallet or something for one-time passwords.

A report from the Enigma conference

Posted Mar 5, 2018 19:48 UTC (Mon) by nybble41 (subscriber, #55106) [Link] (11 responses)

Google also offers "backup codes"[1] which can be stored offline and used in place of other 2FA methods. The problem nix described is easily solved by generating a list of backup codes and using one of those codes to update the phone number (or disable 2FA entirely and start over). Of course, this problem wouldn't have existed if the account had been set up with the correct phone number in the first place. For that matter, an offline 2FA method like Google Authenticator would have been a much better choice than phone-based authentication, given the connectivity issues.

[1] https://support.google.com/accounts/answer/1187538?hl=en

A report from the Enigma conference

Posted Mar 5, 2018 20:19 UTC (Mon) by mathstuf (subscriber, #69389) [Link] (2 responses)

Yeah, those are what I meant by "one-time passwords". Personally I use `oathtool` to generate codes on my machines. For Android, I'd recommend FreeOTP (though I believe a fork has appeared since I last used it) since it's actually FOSS unlike Google's app.

A report from the Enigma conference

Posted Mar 5, 2018 22:10 UTC (Mon) by nybble41 (subscriber, #55106) [Link] (1 responses)

So far as I know, while they both meet the definition of "one-time password", Google's "backup codes" are independent of the time-varying TOTP codes which you would get from Google Authenticator or FreeOTP. There is no app and no URI or QR code, just a predetermined list of 10 static codes, each of which can be used at most once in an emergency. If you need more you have to go back to your account page at Google and download a new list.
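The single-use property of such static codes can be demonstrated with a small sketch; this is invented illustration code, not Google's implementation:

```python
# Demonstration of static backup codes: generate a batch of random
# codes, then "burn" each one on first use so it cannot be replayed.
import secrets

def generate_backup_codes(n=10, digits=8):
    """Return a set of random numeric codes (set membership makes
    burning a code a simple removal)."""
    return {f"{secrets.randbelow(10 ** digits):0{digits}d}" for _ in range(n)}

def redeem(codes, attempt):
    """Return True and burn the code if it is valid and unused."""
    if attempt in codes:
        codes.discard(attempt)
        return True
    return False

codes = generate_backup_codes()
one = next(iter(codes))
print(redeem(codes, one))   # first use succeeds
print(redeem(codes, one))   # second use fails: the code is single-use
```

A real implementation would store only hashes of the codes server-side, but the consume-on-use logic is the essential difference from time-based codes.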

A report from the Enigma conference

Posted Mar 5, 2018 23:08 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

Yeah, they're an independent thing, but I store them next to the TOTP key (which is duplicated for backups) as well as ciphered on paper I keep around.

A report from the Enigma conference

Posted Mar 6, 2018 12:38 UTC (Tue) by nix (subscriber, #2304) [Link] (7 responses)

The backup codes are probably what we're going to do. As for the 'correct phone number', this problem arises whenever people using landline phones move house. That's not that rare.

(And an offline 2FA method other than U2F was not, as far as I can recall, available when I set this up. Doubly so when you consider that if they don't have mobile phone coverage a tablet is likely to be fairly useless to them as well. They do have one now, but it gets charged so rarely that it's never working when they need it.)

A report from the Enigma conference

Posted Mar 6, 2018 13:24 UTC (Tue) by zdzichu (subscriber, #17118) [Link] (1 responses)

Why is moving house a problem for the phone number? You can move ("port") your number to your new address, even between different telecoms. The days when a telephone number depended on physical location ended sometime in the last century.

A report from the Enigma conference

Posted Mar 7, 2018 18:37 UTC (Wed) by nix (subscriber, #2304) [Link]

That's *definitely* not true in all countries or with all telcos.

A report from the Enigma conference

Posted Mar 6, 2018 16:26 UTC (Tue) by nybble41 (subscriber, #55106) [Link] (4 responses)

> And an offline 2FA method other than U2F was not, as far as I can recall, available when I set this up.

Google Authenticator, or the equivalent, was an option long before U2F was standardized.

> Doubly so when you consider that if they don't have mobile phone coverage a tablet is likely to be fairly useless to them as well.

In what world are tablets primarily used with mobile networks, as opposed to WiFi? Last time I checked (which I admit was some time ago) integrated mobile connectivity was still an optional feature not present on all tablets.

Anyway, you don't need a tablet for Google Authenticator or FreeOTP; any smartphone will do. In a pinch you could even set up compatible TOTP software on a laptop or PC. It doesn't require mobile coverage; technically it doesn't even require an Internet connection once the software is downloaded. Setup can be completed offline, and consists of either scanning a QR code or pasting in a URI string. Codes are likewise generated offline.

A report from the Enigma conference

Posted Mar 7, 2018 18:39 UTC (Wed) by nix (subscriber, #2304) [Link] (3 responses)

Last time I tried to use a tablet for this stuff Google Authenticator demanded that I take a photo of some auth code (not a screenshot, a photo). This was less than practical since the camera was of course *on* the tablet and it can't take a photo of its own screen.

(I'm sure this was just a simple stupidity that's since been fixed, but I had these over and over again and after the fifth stupid roadblock I just gave up for the time being. It's not like I can do this except when the account owner is around anyway...)

A report from the Enigma conference

Posted Mar 7, 2018 18:42 UTC (Wed) by sfeam (subscriber, #2841) [Link] (1 responses)

Mirror? umm - Two mirrors?

A report from the Enigma conference

Posted Mar 21, 2018 16:25 UTC (Wed) by nix (subscriber, #2304) [Link]

The tablet is not transparent. I think you'd need three mirrors, in a triangle. :)

A report from the Enigma conference

Posted Mar 7, 2018 20:40 UTC (Wed) by nybble41 (subscriber, #55106) [Link]

That is a legitimate annoyance when you're trying to set up TOTP from a QR code displayed on the same device. They should provide the key in plain text which you can copy and paste into the app. The typical reason for not doing this is to mitigate the risk that a rogue app could capture the credentials from the clipboard, but IMHO that decision should be up to the user. Another option, both easier for the user and likely more secure than using the clipboard, would be to provide a link to a URI which opens in the TOTP app. (The current version of Google Authenticator does allow setup via a text key, not just QR codes. I'm not sure whether it supports setup by URI link.)

As for workarounds, if you have a second tablet or smartphone handy you could take a photo of the screen and then scan that. You could also take a screenshot of the QR code and either print it out or display it on another screen, or just run the screenshot through a QR decoder and use the raw text.


Copyright © 2018, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds