Raw notes …
Bruce Schneier: I put the panel together because there’s a lot of work on the human side of security: psychology, risk. There’s a lot to be learned from researchers in these subjects, economics, cultural anthropology … it’s a taste of the stuff out there. How do we design systems and social policy in a way that they’re effective? How do we respond to events and rhetoric?
The feeling and reality of security are different. You can feel secure even if you’re not; you can be secure even if you don’t feel that way. Two concepts, we use the same word. We need to split them apart. We don’t have the words to talk about a lot of these things.
I view security as a tradeoff. No such thing as absolute security; you’re trading off time, convenience, etc. The question isn’t “is it effective”, it’s about the tradeoff. E.g. bulletproof vests work great; none of us are wearing them. People have different intuitions, and we talk based on our intuitions. Making security tradeoffs is part of being alive … we should be really good at it, since it’s key to evolutionary success. Sometimes we’re really bad. Why?
A lot of it’s the distinction between perceived risk and reality. Work being done on evolutionary psychology: how we evolved based on situations 100,000 years ago. We’re highly optimized for decisions in small-group living in the East African highlands 100,000 years ago.
_________
Christine Jolls of Yale Law School: focusing on privacy. Almost everything we regard as private isn’t private vis-à-vis everyone. Whether it’s personal information or your unclothed body, there are some people you don’t mind giving access to it. When people make decisions about who to give access, understanding psychology and behavioral economics is key. Brandeis: “privacy is the right to be let alone”. But nobody wants to be completely alone. The work I do focuses on how the law should try to construct the relationship between ourselves and others.
Think about workplace email monitoring. Employees are typically told that employers reserve the right to monitor email. Employees may make bad decisions in this context. People are aware their email may be monitored; many people don’t process this, and optimism bias explains a lot of it. Example of optimism bias: economist Thaler asks students to predict where they’ll finish in the class, and 80% predict they’ll be in the top 20%. Workplace drug testing’s another example; most people who are caught just smoked pot with a friend on the weekend, but figure “oh, I probably won’t get tested”. Ditto for email access.
Another factor: the way people think about the future vs. the present. “The priority of the present is like ‘me’ vs. the rest of the world — everything else is later.” People make decisions focusing on what they want right now, so the potential future cost of email monitoring or drug testing carries less weight. This is different from situations where people aren’t aware of the issues; people do know what they’re consenting to, and they consent anyway.
Congress hasn’t been responsive to these kinds of concerns. Common “judge-made” law, however, often picks up on this. For drug testing, if you sign a form just as you’re being tested, courts will treat that as consent. If you sign it a year in advance (where these kinds of effects kick in), courts won’t treat it as valid consent. The same pattern applies elsewhere. Why is statutory law so much less aware of these issues?
___________
Rachna Dhamija, CEO, Usable Security Systems and Fellow at the Harvard Center for Research on Computation and Society. Think about educating users. Tips are pages and pages long — even just on “how to choose a password”. We’re starting to see security games. Or fun posters: “passwords are like pants, you shouldn’t share them”. MSR did a major study: users have 25 accounts and 6 passwords they reuse. 3M Yahoo! password resets/month. Google password study: 15M accounts, 50% choose from only 1M possibilities.
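A back-of-the-envelope reading of that last statistic (my illustrative arithmetic, not from the talk):

```python
import math

# If half of 15M accounts draw their passwords from a pool of only
# ~1M distinct choices, that half has very little effective entropy.
pool_size = 1_000_000
effective_bits = math.log2(pool_size)
print(f"~{effective_bits:.1f} bits")   # ~19.9 bits

# Compare a random 8-character password over ~94 printable characters:
random_bits = 8 * math.log2(94)
print(f"~{random_bits:.1f} bits")      # ~52.4 bits
```

At roughly 20 bits, those passwords are vastly easier to guess than users assume.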
Why don’t users do better? “I can’t” — cognitive limitations. “I have nothing to lose!” That infuriates me as a security researcher, but on second thought maybe users are right: Consumers Union’s recent survey shows that most people don’t run into problems.
Users in their daily lives just want to get tasks done; they’re not focused on security. Attackers exploit this by adding urgency. On security interrupts like “invalid certificate”, users see “something happened and you need to hit okay to get on with doing things.” Getting informed consent is hard; users will knowingly install spyware if they think the functionality is valuable. Users are over-confident in their ability to protect themselves: “I can always uninstall it.”
Users don’t notice the absence of security indicators. RSA’s response: “You found a weakness, but Gartner’s survey showed most users find our security convenient.” (Bruce’s point about the difference between the feeling and the reality of security again.)
Design assumptions: users
- don’t have perfect memories
- can’t keep secrets
- won’t carry extra devices
- aren’t motivated by security
websites:
- are people too
- benefits must exceed costs of deployment
- are not motivated by security
Currently working on UsableLogin: consistent across sites, only one “codeword” to remember, no single point of failure. Can take it to various sites; it converts my codeword into the site’s login info.
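She didn’t go into the mechanics, but here is a minimal sketch of one way a single codeword can be converted into per-site login info (in the spirit of per-site password derivation schemes like PwdHash; this is my assumption, not UsableLogin’s actual design):

```python
import base64
import hashlib

def site_password(codeword: str, domain: str, length: int = 12) -> str:
    """Derive a distinct password for each site from one memorized codeword.

    Illustrative only: a real system would handle site password policies
    and choose KDF parameters more carefully.
    """
    digest = hashlib.pbkdf2_hmac(
        "sha256",
        codeword.encode(),
        domain.encode(),      # the domain acts as a per-site salt
        iterations=100_000,
    )
    return base64.b64encode(digest).decode()[:length]

# One codeword, unrelated passwords per site: a password phished on one
# site tells the attacker nothing about the others.
print(site_password("correct horse", "example.com"))
print(site_password("correct horse", "example.org"))
```

Note that this sketch alone doesn’t deliver the “no single point of failure” property (the codeword is one), so the real design presumably adds more.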
_______
Alessandro Acquisti of Carnegie Mellon:
A rational model of privacy decision-making: Johnny debates whether to publicize his kinks on MySpace, weighs benefits (I might find a lover) and risks (my employer might find it) and decides. It doesn’t work that way. 2004 paper: incomplete information, bounded rationality, psychological and behavioral biases (hyperbolic discounting, optimism bias). Now the focus is on experiments in privacy decision-making. Subtle variations in framing can lead to dramatic changes in the valuation of personal information or the willingness to reveal information.
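For reference, the rational calculus being rejected, with the hyperbolic discounting from the 2004 paper folded in (my notation, not Acquisti’s):

```latex
% Disclose iff the expected benefit exceeds the expected risk:
\text{reveal} \iff \mathbb{E}[\text{benefit}] > \mathbb{E}[\text{risk}]
% Hyperbolic discounting: a harm c arriving after delay t feels like
V(c, t) = \frac{c}{1 + kt}
```

A distant privacy harm (large t) gets heavily discounted against an immediate benefit (t = 0), which is one way the rational story breaks down.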
1. The herding effect. People were asked ethical questions and whether they had engaged in the behaviors, e.g. “have you made a false or inflated insurance claim?” We’d then show information about ostensible rates of how others had answered: low admission rates, high admission rates, high “decline to answer” rates. We found that after seeing high admission rates for other questions, people would be more likely to admit — even on questions about other kinds of behavior. The herding effect: if I see others admit, I’m more likely to.
2. The frog effect: do privacy intrusions alert us to privacy concerns, or desensitize us? We simulated privacy intrusions through a survey with 30 different questions at different levels of sensitivity and intrusiveness, e.g. “have you ever left the lights on because you were lazy?” vs. “have you watched pornography without being sure of the age of the people involved?” We manipulated the order: tame to intrusive, intrusive to tame, pseudo-random, sudden. Also varied: when identifying info was asked for, at the start vs. at the end. Frog hypothesis: people will admit to sensitive behavior more often as they get warmed up (i.e. questions go from tame to intrusive). Coherent arbitrariness: people will admit less often as they get warmed up (questions go from more to less sensitive). Results: frog hypothesis rejected. “Have you had a fantasy of torturing somebody?” With decreasing order, 60% say yes; with increasing order, 40%.
3. Willingness to pay for protection of personal data vs. willingness to accept money to reveal personal data. A $10 anonymous gift card: your name won’t be linked, usage won’t be tracked. A $12 trackable card. We then give subjects the option of the other card and ask if they want to switch. “Do I want to get $2 more to give away my data?” vs. “Do I want to give up $2 to protect it?” 50% of subjects will switch from the $10 to the $12 card: value of privacy = $1. Only 10% will switch from $12 to $10: value of privacy = $0.20.
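My reconstruction of how those valuations are computed (the talk didn’t spell out the arithmetic): scale the $2 price gap by the fraction of subjects willing to trade across it.

```latex
\underbrace{0.50 \times \$2 = \$1.00}_{\text{WTA: 50\% sell their privacy for } \$2}
\qquad
\underbrace{0.10 \times \$2 = \$0.20}_{\text{WTP: 10\% pay } \$2 \text{ for privacy}}
```

The 5x gap is the point: the same privacy looks more valuable when subjects are framed as already having it.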
____
Schneier recommends The Science of Fear.
____
Mike Nelson: people are looking at the psychology of children online. Are you looking at how kids develop a sense of this?
Schneier: danah boyd has looked at this. There’s a huge generational difference in comfort about saying things online. Parents need cybersecurity training a lot more than kids do.
Christine: children discount the future more, and suffer from more optimism bias.
Rachel in the audience: there was an Annenberg study when COPPA came out on marketers focusing on 14-17 year olds to get family information. A study from NC State found that children’s online behavioral differences mirror their offline ones.
Rachna: plenty of evidence that minors are good at subverting filters.
Q for Christine: are you suggesting that the law needs to accommodate the fact that people don’t make rational decisions? Does this extend beyond the privacy sphere?
Christine: Richard Posner claims that judge-made law tends to be economically efficient. Why? Occam’s razor: judges are human beings and understand these distortions. There are other areas where judge-made law doesn’t map so closely to behavioral economics. What is it about privacy that leads judges to get it right? Maybe it’s just that it’s hard and they think about it a lot.
Alessandro: perhaps one reason statutory law doesn’t reflect this is that legislators are concerned about being paternalistic.
Christine: certainly possible, although it might also be lobbying etc.
Bruce: decisions under fear are highly optimized. You can show somebody a picture of a snake, and their reactions kick in before they consciously realize it’s a snake.
David Campbell: how should security professionals react?
Rachna: we need to avoid training users that it’s usually fine to hit “OK”.
Alessandro: giving people total control often doesn’t help, so there’s room for regulation.
Bruce: another good book: The Paradox of Choice.
Audience: Dan Gilbert’s TED Talk “what makes us happy” is a great resource on this.
Q: we’re socialized for certain kinds of risks — don’t give out your phone # to strangers in bars. We’re not socialized to other risks, like FB. Is this self-correcting over time?
Bruce: yes
Rachna: yes, if the harm is traceable back to the mistake that led to it. Most people who have their identity stolen have no idea how or why.
Christine: big distinction between knowing the average risk and personal risk. I think people will correct for average risk, but due to optimism bias I’m not so sure about personal risk.
Audience: there are a lot of people who avoid activities because of fear. Have you studied the effectiveness of techniques for addressing that fear?
Rachna: seals like TRUSTe are effective at helping people with fear, although they’re also easily spoofed by attackers.
Audience: trying to reconcile “people don’t pay enough attention to privacy” with the first presentation’s point that the cost is low. Aren’t people being rational? What’s the problem?
Schneier: people are making the right choice, it’s just a bad design.
Christine: it’s different depending on the context. In the workplace, people make bad decisions all the time. In another context, people would sign away the privacy of their house — “you can come in and repossess furniture even if I’m not home”. Judges have struck this down.
Alessandro: we’re not saying right or wrong, we’re showing that small changes in wording can lead to huge differences in behavior. so privacy preferences are malleable.