Notes from Security and Human Behavior
Reference as: Camp, L. Jean and Friedman, A., "A Review of Security and Human Behaviors" (July 7, 2008). SSRN working paper 1156180.
Available at SSRN: http://ssrn.com/abstract=1156180
Photos available at http://www.cl.cam.ac.uk/~fms27/shb-2008/
Bruce Schneier Intro
Security could mean feeling secure and being secure
When language developed, feeling secure and being secure might have meant the same thing
No real word for being secure but not feeling secure
The result is that the feeling and the reality of security get misaligned: a false sense of security, a naiveté, or in the opposite case, paranoia. Security theater affects the feeling but not the reality. We don't have a word for what changes the reality but not the feeling, which is what we want the CIA to do.
People make decisions based on the feeling and not the reality. If you are selling a policy or product your goal is to make people feel secure. There are two ways to do this.
1. Make people secure and hope they notice, or
2. Make people feel secure and hope they don't notice.
How do you know if security works if there are no events? Does asteroid insurance work?
So there is a third thing: feeling, reality and model
The model is a cognitive model, for example the germ theory of disease. Feeling is based on intuition, reasoning is based on logic and data, and modeling is based on simplification.
We get models from various sources: political leaders, industry, science (e.g. global warming).
Models change, you read about models
Are we living in a world where feeling is changing reality but the model will never catch up?
Ross: The format is designed to get people to know each other.
Are we over-reacting or not? It is impossible to know. Two years ago the number of guards at nuclear power plants was cut in half. Was that an over-reaction or an under-reaction? Have we learned anything about the agencies who are supposed to warn us? Do we trust them? How about the people who sell books about whether we are over-reacting?
My work is deception and demeanor.
Deception is a function of:
Did the event occur? Does the interrogator know it occurred? Does the subject know of the event? Is the subject prepared? Is there a shared culture? And what is the base rate?
For polygraph tests, if liars are 20% of the population, we are catching two liars for every truthful person we mislabel.
For a polygraph with 5% liars, we are labeling two honest people as liars for each liar detected.
Millions of people entered the airports on Sept. 11; 12 were terrorists.
Is this ok? It depends upon the cost of a false positive.
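The base-rate arithmetic behind the two polygraph claims above can be sketched. The 90% sensitivity/specificity figure below is my own illustrative assumption, not a number from the talk:

```python
def confusion_counts(population, liar_rate, sensitivity, specificity):
    """Count flagged liars and flagged honest people for a screening test."""
    liars = population * liar_rate
    honest = population - liars
    true_positives = liars * sensitivity           # liars correctly flagged
    false_positives = honest * (1 - specificity)   # honest people flagged as liars
    return true_positives, false_positives

# 20% liars: roughly two liars caught per honest person mislabeled
tp20, fp20 = confusion_counts(1000, 0.20, 0.90, 0.90)   # 180 vs 80

# 5% liars: roughly two honest people mislabeled per liar caught
tp5, fp5 = confusion_counts(1000, 0.05, 0.90, 0.90)     # 45 vs 95
```

The same test flips from mostly right to mostly wrong purely because the base rate changes, which is the point of the airport example.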
We have no reliable involuntary sign that we are lying. To claim otherwise is the mark of a fool or a charlatan.
We have hot spots and leakage
A HOT SPOT says something is going on that we should get more information about: a change from baseline without a change in topic.
LEAKAGE is when concealed information is revealed, but it may not relate to your question.
1. Micro facial expression
2. Verbal or emblematic slip
Why do we get hot spots or leakage?
1. Cognitive load: being inventive, being careful, fabrication, and uncertainty
2. Emotional load (emotions don't fit the situation): fear, guilt, shame, contempt, disgust, anger
The idea that you are the perpetrator because of these hot spots is wrong.
The fear of being disbelieved is just like the fear of being caught.
Q: are some people just good liars?
A: In the past 40 years we have found that 5-10% of the population, depending on the type of situation, is undetectable to date. My bet is that it is about 3%. It is not psychopaths.
Q: Do we know?
A: We don't know and we can't find out. IQ has nothing to do with lying. There are people with their feet on the ground who want to know the answer, but the people who fund the research don't want to know the answer. Part of the problem is that it is not rocket science, so we do cheap research, and it costs as much to process cheap research as expensive research.
Q: Explain 3%
A: The real skill that allows you to become an effective liar is to be able to believe a lie when you are telling it.
Q: Do you think actors are good liars?
A: Less than 5% of people, for anything.
We have been able to identify the expression you see on someone's face 20 minutes before an assault. Do suicide bombers show anything on their face before?
I want to talk about two challenges. The large challenge of conceiving security online as part of larger social risk behaviors on and offline and the coming perfect security storm.
Risk is a social decision. Trust is social. We learn from the behavior of others in many rich ways in off-line interactions, even when we do not think we are using formal reputation systems. For example: Are there many cars parked here? Is it vacant? Is this a popular restaurant? Is this in public view?
Risk is a personal history decision. We use our own histories to make our own decisions: Have I been here before? Is this familiar? Have we met?
Social signals and personal history abound on the net but they are only used for third party commerce decisions. I want to take a moment and advocate for research and interactions that are about sharing information with our own social groups. With whom do we share what information? In terms of reading, I am interested in what everyone in this room writes. On the other hand, I don't care where anyone in this room buys shoes except Angela Sasse. Right now DoubleClick has my history and group information. I don't. I don't have an easy way to share it and I certainly cannot share it without first signing it over to a third party. And DoubleClick wants me to make decisions informed by their interests, not mine. So, I built a system to do this: http://www.ljean.com/NetTrust/
So security mechanisms are designed to make each decision distinct from the others, as if our daily lives were filled with highly isolated transactions. But this is not how decision-making works in the social human world.
I am trying to design security mechanisms that learn from the tactile world and that expressly integrate the tactile and the virtual.
Having said that, when I test tactile devices I find very different responses from seniors and students. By seniors I mean people over 70.
And this leads to the coming perfect security storm. Technologically naive, asset-rich elders will prove to be an attractive target.
In ten years, seniors will have 1/3 of all publicly held corporate wealth and 1/10 of all publicly held bonds. Seniors are physically isolated, less experienced with computers, and less likely to be aware that they are taking any risks. Elders may have the wisdom to be suspicious but lack the tools to make informed security decisions.
Never have so many been so vulnerable to so many more for so much. I would like to design mechanisms that can help elders, but that means understanding their lack of understanding of the computer.
The underlying reputation system and sources of information are the same in the Net Trust toolbar http://www.ljean.com/NetTrust/ and the ambient interface http://www.ljean.com/images/trustOrbGreenTrans.gif. But the interaction had to be designed for the appropriate group. There is not one human model of interaction with computers and there is no single user. It is critical that we understand users as distinct people and groups, and utilize personalization to protect and not just exploit individuals.
Uri Simonsohn: Do retracted safety ratings linger?
Existing experimental evidence suggests people can't ignore information that they already have: anchors, hindsight bias, false consensus, the debriefing paradigm, and the curse of knowledge (even when incentive compatible).
Would we see the same effect in a real-world situation?
A natural experiment:
Consumer Reports published new car-seat ratings that changed from previous ratings, then retracted them based on federal studies.
Consumer Reports ranked car seats and retracted the rankings two weeks later ... "we told them to run the test at 38 mph but they ran at 70 mph" ... at 70 mph it is all random, so the ratings were total random noise.
This was a big story, so you would have found out about this
The original press release received less coverage than the retraction, so the argument that people did not know about the retraction can be dismissed.
Then used eBay data for car seats that were sold during and after the time period. A simple regression tried to find the price difference for each drop in ratings. Car seat prices dropped about 3% per slot fallen in the rankings. Then, when the information was recalled, the effect went back to zero, with a delay.
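A minimal sketch of the kind of difference regression described, on synthetic data (the actual eBay specification and sample are not in the notes; the per-slot penalty is hard-coded into the fake data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic eBay-style sales: price falls ~3 units per slot a seat dropped in
# the (bogus) ratings while they were live; the penalty vanishes after recall.
n = 400
slots_dropped = rng.integers(0, 6, size=n).astype(float)
post_recall = rng.integers(0, 2, size=n).astype(float)   # 1 if sold after recall
price = (100.0
         - 3.0 * slots_dropped * (1 - post_recall)       # effect only pre-recall
         + rng.normal(0, 1.0, size=n))

# Simple OLS: price on slots dropped, recall dummy, and their interaction
X = np.column_stack([np.ones(n), slots_dropped, post_recall,
                     slots_dropped * post_recall])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
# beta[1] ~ -3: price penalty per slot before the recall
# beta[1] + beta[3] ~ 0: the penalty disappears after the recall
```

The interaction term is what captures the talk's finding: a per-slot penalty while the ratings are believed, returning to zero once they are retracted.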
How generalizable is this? Consumer Reports gave random information, so people do respond to experts, and they also respond to the retraction. This is an extreme case: buyers usually have no prior experience with car seats, and Consumer Reports is the only institution ranking them. It is hard to find a more emotional item than an infant car seat. Expect the marginal bidder to be more responsive than the average bidder. In this case, memory plays no role: people forgot.
Mike Roe Microsoft Research in Cambridge UK
Security of online games and how people cheat and commit fraud
There are very strange assumptions in academic security research; do these apply here?
In security research there is an attacker who will do some bad thing, as described by the threat model.
What is the bad thing? What is the threat model?
These tend to be strange in many ways.
1. They are not empirically justified
Often these are hypothetical attacks with no pretense that someone will actually carry them out.
2. Usually technically unwieldy
Models of attackers enable people to prove theorems about them, but often the model is of the strongest possible attacker, so a weaker attacker will also be defeated.
3. Attacker models are not sophisticated psychologically
The issue of motivation is often not at all considered
This may be acceptable. The things that are causing real grief are the assumptions about the defender, not the attacker. These assumptions are often psychologically impossible (the password example).
I am not a debunker; I am an investigator. Being a debunker would imply that one goes into an investigation assuming the claim is not so. The scientific point of view is "I don't know". It often ends up being debunking, but that is not the point.
An elegant room surrounds us. Notice the holes. Why are they there? To eat up sound. You have got to speak up if you are to be heard.
I had an MRI not too long ago. The cabin was not filled with methane, as some asserted. I am getting more fascinated by how the brain works. Having a PhD doesn't make you smart; it makes you educated. Some PhDs have little perception of the real world. As a magician, I don't walk out, take a deck of cards from my pocket, and say "I have an ordinary deck of cards"; instead, you introduce the normal deck of cards. You look at it, open the pack, take out the jokers, and shuffle the cards. But I have fooled you. I am not wearing spectacles; I am wearing glasses without lenses. You assumed they are regular glasses because they appear normal. I do another trick where I begin by speaking into a microphone on the table, make a big deal about it, and then walk away with my invisible wireless microphone, to the confusion of the audience. Most magicians don't know why their tricks work, but they know that they work. I, as a magician, have always known how the tricks work.
I always started by ripping a newspaper and then showing the newspaper whole. Meanwhile I was sizing up the audience; by the end of the trick I had picked the people in the audience whom I could call on. I am a conjurer, not a magician.
The difference between the people that I expose and me is that those people lie to the public. "I too can talk to the dead, but they do not answer." I did an investigation of psychic investigators. One did "cold reading": why would a dead father-in-law come back and say "my name starts with T"? I don't know. She basically played 20 questions. This was a time in history when she couldn't have Googled him. The people who believe in psychics don't care if it is wrong. Bending a spoon has not moved mankind one inch forward. People bend these things through tricks and skullduggery.
Counterfeit-detection pens applied to newsprint will pass a fake bill. These pens people depend on are fake: if a mark shows up amber, the cash is supposedly good, but newsprint shows amber. The anti-counterfeit pen only detects starch in the paper; the claim was that counterfeiters use cheap paper. The US Patent Office patents perpetual motion machines every year, on the theory that every American has the right to have a patent. We magicians are cunning folks, but there are lawyers and thieves out there doing the same thing. I am appalled by the fact that the WSJ claimed that it is not possible to evaluate paranormal claims. We don't debunk. We investigate, and prove that people are liars, fakes, frauds and bunglers.
Q: What about intuition?
A: We have massive evidence that shows that experience is only an indicator of certainty in stereotypes
Q: Sociology is suspicious of tools and automated detection. There is no pattern in attacks, certainly no caricature.
If we rely on tools, will we de-skill investigators and have them stop thinking for themselves?
A: The important thing is to understand the basis of judgment.
Q: Experience vs. intuition
A: Years of experience not a predictor of skill, only prediction of certainty of guess
False convictions not based on malevolence, but lack of feedback about mistakes
Q: Evidence that lack of information doesn't change behavior
(food labeling had no effect)
A: Can only try to do experiments
The current privacy baseline is bad, but we need to treat it as a multiform problem
A: Padlock use: people don't look at it --> they misunderstand the underlying information
Q: Polygraph techniques
A: Possibly a relation with intelligence, and it's a beatable system
Issue with funding
Q: Shift in symbol and meaning
Originally it was two keys, and meant SSL was used
Now a padlock, meaning a matching cert
Q: Train the user to respond to a symbol, rather than a technical meaning
The background experts should refine what secure means
Shouldn't expect users to have a full understanding of security
Q: Possibly we should stop trying to tell people that there is good security
A: Have to delegate true contextual information, just trying to give them some help
Computer security is the part of computer science that is best characterized by utter and complete failure. We are the advanced stepchildren of this advanced field. Randi sees the real world as a cesspit of broad and easy deception. I look at the real world and marvel that we are not being killed all the time, that we keep any money, and that any of you are who you say you are.
We could probably learn something from the successes of the real world. Let's look at the protocols and cues that we use in the real world.
Computer science isn't the only field with interesting security problems. Most of our metaphors are taken from the physical and mechanical world, but the way we abstract problems is unique. And computer scientists are not the only ones who have thought about protocols optimized for security and performance.
Needham 1994. Two basic ways to compromise an alarm system
A: interfere with the signal: cut wires, send fake signals, and so alarm systems make it hard
B: overwhelm the capacity to respond to alarms: rattle the door, wait for the police, repeat until the police stop showing up
Alarm installers and burglars have long understood both, but computer scientists don't
Denial of service shows a trade-off that is readily apparent in the real world but not so apparent in a mathematical model. Here the physical world has something to teach us. This is an example of bringing the human scale physical world to computer science.
Blaze looked at the complexity of attacking locks using abstract lock theories. A regular single-keyed lock seems pretty secure if exhaustive search is impractical.
Master-keyed locks are totally insecure: it is easy to use a regular key to discover the master key, with work quadratic in the number of pins and heights. Locksmiths already knew this attack. This is an example of bringing the abstractions of computer science to the human scale.
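A toy simulation of the rights-amplification idea (a deliberately simplified model of a master-keyed pin tumbler lock, not Blaze's full published procedure): varying one pin position at a time while keeping your own change key elsewhere recovers the master key with at most P*(H-1) probes.

```python
# Toy master-keyed lock: P pin positions, H heights per pin. A key opens the
# lock if every position matches either the change key or the master key.
P, H = 5, 8
change_key = (2, 6, 1, 4, 7)   # the key the attacker legitimately holds
master_key = (3, 6, 5, 4, 0)   # the secret we want to recover

def opens(key):
    return all(k in (c, m) for k, c, m in zip(key, change_key, master_key))

# Attack: probe one position at a time, keeping the change key elsewhere.
# Any non-change height that still opens must be the master height there.
recovered = []
for i in range(P):
    found = change_key[i]            # default: master equals change key here
    for height in range(H):
        if height != change_key[i]:
            probe = change_key[:i] + (height,) + change_key[i + 1:]
            if opens(probe):
                found = height
                break
    recovered.append(found)

# recovered now equals the master key: [3, 6, 5, 4, 0]
```

Each position is attacked independently, which is why holding any legitimate change key collapses the search space from H^P to P*(H-1).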
Examples of the change in culture: the nineteenth-century approach to locksmithing from Hobbs, versus the 1953 response that the information should absolutely be hidden (The Art of Manipulation). Then in 2003: "... Blaze's master keying paper shouldn't have been published because the only people it will educate are the dishonest, who will use it to compromise security ... No we can't call him a moron ...". The idea that security is something that can be engineered and openly discussed has somehow gone wrong outside of our community.
I have been a criminologist and I would like to focus on reducing crime. Some criminologists are in fact interested in reducing crime, but most are interested in preventing people from becoming criminals. My approach is to change the situations that give rise to the opportunities for crimes. I began in the Home Office working on street crimes. More recently I have moved into a broader range of crime, including Internet sales crime, and have also looked at terrorism.
At the moment I am trying to reduce the trade in Mexican parrots. These are different types of crimes. I have developed a classification of twenty-five opportunity-reducing, or situational, prevention techniques. The important thing is to take these twenty-five situational preventions and think about the crime we are considering: see Clarke and Eck (2003), Becoming a Problem-Solving Crime Analyst, for the table.
1. What is it about the environment in the broadest sense that provides the conditions for this particular problem or offense to occur?
2. Try to understand, step by step, how this particular crime is committed. Sometimes it is harder to understand than others, but the more you understand how it is done, the more points of intervention you find to prevent the crime.
3. Select preventive technique(s).
Those are the basic principles of what I do. In the little I have done on computer crime, it is remarkable how simple the stuff they are doing is. If you are thinking about computer security, it is much better to disaggregate the problems and solve them one by one.
CIOs understand that the large breaches and problems are behavioral issues within the organization. I have been studying ChoicePoint. They sell data about you and me to just about anyone. The breach involved 160k Americans whose data was sold to organized crime. It was an organizational failure, not a technical hack, and it drove activity in breach notification laws. 162k people in CA received a letter: "I am writing to inform you of a recent crime committed against ChoicePoint that may have resulted in your name, address and SSN being viewed by businesses that are not allowed to access such information. We have reason to believe that your personal information may have been obtained by unauthorized third parties and we deeply regret any inconvenience this event may cause you."
What went wrong?
1. Perception & availability: killer shark vs. falling airplane part?
To a person, no one had ever imagined that the criminal could be his or her customer; it was a failure of imagination.
Imagining an event will increase its availability and make it appear more likely
2. Risk communication & vividness
Create concrete emotional interesting or exciting situations
Reyes, Thompson and Bower, Journal of Personality and Social Psychology
A vivid vs. pallid version of the same evidence illustrated that making a description more vivid makes it more persuasive
One example: Toyota increases availability by internal communication -- making success and failure clear and vivid
Limits of imagination
No one had imagined that the criminal would be their customer
Imagining an event will increase its availability and make it appear more likely
--> scenario building
Vividness in court (experiment with mock juries)
Analogies: a quality audit at Toyota brings the whole team together
Compare information leakage at large banks: leaks through p2p music file-sharing systems coming right out of banks. Found tons of bank documents, and lots of people looking for that stuff. Banks were ranked, showing what was being exploited.
3. Social loafing and intervention
Groups are less likely to react than individuals
Toyota is a good place to look at how to empower people ("jidoka") and how to shut the line down if there is a quality problem
Toyota had reversed what was important. It told employees to control quality and management to control productivity
Similarly CIOs argue that their job is to make everyone secure.
4. Align risk decisions
Which risks do you centralize versus decentralize
ChoicePoint took a significant loss on their stock, and they were recently sold to LexisNexis.
Charles Perrow: Normal Accidents and Complexity in Organizations
Something completely different
Given the presence of software in our lives, it is amazing that software has done so little damage, even in our critical infrastructures.
Cyber financial crime, cyber governmental intelligence, and the potential for cyber terrorism
These are made possible by connecting public systems to the Internet. Strategic intentional damage through the Internet is possible because of the following configurations
1. High homogeneity in the platform
2. Users are unable to demand security or privacy from Microsoft
3. Users who distribute malware rarely suffer from the malware (e.g. botnet)
4. People hide that they were hit by malware
5. Difficult to determine the source of software failure
6. No chance of allocating liability through the courts
7. Cannot regulate a highly concentrated industry or highly diffuse users
These are characteristics associated with distributed as opposed to integrated architectures.
The old-fashioned Ford, which even owned its own steel mills and oil production, is an integrated architecture. The modern Toyota is an example of a distributed architecture.
Integrated architectures have many advantages if the problem is well understood, mass-produced, and allows a linear production series, BUT where there are intersections, integration allows the propagation of errors. When Ford changed its Model T, the assembly line had to be rebuilt over months.
Modularization has its own cost in particular in the design of interfaces. Time to build is longer because of the extra steps of interface design. Worse yet your competitors may sell their own modules to run on your systems or the supplier may begin to compete with you.
The Architecture of Complexity by Herb Simon illustrates well the advantages of modularity under uncertainty: if the creator of a non-modular system is interrupted, it has to start all over. Microsoft suffers continuous interruptions and requires extensive patching with which users are unwilling or unable to comply. Microsoft is less modular than its tiny competitor Apple and far less modular than its open source competitors.
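Simon's point can be made numerically with his watchmaker parable (Hora builds stable 10-part subassemblies; Tempus assembles 1000 parts monolithically). The interruption probability below is my illustrative assumption, not a figure from Simon or the talk:

```python
# Herb Simon's watchmaker parable, sketched numerically. Assumption: an
# interruption destroys the current work-in-progress, but a finished
# (sub)assembly is stable and survives.
p = 0.01   # probability of interruption per part added (illustrative)

def expected_additions(parts, p):
    """Expected part-additions to finish one stable unit of `parts` pieces,
    restarting from scratch after every interruption. Standard restart-process
    result: ((1-p)^-parts - 1) / p."""
    return ((1 - p) ** -parts - 1) / p

tempus = expected_additions(1000, p)      # monolithic: ~2.3 million additions
hora = 111 * expected_additions(10, p)    # 100 + 10 + 1 subassemblies: ~1200
# Modularity wins by roughly three orders of magnitude.
```

The gap grows explosively with the interruption rate, which is the connection to continuously interrupted, heavily patched software.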
Recent comparisons of error propagation dramatically make this point. A fault in an integrated architecture makes cascading possible, and third-party application developers can create failures. Mozilla had many error-propagation costs until it redesigned its system to be very modular. Open source software is inevitably modular since it is built by diverse software writers. Ubuntu, for example, is exceedingly secure and very reliable. It does not matter how secure an internal enterprise system is if it runs Windows and connects to the Internet. Unfortunately the US government is moving in the opposite direction. Rather than demanding open source we have Windows for warships. Worse yet, they recently took the VA's open source distributed system for medical records and required that it be centralized on proprietary software, with a single thread for DB lookups for the entire system. The US government could save money and increase security by demanding open source. The root of the security problems is the architecture that Microsoft has chosen to implement for reasonable commercial reasons.
Many of these papers may be searching under the wrong lamppost by looking at users and their psychology.
Experimental security group in engineering.
1. Google Internal
Google created a supercomputer out of commodity parts, which meant the O/S and apps had to be written in-house. On this platform there was an engineering approach to the problem of security policies. We created an environment so that any time you are touching sensitive data, the use of that sensitive data is clear and visible. We use communal logging and visibility to require justification for touching private data. We sought to create a culture of security. This should not be confused with keystroke monitoring; we don't substitute surveillance tools for management. Be clear about which repository needs to be protected rather than having a generic surveillance practice. Physical-world problems occur in physical interactions: tailgating through doors and locking workstations.
Identified the nature of the firm with certain attributes - reluctance to standard security
Forgiveness rather than permission
What are we actually trying to accomplish?
Use of sensitive data in the open - no privacy with use of private data
Allows for sharing of code
Allows for open audit model
At the same time, no Taylorism mgmt
2. Google External
There is a consistent theme in prevention. Prevention of damage is targeted as opposed to enforcement via post-event punishment. Prevention solves jurisdictional problems. Criminals tend to be in social networks of criminals
Google external security
Stop the bad guy persona, don't have to find actual bad guy [??]
Don't care who miscreant is, just need to identify online vector of attack
3. Online gaming aspect
WoW is a very interesting research sandbox for malware because, for example with key-logging attacks, people in WoW know that this has happened: stripped characters are almost always the work of key-loggers.
WoW attack vectors
Have instant recognition if you've been a victim
Q: One issue in creating behavioral research from the federal point of view is informed consent. Permission is not possible.
A: As a businessperson we might provide a service.
Q: Sharks are remarkably scary but falling airplane parts are not. A movie about falling airplane parts would be a failure.
Q: Human bodies and societies all over the world work. They shouldn't work. Why? Why has modularity won out over evolution? A wonderful demonstration on how Google has discovered the anthropological fact that human selfishness and human tendencies to distrust each other can be overcome in an open enough environment where everyone knows a great deal about everyone else and opportunities for fraud are made detectable and punishable.
Q: Was the Choice Point decision a rational move to sell data to organized crime?
A: They closed a small part of a low revenue expensive business as soon as this became public. So it was not a rational business decision.
C: It is so hard to tell good businesses and bad businesses so companies in Europe sell to all comers
C: It is very likely that there is an incentive for companies to engage in somewhat lax behavior. People who have engaged in poor security usually get away with it.
C: The difference between offline and online crime is feedback. I have been reading books about cyber terrorism for a long time, but I am still waiting for it to happen. To what extent is there a danger that, in conceptualizing online crime, we invite people to commit those crimes?
C: What is the period between the time we discover a vulnerability and the time people exploit it? As an engineer and a scientist, it is arrogance to believe that we understand how to attack systems better than the attacker. The attackers have a full-time job; we are part-time. We build systems faster than we can fix them. One way we can tell the difference between good guys and bad guys is that good guys tell people about a vulnerability and bad guys exploit it.
C: There is an OECD report that just came out, and the Symantec semi-annual report. Credit card theft is declining so that it is a minor expense to the companies, but use is increasing. Bank account fraud has increased, but bank account use has increased so much that banks are making more money. If it is so easy, why doesn't the rate of crime increase more quickly than the rate of overall use?
I have been doing cryptography for a while. Before crypto I was an early workstation design person, and then moved away from users and into queuing. Then I moved to collaboration systems and from there to cryptography. So I was in usability before I was in security. I was at the very first PGP, where we did the very first GUI crypto programs. Whitten and Tygar wrote about the failures: "With just one click you can encrypt a message and with just one click you can sign it." What we need to do now is get rid of that one last click. I was one of the first hires at Counterpane. My old CEO hired me back and asked me for a list of ideas; number 3 (the one you really want to do) was zero-click encryption. We put together a start-up team, pitched zero-click, and almost coincidentally bought the PGP assets.
In the very first paragraph I said we don't need cryptographers, we need usability people. I knew that known usability standards could improve PGP. "But it isn't obvious that someone should move that mouse..." Nothing is completely obvious; "driving a car is obvious" only because that is a socially embedded system. There are a number of reasons that usability is hard, especially for security. Some losses you can insure away; some losses you cannot. If you are a victim of monetary fraud, the money is replaceable. Information losses are irreplaceable. When you are trying to make your system secure you cannot "undo"; by definition you cannot make it a safe environment. We spent all this time telling users that they could not break the computer, but in security you can break the computer.
These considerations have led me to a design that has its good points and bad points. I have always had a fondness for robot servants, so I wanted to build a security robot servant and take things out of the hands of the users. Five years ago I gave all my keys to my robot servants. They signed for me because they believed it was really me; this is the MITM defense. I run one of these myself and my 75-year-old dad uses it. My dad was freaked out by the server decrypting his email, so he called me and asked how I did it. Encryption should be an attribute of the system, not a thing in and of itself: it has policies that get executed for you. There are many other rough edges. I have maybe one more minute -- like people having no idea what the lock means.
How do you get from HCI-SEC to Usable Security?
Work on APIs, for those who are not programmers
There are four elements, illustrated by a study in a hospital. There is a card swipe for the nurse to authenticate himself or herself. And how do the nurses deal with stress?
1. Mechanism studies
classic approach: controlled experiments on use, typing a passcode
2. Social context
nurse orders a change in drug delivery from the doctor, the nurse may be responsible for authorizing a context with which they did not agree.
The doctor should be responsible and liable for his orders even if he does not enter it directly
Ethnography is not an adequate approach to understand this
3. End user security
Let the domain answer construct the system
Who actually has rights?
Anthropology is not necessarily a good servant for engineers
Managing change that results from interjecting technology is not possible
The alternative is "end user engineering", which seems to be a strong complement to peer production
20% of the CHI research was on end user development
Doing this in security is particularly hard
4. Fundamental abstractions
Are the abstractions in security working?
Do you and your mom agree on what trust is?
other cultures of morality -- particularly the arts - argue that building security is burning moral judgements into the culture
Better ethnographic techniques and models
Engaging post-modern deconstructionists
Love & Authentication - what do you do when you forget your password? Use the same kind of individualization that is used to distinguish people at dating sites.
Virgin America - "how much wood would a woodchuck chuck if a woodchuck could chuck wood"
Intuition: avoid memory, use preferences
Do not have to be remembered
Include a long list of likes and dislikes
For a financial institution, the faceless adversary is the most important
Naive enemy: more common, economies of scale
Ex knows you and has reason to attack you
Obtained questions on preferences from dating sites
Correlation check to detect for weaknesses
Company - wanted low false positives, no false negatives
False negatives --> customer support calls
False positives --> automated fraud detection
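The preference-based scheme above can be sketched as follows. This is a minimal illustration only: the function names, the scoring rule, and the 0.75 threshold are my assumptions, not details given in the talk.

```python
# Minimal sketch of preference-based fallback authentication.
# Scoring rule and threshold are illustrative assumptions.

def enroll(likes, dislikes):
    """Store the user's stated preferences at account setup."""
    profile = {item: +1 for item in likes}
    profile.update({item: -1 for item in dislikes})
    return profile

def authenticate(profile, answers, threshold=0.75):
    """Score recall of preferences at password-reset time.

    answers maps item -> +1 (like) or -1 (dislike) as claimed by the
    person requesting the reset. Accept if the fraction of matching
    preferences meets the threshold. Raising the threshold trades
    false negatives (customer support calls) for fewer false
    positives (fraud slipping through).
    """
    asked = [item for item in answers if item in profile]
    if not asked:
        return False
    matches = sum(1 for item in asked if answers[item] == profile[item])
    return matches / len(asked) >= threshold

profile = enroll(likes=["jazz", "hiking", "sushi"],
                 dislikes=["opera", "camping"])
# A legitimate user who remembers their own preferences passes:
print(authenticate(profile, {"jazz": 1, "hiking": 1,
                             "opera": -1, "camping": -1}))   # True
# A guesser who gets only half right fails (2/4 < 0.75):
print(authenticate(profile, {"jazz": 1, "hiking": -1,
                             "opera": 1, "camping": -1}))    # False
```

A real deployment would also need the correlation check mentioned above, since preferences are not independent (liking hiking predicts liking camping, so correlated questions leak guessable structure).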
I work on computer security and general software failures, and I have not produced anything mitigating.
Requirements engineering covers both the development of systems and the actual world
Sometimes it is hard to extract information from ethnography.
What we offer primarily is a focus on the problem. We talk about architectures and mechanisms, about problem analysis
One critical potential failure is boundary setting. You can have the world's most perfect password, but it does not help if someone steals the ATM itself.
(Photo of car tracks around the gate)
But maybe the problem is that they want to delineate certain borders. You have a trace of how people have traveled.
A critical element is separating the problem space from the machine and the world. The role of software engineering is to develop a specification that maps phenomena that happen in the world into phenomena that happen in the machine. People spend decades trying to map the real world from/to the computer.
In contrast, a specification is something produced by software engineers that should define the boundary between the parts of the world and the machine that are of interest and the parts of the world that are not of interest.
Usability, focusing on user-centered approaches to trust
Even though I understand why you are saying what you are saying, I disagree that the problems are easy to solve. It's not rocket science, but it is a hell of a lot of work, and at the end of the day neither individuals nor organizations want to expend the effort to get there. We should talk to the economists.
There is good and bad usability. But there is something about security that attempts to control users
Users interact with systems in a context; they want to meet a goal by fulfilling a task which occurs in an environment.
The key problem is to ask the right question. If you ask the right question, you can do the specifications right.
1. There are no standard security mechanisms, because security needs to be specific to task, context, and environment. We are no longer building mechanisms specific to systems. We need more variety in security mechanisms and need to integrate them into the tasks that people do.
2. Security is an enabling process: don't stop the process, fit it in, stop eating all the profits, and even better improve the output. Passwords are not bad, but rather optimized for frequent authentication in an employment environment.
3. There is a limit to compliance, use it wisely
The need for security goes away once you have killed the business process.
There are human factors (Frederick Winslow Taylor) versus humane systems
human as cog in machine
subjugate human actions to optimize overall performance
eliminate individual characteristics
human as actor in a system
Psychological problem of the organization:
belief that technology "solves the security problem" -- it is a convenient assumption and the technology simplifies it
It puts greater distance between the controller and the adversary - particularly convenient for law enforcement
is it effective, win the battle, lose the war
inhuman: do we want lines of sheep shuffling through airports, inspecting 1000s of certs?
Q: What are the differences
A: Luke says you have to do ethnography. I say you don't have to, if you ask the questions in a way the users can understand: how bad would it be if the server were down for 1 hr, 1 day, 1 wk, 1 month? What if all the data were lost?
Bashar: says you can get valid requirements but you can talk to the wrong stakeholder. Ethnography helps you to focus on the right people.
Angela: You cannot take 20 yrs to identify the stakeholders
Luke: In a lot of domains it is not clear that there are easy questions to ask. When companies have nice information infrastructures then you can ask. For the medical people it was not clear what the right questions were. What they said they were doing and what we saw they were doing were distinct. So in domains outside business it is not easy to find the right questions.
Bruce: If I want my banking secure, is there an I-want-my-banking-secure button? But some users cannot identify the security and privacy controls they want until they have the experience.
Peter: It is a mistake to make sharp distinctions between security and safety. What you are looking for is trustworthiness that encompasses interrelated factors. If you want a system that is usable from a safety point of view, that has to be designed in from scratch. Just because something is a safety problem does not mean it is not a security problem. Motivation is a specious source for design innovation.
Jon: There is far more stupidity than malice in the world. Most security problems are the result of stupidity, not malice.
Matt: I am not sure I accept this idea that we should not ask users to make an effort for security. We ask people to make an effort in the real world. I have to watch where I park and not leave my bike unattended. I take expensive preventative measures. I am successful because I developed an intuition about causes and effects that is absent in a computer system.
Bruce: We don't have an intuition.
Angela: If we make the effort to make a risk clear to people, people are also prepared to make the extra effort. People may realize they are creating risk, but the problem is the perceived effort for stuff they cannot see. Other people break the rules and see there are no consequences.
Ross: Safety vs. security -- sometimes the battle is over specifications. The UK government: "the entire public sector provides a battle between unwanted contact"... my security policy is that I don't want to overpay the clerk for my water bill; his goal is that he doesn't want to answer the telephone.
Jon: Seat belts were for people not to die. We made seat belts compliance issues. Smoking declined when we made it shameful. I believe your front door should magically lock itself; the door should be doing the right thing when you get back.
Bashar: Inconsistency is quite acceptable. It is not about creating a complete, uniform solution but about making informed trade-offs.
Q: What is the cost of security? Isn't that part of it?
Markus: My personal cost has a lot to do with my security behavior
Q: Intentionality is a critical issue -- the distinct character of security behavior in a wider sense. In the 21st century the concept of causality has shifted: in disaster research, there is no longer such a thing as a natural disaster.
Bashar: Intentionality is a distinguishing characteristic of security. Intentionality is studied in security and requirements. Intentions and moderating expectations.
Luke: In the more modern philosophies on what it means for people to interact with programs the argument is that we should consider all programs as if they were intentional. The complexity is so great it becomes difficult to distinguish between complex unintentional and intentional.
Q: There are many situations when we accept false and specious assertions of security and make ourselves less safe
Q: You need to consider causes as if they are intentional; one way to avoid birds is to avoid birds.
Jon: In the last year or so there has been a push in the commercial world toward hostile insiders, and there are data leak protection systems. Data show that 95% of data leaks are caused by well-meaning people doing their jobs, for example where something is forbidden and something else is required. I want to take intentionality out of it because I am worried about witch hunts rather than the detection of bad policy.
Angela: We have models for safety. We identify latent failures and remove them from the system. If you hunted them down and put in defense in depth then you would be safe and more secure.
Jon: There are an awful lot of people who are looking at this as a normal accident and apply to security. And there are people who sell by fear. I tend to hang out with people who do not sell by fear. But they don't make as much noise or do as good a news story.
Q; Outside of security we looked at intention versus incompetence -- we have systematically varied the same event when people die or not die and the only difference is that there might be foul play. One line suggests that precaution could have happened or security breaches cannot be ruled out. It makes a huge difference. In anticipating public response, there is a big difference.
Q: If you have a chain of circumstances that is improbable to the point of disbelief, this sets people's minds ready to look for malevolence. But there is no reward for safety. The pilot who lands in bad weather gets the cheers, but if you divert you bear the cost. Pilots play Russian roulette because of the system's requirements.
Zeckhauser as channeled by Jean: There are too few plane wrecks per mile so we spend too much on airline safety.
Q: Four times the number of people who died on Sept 11 died from avoiding plane flights.
How to study how individuals and communities respond to threats
Shark vs airplane talk - not equivalent emotional event
System dynamic model of community response to a disaster
Explore the changes in wording to manage the effects
San Diego, models and scenarios based on downtown and nearby theme parks. Shows individuals scenarios and then evaluates the differences in responses of the group as a whole (e.g., community response) in terms of fear. He also presented the same carefully prepared scenarios to public safety professionals (firefighters, cops, etc.)
Used the example of sharks -- asked who would rather be eaten by a shark vs. struck by lightning. Allan Friedman would rather be eaten by a shark. Everyone else voted lightning.
Stimulating community response during a disaster, J Forrester model.
Terrorists events loom large because terrorists are perceived as predatory; predatory behavior is far more frightening
Stock-and-flow models of stocks and queues.
Number of adults in San Diego when there is an anthrax attack: how many people are afraid? Fear spikes very quickly but also declines quickly. Even the threat of terrorism can produce this kind of response. He noted that there was public reaction even on the West Coast: people were demanding antibiotics and vaccines. As soon as it was learned that it corresponded to domestic terrorism, not international terrorism, the curve dropped radically. Two months out people are still in a state of fear; two to four months later 40-60% were still in a state of fear.
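The spike-and-decay dynamics described here can be sketched as a toy stock-and-flow model. All parameters below are invented for illustration; this is not the actual San Diego model.

```python
# Toy stock-and-flow sketch of community fear after an attack.
# One stock (fear), one inflow (the scare event), one outflow
# (decay toward baseline). Parameters are invented.

def simulate(days, shock_day, shock_size, decay,
             relief_day=None, relief_decay=None):
    """Fear stock spikes at shock_day, then drains at rate `decay`.
    If the threat is reframed (e.g. 'domestic, not international')
    on relief_day, the drain rate jumps to relief_decay."""
    fear, series = 0.0, []
    for t in range(days):
        if t == shock_day:
            fear += shock_size          # inflow: the scare event
        rate = (relief_decay
                if relief_day is not None and t >= relief_day
                else decay)
        fear -= rate * fear             # outflow: fear decays
        series.append(fear)
    return series

base = simulate(120, shock_day=0, shock_size=1.0, decay=0.02)
reframed = simulate(120, shock_day=0, shock_size=1.0, decay=0.02,
                    relief_day=30, relief_decay=0.10)
# Reframing the threat drains the fear stock much faster,
# matching the curve drop described in the notes.
print(round(base[119], 4), round(reframed[119], 4))
```

The point of the sketch is structural: a single stock with a state-dependent outflow reproduces both the fast spike and the radical drop when the event is reclassified.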
Checked propane explosion, anthrax, bomb, and infectious disease scenario in the public and with public responders. Public responders are much more concerned with the exploding propane tank.
Distance matters - studies after 9/11 showed that geographic distance matters.
In evaluating threats, people are more worried: cyber attacks > anthrax > earthquake
People are more worried: anthrax > propane tank > earthquake
If you don't know about cognitive dissonance then pay attention (no don't ;->)
No reward for safety
People are at least as important as hardware to warfare efficacy, ignoring people will negate the potential of hardware
Process for training:
1. Create artificial war
2. Tailor to needs of unit
3. Execute the needs of the unit
4. Measure with process and outcome
5. No-holds-barred After Action Review
Could they create a simulation program?
Training counts, we don't count training.
DARWARS, DARPA's training program. Shows a computer game -- cannot reproduce this in the notes; you need to get the game. It teaches you Arabic, including the appropriate greetings and hand gestures, and how to read facial gestures. Enables effective interaction with occupied persons.
Nice photo of everyone with CMU CUPS shirts
Malicious humans who will attack systems
Humans are not motivated to perform security critical tasks properly (inadvertent insiders)
To make it more usable there are basically three approaches. It is important to use all of these, not just one.
Make it just work: invisible security
Make security & privacy understandable: visible, intuitive, metaphors
Train the user
We have projects; good search terms are: human-in-the-loop, privacy decision making, supporting trust decisions, and user-controllable security & privacy
How do you put controls in the hand of end users realistically?
They are working through the "Handbook of Warnings". In their model of warnings, you have to tell people what to do, and thus there is a communication. The communication may also be the corporate security policy that you got three years ago and don't remember. There are ways a warning may fail to be communicated:
- Environmental overload
- Malicious overload
- Lack of capability to act in response
- Attitudes and beliefs in security communication
- Communication processing, comprehension and knowledge acquisition
- Application, knowledge retention and knowledge transfer
Identify the task and identify points where the system relies on a human to perform a security-critical function
Human in the loop framework: communication --> human receiver --> behavior
Communication may not get to user
Info processing steps for human to process
Capabilities, comprehension, retention, etc
Handy table for analysis
Field studies in privacy
Lab studies framed as anti-phishing warning studies: spear-phished participants after they placed orders
Firefox warning is very good, others not so great.
Expandable grids for visualizing and authoring computer security policies.
Access control management. People are terrible at it, so we need better mechanisms to empower people.
Detection and making people lie.
Counter-intelligence 101 - note from the Al Qaeda manual: when captured, don't say you are a terrorist.
(Did not make that up - Jean)
How do you generalize from lab studies to the real world? Looking at law enforcement and TSA.
Sometimes having no evidence is a good thing; much of the social science lab work on deception has no ecological validity. Much of social science assumes that all lies are equal. Most of it has no validity where it matters. You may not like the TSA, but they are entrusted to do this. Look at the literature to see if anything is useful for them.
All lies are not the same e.g., tell me your birthday to see if you are lying or not
Signs of cognition or mental effort (Paul discussed this) are correlated with deception but there are other reasons why these occur.
Laboratory: real world
Real world: very high stakes, concrete rewards and punishments, getting away with murder, punishment for disbelieved truth
Implications: types of clues correlated with deception
How does the potential impact influence the efficacy of lying?
In the real world people are not randomly assigned to be criminals and terrorists, it is a choice
In the lab it is random
Real world, choose whether to take a risk
Implications: subject must "own" the behavior, increasing emotional buy-in, and this correlates with deception.
In a laboratory there are undergraduates -- and guess what, the 9/11 hijackers were undergrads, so in theory that was appropriate
When dealing with crime and terror, you are dealing with people with a different worldview
Implications: age/social skills, no guilt when lying, highly motivated; types of clues again correlated with deception
In the lab there is not really an interview: scripted, passive, and sometimes just a camera
In the real world: the interview, more intense, follow-ups, open-ended
Implications: fatigue, amount of information to score/code/rate, types of clues correlated with deception
Nature of relationship with interrogator
In the lab there is no contention, in the real world there is contention, hostility, control, and power
Convene a panel of experts in design and counter-terrorism. You need group as well as individual rewards. Sit down and do a post-test. Collect behavioral measures (scary) as well as behavioral messages.
Using all of this you can sort people out with 86% accuracy. "Video sensor" basically means behavioral information.
Different result patterns with high ecological validity
1. Mistake to do Meta analysis
2. Technological or experimental garbage in/ garbage out, if you don't understand the behavior you can't analyze the fMRIs
3. Be careful about reliance on technology but it is feasible
Work on SiteKey to determine if it is useful; determined that 98% of people would be fooled
The process of testing how humans respond to potential or immediate threat
The behavior may change if no threat perceived, it would be harder to do a study if there were physical or medical risk
Behavior may change if people know you are studying security behavior
Is defense in depth a good thing? He questions this fundamental security assumption. He argues that another independent layer is NOT always good: it can generate six pop-ups instead of a single security prevention action.
Look at the human analog: make sure someone reports if they see a criminal. Is assigning three people better than assigning one person? If one person does not report, they may assume that someone else has reported it -- the problem of many hands.
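The "many hands" worry can be made concrete with a toy calculation. The report probabilities and the diffusion model below are invented purely for illustration.

```python
# Illustration of the 'many hands' problem: more watchers is not
# automatically better if each one's willingness to report falls
# with group size. All numbers are invented for illustration.

def p_detect(n, solo_p=0.8, diffusion=1.0):
    """Probability that at least one of n watchers reports.

    Each watcher reports with probability solo_p / n**diffusion:
    diffusion=0 means no bystander effect, diffusion=1 means
    responsibility fully diffuses across the group.
    """
    per_person = solo_p / n ** diffusion
    return 1 - (1 - per_person) ** n

# No diffusion of responsibility: three watchers beat one.
print(round(p_detect(3, diffusion=0.0), 3))   # 0.992
# Full diffusion: three watchers do worse than one (0.8).
print(round(p_detect(3, diffusion=1.0), 3))   # 0.606
```

The crossover is the speaker's point: whether adding an independent layer helps depends on how the layer changes each component's behavior, not just on the count of layers.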
When forced to wear seat belts, risky drivers speed up.
Janssen, Seat-belt wearing and driving behavior, Accident Analysis and Prevention, 1994 April, vol 26 no 2, pp 249-2
Here are three major research questions.
1. Can we protect people without inciting them to take more risks?
2. How do we warn people of danger without desensitizing them?
3. How to motivate secure behavior when probability is low and damage is high?
Angela: A comment on the MS culture -- Security warnings are used to move risk from the individual to the corporation.
Stuart: One of the things we are very clear on in advising product groups is that you should not put up dialogue boxes just to make the user responsible.
Bruce: You increase your risk based on the perceived difference, not the actual one. You should be able to make someone more secure or safer while dealing with compensating behavior.
Q: How do you get a trade-off from anxiety about one thing to anxiety about another? For example, if we don't worry about terror, do we worry about our families? Is there a finite amount of worry?
Bill: Being fearful is a very uncomfortable emotion, and communities will intervene at a certain fear level. FaceBook, rabbis, ministers -- all intervened to make us feel better on 9/12. Think about our fears about nuclear power, and the collapse of nuclear power after TMI. There is a trade-off between benefits and risk, so maybe nuclear power can be safe again.
Stuart: Did smoking go up after 9/11?
George: Did your graph really say people in San Diego are more afraid of anthrax than earthquakes? As with risk: how much do you care about income inequality? People say they care immensely, but if you draw their attention to it in a survey it is different.
Bill: We created the scenarios with the idea that people should imagine it happening. How scary is it?
George: There is no probability but this is a measure of badness. So what about artificially raising the issue of risk and thus anxiety with these scenarios?
Bill: It is a methodological challenge. We have created scenarios. Let's pretend this is happening. How would the community respond?
Q: We use the word bad.
Bill: The people who asserted that terrorists are really bad indicated lower perceived risk. Anger decreases fear and thus perceived risk. Male respondents tended to give this "I won't change my behavior" response.
Matt: Were the answers correlated with similarly irrational and disproportionate behavior? Are they stocking up on shark repellant?
Bill: People went back into the water after the shark attack because the surfers were in there and the weather was beautiful. People are all irrational. I bought earthquake insurance because I wanted the peace of mind if there had been an earthquake.
Mike: Since there are so few terrorists (couldn't hear)
Mark: Assume they did two test runs; we are talking 12 out of 29 million. The key, when you think of layers of security, is that the front-line person's job is to fill the queue of the secondary layer. Just don't have any false negatives. A little bit of randomness is really good, so we are not concerned with false positives. As for gut versus intuition: because the base rate is so small, you want to make the decision a little bit better.
Andrew: You seem to be using standard psychological models for terrorists when we are talking about people in abnormal states. Someone intending to commit a terrorist act is very different from people marching off to die.
Mark: Some are not in an abnormal psychological state, and not all of them can be analyzed in a lab
Bill: We are subject in our research agendas to the same base rates fallacies and human behaviors that we say we understand in people.
Dave Clark (MIT)
Network embedded in larger social issues --> problems
Future Internet Design - what will the global network look like?
Can we have a goal-oriented vision?
Future perspectives: IPv6 was written in 1991
Not data plane - that's fine
Need security: balance the tussle of various interests
Different levels of security required
Redesign all layers. Assume that the apps are part of the Internet
Insecure end-nodes part of the system
Spectrum of trust
Endpoint A) Parties really trust each other.
Morris worm: what am I supposed to tell my superior? --> Internet functioned as spec'd. The worm went everywhere.
Endpoint B) No trust at all, shouldn't build in trust.
Middle: don't trust, but wish to communicate anyway -- communication without trust
Life has protocols, TTP
Availability: communication attacked by the network
Comcast targets routing
Indirection and control
Indirection is subversion of communication
Vocabulary: control, balance of power,
Need to bring in social science
David Livingston Smith - University of New England
Philosophy of mind
Too used to language, so occasionally suckered by it
The bulk of deception is non-verbal
Twain: spoken lie is a tiny part of deception
Self-deception - increases the chance of lying successfully
By deceiving ourselves about the humanity of our enemies, war is easier
Is amazing, must buy book...
Tyler took a binomial classification of people from the psych literature: empathizers and systematizers. Given that these characteristics are meaningful in real life (that is, accepting the foundation of the two-kinds-of-people theory), is brain type correlated with the ability to detect phishing sites?
Clear linear correlation between test that determines type and detection of phishing sites.
Are we building security tools only for ourselves? If we want to build for a wider community how do we identify and correct for our own biases?
(Note: saw this at WEIS so didn't take detailed notes, sorry)
Basic introduction to the rational agent (Chicago) model versus the bounded rationality model.
Construal processes, examples of survey design when the range varies.
Psychology of judgment and decision-making
Rational agent model: stable, self-interested, people maximize
Bounded rationality: mediocre judgment, unstable prefs, myopic
Constraints on rationality --> systematically irrational
Example: estimating everyday behavior
People will anchor in the middle of a given scale when estimating how much TV they watch
Measures of happiness affected by priming question of dating
Organ donation defaults
Rate of organ donation consent based on if it is opt-in or opt-out.
The highest consent rate among opt-in countries is 27.5%
The lowest among opt-out countries is 85.9% (Sweden); Belgium is 98%
People recall extreme situations. What do you recall when you think of dentist visit: cleaning, cleaning, cleaning, cleaning, cleaning, root canal, cleaning, cleaning, cleaning, cleaning, implant
Most people do NOT recall the cleaning
People anchored on their most extreme experience. People who identified a case as extreme adjusted to account for the extremity of their case.
Scratch - who is sensitive to relative and absolute comparison?
Winning: absolute happiness constant
Losing: happiness is relative
Summary: cognitive capacity, people recall extreme views, relative vs absolute comparison
The classic economics research assumes that there is a stable willingness to pay and willingness to accept, with individuals being of two or three consistent types
1. Privacy is like any other choice: the situation influences how much people care about privacy.
2. Weighing of choice depends on context: Hsee, Loewenstein, Blount, Bazerman 1999
Tversky, Slovic & Kahneman 1990
3. Low consistency and predictive validity of the tests, which predict "types" of people
Implications: Concern for privacy can be influenced by contextual factors
Procedure: ask people for identifying information, people will give them that
Ask a series of questions ranging in intrusiveness
Code, and then destroy email address
Manipulation: vary contextual factor between subjects (Ss)
Dependent measure: differences in disclosure rates between conditions.
In the first survey we trivialized the behaviors to make them seem light-hearted: smoking pot, sleeping around, driving drunk
In the second survey the behaviors seemed negative: relative to baseline, participants on the BAD website were more likely to expose information
Q: What urls did you use for the badness text?
Only answer if you have NEVER engaged in extreme behavior. So we had extremes that people have done (white lie) and have NOT done (murdered someone)
We predicted that not answering a question doesn't feel like admitting something. Allowing information to be extracted by not admitting is the same in information-sharing and economic terms, but the way you answer greatly changes the amount of information shared ("I have never cheated" vs. "I have cheated")
People don't naturally think about privacy. If you ask them point blank they will say that they do.
But if you ring alarm bells in some way they will become concerned.
Paper and pencil survey
CMU students asked for email, then 24 questions
3x2 condition mixed design, strong assurance, weak assurance, no assurance and privacy flags
There was little difference between no and weak assurance; weak assurance increases disclosure slightly. Strong assurance is a clearer flag -- it makes you think about it -- and greatly decreases participants' willingness to disclose.
New technologies that create a feeling of privacy: objectively they decrease people's privacy, yet they mute privacy concerns. For example, physical isolation when sitting at a computer.
People are getting cues that drive them in exactly the OPPOSITE direction we should be driving them in
One of the impacts of the Internet is to confuse otherwise clear social categories. For example, students use the concept friend in a different sense, so that someone who they have connected to on mySpace is not a friend, as I would have called a friend
Angela: I was looking for the benefit if you disclose, if you are using an economic model
George: People have a human need to share; look at the musical FAME
Angela: One thing people do when reasoning about privacy is ask: if there is no benefit, why would you disclose? But we have had some in-depth studies looking at the process model
George: We manipulated the number of people who skipped a question or answered a different way
David Livingstone: monkeys given options of working to be alone work to be alone. Private space is private space that is valuable because we are being observed. I gave other monkeys the option of giving monkeys - we like snooping and work hard not to be observed at the same time. My worry is that you measures are working against you. The problem is that empathizers are going to be interested if you are being phished. So the interest in the details of the local web page will not correlate if you are being phished.
David Livingstone: In general the concept of empathizing and systematizing as two gender types has been widely rejected. The real difference is between being detail-interested and global-interested, and this is more cultural than gendered. The Japanese are to the right of nurses in Scotland on the scale. For example, concentric circles cause an illusion about size, and Japanese subjects show a greater illusion than any Western person of either gender. If being a local processor makes you more likely to detect a phishing site, then there should be huge cultural differences; for example, the Japanese ought to be very bad at it.
Tyler: If you go back to the first thing you said, I think the experiment addresses that because you just identify phishing. The other point is that this is really about designing indicators more broadly so as long as we reach the entire population it doesn't really matter
David: The male/ female dimension is going to be completely overwritten by the culture issue
Zeck: If we have a session on privacy, security and phishing, then the panel design itself may indicate that information revelation is a bad thing. So maybe as we participate in this we come to believe information exposure is wrong. But is it wrong?
George: The point is that if we are privacy types and we reveal some information that should not change the type of person.
If a factor is positively correlated with cost and negatively correlated with benefits, and the result is the opposite of what we expect, then the theories themselves are not supported; we would argue that they are flawed.
Zeck: I agree that your experiment is remarkable but we do not know if people are over-revealing or under-revealing we are not ready to regulate.
George: It seems that we could have a regulatory regime that required indicators to people where privacy is more critical; this could be used to inform policy.
Zeck: Regulation is a blunt tool
What we are selling to some extent is fear. That is, take the smaller certain loss (the cost of security) as opposed to the larger stochastic loss (taking a risk by not investing).
Changing the fear sell into a greed sell by making security into a service: we take care of your IT and you take care of your business
ROI models indicate that when you invest in security you will get a real business return
But if security is essentially a fear sell, then you have to push fear hard, and making someone feel not scared trumps everything. FUD -- fear, uncertainty, doubt -- companies try to make security a really significant issue. In public policy we have seen much the same thing: using terrorism to get people to accept policies that they would not otherwise accept.
Inference theory -- people tend to judge the reason for an action by the results of that action. If you close a door and the noise lessens, observers infer you closed the door to lower the noise. Why will people say Islamic terrorism is different? They say the motivation is to kill us all. But bin Laden's aim is not to kill us all; he has well-written aims. Since the effect of the attacks is to kill people, it is assumed that the goal of terror is to kill people. But the stated goals are primarily to get US troops out of Saudi territory.
The rules of fear: how does fear change historically and across geographic spaces? The central rule of fear, which pertains both online and to terror: increasingly, fear has become dissociated from its object. We increasingly worry about the fear of something rather than the something itself. Every western police force has the same mission statement, as if the same person wrote them all:
1. Fight crime
2. Fight the fear of crime
This is an interesting distinction. Most police forces spend more time fighting the fear of crime than actual crime. It is harder to fight actual crime and the actual sources of people's insecurity. Essentially, reducing the challenge of security to that of impression management is what is happening online and what is happening in modern policing. When there is a bomb there is an intact crime scene with the guys in the white suits. So they leave the crime scene up for two weeks to make it seem like Someone Is Doing Something.
Impression management encourages us to produce fantasy documents, like security and TSA documents. It is as if one person wrote every risk management document: "We are committed to keeping our citizens/students/customers safe." The problem is that these fantasy documents not only do very little to provide real security, but they also lead us to do things that are expensive and arbitrary.
"Invitation to terror" the main problem facing us -- is that terrorism is increasing distinct from the response
We beg to be terrorized by being vulnerable. Bush said after 9/11 "America the vulnerable". Churchill would never have said "Britain the vulnerable". Do not begin with a sensibility of weakness and powerlessness. Once you begin thinking that way, everything becomes one big target. By spending money to reinforce, we are inviting others to terrorize us.
The major problem that we face is that the impact of terror is defined by our response to it. If people do not want to be terrorized then they won't be. If people are otherwise encouraged to wet their pants then they will be traumatized. Look at Israel, the UK and then the US responses to bombings.
Instead of a vulnerability response we should have a resilience response. We don't need to reorganize our lives but we learn to live with it. We don't acquiesce.
Vulnerability-led responses intensify the power of attacks. They can also perversely increase the scope of the threat, thus requiring more investment in responses. An example: "License to hug". The way children are taught to think about pedophiles means they are treated as vulnerable to every adult and expected to respond by being terrorized. Because checks are introduced, parents do not trust other parents who have not had a police check. By the end of next year one out of three adults in the UK will have been police-checked. This means we not only worry about pedophiles; we now worry about the FAR FAR larger number of people who have not been police-checked. Today that is a calculation that people make.
We are wearing down the resilience of the community by creating these new false security issues.
All my colleagues have an addiction to passing on emails that give virus alerts.
Bruce drew a distinction between a feeling of security and actual security. This can be mediated if we make sense of the actual and the feeling through creating meaning. To what extent can we give security meaning? Advice works to the extent it means something to people and corresponds to their lived experience. How do we collectively discuss the concept of meaning so that security is something that means something to us, rather than a set of procedures on the back of a fantasy document?
What I want to talk about today is work on threat assessment. This aligns well with what others have said today.
DHS has two major folkways of threat assessment:
1. Human behavior is unpredictable -- defend against the worst possible attack, a minimax strategy
2. Zero sum game assumption -- value to opponent is inverse of loss to us, mitigation depends mostly on potential cost to us, weighted by capability
Perspective taking is not the same as empathy.
Understand that "what is the worst that can happen to me" is not the same as "what my opponent wants"
Think about a terror organization's CEO: he has values, options, and beliefs.
He has goals, resources, and missions he wants to address
We develop these things using proxy analysis, using people who would be considered area experts.
This is an overview of a methodology that uses an event tree to determine what the terrorists' decisions are and how they might attack. We then scale the alternatives with attributes designed to measure each of these objectives. We also make uncertainties explicit and incorporate them. Some uncertainties are inherent in the context of the problem, and some reflect lack of information about the actual terrorist. Ultimately we use a Monte Carlo analysis to come to a set of conclusions.
We develop beta distributions capturing both our own and the terrorists' uncertainty about their ability to run the attack.
Consider three objectives: strengthen Al Qaeda (net of operational expenditure), instill fear and do economic damage, and kill people.
Some types of attacks kill a lot of people but don't instill a lot of fear.
The final result is a risk profile over the attack probabilities.
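A minimal sketch of the kind of Monte Carlo step described above, assuming Beta-distributed success probabilities and a weighted-attribute utility for the attacker. Every attack name, weight, and distribution parameter here is an invented illustration, not a number from the methodology itself:

```python
import random

random.seed(0)

# Hypothetical attack options: (alpha, beta) parameters of a Beta
# distribution over success probability, capturing the uncertainty
# (ours and the attacker's) about the ability to run the attack.
attacks = {
    "truck_bomb": (2.0, 5.0),
    "cyber":      (1.0, 9.0),
}

# Illustrative attacker utility weights over the stated objectives.
weights = {"strengthen_org": 0.3, "fear": 0.4, "casualties": 0.3}

# Illustrative attribute scores (0-1) for each attack on each objective.
scores = {
    "truck_bomb": {"strengthen_org": 0.5, "fear": 0.9, "casualties": 0.8},
    "cyber":      {"strengthen_org": 0.7, "fear": 0.3, "casualties": 0.1},
}

def expected_utility(attack: str, n_draws: int = 10_000) -> float:
    """Monte Carlo estimate of the attacker's expected utility for one option."""
    a, b = attacks[attack]
    utility_if_success = sum(weights[k] * scores[attack][k] for k in weights)
    total = 0.0
    for _ in range(n_draws):
        p = random.betavariate(a, b)  # draw an uncertain success probability
        total += p * utility_if_success
    return total / n_draws

# Risk profile: normalize expected utilities into attack probabilities.
eus = {atk: expected_utility(atk) for atk in attacks}
z = sum(eus.values())
profile = {atk: eu / z for atk, eu in eus.items()}
print(profile)
```

The point of the sketch is only the shape of the computation: uncertain capabilities in, a normalized risk profile over attack options out.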
Your chances of dying in a terror attack are basically nil
If you include London, Bali, Madrid and everything else, there are about 250 deaths per year. In the US, the FBI, after spending years and tons of money, found no Al Qaeda in the country.
"What really bothers me is the things I am not seeing. I know they are here and there are thousands of them." This is policing by the "I think therefore they are" policy. It is not responsible.
2 billion foreign nationals have entered the United States since 9/11, many from Canada.
If you go to Winnipeg you won't find any. Does this just prove how clever they are? Does it prove they have been succeeding? It proves they have not been trying.
If you blow up a hotel in Jordan then your sympathy collapses. Now in Jordan and Pakistan, Al Qaeda has support in the single digits, particularly after the Bhutto assassination. The operations they have pushed have alienated the populace. People don't like being blown up. Now they are basically isolated and in Pakistan.
The response to terrorism has been greater and more destructive than what terrorism itself has done in the US. The costs of terrorism include counter-productive policy. The human and economic costs of the war in Iraq have been vastly greater than those of 9/11. The cost of fear has been tremendous. People worry themselves to death. For example, 20 years after Chernobyl, the radiation had killed 50 people. But the big health cost has been in anxiety, suicide, inability to work, depression, and reduced life expectancy - from fear and anxiety, not from increased background radiation.
Because the TSA has increased the hassle of flying, more people are driving short-haul routes instead. The TSA has thus been responsible for more deaths per year than Al Qaeda - not that these are morally equivalent, but in terms of numbers and lives the people are just as dead.
Terror fear has created an industry of fear. There is a reporter's duty to report things as they are, but this is not happening. People sell things to the government. Entire industries hype fear, and fear is costly. If it is illegal to cry fire in a theatre, then why is it legal to cry terror?
Michael Chertoff said last year that there would be an attack in the summer of 2007. Why is that irresponsible behavior not actionable?
Three scary things:
1. Atomic terror
There are twenty independent variables, and if each chance were 1 in 3 then your chances are 1 in 3.5 billion. But your chances are not 1 in 3; these are very difficult barriers, each with a much lower chance than 1 in 3.
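The 1-in-3.5-billion figure is just twenty independent 1-in-3 chances multiplied together:

```python
# Quick check of the 1-in-3.5-billion figure: twenty independent hurdles,
# each with a 1-in-3 chance of being cleared.
p_single = 1 / 3
n_hurdles = 20
p_all = p_single ** n_hurdles   # probability of clearing all twenty hurdles
odds = 1 / p_all                # express as "1 in N"
print(f"1 in {odds:,.0f}")      # 3**20 = 3,486,784,401, about 3.5 billion
```

And, as noted, if each hurdle is actually much harder than 1 in 3, the compounded odds shrink far further still.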
2. Cost and benefits
We don't know the cost and benefits of homeland security. What is the value of homeland security?
If the cost per saved life is $1-10M, then a measure is generally considered acceptable in regulation.
In homeland security the cost is about $180M per saved life - ten to twenty times other arenas.
3. What should we protect?
What needs to be protected from terrorists?
The number of targets is essentially infinite
The probability of one target being hit is roughly 0
Protecting one target means transferring risk to another, given the WTC is gone
So we could spend infinite money to protect
I am representing not only all the arts but also humanities. I will try to do it. Amazing photos. GO LOOK!
I took photos of TerrorTown (trademark), where SWAT departments are sent to train in how to assault bioweapons and radiological facilities. The entire town was bought. Some people who used to live there now play roles in the training. Some of them play victims and some of them run around saying "Allahu Akbar".
I am a well-informed private citizen. I try to function as a canary in a coalmine in democracy. What am I allowed to see?
DisasterCity (TM) in College Station, Texas - but this is no longer used to train fire fighters. There are huge amounts of money involved. In all of these there are students observing the exercises, with narration. This is a theatre exercise.
Training in HAZMAT suits seems like a good idea.
What is real? WMDs? Are police officers militarized?
Bruce: "Why Terrorism Doesn't Work - Int'l Security - 7% of insurgencies are successful. http://www.schneier.com/essay-176.html
Frank: The real terrorism is not Al Qaeda but the 2nd or 3rd generation Muslim who decides to put a bomb somewhere. There is a lack of meaning on both sides. When Bush says, "Why do they hate us?" he is saying he has not got a clue what is going on. And if you look at the jihad websites, they don't know what they want. If America pulled out of Arabia and Israel ceased to exist, nothing would change. The jihadists have subjective intent and a kind of dynamic force.
Q: If the terrorists are a spent force, why aren't the politicians declaring victory?
John: First, they are afraid of regeneration. And as soon as the head of the CIA says that they have been pushed back, you don't want to predict things getting better.
Bruce: Running on fear of terror is just as good as running on fear of communism. There is cognitive dissonance: how did we win when we never really lost? So declaring victory creates a situation where we need to have an uncomfortable conversation.
Q: IRA is a good example. They lost public support but it took a public truce to believe it had come to an end. By giving the failed IRA credit for coming to terms it became really believable.
John: There is a danger that if we declare we have won
Q: The REAL IRA still blows up schools but they have no public support
John: What happens when you say this to real people?
Paul: The economic aspect is huge. There is a motivation to make this never-ending. The domestic preparedness industry is huge.
Richard John: DHS is moving the war on terror by making it more about disaster preparedness
Economists try to understand the tradeoffs
Chicago school --> formal micro modeling
BUT: models assume rational agents,
Facebook study: no connection between reported privacy prefs and FB behavior
Also: larger networks have fewer people visible
Over time, people made their profile less visible
Framing example: $10 anonymous gift card or $12 identified gift card
Four conditions in a between-subjects design: endowment conditions
Endowment 1: $10 gift card with the study, then choose cards
Endowment 2: $12 gift card with the study, then choose card; that is, they can switch or not switch after the first experiment
First endowed with privacy, then endowed with the extra $2. In theory these should all be the same, if you believe the Chicago school. In practice the selection was very strongly a function of which card subjects began with: half the people who started from the anonymous condition changed (they would be willing to pay $2 to protect privacy?).
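A small sketch of why standard rational-choice theory predicts no difference across the endowment conditions: whichever card a subject starts with, keep-or-switch collapses to the same final menu. The condition labels here follow the $10 anonymous / $12 identified description above; the code is illustrative, not the study's analysis:

```python
# In every endowment condition the subject ends up choosing between the
# same two bundles, so under standard theory the choice should not depend
# on the starting card.

FINAL_OPTIONS = {("anonymous", 10), ("identified", 12)}  # (card, $ value)

def final_choice_set(endowed_card: str) -> set:
    """Whatever card you start with, you may keep it or switch."""
    assert endowed_card in ("anonymous", "identified")
    return FINAL_OPTIONS  # keep-or-switch collapses to the same menu

# Both endowment conditions present an identical menu...
assert final_choice_set("anonymous") == final_choice_set("identified")

# ...so the rational-choice prediction is equal selection rates across
# conditions. The observed result was strong status-quo dependence instead.
```

The interesting finding is precisely the gap between this prediction and the observed behavior.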
In one Facebook study the visibility of a network is a function of the size of the network. Around 60% of people make their profiles visible
From 2007 to 2008, visibility of Facebook profiles at CMU decreased 6%; the trend is extremely linear even if the change is small. Fewer people were making their profiles visible.
This illustrates a behavioral economics of privacy.
For example, why do people seek bad reputations? For example people who post themselves drunk and wild.
"30 reasons a girl should call it a night"
What we think of as a bad reputation can function as a good reputation, because fame is associated with the bad as well as the good, and it is easier to be famous for badness.
Jean (not out loud): these are culturally age-appropriate behaviors. She gets drunk, falls in the bushes. She is 20. We frown and shake our heads. We are 40+.
Privacy engineering, broadly writ.
Whole system approach, where the whole system is privacy
Privacy and security are not a zero sum game
Hard cases: crime investigation vs crime prevention
Sharon Beshenivsky case - the murderer of a police officer was found through license-plate tracking
Millions of people take pictures every day. Some of them are brown. Please do not shoot them
If we had meaningful computer security, we could have privacy - if we had privacy requirements to implement with that computer security.
Pandora's Cat is out of the barn and the genie won't go back in the closet.
Voting is a paradigmatic problem: voting presents total-system problems with requirements that interact at the system level. It requires confidentiality and privacy, integrity, system integrity, audit trails, non-refutable dispute resolution, and forensics. Every step in the voting system is a weak link. Instead of defense in depth we have weakness in depth.
Privacy exacerbates all the other design difficulties. Unfortunately the entire process is vulnerable: voter registration, authentication, voting, authorization, counting, certifying, resolving disputes. Internet voting is really, really, really risky.
The voting machines code is proprietary, the data formats are proprietary, there are no audit trails, and there is no security. Diebold provided software to 17 counties in California and in NONE of those counties was it the same software that was certified.
There are however functional cryptographic mechanisms
-identity based cryptography
-attribute based encryption is basically crypto that enables communication with exactly those people holding the right credentials (enrollment is obviously not the crypto problem)
BUT the problem is that, for example, Diebold has used the same crypto key for a decade and built on Windows XP, so anyone capable of a Google search can alter the internals of a Diebold machine at any time -- election audit etc.
A comment on global versus local approaches: all our problems, not just computer security, need techniques of holistic system development.
We need a holistic approach to
Energy: future oriented versus shortsighted
Agriculture: natural/ industrial
Health care: prevention vs cure
Systems: principled vs unprincipled
In fact, one voting machine company has five convicted felons who cannot vote working on and controlling the code on the machines!
Railroads in the 1840s were not my original interest; I fell into it. Railroad investment in the 1840s was the subject of real capital investment. By far this is the greatest technology mania in the last five centuries. It is the only case where private investors in pursuit of profit invested more in public infrastructure than the nation state invested in defense.
Cumulative investment in railways was 6 trillion dollars.
Maybe $100B was spent in building out fiber in the US. This is 1% of GDP.
There are delightful comparisons. In the US the telecom mania peaked from '98 to '01; then creative accounting managed to keep it going for another year.
Railway construction similarly boomed for a few years and then ceased, with market manipulation keeping it up for another year.
There was a pretty strong collapse in both cases
One of the issues is the perception of risk. 9/11 was terrible; we lost 3,000 people. We have lost that many Americans in car accidents every month since.
During the 1840s more people died in London horse accidents than died in all railway accidents in the entire country but people were fixated on railroad casualties so there was a drive for safety regulation.
MIT manages to practice extreme price discrimination, from 0 to $40k a year. That has been understood to be good.
All the stuff we hear about net neutrality is not new. Social reactions to railroads and railroad practices were reactions to price discrimination. Railway price discrimination was so extreme that third class would be right behind the engine and not even have a roof on the car (!!!!). First class was complete luxury. They tried to make third and second class something you would avoid if you could in any way afford to. The result was very serious regulation that may have been counterproductive.
There is an economic incentive to price discriminate and human dislike of it. People will accept some price discrimination but reject others.
An entire series of very appropriate but impossible to copy cartoons about price discrimination in the railways in the post-mania period (post 1840). One about how heavy the resulting regulation was.
Privacy is the opposite of price discrimination.
We Dilbert types are good at it and like it but most people are not.
Security management is risk management. We should understand experts from other domains:
P Lio, L Bianchi, D Korff: DNA privacy in forensics vs genetics and genomics
M Gill - inside the head of the fraudster, thief, and murderer
He interviews convicted criminals and gets fascinating insights from them
P Wilson - the Real Hustle
By the milliliter, printer ink is more expensive than champagne. Peel off the tag, open the box, put it in a tin foil bag; you could zap the tag yourself, go through the side of the gates to avoid ringing, or go through the gates alongside a lady with a stroller.
Fake ATM scam, the $5000 ring scam, lots of scams exploit
Understanding people is key to security (I agree! See
APP CRASH LOST NOTES
Matt: Election officials criticize people who try to fix election machines by pointing out errors, claiming that they were undermining democracy by showing the machines were flawed.
Peter: If you write something plain that says honestly "This is terrible documentation" and "These flaws have been known for ten years and are not fixed", the companies and the SoS screamed.
The fact that the SoS had the vision to demand that these machines be fixed or not used is incredible. The vendors have all along been in bed with the developers (?) and often the election commission. The election officials were completely drinking the kool aid. The main claim of the voting companies is that anyone who could view the code posed a risk. Their arguments were completely specious, but the election officials do not know that. The only solution is openness and scrutiny; the fact that CA was able to reclaim the voting system is amazing.
Matt: We were very careful to negotiate in OH and FL that the vendors did NOT get to preview the report.
Q: Whom do we want to protect information from? The photo of the drunk woman is an example - who cares what she thinks?
Alessandro: Privacy as a need. It is selectively revealing information to other people. The ability to compartmentalize is very interesting.
Andrew: It is the issue of control and our ability to control what is happening. The Chicago School is opposed to privacy because it impedes the flow of information.
Alessandro: There is information that you would share with a stranger but not with a friend, like what band you like.
Andrew: CCTV reduces a certain amount of crime, but it is premeditated crime, and it is mostly shifted elsewhere.
Randi: Identification of criminals is a function of releasing crime.
Andrew: If we took all the money we put into CCTV and put it into cops or lighting, we might have had a different impact.
Bruce: The counter argument to keeping flaws secret is that it is inherently
I try to think about the kinds of environments in which our own ancestors grew up and what kind of risks they had to face.
It was discovered some years ago that pigs, when allowed to work for tokens they could exchange for food, would work for the tokens and then bury them rather than exchange them.
There are distinct aspects of our culture that come from our own history. We cannot help making sense of what we see now through the lenses of our evolutionary and cultural history. Neither the Internet nor terrorism exists in our evolutionary history. It is common to quote Einstein: Everything has changed now except our way of looking at things.
How are we seeing things through the lenses of the past in two broad issues: terror and the net?
We have been discussing them in the same breath. But these are very different kettles of fish. The problem of Internet security is that people are too ready to trust, to assume that everything is okay. On the other side we are too ready to accept restrictions and impositions, too ready to humble ourselves before the policeman and the machines, because we overestimate the threat.
What does the Internet look like to someone who has evolved as we have? And what does terrorism look like? And what is it about those two things that biases us to be over- or under-secure? I am thinking about the domestic issue. The problem with the Internet is that it is much too homely. It is home computing. We treat personal computers as if they were friends of ours. They put us in control. In a sense all the triggers are being pressed to say that this machine is part of our friendly environment. It will be as difficult to get us to take precautions against the Internet as it will be to get us to take precautions against being bitten by our own family dog. Terrorist acts, by contrast, are alien creatures in an alien environment, outside of our domestic environments. We have no control over them. They come from members of out-groups. These people do NOT want to manipulate our reputation and take our money (and we have no evolutionary connection to money); these terrorists want to take our lives. So no wonder we over-respond. What are we going to do?
We labeled this session "How do we fix the world?" We want to change our illusions about the safety of domestic computers, and the lack of trust in public spaces in which we could be attacked by terrorists. We want to make home computing seem more dangerous than it has seemed to be. We want the Internet to look like something that is likely to bite us. Our default assumption is that there is a padlock on it; we ought to have a shark instead of a padlock, because we will be bitten. Natural selection has shaped the way we deal with information not to make us happy but to make us fit. Many things which regulate us make our lives quite miserable because of the cost in the long run.
Make what is seemingly safe into something clearly risky, in ways which correspond to the ways we dealt with risk in the past. Nature has designed us not to be happy but to be safe. OTOH we know we can let our guard down about pain and fever because the world has changed.
In terrorism the world is not as we imagine it to be. It is as if we were assuming saber-tooth tigers are about to attack us from the next valley. We can afford to exchange our illusions for ones in which we reduce the alarms. We need to think about what these threats mean to us as evolved creatures, how they evolved in reality, and to what extent we can provide technical and social features that shift our concerns so that we are not over-cautious or under-cautious.
In both directions we need to get to a point where security is not in the front of our minds.
George: Is it possible that our response is a function of how long the threats have been around? In Israel and Ireland the populations did get used to terrorism. We are not "used to" Internet threats.
Nick: In Britain, the attack on the underground evoked the great British spirit and we were traveling on the trains the next day. I would like to say that as a result the level of government over-reaction was lower, but in fact the level of paranoia and over-reaction is just as bad as in the US. I am not sure that we have learned the lesson from history.
In terms of the Internet: given its newness, perhaps we will learn to be less trusting and less free with our information. As everyone comes online, everyone will learn.
Bruce: Young people are more facile in dealing with Internet misinformation.
Nick: Physical devices are much more like social companions; they have personalities and reputations. You can see that the computer lacks a sharp edge. Because of the very deliberate attempt by Jobs to make computers friendly, they have succeeded too well, and it is a paradox.
Chip: Nick focuses on the user. What about Microsoft and sloppy code writing? Might we attack the problem by looking at the code, and thus making the net as safe as it appears?
Nick: We need to persuade people that code is not safe at the same time as we are making it clear that the code has the potential for hazard. The kind of thing we are liable to lose is, at worst, money; it is not like being burgled by a human agent. I suspect a lot of people in this room lost money in the stock market in the past two weeks. But if they had lost far less in a robbery with a stolen TV, we would be much more down in the mouth.
Ralph: (couldn't hear) Internet threats don't touch the user (??)
Peter: Chip (i.e., Ralph) is ignoring all the things like spam, phishing, and everything else that do affect you.
Q: Whenever the user gets blamed the system is abusing the user. How do I take something that sits on my lap and make it scarier? How do I take the face of terror and make it friendly?
Nick: We can take away a lot of the clues that scare you. Everything is set up to say this is a war zone. The very least we can do is stop playing into the terrorists' hands. We have manufactured the fear in airports. It is a very interesting question: why do they attack airports? Because people defend airports. Airports have become the iconographic place where the theatre of terror is played out. Make it less obvious that we are playing their game.
Rachel: Making computers scary won't happen; vendors won't do it. I am aware of the dangers, but if that fear were in my face whenever I was online, I would hate it.
Nick: Consider the analogy of the car - drivers feel too safe, even though they have other people's lives in their hands. Like bad drivers, they can be frightening. Young people just wander the net and press buttons, and I always have to clean up the computer after they go on it because it is wildly infected. They get talks in school and are completely unaware.
Jean: Peer pressure plays a critical role in stopping drunken driving. People who clean their cat food cans for recycling nonetheless participate in organized crime and spam by not patching their machines. How can we use peer mechanisms?
Nick: Peer pressure is a way to do it. But it has to work very differently in different places. In the US it is quite strong; in Denmark it is very, very strong. We have rogue users of the net in some states. Peer pressure would not work very well in Russia and not at all in Turkey.
There has been much discussion today about the need for more computer security. I don't know if we need any of it; there was no empirical evidence. You have very vivid imaginations and are experts. Imagine we are standing in front of the US Congress, and we see so much going into terror research and so little going into computer security. We want to say "give more to computer security". There is a single center for the study of terror at the KSG. They have as many people in that center as are in this room. They have lots of ways that terrorists can kill you; my least favorite is the one where a tanker is blown up in Boston harbor. These are very frightening events, and they certainly argue effectively for resources, in part because of the potential effects of the attacks. Where are your numbers? Where is your scenario?
Why is this room so unconcerned with terror and so concerned with computer security? It is because they worry about terror and you worry about computer security. I study accidents, and my friend Jean Camp quoted me as saying that we have too few airplane accidents. I agree. The dollars involved in actual accidents are very salient. The dollars in accident avoidance are not so salient.
Should we be spending more to avoid accidents, or should we be spending less? Imagine you have to keep your total spending the same. Should we spend money on voting systems or padlock systems or curbs like Jean was talking about? I originally saw this when we talked about saving lives through environmental policy: we were spending 10,000 times more than in other areas, which makes no sense.
If you are right about terrorism, then the biggest cost is the anxiety it creates. My concern about computer security: I ask Allan Friedman what to do. We do that in medicine. We do that in terrorism. But most of the people in this room are their own computer security experts, so you know all the problems with computer security, and so you have anxiety about it. One of the worst things about anxiety is that it is very nonlinear with probability - prospect theory here. If we have made a system 100x safer, my worrying about it will not decrease 100x. That puts us in a pretty hard position with respect to policy.
Paltering (related to the word paltry, meaning low-level deceit or fooling) is the big problem with computer security. When my bank added a new system where I saw my picture, the bank told me it was the best possible security. This is all paltering - tricking people. I don't know much about computer security. But in an area I do know - the subprime crisis - these were not Russian criminals. This was a Swiss bank saying "This fund has adequate capital and concerns about its liquidity are not founded." And two days later it closed.
If you have a symbol that proposes that your bank is safe that places a negative externality on me because my bank is just as safe but now I need to get the symbol in addition to being safe.
The problem of differentiating levels of risk: I would like to see more differentiation of risk. Let me give you an example where society has gotten this wrong. Consider sulfur dioxide and soot. Both are regulated at power plants and are considered interchangeable. Soot is much worse for health - it gets into your lungs. SO2 actually mitigates global warming, while soot is one of the worst things we can do for global warming. But we lumped them into one big category.
There are some areas where your taking less caution makes things better for me. My son goes to Central Park at night because bunches of people go to Central Park at night. How about your burglar alarm? Does it decrease burglary or just make the burglar come to my house? Before we could afford one we got the old metal plate with the fake alarm sign. It worked well as signaling.
There has been something said about hackers and what hackers will do. In England there are many, many radical imams and much talk of how terrible the country is. In the US, Muslims earn more than the average American; there are one or two recorded comments about this. Our domestic Muslim population is happy. Most of what we have been discussing here is security through technology. I want to endorse something that Jean talked about: the goal should be to catch them before they do too much damage, and what we need is collaborative filtering. Someone put up a figure with Bayes' Theorem: one swallow does not make a summer. One event does not change our conclusions, but three events do. So I hope you will worry about systems like that. I have done some work on eBay, which is crude, and many things are wrong with it, yet it works well. I only buy things and have never been ripped off - Yhprum's Law says some systems which should not work do. I met some Dutch civil engineers. They explained that Dutch engineering was superior because the American approach to risk is to over-power the system: we need two 4x8s, so we throw across three 4x8s. The Dutch would calculate exactly and put in two 4x6s. Their system is much more exact; ours over-powers by an inexact amount. But the Dutch approach is better because our back-of-the-envelope calculations are sometimes off by a factor of two, so we get it wrong more often. Don't throw resources at it; just build it better?
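The Bayes' Theorem point - one event should barely change our conclusions, while three events should - can be sketched as a simple Beta-Bernoulli update. The prior and counts below are illustrative assumptions, not numbers from the talk:

```python
# Illustrative Beta-Bernoulli update for "one swallow does not make a
# summer": one bad event barely moves a skeptical prior, but three events
# move it noticeably.

def posterior_mean(prior_a: float, prior_b: float, bad_events: int) -> float:
    """Mean of the Beta posterior after observing `bad_events` bad outcomes."""
    return (prior_a + bad_events) / (prior_a + prior_b + bad_events)

prior_a, prior_b = 1.0, 99.0              # prior belief: bad events are rare (~1%)
after_one = posterior_mean(prior_a, prior_b, 1)
after_three = posterior_mean(prior_a, prior_b, 3)
print(after_one, after_three)             # belief shifts more after three events
```

This is the shape a collaborative-filtering reputation system exploits: evidence accumulates, and a single report stays below the action threshold while a cluster of reports crosses it.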
Q: You could blow up an LNG tanker and take out a city. But if you could bust the transformers, which you will very soon be able to do, that is a bigger security issue than blowing up Boston.
Z: Lou Branscomb is very big on the fact that we can take down the power system pretty easily. My point is that we don't know.
Mike: Without putting the numbers down we are not justifying that the level of the problem is commensurate with the level of worry.
Q: I suggest that economists are not so on the ball
Z: I did not say economists were on the ball I said that other people were not.
Q: My suggestion, given what terror insurance costs, is that economists make people more afraid.
Z: My guess is that you do not like economists. The insurer moves in to sell insurance after a disaster. Insurance prices and NY real estate show that economists are wise to this game. The fact that the market gets something wrong does not mean the economists get something wrong.
I don't have a huge amount to say, because everything I wanted to say has been said by others, perhaps even a few times. We do have some numbers on the cost of crime; we need more numbers. Describes having founded WEIS and seeing economics of security grow into a field.
When I was working on the second edition of my book, it became apparent that the psychological aspect of the dialogue was hugely important. We have seen an enormous increase in deception-based crime: as the systems improve, the attackers go after the human element. I have a couple of book chapters online.
The big question I think for us now is what do we do now? How do we move forward? How do we instrument forward motion? How do we build a security and psychology community? We have to evangelize and talk about new problems. What more do we need to do?
Peter: You need to do what you did with economics is to carry it forward for the next ten years.
Dave: In doing interdisciplinary research, one question my timid pre-tenure colleagues ask is whether they can get tenure. Now that you have been running the conference, where is it getting published? Are people succeeding? Is it a successful publication venue?
Ross: Computer scientists are judged by different measures, so we have pre-proceedings. It was important to the CS people that we have proceedings, so we tastefully embed the pre-proceedings in a USB stick. Now that people have written papers and gotten them into journals, it seems rather successful. What we have done to boost that is to publish a number of edited books.
Q: One of the major questions about the Internet is trust. Who can you believe in? Who can you trust? You can come at the problems in different ways, and the issue of trust is a good one to investigate. One of the most useful ways this discussion can move forward is if we pose a research question to everyone and then demand their answer. That makes the conversation easier.
Brashir: One of the things this workshop could do is help us understand the relationship between these problems and the solutions. Then we should intertwine the problems and produce solutions. Engineers believe in the lifecycle, but in fact the lifecycle is much more dynamic than iterative.
Dave: I have been thinking about the possibility of focusing on crime prevention, and while I do not object, what I want to know is the scope of crime. Do the people in this room, with their range of skills, believe they have the skills to address financial fraud and state-sponsored espionage?
Andrew: Following on Richard, I wonder how much paltering has been going on. People have been talking about phishing and pharming. A good start would be a vocabulary and a guide to the terminology you are likely to see in the talks.
Ralph: What we do not see is the potential of these attacks if we are fighting a war. A missile can sink a ship. Inability to fight is a major issue.
Alma: I want to respond to you both at once. When security people use the word security, we may not mean the same thing. A focus on security can mean anything from stopping bad things from happening to allowing good things to happen. You get the misconception that it is all about the crypto.
George: If you want to attract outsiders, you have to tell them what the interesting problems are. One thing is whether I notice the picture when I am at my bank's site, but would I notice if it were not there? Can psychology contribute to noticing the dogs that don't bark? Richard Z has the question of how to allocate funds across computer security. We need a clearer picture to enable people with disciplinary tools to get engaged.
Rachel: I really want to focus on online crime. The specific problem I am interested in is how infections spread through browsing: which humans we are thinking about, and how they interact. The humans we have to worry about are not only the victims but also the middlemen and perpetrators.
Nick: We should have an agenda to come up with something but we need to more clearly define the problem.
Zeck: One thing that would help me a lot is if everyone had to write a ten-page paper beginning "by security I mean....". In the last half dozen years, through serendipity, I worked with two teams of computer scientists. Each of the young people I was collaborating with got tenure; that is the way it gets done. Here is a problem, work with me.
Paul: The issue of age just touched the relativism of security and safety. Maybe that is more in the political science realm, but what do we want relative to what is safe and secure?
Ross: America is the center of gravity in behavioral economics. We won't hold all of these here in the future.
ALAS!!! MISSING MY FLIGHT!!! NOTES END HERE!!