Decisions. We make them every day. It all starts when we wake up – our brain begins its daily routine, the adventure into the known and unknown. We decide what to have for breakfast, what to wear, whether to take the bus, drive, or walk… Some of these decisions are trivial and routine-based. They no longer require any cognitive effort.
Every day I get up, shower, have my cereal, and drink a cup of coffee (I wish my breakfast looked like the one in the photo…)
However, some of our decisions require more. For instance, when we want to paint our house, we have to decide which paint color will be best – so we consider the decision's outcomes. Maybe it should be pink because of the newborn child, or maybe white because it goes well with everything, or maybe we want to stay trendy and use a different color on each wall?! But color may not be the only factor influencing such a decision. We may also consider how much the paint costs, its brand, the reputation of the store, or the opinion of a store adviser or a friend. The factors involved in the decision-making process can be countless. But what does this have to do with privacy?
Well, our privacy decisions are very similar to the decisions about our day-to-day activities. Because assistive technologies are present in almost every aspect of our lives, the number of privacy decisions we face keeps growing. And because these technologies are so ubiquitous, we may frequently not even be aware of the privacy choices we make.
Although I would like to, in this post I am not going into the details of rational decision making, the theories of behavioral economics, or the various psychological biases that accompany the decision-making process (trust me, I will be writing a lot about them in the near future!). Here, I would like to emphasize how difficult it is to make decisions about our digital information privacy. As I tried to demonstrate in my previous post about the history of privacy, privacy is a complex notion that requires consideration of context. Consequently, privacy decision-making is also complicated.
I promised not to talk about theories; however, I must mention at least one. Past research suggests that rational decisions may be based on a cost–benefit calculus. For instance, according to expected utility theory (ExpU) – a normative theory of how people should make decisions – each possible decision outcome is weighted by its probability, the weighted averages are compared, and the preferable option is selected (this is a very simplified description). However, as Kahneman & Tversky demonstrated, this theory produces faulty predictions in many real-life situations. It is also unclear whether ExpU is a suitable theory for predicting people's choices.
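To make the ExpU calculus concrete, here is a minimal sketch of how such a comparison might look. The scenario and all the numbers (probabilities, utilities, the "share data for a discount" framing) are purely hypothetical illustrations, not taken from any study:

```python
def expected_utility(outcomes):
    """Probability-weighted sum of utilities over all possible outcomes.

    `outcomes` is a list of (probability, utility) pairs.
    """
    return sum(p * u for p, u in outcomes)

# Hypothetical privacy decision: share personal data for a discount, or not.
# The probabilities and utilities below are invented for illustration only.
share = [(0.9, 10), (0.1, -50)]   # likely a small benefit, small chance of a large harm
dont_share = [(1.0, 0)]           # nothing gained, nothing risked

print(expected_utility(share))       # 0.9*10 + 0.1*(-50) = 4.0
print(expected_utility(dont_share))  # 0.0

# Under ExpU, the option with the higher expected utility is chosen:
options = {"share": share, "don't share": dont_share}
best = max(options, key=lambda name: expected_utility(options[name]))
print(best)  # "share"
```

Notice the catch the post describes: this calculation only works if we can actually enumerate the outcomes and assign them probabilities and utilities – exactly what is so hard to do for privacy harms.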
Protecting privacy requires careful balancing, as neither privacy nor its countervailing interests are absolute values. Unfortunately, due to conceptual confusion, courts and legislatures often fail to recognize privacy problems, and thus no balancing ever takes place. This does not mean that privacy should always win in the balance, but it should not be dismissed just because it is ignored or misconstrued.
[…] privacy is a form of protection against certain harmful or problematic activities. The activities that affect privacy are not necessarily socially undesirable or worthy of sanction or prohibition. This fact is what makes addressing privacy issues so complex. In many instances, there is no clear-cut wrongdoer, no indisputable villain whose activities lack social value. Instead, many privacy problems emerge as a result of efficacious activities, much like pollution is an outgrowth of industrial production. (Daniel Solove, 2006)
In the case of privacy, the application of this theory is even less plausible. Not only do people often not understand privacy as a notion (or their perceptions of privacy vary significantly), but the possible outcomes of privacy decisions are also hard to recognize. For instance, predicting the potential risks of disclosing information may be very cumbersome, because the harms that may result from such activity are often abstract or intangible. It is much easier to evaluate a decision outcome as risky or bad if we know that it will result in financial loss, harm to our family or friends, and so on. As Daniel Solove indicates in A Taxonomy of Privacy, privacy harms may be invisible and intangible, which makes them difficult to recognize even for the judicial system. And if that is the case, imagine how cumbersome it is to predict what will happen to us when we sign up for new applications, share information on social media, and so on.
One common pitfall is viewing “privacy” as a particular kind of harm to the exclusion of all others. As illustrated throughout [this] Article, courts generally find no privacy interest if information is in the public domain, if people are monitored in public, if information is gathered in a public place, if no intimate or embarrassing details are revealed, or if no new data is collected about a person. If courts and legislatures focused instead on the privacy problems, many of these distinctions and determinative factors would matter much less in the analysis. (Daniel Solove, 2006)
The difficulty of privacy decisions is compounded by the choices digital services offer, and wrong design choices have twofold effects. If there is not enough choice, we may struggle to understand, for instance, whether we can sign up for an online dating service without providing all the personal information from our Facebook account. On the other hand, if there is too much choice, we may become unsure which data we want to display on our personal profile because we do not even understand what that data is.
This is why it is necessary to gain a better understanding of people's privacy decision-making. Such an understanding will enable the building of better privacy choice architectures that provide people with transparent, easy-to-understand privacy information compliant with legal requirements such as the GDPR, and may improve people's decisions and risk awareness. An in-depth understanding of the decision-making process may also benefit legislators, allowing them to identify new privacy risks and harms.
What do you think about privacy decisions? Do you feel insecure about some of the outcomes of your decisions? Do you have any fears when signing up for various online services? I would love to hear your opinions!
Briggs, R. A., “Normative Theories of Rational Choice: Expected Utility”, The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2017/entries/rationality-normative-utility/>.
Solove, D. (2006). A taxonomy of privacy. University of Pennsylvania Law Review, 154, 477–560.
Kahneman, D. (2011). Thinking, fast and slow. Macmillan.