Ethics of Persuasive Technology (new article)
A while back folks in my Stanford lab wrote an article on the ethics of persuasive technology, which we didn't publish. Because the Center for Ethics of Persuasive Technology is now forming, we've decided to share it online. It may be the most accessible article ever written on captology and ethics. The formatting below is not pretty and the article is not edited, but the content is good. --BJ Fogg
---------
ANALYZING THE ETHICS OF PERSUASIVE TECHNOLOGY
2005
DANIEL BERDICHEVSKY
B.J. FOGG
RAMIT SETHI
MANU KUMAR
Stanford Persuasive Technology Lab
Authors' addresses: Daniel Berdichevsky, 1006 Wall St. Los Angeles, CA 90015; B.J. Fogg, Ramit Sethi and Manu Kumar, Persuasive Technology Lab, Stanford University, Box 20456, Stanford, CA 94309
________________________________________________________________________
1. INTRODUCTION
When a doctor tells you your blood pressure is too high, you may modify your lifestyle to compensate: less salt, more exercise, fewer freedom fries. In a sense, the doctor has committed a persuasive act. We are unlikely to question the ethics of her having done so. But what if the doctor were out of the picture? Suppose a device at your bedside not only informed you of your blood pressure each morning, but shifted colors to convey the likelihood of your suffering a heart attack. Would such a persuasive technology be as ethical as a doctor giving you advice in her office? What if it were co-marketed with a treadmill?
The objective of this article is to describe and test a framework for analyzing the ethics of technologies that change people’s attitudes and behaviors. We focus on computerized persuasive technologies—the field known as “captology”—though similar considerations apply to non-computerized technologies, and even to technologies that change attitudes and behaviors by accident.
This idea of unintended persuasive outcomes is one reason we are revisiting this topic four years after the publication of an earlier piece by one of the authors, “Toward an Ethics of Persuasive Technology.” [Berdichevsky & Neuenschwander, 1999] Since 1999, persuasive elements have become common enough in both hardware and software—and especially on the web—that designers may not always be conscious of their persuasive nature. They may take them for granted. This telephone beeps to remind you to check your voice mail; that search engine changes its logo every day in a continuing narrative to pull you back more often. But even when persuasion is incidental to other design motives, it requires ethical attention—particularly because end users may not be expecting it and are therefore caught with their defenses down. [Fogg, 2002]
Why pay special attention to computerized persuasive technologies? The chief reason is that while non-computerized technologies can certainly be persuasive, only rarely can they stand on their own. A television with no infomercial to display will not convince you to buy new cutlery. A whip without someone to wield it will cow no slave into obedience. Even a carpool lane requires enforcement.
What makes computerized persuasive technologies more interesting than these examples is that they can persuade independently. Computerized persuasive technologies are also dynamic, changing in response to different users and their inputs. They allow persuasion—and the persuasive experience—to be simultaneously mass-manufactured and user-specific. For instance, a wristwatch that encourages you to keep running by congratulating you on the specific number of calories you have burned is completely self-contained. This leads to questions of agency: do you blame the wristwatch if a runner suffers a heart attack trying to achieve a certain pulse?
Clearly not. Persuasive technologies, like computers in general and even like atom bombs, are not autonomous moral agents. By this we mean that the technologies themselves are not responsible for what is done with them. There is no “there” there to bear a moral burden. Instead, responsibility is distributed among the parties that create a persuasive technology, those that distribute it, and those persuaded by it. Andersen [1971] diagrams a similar breakdown in responsibility between persuader and persuaded, though without reference to a technological intermediary.
Furthermore, ethics is not just about right and wrong; it is also the systematic study of virtue, duty and obligation. Berdichevsky and Neuenschwander [1999] developed an ethical framework for evaluating persuasive technologies and offered a corresponding set of principles—in a sense, “duties”—to guide their design. We will summarize both of these here, then consider how they apply to specific real-world products.
First, the framework hinged on the distinction between persuasive motivations, methods and outcomes—both intended and unintended. When talking about persuasion in this way, it is important not to confuse the motivation of a persuader and the specific intended outcome of his or her persuasive act. Motivation refers to the reason why a given person wants to persuade you of something. Consider three designers of a software product to promote condom use. One might want to reduce unwanted pregnancies. Another might be driven to fight the spread of STDs. Yet a third might simply have a financial interest in a condom manufacturer. The three designers have different motivations that vary in an ethically relevant way, yet share the same persuasive intent: to increase condom use.
Most of the methods employed in persuasive technologies are ported over from more traditional forms of persuasion. A car salesman might flatter prospective buyers to win their trust; Fogg [1997] found that people also respond to computerized flattery—even when they know it is coming from a computer! A politician looking for votes might try to frighten the electorate into choosing her over an opponent; computers, too, can scare users into compliance, as anyone who has succumbed to a virus hoax and deleted an innocent system file can confirm. A personal trainer might adjust his message about why to keep fit based on a specific client’s age and gender; similarly, networked persuasive technologies can customize their approach based on the vast amounts of information available about us online.
The intended outcome of any given persuasive act is what it is meant to achieve: what a person is (at least in theory) persuaded to do or think. Sometimes there are also unintended outcomes: perhaps someone who has been persuaded to use condoms discovers that he is allergic to latex and develops a skin rash. Enough people are allergic to latex that it is reasonably predictable that this might happen. A designer of the persuasive condom software has a duty to at least inform users of this potential outcome—just as the designer of the wristwatch mentioned above should warn users of the reasonably predictable risks associated with vigorous exercise. Preferably, the watch would also be sensitive enough to a wearer’s physiology to discourage him from pushing himself too hard. On the other hand, suppose that in trying to open a condom, a persuaded user slips, smashes into a bureau, and dies of head trauma. This is not a reasonably predictable course of events, so the designer would not be ethically responsible for it.
In their previous work, Berdichevsky and Neuenschwander [1999] emphasized the duties of those creating persuasive technologies. This does not mean, however, that we can neglect the responsibilities of the persuaded party. Most importantly, the user of a persuasive technology, assuming he or she knows it is persuasive, also ought to consider the reasonably predictable outcomes of being persuaded by it. To play a persuasive slot machine that encourages gambling without thinking about the stakes at hand—including the reasonably predictable outcome of gambling beyond one’s means—is morally irresponsible.
Nevertheless, most of the responsibility for the ethics of any given persuasive technology does appropriately fall on its creators. In light of this, we have refined the guidelines proposed by Berdichevsky and Neuenschwander [1999] into a smaller set of principles to help designers think more critically about their work in this field—and that can be used to analyze the ethics of existing persuasive technologies.
The equivalency principle suggests that if something is unethical in the context of traditional persuasion, it is also likely to be unethical in the context of persuasive technology. This applies to motivations, methods and outcomes.
The reciprocal principle suggests that the creators of a persuasive technology should never try to persuade a user of something they themselves would not consent to be persuaded of. They must also regard users’ privacy with as much respect as they regard their own.
The big brother principle suggests that any persuasive technology which relays personal information about a user to a third party must be closely scrutinized for privacy concerns. This distinguishes between “big brother” technologies, which share information, and “little sister” technologies, which do not. A big brother might be a web site that transmits your purchasing history to a telemarketing firm, while a little sister might be a motivational scale that keeps your weight private while encouraging you to reach your weight loss goal.
The disclosure principle suggests that the creators of a persuasive technology should disclose their motivations, methods and intended outcomes. This allows users to assume their share of the responsibility for these outcomes, and reduces their vulnerability to persuasion that they might not otherwise notice.
In addition, the reasonably predictable principle reemphasizes that the creators of a persuasive technology must assume responsibility for all reasonably predictable outcomes of its use.
With this framework and these principles in hand, we are ready to begin analyzing specific case studies.
2. CASE STUDIES
2.1 The Amazon Gold Box
Web sites have become noticeably more deliberate in integrating persuasive elements since the late 1990s. You need not go far to find a very explicit example of this—just check out the “Gold Box” in the top-right corner of the Amazon front page. The idea, which debuted in 2001, is simple: as an Amazon customer, once every 24 hours you gain access to your Gold Box, a special area with 10 heavily discounted products. [Walker, 2002] While visiting, you are assailed with a sense of urgency and high stakes. You can either buy products immediately or “pass forever” on them. A line at the top of each product page declares, “[your name], you now have [60 or fewer] minutes to take this offer!” Simultaneously, the graphic of the Gold Box itself tracks your remaining time in bold blue letters peeking out from underneath its lid.
Godin [2000] reports that Amazon has the technology to build relevant product recommendations, to encourage navigation to particular areas of its Web site by surfacing the relevant links at just the right times, and even to determine which members of a community are “sneezers”—early adopters who are especially good at spreading the word about a product to their friends and family. The Gold Box is one way that Amazon continually improves this technology (which arguably benefits both Amazon and consumers).
Let us consider the motivations (such as we know them), the methods, and the outcomes related to this persuasive technology. One motivation for the Gold Box seems clear: to sell products, and, what’s more, to “cross-sell” products from departments, such as housewares, that a given customer may not frequently visit. Another is to gather helpful data about customers. The methods employed include imposing time pressure, demonstrating social proof (through the “stars” reflecting average user ratings for each product), and customizing product selection based on purchasing history. The most obvious intended outcome is for the customer to purchase products, but another is to make the experience compelling enough that he or she will return to Amazon.com daily to peruse further Gold Box specials.
Using the equivalency principle, we should ask ourselves: would these motivations, methods and intended outcomes be considered ethical if they were taking place in a non-technological context? The answer would vary from person to person, but assuming that these factors were not hidden, they would probably pass muster. Of course, it is hard to conceal the desire to sell something; by contrast, the gathering of information can be less apparent and thus more open to debate. This brings us to the disclosure principle: like any other persuasive motive, method or intended outcome, the collection and application of customer data ought to be spelled out, never masked, to minimize ethical concerns.
The Gold Box’s storage and manipulation of customer data also raises the red flag of the big brother principle. In 2000, there was significant debate about Amazon’s newly changed privacy policy: customers were notified that their data could now be shared with other companies, and that it would “of course be one of the transferred assets” in the event of an acquisition. [Wired, 2000] The result? It is reasonably predictable that customer data gathered through the Gold Box might end up elsewhere—even if this is not a direct motivation behind the Gold Box’s design. For many consumers, particularly in light of recent government initiatives that aim to gather increasing amounts of data about every U.S. citizen, this may be disconcerting. [Cha, 2003] This speaks again to the importance of full and ongoing disclosure (as opposed to the intermittent e-mail spelling out changes to a privacy policy), which would allow people to opt in or out of persuasive technologies on an informed basis.
2.2 Real Care Baby
Schools take part in the “Baby Think It Over” program in order to persuade teenagers not to become parents before they are ready. The centerpiece of the program is a realistic simulation of a human infant, the Real Care Baby, which cries frequently and has other basic needs that require active “parental” intervention. A teacher in Australia notes that most of her students “really look forward to caring for the baby but by the time the weekend is over they are glad to give it back.” [Maitland Mercury, 2003]
Again, we should consider the motivations, the methods, and the outcomes related to this persuasive technology. The producers—and especially those who adopt the program at their schools—are clearly motivated by a desire to reduce teen pregnancy rates. The primary method is a realistic simulation. The intended outcome is for teens to change their attitude toward having babies at an early age and to reduce risky behavior accordingly.
Let us apply the equivalency principle. Would these factors be ethically acceptable in a non-technological context? The motivation itself seems uncontroversial. With regard to method, while students have carried eggs and sacks of flour to learn about the challenges of childcare for generations, computer technology makes this simulation much more realistic. Better simulations are certainly not intrinsically unethical; however, they may result in more dramatic outcomes, both intended and unintended. For instance, it is reasonably predictable that measures teenagers might adopt to avoid prematurely becoming parents could include birth control or even abortion. Many people would find these outcomes ethically problematic. It is also reasonably predictable (though perhaps less likely) that teens who use the Real Care Baby might develop lasting negative impressions of child-rearing that could follow them into adulthood.
The designers of the Real Care Baby satisfactorily disclose their motivations, methods and intended outcome. They are not trying to sell the baby as a conventional doll to young girls in hopes of secretly persuading them not to have children. Though the Baby does report users’ success or failure in caring for it to a specific classroom teacher, the data does not go any further than that; it is therefore not enough of a big brother technology to raise alarm.
2.3 “Relate for Teens”
“Relate for Teens” is a desktop application designed to help teenagers deal constructively with problems they may encounter, ranging from gossip to learning disabilities to domestic violence. Among other things, the program seeks to change how teens think, training them to see how their actions will have consequences: “If I hit this guy in the face for saying my mother’s a drunk, I’ll end up in the principal’s office and my mom will be angry with me.”
Created by the company Ripple Effects, the “Relate for Teens” software uses rich multimedia experiences—animations, sound effects, interactive games, and video clips of true stories told by teens—to make the experience engaging, memorable, and persuasive. In addition to training teens to recognize cause-and-effect relationships, the company’s underlying motivations include improving their overall social and emotional lives. Its intended outcomes are in line with this: to persuade teens to identify and express their feelings, to seek help from competent adults, and to avoid using violence.
The methods used and these intended outcomes would be laudable in the context of traditional persuasion (such as a teen meeting with a high school counselor, or listening to a guest speaker who suffered the consequences of drug use), so in terms of the equivalency principle “Relate for Teens” stands on ethical high ground. Ripple Effects also does well in light of the disclosure principle. Not only does the software come packaged with clear instructions about the program’s goals and methods, described in simple documentation designed for decision-makers, but the software itself also uses both audio and text to communicate these things directly to the teens who use the application.
“Relate for Teens” admirably observes one aspect of the reciprocal principle. The program allows the technology’s adopters to remove topics they judge inappropriate for teens in their care, in this way showing respect for different points of view and not forcing all of its adopters to agree with the entirety of the company’s moral philosophy. As the company describes it, “Educators can combine topics in different ways to meet their—and their students'—specific goals, needs, and constraints.” [see web site]
Although the software tracks the topics and activities the teen has experienced, this data is not available to anyone except the user. Teachers, administrators, and even the company Ripple Effects cannot retrieve this record without the teen’s password. This limitation goes a long way toward addressing big brother concerns. Furthermore, to allow for even greater privacy, the software interface keeps a “hide” button always accessible; this button allows teens to quickly conceal and password-protect whatever they are viewing, a useful feature for teens who are exploring a sensitive topic, such as alcohol abuse, and don’t want approaching peers or adults to know.
Because this software addresses over 300 topics, many of which are sensitive or controversial, it might be difficult to outline all the reasonably predictable yet unintended persuasive consequences of using “Relate for Teens.” However, our discussions with the makers of this software suggest they have done their homework, thinking carefully through its implications and incorporating “best in class” intervention principles. Also, as noted earlier, the company encourages people who buy this product to customize its content. This type of local customization is a significant step toward avoiding unwanted side effects of using persuasive technologies in general and “Relate for Teens” in particular, since when it comes to persuading people (even teenagers) how to live life, one size does not fit all. Understanding the local context matters.
2.4 “America’s Army: Operations”
“America’s Army: Operations” is a free online first-person shooter game released by the U.S. military in July 2002. Over one million people have since successfully completed the “basic training” component and played a total of over one hundred million missions, which include seizing airfields, raiding enemy headquarters, fighting in swamps, and combating terrorism.
The apparent motivation behind the game is to reach and possibly attract potential recruits. The intended outcome, at least as explicitly stated on the program’s web site, is for users to gain “insights into what the Army is like.” Another intended outcome is probably for users to contact the Army to learn more about opportunities in the armed forces. As Belida [2002] puts it, “Controversial or not, the bottom line for the Army remains whether the game will help boost enlistments.” Much as in Baby Think It Over, the method used is a realistic simulation, which, because it takes place in an online, team-based context, also employs social proof to pressure people into participating and conforming.
Let us apply the equivalency principle. With regard to motivation, most people accept the need for military recruiting; however, there are also those who would object to it in any form, technological or conventional, particularly when it targets members of younger demographics who may not realize what other opportunities are available to them. The same applies to the implicit intended outcome of persuading suitable candidates to enlist. More ethical concern centers on the method of simulating combat situations. Given the ongoing debate as to whether violent computer games propagate violent behavior, or make it too easy for people to refine potentially dangerous skills such as sharpshooting, some find it unseemly for the U.S. military to produce this kind of game. Belida [2002] recounts that one lawyer in Florida recently threatened a lawsuit on the grounds that “Operations” places his children at added risk.
Though the link between simulated and real-world violence is not conclusively proven, it remains reasonably predictable that players might break rules of conduct within the game or grow more comfortable with violence. To the military’s credit, it does try to persuade users to behave responsibly. Players who fail to live up to acceptable codes of conduct—for instance, those who shoot other American soldiers, or willfully target civilians—immediately “find themselves in a virtual jail cell.” This distinguishes “Operations” from other video games where violent impulses go unchecked.
Disclosure is an issue. Some users may believe they are simply playing another computer game, without realizing that they are being exposed to actual military training and imprinted with an unabashedly positive image of the American armed forces. If the real motivation of the game were to persuade potential recruits to enlist, but its producers claimed otherwise, this would be ethically problematic. However, the program does seem sufficiently open about its designers’ intentions—although it might do well to remind users of them more frequently during game play. And while the game’s online registration would appear to raise big brother concerns, the Army acknowledges this and allows users to register under screen names that it claims cannot be traced back to their actual identities.
3. CONCLUDING THOUGHTS
Our intent is for the principles demonstrated in this paper to provide a nuanced model for analyzing and debating the moral desirability of new persuasive technologies. After all, it is reasonably predictable that such technologies will continue to grow more common, especially as they catch on in commercial products—and equally predictable that some will be less ethical than others.
REFERENCES
“About America’s Army.” Online: http://www.americasarmy.com/about.php & http://www.americasarmy.com/faq.php?section=Parents#parents3
ANDERSEN, K. Persuasion Theory and Practice. Allyn and Bacon, Boston, 1971.
ANDREWS, L., AND GUTKIN, T., The effects of human versus computer authorship on consumers’ perceptions of psychological reports, Computers in Human Behavior, 7: 311-317 (1991)
BELIDA, A. Army Uses Computer Game to Recruit Soldiers. November 7, 2002. Online: http://www.digitaljournal.com/news/?articleID=3372
BERDICHEVSKY, D., AND NEUENSCHWANDER, E. (1999). Toward an ethics of persuasive technology. Communications of the ACM, 42(5), 51-58.
CHA, A.E. Pentagon Details New Surveillance System. The Washington Post. May 21, 2003. Online: http://www.washingtonpost.com/wp-dyn/articles/A17121-2003May20.html
FOGG, B., AND NASS, C. Silicon sycophants: The effects of computers that flatter. Int. J. Human-Comput. Stud. 46 (1997), 551-561.
FOGG, B.J. (2002). Persuasive Technology: Using Computers To Change What We Think and Do. San Francisco: Morgan-Kaufmann.
GODIN, S. 2000. Unleashing the Ideavirus. Dobbs Ferry, New York: Do You Zoom.
“Privacy Group Drops Amazon.” Wired News. September 14, 2000. Online: http://www.wired.com/news/politics/0,1283,38753,00.html
HOVLAND, C. AND WEISS, W., The influence of source credibility on communication effectiveness, Public Opinion Quarterly, 15, 635-650 (1951)
“Relate for Teens–Ripple Effects Flagship Social Learning Program.” Online: http://www.rippleeffects.com/education/software/teens.html.
“Students Think Over Teen Pregnancy.” The Maitland Mercury. May 30, 2003. Online: http://maitland.yourguide.com.au/detail.asp?class=news&subclass=local&category=schools&story_id=230994&y=2003&m=5
WALKER, L. More Retailers On the Web Look A Lot Like Amazon. The Washington Post, August 15, 2002, p. E01.