Preface

This book was inspired by the recognition that risk has new dimensions in electronic commerce, and pushed forward by my experiences with those who are evaluating and assessing these new dimensions using inappropriate traditional business practices as models.

Each month produces a new model for success on the Internet. First Yahoo! was a search engine and now it is a portal. First it was marketing and now it is service. Yet somehow many consultants and businesses offer a single approach to all these opportunities and all the various businesses. I had a most illuminating conversation with a senior consultant from a major consulting company. I asked her to enumerate the times when, after reviewing their own work, the consultant's solution was found to be wrong. I asked her to explain the company's actions in response to this discovery of error. There were none. Such flawlessness for a single integrated solution, in the face of rapid change of businesses and wide variances within the business community, was stunning, too stunning to be believable.

Thus this book is based on some unmentionable facts in business: Internet commerce is a change in business; some businesses will do everything "right" yet be destroyed; some businesses will make mistakes yet thrive. There is no single right model for Internet commerce. There is no single right answer that can be bought with certainty by paying the most expensive consultants. For every business there are different choices. Although some businesses have obvious price advantages (e.g. Big Lots) and advantages in consumer confidence (e.g. Crayola), these businesses are not destined to succeed. When the business is in code, not in concrete, there are an infinite number of forms. There will be an explosion of business models in the near term.

There are of course some excellent consultants, who offer many solutions and endeavor to create not a marginally customized product but a truly tailored solution. As with all that is excellent, these are rare. The vast majority of customers get a set of viewgraphs and documents prepared for the mythical generic firm. The generic firm is as mythical as the unicorn. The unicorn was a transmutation beginning with the rhino and resulting from the textual drift inherent in hand-copied scribal documents. As scribal copying embodied human error and inherently resulted in the slow transformation of documents, so mass production embodies the understanding of economic actors as generic units. The mythical firm is a creature of technological limits, beginning with common institutional constructs and resulting from the requirements of mass production. The mythical firm results from the need for a standard compatible with the mass-production model of business and business consulting. In the information age everything is malleable, customized, individual. Thus the texts that propose generic solutions are flawed. Texts that educate and enable individuals to make their own tailored choices are needed for the post-print age. Thus I have endeavored to create one of these: a text that explains some basic parameters.

There are some stable factors in electronic commerce, the towering power of the browser bookmark being one, but every business has its own model. The browser bookmark promises that a site will be the first place checked for news or shopping. But if that site is badly designed, offers bad service, or is unreliable, it is unlikely to get a second look.

Expecting a single monetary form to emerge from Internet commerce is as reasonable as expecting a single paper currency to come off the printing press. Movable type created fundamental changes in knowledge production: standardization, the ability to compare systems of knowledge, and specialization of intellectual labor. With the printing press, complex markets for intellectual goods developed, contracts proliferated, paper money expanded, and the age of quantification took flight (Crosby, 1997; Eisenstein, 1979).

At the beginning of the age of print there was no clear single path on which to direct those beginning at vastly different positions. Similarly there is no single path forward through the age of electronic information. Every business person and consumer has a risk profile that is a function of (to name a few variables) market, market position, and risk aversion; because of this diversity this book is meant to be not prescriptive but descriptive. Just as there was no single way to re-organize business in response to the wealth of paper, forms, and currencies made possible by movable type, there is no single way to minimize risk for every party in light of digital information.

The interdisciplinary nature of this book hints at the magnitude of the changes ahead. The current disciplinary structure was built on the print mode of learning and teaching. Just as a Master's of Information Science degree is a creature of modern change, a future student might get a Bachelor's in Trust. Similarly some modern business structures will be as useful as the medieval guild as the next century approaches and passes. This magnitude of change requires that a detailed book be focused on the near term. Thus this book inherently has a near-term focus, especially with respect to the Internet commerce systems examined. Trust, risk, privacy, security, and reliability are as fundamental to information commerce as Arabic numerals are to paper modes of commerce. Thus trust and risk are the core of this book.

This book is meant to empower individuals to be their own contractors when shopping on the net, constructing an information business, or building a virtual addition onto their current business structure; to encourage shoppers to tread on the Internet instead of in the mall; and to tell them how to keep their hands tightly on their virtual purses. The Internet has a power to intimidate that is unfathomable for someone who has seen the vast bulk of digital silliness that was the early days of the Web. This book should remove any residual intimidation. Should it fail to do so, a quick tour of Usenet should eliminate any residual awe for the denizens of the modern Internet. Decades ago, the Internet was inhabited only by researchers, intellectually engaged gentlepersons with shared norms of behavior and common interests. Now everyone is there, and all the myriad human foolishness, wisdom, joy, and grief flows through the wires every day. There was a widely used acronym on the Internet, IRL, which stood for "in real life". Now the Internet is real life. Sign up or miss it.

It is my contention that Internet commerce will truly come of age in the Christmas season of 1999. Allow a digression into personal experience to explain this entirely qualitative, rather unfounded projection. First, I find that I tend to be a moderate early adopter, the first (or third) to try out new technology. Second, I am rare among technical researchers in Internet commerce in that I am the one who actually does the family holiday shopping. I attribute this to gender role differences. During the holiday season of 1998 I did my shopping one Saturday morning while my children played downstairs. I had the list, my credit card, and Dogpile. (Dogpile is a metasearch engine; that is, a search engine that searches other search engines.) As I have been shopping on the Web for nearly four years, I was one or two years ahead of the curve. Thus I predict that in the next holiday season working parents and the elderly across the globe will discover this saver of trouble and time, leading to a more relaxed holiday season for everyone (except, of course, the retailers who have not adapted to Internet commerce). I found in my shopping no price (dis)advantage, as the difference in price tended to be absorbed by shipping costs. Shopping on the Internet gave me a price equivalent to the discount store, with no taxes paid, and home delivery.

This leads to the second, more mundane, inspiration for this book. Three years ago at the First Usenix Workshop on Electronic Commerce, I realized I was perhaps one of three people in the room, by a combination of gender, class, and age, who actually shopped. I was the representative of every parent who has the experience of holiday shopping. I was the single person there who understood at a visceral level the need for shopping without catalogues, phone calls, or expensive personal assistance. I live in the gap between mythical SuperMom and actual working parent. That is, I am an actual working parent who needs life to be friction free to meet the demands of the mythical SuperMom. The time crunch and the need for schedule-friendly remote shopping that is oblivious to interruptions will drive Internet shopping. The aging of the population makes a trip to the mall less an effortless jaunt and more a day's event. The reorganization of the modern family demands, and the technology allows, Internet commerce. Together these forces point to inevitability. This is the ideal moment to thank my family. First the incomparable Shaun McDermott, a truly wonderful man, patient and supportive, most supportive in that he is a wonderful father. My daughters, Adonica and Amelia, who have made their own contributions to this book by immeasurable contributions to my life. And finally, Wilson, who taught me many lessons I will not forget.

Certainly my early academic mentors deserve acknowledgment. I would never have started the program of study, much less the book, without the support of Michael Feldman. Early on Hudson Welch and James Morris were endlessly intellectually engaging. I am deeply indebted to Granger Morgan for following his own dreams and beginning the department where I had the honor of studying. Pam Samuelson provided irreplaceable insight into the subtleties of the law, and despite a schedule that is frightening even in retrospect, always found time to provide detailed comments. Mary Shaw has offered valuable time and insights from her technical and personal wisdom. Bennet Yee has given both professional counsel and patient consideration. I wish his office were still across the way, rather than across the continent. Finally my dissertation advisors, without whom this text would not have come to fruition, Marvin Sirbu and Doug Tygar.

To my friends who started virtual and ended up more than actualized: Phaedra Hise, Charlotte Chen, Robin Schoelenthaler, and of course Pip. Laura Painton and Tse-Sung Wu: Thank you. Rosy Chen shared her heart, wisdom and office. Milind Kandlikar provided passionate occasional doses of perspective. Indira Nair for whom mention is necessary but not sufficient. Donna Riley shared her rare gifts of strength and kindness, bestowed with a discerning wit. Ian Simpson provided continued intellectual engagement. Richard Field offered his very relevant expertise and the kindness of his heart in reviewing and commenting on my work. Cathleen McGrath offered engaging debate or empathy, as appropriate, over uncounted cups of tea. Phoebe Sengers reminded me to like myself, and hold my work just dear enough.

Barbara Slater, Andrew Russell, Denise Murrin-Macey, Patricia Steranchak, Janice Trygar, and Victoria Massimino assisted in many ways, the greatest of which has been in the sharing of their company and friendship.

At Harvard, Jane Fountain, Susan Cooper, Rob Jensen, and Lewis Branscomb have provided moral support and given me the gift of their time. Harvey Brooks was kind enough to be a reader, and gentle in communicating his sharp insights.


Introduction

Consider a dollar bill. To hold one is to have a tangible experience, at the higher denominations a feeling of near-term wealth. Newly minted bills have a unique texture and even a distinct odor. A dollar is the measure of money. It is the most readily accepted monetary form on the globe. To exchange that for a machine-readable data stream seems a great leap. It is not.

The value bound to the paper abstraction of wealth is not a result of mass hysteria or a widespread delusion, as an examination of the purely physical components of paper and ink might suggest. Rather it is a reflection of trust that is widely shared and built over centuries. The dollar is worth as much as there is trust in the solvency and continuity of the U. S. Government; trust in the ability of law enforcement to prevent counterfeiting; trust that a merchant or bank would not knowingly pass on a counterfeit bill; trust in the foundations of the American economy. These trust decisions are deeply embedded and unexamined in daily transactions.

Trust in American monetary instruments is not an eternal national constant. American commercial instruments were marked by early failures, the Continental being the obvious example.1 In Internet commerce people are once again embarking on a long-term trust commitment. Internet moneys are both unlike and like the dollar. It is one thing to build on the trust of generations past in a monetary instrument, and another to be among the first to take the risk that trust implies. The adopters of the Continental were not made whole by the eventual global adoption of today's greenback.

Internet moneys are like the modern greenback and the historical Continental dollar in that all are based on invisible trust bindings. The trust binding value to the dollar depends on the physical difficulties of reproducing the paper monetary instrument and a centuries-old governance system; Internet commerce depends on the difficulties of calculating mathematical functions and decades-old networks. An Internet commerce system may require trust in the merchant's goodwill as well as his technical competence. Another system may require only faith in the risk management of major financial institutions.

In this text the trust relationships in electronic commerce are examined and illuminated. The focus is on trust, but it is equally on risk. Trust is the positive view of exposure: whenever there is trust, there is risk. I focus on these two interrelated topics: trust is risk.

The focus here is on trust as well as risk, not only to stress the continuity of the evolution of money from gold bars to bytes but also to provide the broadest explanation of Internet commerce. This focus further distinguishes this study from a consultant's, which might consider risks in a specific scenario for the mythical generic firm.

The determination of risk can be found in an examination of who trusts in Internet commerce transactions. Who will pay, in terms of both money and data, if trust is misplaced? When the inevitable early failures occur, who will be at risk? Who is liable? In many commerce systems there is a trusted third party. Who is this trusted third party? Why is it necessary to trust this party? What exactly is this party trusted to do? Answering these questions means understanding risk allocation in electronic commerce. Answering these questions requires understanding security, record-keeping, privacy, and reliability.

There is no single currency or transaction system which is certain to dominate the future Internet. The answers to the previous questions vary across the multitude of protocols proposed for electronic commerce on the Internet. However, an examination of a broad range of these protocols makes clear that in electronic commerce, there is considerable opportunity to lose both money and data. Customers can lose money and privacy. Merchants can lose money, proprietary information, and reputations. There is much to be gained. It is worth the necessary risk, but only that risk which is necessary. It is worth extending trust, but narrowly.

In this text I translate from the technical protocol to the financial risk. There are three basic sources of risk: security failures, data misuse, and reliability failures. This book is placed to illuminate the space defined by these three axes. I do not attempt to address every possible risk inherent in electronic commerce. Electronic funds transfer can magnify the weaknesses of cash control systems (Fischer, 1988; Mayland, 1993). If a company has problems with cash control mechanisms and misplaced trust, electronic commerce can make it worse. This is obvious, and it is not the focus of this text. The purpose of this book and its set of system evaluations is to illustrate risk allocation when a customer, merchant, Internet Service Provider, or commerce service vendor misplaces trust in others, not within their own organizations. (Note that I refer to sellers of all goods but Internet commerce systems as merchants; I refer to those who offer commerce systems as vendors.)

Vendors, banks, consumers, and merchants have different interests. Market and legal mechanisms will assure that the needs of all are met in the long term. But one takes risks in the short term. Today the legal environment is uncertain. The market requires information to function, and many are functioning without any better sources of information than the vendors themselves. Thus there are systems which place risks on participants that might better be left with the vendor. This text should provide the tools to determine the sources of risks, which risks are of greatest concern in a few specific systems, and how to evaluate other similar systems.

Understanding risks in Internet commerce requires integrating an understanding of money, network technologies, information security, and the potential for data appropriation and misuse. Thus this book begins with definitions and discussions of money, the Internet, security, and privacy.

In this book I consider the Internet as a framework for commerce. Much of the argument for Internet commerce is essentially information on the growth and population of the Internet. The history of the Internet is included, as it is more than academic. There was at one point an alternative vision of the Information Highway -- citizens as consumers of 600 channels with feedback limited to a single button labeled "BUY". Instead the open Internet has prospered. With respect to shopping and selling, the open nature of the Internet creates trust issues. An open Internet with millions of "channels" has far different trust implications than a centralized broadcast model with orders of magnitude fewer choices.

In short, this text addresses the terrain of Internet commerce rather than trying to lay out a specific path or roadmap. It identifies the avoidable hazards which are likely to be found on the road to Internet commerce. And thus we begin by considering the nature of the Internet.


Chapter 1: The Internet

This first chapter illustrates the importance of the nature of the Internet. It includes a brief description of the protocols which are the core of the Internet and give the network its characteristics. Understanding these protocols, along with an understanding of money, will provide the foundation for understanding Internet commerce. The description is written for the lay person, with analogies and examples.

What is the Internet?

This text focuses on protocols suitable for commerce on the Internet. Why the Internet? The complete answer to that question depends on the set of questions considered here: What is the Internet? Where is the Internet? Who's out there? Why Internet commerce? What distinguishes Internet commerce from telephone and mail order commerce?

The Internet is a set of networks connected using protocols which are open and portable, and which enabled the entire research community to share information. That the protocols were open meant that there were no secrets about how the software worked. That the protocols were portable meant they could function on more than one operating system.

Software under the corporate tradition is protected by patents, secrecy, and licensing prohibitions against reverse engineering. Software under the Internet model is very different, and these differences have important implications. Open software progresses faster than proprietary software because the body of developers is larger. The code and protocols are available to all hobbyists, academics, and every person who can study the code, improve it, and share the results. The code has an installed base and is available to all start-ups who would add functionality. Thus, those solutions which are most likely to keep up with the rate of change on the Internet are those that are as open as possible, and a popular innovation will not leave your site behind.

Notice that open does not imply a lack of security; in fact the opposite is true. More closely controlled code requires a greater extension of trust than open code. Because open software can be examined or modified by anyone, it is often presented as less secure. But no one modifies the code that a particular site is using. Rather, modifications extend the menu of options for the software; the modification of the code is relevant only for upgrades. Because anyone can examine the code, there is not likely to be widespread disagreement about its functionality or features. Open code is examined for security flaws by a community of impartial but expert observers. Greater transparency means a lesser need for trust, both in the stock market and the software business.

What are its Origins?

What is the Internet? And who's out there? For those for whom the Internet has exploded onto the scene in the nineties, it may come as a surprise that the Internet has been developing for decades. The Internet began as the ARPANET, a United States government project for connecting scientific research sites.

The tools for networking networks of computers were developed by scientists and researchers for use in their own non-hierarchical heterogeneous computing environments. The techniques developed were designed for distributed support, using an iterative process which included seeking and considering comments from the user community.

Although the ARPANET connected only a couple of hundred computers, it created the core of compatible inter-networked computers that became the Internet. By 1983, all the networks connected to the ARPANET used the same protocols (TCP/IP) for communication.

After the release of Berkeley UNIX 4.2, TCP/IP was included in every UNIX workstation. The UNIX standard created a commercial opportunity for network products. Although the vast majority of these machines were not initially connected to what we now know as the Internet, the ability to inter-network networks became a standard feature for high-end operating systems.

In 1986 ARPANET became NSFNET. Eventually the protocols that ran over networks existing at the same time, e.g. the IBM/VMS-based BITNET, ran over the Internet wires as well. The students, researchers and librarians were all now connected.

The purpose of NSFNET was to connect all the supercomputers. As part of connecting the supercomputers the regional networks were also connected. The T1 lines connecting these machines were the first Internet backbone.

In 1990 the first commercial email provider, MCI Mail, was connected to NSFNET. Also in the nineties the National Science Foundation began to reduce subsidies, and gave the responsibility of the NSF backbone to commercial providers, thus enabling a commercial Internet without the limitations borne of Federal funding. As long as the Internet was funded from tax dollars its primary purpose should reasonably be research and not the enrichment of corporations or domain name speculators. As the Internet became increasingly commercial the support for the Internet from research funds became increasingly inappropriate.

Along with commercial email providers, commercial information providers came onto the Internet. Early adopters of Internet technology for information marketing included Dow Jones and Dialog (Cerf, 1993). Thus began Internet commerce.

By 1990 the growth of the Internet was too profitable for information providers to ignore. However, the market remained primarily technical individuals, as access to information on the Internet required either some understanding of UNIX or proprietary software provided by an Internet service provider (ISP). The figure below, based on the annual Internet Domain Survey, illustrates how extensively the user community has expanded (Internet Domain Survey, 1998).

Note that the left hand axis represents millions of Internet hosts2. Thus an estimate of forty-five million Internet users is a reasonable lower bound because it assumes that no one shares computers and that the survey located every host.

Figure 1.1

Exponential Growth of the Number of Computers Connected to the Internet

A year before the connection of MCI Mail, a European researcher, Tim Berners-Lee, became concerned with effectively transporting the images, postscript files3, text and data files necessary for collaborative physics throughout Europe. The protocol he developed for collaborative physics is the underlying technology for the World Wide Web. The Web allows consumers to search for information on the Internet with a straightforward, easy-to-use interface first offered by Mosaic (i.e., the browser). Easy access to information has been the greatest driver of Web growth.

The World Wide Web is a critical element in emerging markets. With the Web any person could access information easily. Mosaic made it as easy as point and click, Lycos made searching as easy as point and search. These tools dramatically lowered the threshold for technical knowledge to connect to the Internet, send, search, and obtain information. Although the Internet began as a specialized US Government project, it is now global. The Internet domain survey has expanded to include ninety countries.

Where is the Internet?

The Internet is on the desktop and in South Africa. The Internet is global and American. Determining the scope and population of the Internet with any certainty is both an art form and an open research question. Attempts to determine profiles have included host counts, mailed surveys, phone surveys, and voluntary Web surveys. This section is limited to the number of hosts, because actual machines are easier to count than users. The user-to-machine ratio may vary between and among institutions and households.

Another way to investigate who users are and what they are doing is to consider domain names. These are relatively easy to count and their growth is clearly exponential. A domain name is the part of a URL that is to the right of "www", or the part of an email address that is to the right of the @ sign. A domain name is a mnemonic for humans who would rather not remember addresses by Internet Protocol number, such as: Sue_Smith@128.196.93. Domain names are entirely for user interface. Because domain names are the only ubiquitous form of identity information on the Internet, a detailed discussion of domain names is included in Chapter 2.

As described in the following chapter, the Internet runs on IP addresses. When a device needs to communicate with, for example, a web server and knows only the domain name of the server, the device must get the corresponding IP address before communication can begin. Domain names are not limited in number as IP addresses are. The number of IP addresses is limited by the design of the system. The number of domain names is limited only by human ingenuity (and, from the evidence, human silliness).
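The lookup described above can be sketched in a few lines of Python. This is only an illustration, not part of any commerce protocol discussed in this book; the name "localhost" is used because it resolves on nearly any machine without a network connection.

```python
import socket

def resolve(domain_name):
    """Ask the local resolver for the IP address bound to a domain name.

    The domain name is purely a human convenience; the network itself
    routes packets only by numeric IP address.
    """
    return socket.gethostbyname(domain_name)

# "localhost" is the conventional name for the local machine itself.
print(resolve("localhost"))  # typically 127.0.0.1
```

A browser performs exactly this step, invisibly, before it can fetch a single page.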

Any number of domain names can point to a single IP address, so that a single IP address can represent many domain names, although each domain name must point to exactly one IP address. These IP addresses may be of a single machine, or of a class of IP addresses that represents an entire network.

A domain name consists of at least two parts: the top level domain name and the second level domain name. Top level domains are .com, .net, .org, .mil, .gov and .edu. The second level domain is immediately to the left of the top level domain, e.g. "harvard" in harvard.edu; "chicken" in chicken.com; "despair" in despair.com and "slashdot" in slashdot.org.
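The split into levels can be illustrated with a short helper; the function name `split_domain` is my own invention for this sketch, not part of any standard library.

```python
def split_domain(domain_name):
    """Break a domain name into its top and second level parts.

    Labels are read right to left: the rightmost label is the top
    level domain, the next one is the second level domain.
    Returns (top_level, second_level), e.g. ("edu", "harvard").
    """
    labels = domain_name.lower().split(".")
    if len(labels) < 2:
        raise ValueError("a domain name consists of at least two labels")
    return labels[-1], labels[-2]

# The examples from the text:
for name in ("harvard.edu", "chicken.com", "despair.com", "slashdot.org"):
    print(name, "->", split_domain(name))
```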

Conflicts occur most often at the second level, where for example an early adopter might own mcdonalds.com by virtue of having this as a last name. Then the fast food chain would find itself pre-empted. One of the major issues in electronic commerce today is ownership of domain names. If a person starts a business and makes the business grow, can another take away the domain name by virtue of prior ownership of the name? There is no definitive ruling on this topic. A domain name may be an extension of intellectual property, whether or not the company owning the corresponding second-level phrase (e.g., "mcdonald's") has registered the domain. Domain names may be a raw material, subject to "gold" rushes. Or domain names may be speech: in the case of http://www.gwbush.com/ the domain name could be considered important political speech, the property of the Bush campaign which was stolen by the commentator, or valuable electronic space which was claimed first by an innovative entrepreneur.

There has been something of a rush to get the best, and sometimes the worst, domain names. Some of the domain name selections seem rather strange. For example, if a confused user searching for the meta search engine www.dogpile.com accidentally typed www.dogpatch.com, the page loaded would be http://www.nwnexus.com/. This means that the domain names for Northwest Nexus and dogpatch point to the same IP address.

Domain names are assigned. Only the assignment of IP addresses and the domain name system are centralized; in all other ways the Internet and the protocols on which it depends are decentralized.

There are three top level international domain names: net, com and org. Addresses in these domains are currently assigned by Network Solutions Inc. (NSI) of Virginia. It costs $50 to register a domain name, and $30 of that goes to a fund controlled by the National Science Foundation to support the Internet for the public interest. There are three top level domain names that are US-specific: mil, edu and gov. Assignment of second level domain names in the mil and gov domains is controlled by the Department of Defense.

It is likely that assignments in the edu domain will go to EDUCAUSE (http://www.educause.edu/). EDUCAUSE is a non-profit consortium of higher education institutions that encourages the use of information technology in higher education. It was formed by the merger of EDUCOM and CAUSE.

There are also many top level domain names that are geographically bound; these are called country code top level domain names (ccTLDs). Each nation that cares to have its own two-letter country code top level domain name may have that domain. Examples include "fr" for France and "uk" for the United Kingdom. These domain names are registered by continental or national entities.

Every domain name must correspond to an Internet address. IP addresses must be unique for the Internet to function. IP addresses are assigned, but there is no authority which requires that these assignments are honored. In Asia IP addresses are assigned by the Asia-Pacific Network Information Center (www.apnic.net). In Europe IP address assignment is handled by Réseaux IP Européens (www.ripe.net). In the United States IP addresses are assigned by the American Registry for Internet Numbers (www.arin.net).

At this point it seems possible that the .us domain name will be supervised by the U. S. Postal Service. The us domain is still being handled by the original research-support institution, ISI. The us domain name appears to be used primarily by K-12 schools, which do not qualify for edu domain names, and municipalities. One reason its use by municipalities is popular is that many of the big city com domain names were bought in the domain name gold rush of the 1990s. Some big city names, for example, boston.com, were bought by location-specific businesses before the city registered.

The graph below shows that the distribution of the purposes of users on the Internet has changed over time.

Figure 1.2

The graph above presents the percentage of domain names registered in the different top level domains from January 1995 through January 1999, as reflected first by public registration levels and then by the Domain Name Survey.

The org domain is for non-profits, for example, sierraclub.org. When looking at this figure it is important to keep the previous figure of absolute growth in mind. For example, clearly the number of universities has not declined; yet the percentage of domains on the Internet that are universities has decreased. Similarly, the number of domain names registered to non-profit organizations has more than doubled in absolute terms over the time period. The graph shows that the number of nonprofit organizations has expanded exponentially along with the overall number of domains, as is necessary to maintain the approximately constant percentage in the figure.

The mil domain consists of addresses for the US military. The military's share of total domain names has not significantly decreased, staying at roughly 4.5%. Given the rate of growth of the Internet, this illustrates that the military has built upon its early commitment with aggressive deployment of Internet technologies.

The edu domain is populated by universities. The number of registered educational domain names appears not to have changed dramatically over the period depicted in the figure, though it has declined in percentage terms. In the years covered by the graph the number of edu hosts more than quadrupled, rising from 1,133,502 in January 1995 to 5,022,815 in January 1998. The phenomenal growth of international, network and commercial domains over the same period accounts for the relative percentage decline of the educational domain.

The predominant commercial domain is com. The network domain was originally for network service providers: the IP address registrars above, ISPs, and providers of other network services. Shortly after the inception of the net domain it was discovered that net domains provided fertile hunting grounds for ideal domain names for late adopters. Particularly for companies which missed the chance to obtain their .com name of choice, the net domain provided a second chance. The net domain now serves three markets: traditional businesses moving onto the net, modern net-created business opportunities (e.g. Web hosting), and personal interest domains (e.g. http://www.momspace.net/).

The top area of the figure shows gov (United States government) and geographical domain names. The increase in regional names, including .us, Asian and European domain names, is reflected in the sharp drop in the total percentages of the other domain names. After July 1995 registration of net domains increases so dramatically that there appears to be a leveling of regional domain name use. Again considering absolute growth, registration in international domain names continues at an exponentially increasing rate, as shown in the table below.

The growth of hosts in seven regions, from the Internet Domain Survey.

Region           Jan. 94     Jan. 95     Jan. 96     Jan. 97      Jan. 98      Jan. 99
North America    1,685,715   3,372,551   7,088,754   11,216,036   20,302,652   33,702,867
Western Europe   550,933     1,039,192   2,699,559   4,352,152    5,537,049    9,300,942
Eastern Europe   19,867      46,125      168,142     238,580      443,191      694,723
Middle East      6,946       13,776      44,484      58,930       103,925      211,824
Africa           10,951      27,130      84,715      104,838      199,958      284,912
Asia             81,355      151,773     672,495     1,006,664    1,661,034    3,089,659
Pacific          113,482     192,390     475,505     647,948      916,538      1,066,398

Regional Growth of the Internet

The customer base on the Internet grows as the number of countries and connections grows: exponentially with time. Although the coefficient of growth varies across the continents, the shape of the growth curve remains the same for each region. It is the expectation of the future of these growth curves, as much as the current magnitude, that so excites the providers of content and commerce services.
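The shape of these curves can be checked directly against the table above. The following sketch (not part of the original text; the numbers are the North America and Asia rows from the table) computes the year-over-year growth factors. A roughly constant multiplicative factor per year is the signature of exponential growth.

```python
# A minimal sketch that computes year-over-year growth factors from two rows
# of the host-count table above. Each factor is the growth multiple for one year.

hosts = {  # region -> host counts, Jan. 94 through Jan. 99
    "North America": [1685715, 3372551, 7088754, 11216036, 20302652, 33702867],
    "Asia":          [81355, 151773, 672495, 1006664, 1661034, 3089659],
}

for region, counts in hosts.items():
    factors = [round(later / earlier, 2)
               for earlier, later in zip(counts, counts[1:])]
    print(region, factors)
```

Every factor is well above 1.0; the coefficient varies by region and by year, but the multiplicative pattern holds throughout the table.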

Who is on the Internet?

Profiling the domain is no trivial task, and constructing a profile of the typical Internet user is more difficult still. Thus any survey-based discussion of Internet users is subject to gross generalization. With this caveat I now consider just such gross generalizations about users. Keeping in mind the tendency of Americans to deviate wildly from any norm, take this discussion for what it is: an examination of trends by a long-time user with an academic bent.

First, the number of female users continues to be smaller than the number of male users. However, the number of female users is growing faster than the number of male users. Cultural and economic factors appear to drive this gender imbalance. As the percentage and numbers of female users increase, as they will, eventually the gender distribution will stabilize at a level that reflects the differences in incomes and free time between men and women. This will have a number of effects aside from the obvious one of increasing the importance of Internet commerce, since women still do most of the shopping in America.

Certainly men and women, insofar as any gross generalization can apply to billions, will use the Web differently. Will the threshold at which men and women choose to trust their purchases and data to the Internet be different? At what point will Internet commerce break the threshold of the average shopper? How will the purchasing and marketing decisions of men and women differ, and how will these decisions be similar?

A survey by the Pew Research Center for the People and the Press noted that the Web is multimedia not only in the technical sense but also in the sense that it is used differently by different people. This survey classified Web users into four basic types according to what they did while on line: researchers, political expressives, home consumers, and party animals.

Researchers use the Web for professional purposes. They augment the workplace with news and radio, but their primary interest in the Internet is work-related. (Having been in computer science environments I might point out that a simple count of the pure number of packets to researchers might place them in the party animal department, but this is only an artifact of the bandwidth required by multimedia applications. Video and audio require far more packets than text.) The early markets for Internet commerce are for researchers: books, computer hardware, software, and educational opportunities. Researchers will find that the combination of Internet commerce and the service economy enables them to spend less time at the mundane tasks of life and more time in the lab. If not for issues of privacy, researchers would be the ideal target for integrated Web service sites, which lower the overhead of managing one's life by offering house-keeping, grocery shopping, and delivery services. However, integrated services currently sell data about their customers as part of their revenue. Researchers, and others aware of the resale of data, are less likely to embrace such services.

Political expressives go on-line primarily for the political information and the opportunities for organization and discourse that the Internet offers. Political merchandise can be obtained readily on the Web. For example, posters from Nelson Mandela's original presidential campaign can be ordered off the Department of Communications page at the South African government's Web site. The political season is now accompanied by political Web sites that offer information, post candidates' schedules, and sell goods. Political expressives are committed individuals. The time-saving qualities of Internet commerce are an advantage for this group. Another significant advantage is the ability to evaluate a company or product according to the political information readily available on the Web; searches for products can be easily correlated with evaluations of company performance. That labor practices can be reviewed immediately before an athletic shoe purchase could prove beneficial to New Balance shoe merchandisers, who are well-known for their fair labor practices. Political expressives need mundane goods too.

Home consumers are the obvious point of interest for Internet commerce. The Web's advantages for home consumers are time and convenience. Shopping can be done while the children are napping, or even absorbing their television ration downstairs.

Party animals are a major target of entertainment sites on the Web. It is a reasonable supposition that suck.com is not aimed at the research audience. Similarly reams of sites exist for even the most obscure show, cult movie, and bad habit.

The Internet offers different advantages to different groups of users. For young people, for example, it provides endless playmates and a variety of games and chats. It allows them to discuss potentially frightening topics, such as sex, religion, politics, and drugs, in the safety of their homes. The anonymity of the Internet allows young people to explore new personalities, subcultures, and new roles. Users can change their gender if they wish to. Teen-agers can safely hurl obscenities and debate adults with heady feelings of anonymity and equality. All these things draw high schoolers and young collegiates on-line. Once there, they can shop at all hours, without the need for permission or transportation to go to the mall. It is simple to implement a site aimed at teens so that shopping and chatting can co-exist. (A parallel disadvantage is, of course, that they can also impersonate their parents and order merchandise that is later innocently disavowed by cardholding adults.)

Users change their profiles over time. Early users will play, because young people play and there is no reason young people should suddenly become serious when presented with a keyboard. Some of these young people will go to college, and if previous trends prevail they will become both more intense party animals and political expressives. Perhaps they will even manage to graduate from college in the process. This will lead to a move into the researchers group at first jobs or graduate schools. When people have families, the extra time gained from use of Web-enabled services will (one hopes) be spent playing with their families. These people will then move to the home consumers group in terms of Internet commerce, although they may generate the bandwidth demands of researchers at work.

How does the Internet work?

Why Internet commerce? Why play now when the hazards are undetermined and systems untested on a large scale? Certainly the obvious answer is, "That's where the customers are" as illustrated in the previous discussion of Internet growth. But Internet commerce offers the potential to greatly reduce transactional overhead and remove the constraints of geography and time.

Understanding how the Internet supports varying information markets requires understanding the layers of the Internet. Discussing Internet transactions requires understanding different network applications (news, Web browsers and clients, chat) as well as the layers of protocols underneath these applications.

Changes in markets, especially information markets, depend on the nature of the Internet. When publishers and advertisers pay to provide information, they are paying for attention span, increasingly referred to as mind share. In the information economy attention span is going to become an increasingly valuable commodity. University Professor and Nobel Prize winner Herb Simon stated that the most valuable products in the coming years will be those that decrease information flow: filtering, rating, organization and evaluation products. At the time of that statement the Web was still an obscure engineering feat, but its power in organization of information has since become apparent.

The Internet Protocol

A protocol is a communications standard. A protocol defines a series of messages and the syntax for evaluating those messages. The beginning of any datastream identifies the protocol used for formatting the data for transmission. With humans, for example, the greeting denotes the tone of the following conversation. With Internet connections the protocol defines the nature of the connection: streaming high bandwidth content, store and forward text, chat, etc. The receiving machine identifies the protocol and therefore knows how to parse the rest of the data. People use standard sets of exchanges and closures for conversations that are not all that different in function from protocols. When a greeting in a human conversation is businesslike, or friendly, or aggressive, the participant who receives each of those types of greetings has information on what is to follow and how to proceed. When a network protocol is described, each message has a purpose and a form.

Consider how a human greeting might be defined if it were part of a formal protocol. Included might be standard forms for identification, mood evaluation, and topic introduction. For example, the mood evaluation query, a.k.a. the friendly greeting, might be defined as:

query-> How are

e.g. How are you today, Professor Lia?

Protocols may look complex but are only abstractions of simple, and at best graceful, underlying standards.
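To make this concrete, here is a toy sketch of how a receiver might parse such a greeting protocol. The message forms and the function name are invented for illustration; this is not any real standard.

```python
# A toy "greeting protocol" parser (hypothetical, for illustration only):
# each message starts with a keyword that identifies its type, just as a
# datastream begins by identifying its protocol, so the receiver knows
# how to parse the rest.

def parse_greeting(message: str) -> tuple[str, str]:
    """Return (message type, payload), or raise on a non-conforming message."""
    forms = {
        "HELLO": "identification",
        "HOW ARE": "mood evaluation query",
        "RE:": "topic introduction",
    }
    for prefix, kind in forms.items():
        if message.upper().startswith(prefix):
            return kind, message[len(prefix):].strip()
    raise ValueError("not interoperable: unknown message form")

print(parse_greeting("How are you today, Professor Lia?"))
```

A non sequitur such as "FISH!" matches no defined form and raises an error; that, in miniature, is a failure of interoperability.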

Imagine greeting the Queen of England with a non sequitur, such as "FISH!" She would not know how to respond. Essentially this is what happens when network protocols are not interoperable. When systems lack interoperability, the connection is there, but neither machine can make sense of what the other is saying. The machines are, in a sense, speaking different languages.

The fundamental technology of the Internet is the Internet Protocol (IP). The Internet is the network of networks that are connected using IP. There is only one Internet, as distinguished from intranets or internets, of which there are many.

IP is a connectionless protocol. That is, in IP the routes by which each part of a message travels to reach its destination are not predetermined, and the resources for message delivery are not reserved. In contrast, telephone networks have traditionally been connection oriented. Connection-oriented protocols establish a point-to-point connection, from one phone to another, when communication is requested. This gives a connection-oriented protocol the ability to ensure quality of service before the connection is established.

Sending information using only IP is not unlike sending a postcard. A postcard is excellent for a discrete message. Of course it is in no way private -- despite legal controls it could be read by many. IP provides only the addressing, and a best effort in delivering the data.

The figure below illustrates the analogy between packets and postcards. On the Internet data is broken down into packets. The packets are dated and each sent independently. The packets are not encrypted; there is no virtual envelope protecting the contents. Each packet is addressed. Other protocols, including cryptographic protocols and the transmission control protocol, add other features as illustrated by analogy in this figure.
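The packet-as-postcard analogy can be sketched in a few lines of code. This is illustrative only: real IP headers carry much more than this, and the field names here are invented.

```python
# A sketch of the postcard analogy: a long message is broken into small
# numbered packets, each carrying the full destination address, and each
# sent (and potentially delivered) independently.

def packetize(message: str, dest: str, size: int = 16) -> list[dict]:
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"dest": dest, "seq": n, "data": chunk}
            for n, chunk in enumerate(chunks)]

msg = "Wish you were here on the World Wide Web!"
packets = packetize(msg, "128.22.36.81")
# Like postcards, the packets are unencrypted: anyone handling one can read it.
for p in packets:
    print(p)
```

Reassembling the message is just a matter of sorting by sequence number, which is one of the services TCP adds on top of IP.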

Uses of Protocols

IP addresses have two parts: a netid and a hostid. The network identifier is like the state and city in a postal address: it identifies an area. The host identifier is like your house number: it identifies a specific destination in a general region. IP addresses look like this:

An Internet Protocol Address (IP)

Considering the sheer length of an IP address it would seem virtually impossible that there could be a shortage. After all, the addresses are 32 bits long, suggesting that there are 2^32 possible Internet addresses. In fact, the addresses are separated into different classes of networks: A, B and C. The later the letter designation, the longer the network ID and the fewer hosts per network. This means that networks of different sizes can easily be connected to the same internetwork.

Network Class   Number of Networks   Number of Hosts   Leading Bits   First Octet Range
A               127                  16,777,214        0              0-126
B               16,383               65,534            10             128-191
C               2,097,152            254               110            192-223

Classes of Internet Protocol Networks

The table above shows how many networks of each class it is possible to have. Looking at the table and then returning to the postcard example, it is clear that Sandia National Laboratories has a class A address and Lawrence Berkeley Laboratory has a class B address, because each address falls within the appropriate range shown in the table. The number of networks and the number of hosts are limited by the size of the IP address: the number of networks in a class is 2^(number of bits in netid), and the number of hosts is 2^(number of bits in hostid), which is the same as 2^(32 - number of bits in netid).
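A short sketch makes the classful rules mechanical. It assumes the ranges in the table above; the function name is my own, and the "minus 2" accounts for the reserved all-zeros and all-ones host addresses.

```python
# Derive the class of an IPv4 address from its first octet, and report
# how many bits identify the network versus the host (classful addressing).

def classify(address: str) -> tuple[str, int, int]:
    """Return (class, netid bits, hostid bits) for a classful address."""
    first = int(address.split(".")[0])
    if first <= 126:       # leading bit 0
        return "A", 7, 24
    elif first <= 191:     # leading bits 10
        return "B", 14, 16
    elif first <= 223:     # leading bits 110
        return "C", 21, 8
    raise ValueError("not a class A, B, or C address")

cls, net_bits, host_bits = classify("128.22.36.81")
print(cls, 2 ** host_bits - 2)  # class B: 65,534 usable hosts per network
```

Running the same arithmetic on the other classes reproduces the table: 2^24 - 2 = 16,777,214 hosts for class A and 2^8 - 2 = 254 for class C.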

All this means that the number of Internet network addresses is a little over two million, rather than the excess of four billion that the simple length of the address suggests. Since the network addresses were separated into classes so that all machines could have individual addresses, the number of networks which can have addresses is limited. So, despite the simple observation of the length of the address, it is possible to have a shortage of IP addresses.

Considering that IP was developed in 1974 (Cerf & Kahn, 1974), when there were sixty-two computers -- not networks but computers -- on the ARPANET, allowing two million networks to connect easily shows foresight and the grace of fine design. The next version of the Internet Protocol, IPv6, will have mechanisms to address the shortage of IP addresses.

An IP address provides a guess at the size of a network. Of course such a guess does not always prove correct. A network may be connected to the Internet at only one point, so that it really needs only one address, and host routing can be handled behind this point of connection. This is very common with commercial sites that have firewalls and extensive local area networks. Or a connected institution may use dynamic IP addressing so that it needs fewer addresses than machines. So a large internal network does not require a correspondingly generous IP address.

Coincident with the expansion of the ARPANET there was BITNET. BITNET was an early network that consisted of dial-up terminals and IBM mainframes. (The UNIX-based NSFNET grew to embrace and obliterate this concurrent network.) By 1985 BITNET had its first exponential growth in mainframes, to seven "conference machines". Mainframes ran distribution lists and forerunners of Usenet groups, and could be considered ancestors of today's servers. In addition there were hundreds of machines which could connect to BITNET, including machines at Yale, University of Maine, State University of New York - Stony Brook, Brown University, Harvard, MIT, and Tufts University. BITNET allowed users from hundreds of machines to run terminal emulators to come together and chat synchronously. Daniel Oberst offered the following evaluation of the network in the BITNET monthly newsletter: "BITNET is still by and large a voluntary, cooperative network that only exists to the extent that people work together..."

That this comment came from BITNET illustrates an important point: networking is connectivity, is sharing, is trust. The following rough description of Internet routing shows why this evaluation remains arguably applicable. Routing is an excellent example of trust. Routing is how a machine, given that it has an IP address, connects to others. Routing is, specifically, how a packet (a small information chunk, like a postcard greeting) gets from machine A to machine B. Routers are special-purpose machines which direct packets. Routing is also a function of a general purpose desktop machine. Here the primary focus is on the mechanisms, not the machines. However, envisioning only a router as performing the function might make this explanation easier to follow.

Routers keep a list of all the machines or hosts to which the router is directly connected, and a list of machines to which all the physically adjacent routers are directly connected. A network to which neither the router itself nor any of the physically adjacent routers is directly connected is a remote network. For remote networks, the router keeps a continually updated list of the first step of what it believes to be the shortest path to that remote network. Routers do not store complete paths to remote destinations; they store only enough information to send the message to the next router, under the assumption that the next router will direct the message properly, and so on until the message reaches its destination. The trip between two routers is called a 'hop'; regardless of the physical distance between two routers, the distance between them is still one hop. Thus routers trust the message to other routers.

Routers are always updating their beliefs about the network. At any time a router may receive a broadcast from another router about that router's stored information. The receiving router always trusts the received information about routes and updates itself appropriately. Because of the constant updating it is quite possible that a message consisting of several packets will travel on different paths and possibly arrive out of order.

Machine             Connected To These Networks   Connected To These Machines
122.46.77.32 (me)   122.46, 113.22                122.46.77.31, 122.46.77.35
128.22.36.81        128.25, 126.14, 115.22        128.25.233, 126.14.122, 115.22.004
128.14.46.98        114.7                         128.14.56.33, 128.14.77.22
113.22.88.45        default

Router Database

The addresses in this table are invented; the point is to illustrate the type of knowledge a router would have.
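The trusting update rule described above can be sketched in code. The table entries here are invented, and this is a simplification of real distance-vector routing: on hearing a neighbor's broadcast, add one hop to each advertised distance and keep whichever known route is now shortest.

```python
# A sketch of a router's trusting update rule (entries invented).
# routing table: destination network -> (next hop, distance in hops)
routes = {"114.7": ("128.14.46.98", 1), "128.25": ("128.22.36.81", 1)}

def receive_broadcast(neighbor: str, advertised: dict) -> None:
    """Trust a neighbor's advertised routes; keep whichever path is shortest."""
    for dest, hops in advertised.items():
        via_neighbor = hops + 1  # one extra hop to reach the neighbor itself
        if dest not in routes or via_neighbor < routes[dest][1]:
            routes[dest] = (neighbor, via_neighbor)  # trust the update

# A neighbor claims a one-hop route to network 115.22 and a longer route to 114.7.
receive_broadcast("113.22.88.45", {"115.22": 1, "114.7": 3})
print(routes["115.22"])  # new destination learned via the neighbor
print(routes["114.7"])   # unchanged: the old one-hop route was shorter
```

Note that the router never verifies the broadcast; the protocol works only because routers cooperate honestly.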

Consider the network of routers on the Internet as analogous to a social network. Imagine a world with no central information for people in various regions -- in other words, no telephone directories. Imagine searching for a person, say Gene Eric Person in San Francisco, the way a router searches. First you would go to your address book. (Consider a region a subnetwork and a person a machine to make this analogy function.) You would know all the regions to which you can send directly -- Chicago, Dallas, Charlotte. You would also know the regions to which your direct contacts can send. Here is how a page in your address book would look if it worked like a router. In this case you would send your message for Gene Eric Person in San Francisco to Cathy. She would look at her address book and send it to someone in San Francisco, and that person would send it to Gene Eric.

Name     Location             Can Connect To
Carlos   Chicago, Milwaukee   Pittsburgh (1), Philadelphia (2)
Cathy    Dallas, Austin, SF   San Francisco (3), San Juan (8), Austin (1)
Catlin   Charlotte, Atlanta   Jacksonville (2), Miami (3)
Carter   Oklahoma City        almost anywhere

Your Address Book

If you did not have him in your address book you would send the message to Carter, knowing Carter is well-connected and is likely to be able to get a message to any location. Carter is your default router.

Imagine later you get an updated address book from your friend Carlos. Carlos notes that it takes him one friend, one hop, to get to San Francisco. You notice that it took three hops to get to San Francisco through Cathy. You would update your address book as follows:

Name     Location                 Can Connect To
Carlos   Chicago, Milwaukee, SF   Pittsburgh (1), Philadelphia (2), San Francisco (1)
Cathy    Dallas, Austin           San Juan (8), Austin (1)
Catlin   Charlotte, Atlanta       Jacksonville (2), Miami (3)
Carter   Oklahoma City            almost anywhere

Your Address Book

You may also find that Carlos is two hops away from Carolyn, who lives in Portland. You would not add Carolyn to your address book because you are not trying to build a complete and global database of every location. You just want to update your information on the best way to get to any location from your location. But the next time you looked for Gene you would send your postcard to Carlos instead of Cathy.

The address book is your local routing. If the person you wanted to contact was not in your address book you would call someone who would be likely to know him or her because of their location. That is, like the router, you would make your best guess from your most recent information about your network of peers as to the shortest path to the person you want to reach.

There are three critical observations from the above discussion of routing: there is no optimal physical location, there is no single point of failure, and routing is a cooperative exercise in trust.

First, there is no optimal physical street corner on which to reside. Customers will come from all locations, and the appearance of a web presentation depends upon the path between a browser and the information. It is not possible to be adjacent to every browser - there is no ubiquitous next door location. Yet there is valuable real estate in the Internet: not on the network but on the customer's desktop. A good place to be on the Internet is in the Web surfer's bookmarks. The ideal place to be is in the bookmarks that the user trusts.

An ancillary implication of this lack of optimal physical location is that there is limited possibility for monopoly control of distribution. In traditional media markets there is limited competition. Consider newspaper, television, and radio markets. The ownership concentration results from expensive or exclusive distribution channels. Most towns have one newspaper because the start-up costs are too expensive in a market with an existing newspaper. The newspaper chicken-and-egg problem is that one must have a subscription base and a distribution network to have a paper. Only one radio or television station can exist at one wavelength. On the Internet there is no single advertising venue. There is no reason that multiple competitive search engines, as well as multiple meta-search engines, cannot continue to thrive and compete. Because there is no center to the Internet, because of the routing, there is no way to ensure that every person entering a market sees one product first.

Second, in a connectionless network each packet is delivered independently, so there is no single point of failure. If packets are not getting through on one route, the following packets will try a different, more promising route. Packets are routed independently of each other, and therefore are not stuck repeating previous mistakes. Because there is no central directory of addresses, there is no single point of failure. This means connectionless networks are survivable, that is, hard to disrupt. One business implication is that such networks are reliable.

Third, routing is an exercise in social cooperation. Social networks break down from lack of cooperation; routing could similarly break down. The widespread routing failures that I know about (there may be classified cases, but routing failures tend to be noticeable) have thus far resulted only from errors in router configuration, not from malevolence.

Transmission Control Protocol

Postcards are perfect for short bursts of information -- yet they would be a terrible way to send a novel. The pages would need to be in the correct order. Every page would have to get through. If the recipient's mailbox got too full (if, for example, the recipient went on vacation) it would be important to know this and stop sending for some time. It would be important to ensure that the writing did not get smudged, torn, or covered with postmarks. The transmission control protocol, TCP, provides all of these services for messages traveling through the Internet.

TCP provides orderly and reliable delivery of data by providing flow control, sequencing and error detection. When packets are lost TCP backs off, which is a fancy way of saying it slows the transmission rate. Recall the postcard figure that shows some of the functions of TCP.

Flow control means that TCP prevents the recipient mailbox from overfilling or the load from crippling the mail carrier by sheer volume. Sequencing means the postcards are numbered, and can be ordered into a coherent document. Error detection means there is some certainty that what is sent is what is received. (The level of error detection TCP provides is meant only to detect random network failures; it can be easily defeated by malicious action.)

TCP provides a virtual connection that is unlike a traditional connection in two ways. First, the information transmitted via TCP/IP does not all flow along the same path, as routers along the way are constantly updating information about optimal paths. Second, later arrivals cannot be prevented from sharing resources being used by earlier arrivals. To contrast with a traditional connection: if enough phone calls are in progress that a phone company's local switches are being used to capacity, the next person requesting service will get a fast busy signal. A phone call cannot connect if the connection is already in use. With TCP, on the other hand, when the service is at capacity the service for everyone slows down as others begin to use the same resources, but no one is refused access simply for arriving late. Many Internet users have noticed this, particularly on the East Coast, where the Web slows noticeably in the afternoon as those on the West Coast begin their day.

TCP transmission begins with a three-way handshake: the caller, or initiator, calls; the receiver, or respondent, replies; then the caller verifies that the receiver has replied. This initiation resolves several important issues. First, the amount of data the receiver is willing to hold and organize is determined. This is called the window size in TCP and is analogous to the size of the mailbox in postal delivery of a postcard. Returning to the postcard analogy: consider the acknowledgments in TCP to be short postcards in which the reader tells the writer how many pages have been successfully received. The window size tells the writer how many can be in transit at any time. If the reader tells the writer through an acknowledgment that the first 150 postcards of the novel have been received, and the window size is, say, 50, then the writer can have sent up to 200 postcards in the mail and reasonably expect that all these cards will be received.

Second, the speed of the replies ("acknowledgements") determines the time-out, after which a packet can be assumed to be lost. In the postcard example this is analogous to the time it takes for the receiver to receive a postcard and send an acknowledgement, plus the time it takes for the acknowledgement to be delivered by the postal system. The diagram below shows the beginning of a TCP/IP connection.

The Beginning of a TCP/IP Connection
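The window arithmetic in the postcard analogy is simple enough to state as code. This is a sketch using the numbers from the text; the function name is mine.

```python
# Sliding-window arithmetic from the postcard analogy: with `acked` cards
# acknowledged and a window of `window` cards allowed in transit, the
# highest-numbered card the writer may already have sent is their sum.

def highest_sendable(acked: int, window: int) -> int:
    return acked + window

print(highest_sendable(150, 50))  # 150 acknowledged + window of 50 = 200
```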

I go through this example because it provides important illustrations of trust on the Internet. TCP requires trust. TCP/IP is ubiquitous. TCP/IP requires cooperation.

By 1991 the TCP/IP protocol suite had about one hundred implementations, and more than 700,000 machines used TCP/IP to connect 4,000,000 users (Cerf, 1993). TCP/IP remains the core protocol suite on the Internet, connecting all forty million users in 1999. When people say that the Internet is inherently unreliable, they are referring to IP transmissions; TCP, IP's almost constant companion, provides reliability.

The trust implications of the Internet, not the Internet itself, are the focus of this work. The previous descriptions were required for understanding and answering the question: What are the trust features of the Internet and Internet-like networks (i.e. packet switched)?

Begin with evaluation of information. Whether one is on the Internet for amusement or commerce, the question of how to evaluate information on the Internet is important. Thus a brief tutorial-style set of questions to use in evaluating a Web page is placed in the remainder of this chapter, early in the text.

I also discuss here the effect of the Internet on the practice of setting prices, because pricing illustrates how the nature of the Internet can change what would seem to be unalterable facts in off-line commerce. Flat prices are a basic part of the American retail tradition: changes in price discrimination hint at the fundamental changes to come. Currently prices are set on all retail products except at the high end. Prices on houses and automobiles are subject to bargaining; but those on food and entertainment are not. This is primarily because determining what each individual is willing to pay requires haggling, which is quite expensive off-line. On-line data about purchasing and searching practices makes it possible to price for the particular customer.

Barter markets and town markets enable participants to have face-to-face interaction where negotiation is possible. Internet commerce can bring back face-to-face negotiation in its virtual form. How else will money change with the electronic market? To begin to ask the question requires asking why money is what it is. Therefore an extremely brief history of money is included to place information money on the evolutionary timeline. But first I continue my consideration of protocols, the stages of a transaction and the scope of a transaction, all of which can be a function of the monetary form.

Layers of Protocol & Stages of a Transaction

The hypertext transfer protocol (HTTP) is a protocol that provides seamless delivery of different types of data and, since the Mosaic project, a user-friendly graphical interface. HTTP is the data transfer protocol of the World Wide Web. It allows users to easily publish and obtain information on the Internet. Browsers used with HTTP provide a simple user interface which highlights links to other files using color or graphics. Browsers using HTTP catalog locally available applications for file display, and automatically present text, sound, or graphics using these local applications.
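The exchange underneath all of this is plain text. The sketch below shows an illustrative HTTP/1.0 request and response; the host, path, and page body are hypothetical:

```python
# A sketch of the text a browser and server exchange in one HTTP
# request/response cycle (HTTP/1.0 style; example.com is a placeholder).
request = (
    "GET /index.html HTTP/1.0\r\n"
    "Host: example.com\r\n"
    "\r\n"
)

response = (
    "HTTP/1.0 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html><body>Hello</body></html>"
)

# The status line is what the browser acts on before rendering.
status_line = response.split("\r\n", 1)[0]
assert status_line == "HTTP/1.0 200 OK"
```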

Protocol                         Connects                    By Providing
Internet commerce protocols      Consumer to Merchant        payment, possible delivery verification
Hypertext Transport Protocol     Application to Application  location and presentation
Transmission Control Protocol    Machine to Machine          reliable delivery of multiple packets
Internet Protocol                Network to Network          delivery of packets between networks

Hierarchy of Protocols on the Internet

With the development of the Web, the Internet became fully capable of supporting user-friendly distributed commerce, just as previous protocols had enabled functionality from simple communication to file transmission. The table above illustrates how Internet commerce protocols have built on previous protocols, which has in turn expanded the pool of possible merchants and consumers. Without the ability to locate goods, consumers would not shop on the Internet. Without the ability to easily present goods, merchants would have difficulty selling their wares on the Internet even if they could be located. Of course, Internet commerce does not depend entirely on HTTP, as some protocols include options for users with email only and no HTTP capacity.

The Internet supports a range of business functions, not simply payment. Every transaction, on or off the Internet, has multiple phases: discovery, price negotiation, final selection, payment, delivery, and dispute resolution. The Internet can support many types and all stages of Internet commerce (Sirbu and Tygar, 1995). Understanding how necessitates understanding why.

HTTP works on a simple client/server request and response mechanism. The Web is indeed the "killer app." Not only is it the killer app in business terms, it could have "killed" the Internet by creating many short, and therefore potentially ill-behaved, connections ill-suited to TCP/IP. The investment made to ensure that the Internet succeeds and thrives despite the sub-optimal design of HTTP offers promise that the informal governing structure of the Internet can handle future problems that may arise.
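The server half of that request/response mechanism can be sketched with Python's standard library. This is a minimal illustration, not any particular site's implementation; note that each GET gets one reply and the exchange ends, which is why Web traffic consists of many short connections:

```python
# A minimal sketch of an HTTP server: one request in, one response
# out. The page body is a placeholder.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Hello</body></html>"
        self.send_response(200)                          # status line: 200 OK
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                           # the response body

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

# To serve: HTTPServer(("localhost", 8080), HelloHandler).serve_forever()
```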

Internet commerce has increasingly become possible with the advent of the World Wide Web. The Web is growing at many times the rate of overall Internet host growth. The Web allows the consumer to locate information of interest on the Internet without requiring any technical expertise.

All Internet commerce protocols can be used with the Web. In addition, some commerce protocols (Mastercard, 1995; VISA, 1995) are comprehensive and include the ability to transfer funds using only email. (For a detailed discussion of network protocols see Schwartz, 1987 and National Center for Supercomputing Applications, 1995).

Commercial Transactions

Despite the demographic and geographic diversity of people on the Internet, all electronic transactions will share some features. What elements of Internet commerce will every transaction share? On the individual level, probably nothing more than that all possible buyers will have the same number of chromosomes. At the business level, however, transactions share a structural similarity.

To understand business implications requires defining the scope of an electronic transaction and the market structure for information. These issues have each been the subject of entire texts (e.g. McKnight and Bailey, 1997) so clearly only an introduction will be presented here.

Every transaction has multiple stages, from discovery to dispute resolution. The scope of a transaction limits the capacity of a transaction to provide reliability. If a protocol considers only the transmission of payment, then discussions of reliable verification of orders will arguably be biased against that protocol. However, customers would agree that delivery of goods is a critical element of all transactions. Because theft is theft to the consumer regardless of the framing of a protocol designer, the discussion of reliability is appropriate for every protocol, just as discussion of anonymity is appropriate for every protocol. From the perspective of the customer, if money is stolen there has been theft. If goods are lost, there has been failure. To discuss every protocol only according to the definition of a transaction as provided by its designers would be of limited service. For risk considerations it is appropriate to consider the entire transaction, and not limit the discussion to the framing provided by the designers.

The stages of a transaction are:

  1. account acquisition
  2. browsing or discovery
  3. price negotiation
  4. payment
  5. merchandise delivery
  6. dispute resolution
  7. collections and final settlement

Most Internet commerce protocols do not include all of these stages explicitly. In many ways comparing Internet protocols is like comparing apples to oranges. Yet such comparisons need to be made for consumers deciding among very different commerce protocols. Thus the use of consistent language, notation, and transactional scope is itself a subtle but real contribution to the understanding of Internet commerce.

Assume the transaction begins with discovery, since most merchants do not have accounts per se with every customer. Both for the sake of consistency, and to reflect the strongest interest in electronic commerce research, discovery is assumed to happen through the Web, so that every transaction begins with information that can be obtained through standard HTTP requests and responses.

Transactions begin when the customer obtains the means of payment, i.e., account acquisition. Depending on the commerce protocol this may mean signing up with a transactions provider (e.g. First Virtual), obtaining a credit card account (e.g. SET) or purchasing digital coins (e.g. Digicash).

With these assumptions in mind, consider how each stage of the transaction is enabled or altered on the Internet. Product discovery is enabled on the Internet through advertising and electronic word-of-mouth. Product information is dispersed through Web pages, distribution lists and Usenet groups. The Web enables individuals to locate specific information and search by product or company name. Using search engines, such as the World Wide Web Worm and Lycos, corporate Web sites, which often exist solely for the purpose of distributing product information, can be located. With distribution lists, or dlists, individuals who share a common interest form a closed group and transmit messages of interest, including product announcements and evaluations, to all members of this group. (It should be noted that distribution lists are usually motivated by discussion, with product announcements accounting for a small fraction of the traffic.)

Usenet groups are topical discussion areas open to all. The title of the group conveys the subject; for example, rec.pets.cats is for those who like to talk about their cats or cats in general. Usenet group members announce new products but such product announcements are secondary to discussion. Direct advertising across Usenet groups is considered offensive by Internet users. A business that decides to advertise by sending many messages to many Usenet groups and lists is likely to find more sworn enemies than new customers, as this violates the social ethic of the Internet. Distribution lists, Usenet groups and the Web overlap. URLs (uniform resource locators) are sent over distribution lists and posted on Usenet. Web sites connect to archives of Usenet groups and discussion lists.

Price negotiation is supported by email and electronic data interchange. Information about goods can be delivered on-line. Customer support can be offered on-line through email and via Web pages.

Payment is the core issue in Internet commerce, and the protocols examined here are concerned with payment. As many electronic payment types will evolve as paper payment types exist today. The following chapter on Money discusses how digital money differs from paper monies.

Merchandise delivery is simple on the Web -- for information goods. Otherwise delivery is difficult to ensure. The anonymous purchase of goods which must be accompanied by a delivery address is of limited use. The purchase of goods which are not delivered is not a reliable transaction, no matter how smoothly the monetary transfer flowed. Delivery guarantees can be integrated into payment for information goods; otherwise the situation on the Internet does not differ from mail order.

In part because of the issue of dispute resolution, Web commerce can be superior to telephone orders. The techniques used to bind payment to merchandise delivery on the Web can be used to bind payment to receipt delivery. Thus, although the box may not be delivered, the customer at least has a binding promise. While this does not address issues of outright fraud, it will simplify dispute resolution by decreasing cases of miscommunication.

Collections and final settlement are both simpler and more complex in electronic form. The issues of collection and settlement are tightly bound to the nature of money and are thus clarified in the next chapter's discussion of money and reliability.

Every phase of a commercial transaction has associated costs. The ability of an Internet commerce protocol to reduce transaction costs depends on its ability to address these costs. Figure 1.6 shows distribution of costs in a credit card transaction (Sirbu and Tygar, 1995). The rate of adoption of Internet commerce partially depends on how automation can decrease the cost in the figure. The Internet allows administration of customer orders, payment or payment authorization transmission, and production of an invoice to be automated.

Cost Distribution in a Credit Card Transaction

In addition to cost advantages through automation, the Internet allows services to be provided continuously, around the clock, around the globe, in multiple languages, and in multiple currencies. Catalogs of merchandise can be easily found by interested shoppers at negligible cost to the merchant, and can be updated immediately as prices and inventory change.

Internet commerce was initially primarily used by those already familiar with catalog marketing. Increasingly diverse types of business ventures are now on the Internet. The table below shows examples of businesses on the Internet and corresponding paper information markets (Goradia et al., 1994).

Market Structure                 Electronic Example                 Paper Example
Publisher pays                   WWW catalogs                       Mail order catalogs
Advertiser pays                  Lycos, Yahoo                       Free weekly papers
Club pays                        Clarinet, Site license software    Corporate library
Customer subscription            Web magazines, dlists              Professional magazines
Customer pay per item            First Virtual                      Storefront sales
Customer pay for time            AOL, CompuServe                    Rental items
Mixed ads & customer payment     Prodigy, Netscape business sites   Newspaper

Structure of Information Markets

The standards that will determine how money and information flow around the Internet are being determined now, and some of the fundamental decisions about the risks businesses and consumers will take are being embedded as technical details in technical specifications. Examination of those specifications and enumeration of the risks is particularly timely while the standards are still in flux.

Evaluating Information On-line

Product discovery is the greatest current commercial use of the Internet. Yet the lack of validation of services and the uncertainty of the quality of information are serious issues in discovery and shopping. How trustworthy is information provided during discovery?

When a business is presented on the Web there is no tangible information about that business. Slander, fraud, and misinformation are not confined to the Web; but relative anonymity and the lack of a need for physical presence make misinformation easier. Those who lie can hide behind Web sites with noble faces, including words like "Justice" or "Consumers for Freedom" in their names. Because of the importance of open discourse, American judicial traditions of respecting speech, and the patchwork of jurisdictions over the Web, the reader, not the state, must take the responsibility for detecting what is false and not accepting information on the Web at face value. On the other side is the responsibility of the creator of a Web page to show why that page should be believed.

Misinformation is a hazard in any medium of communication. In this section I discuss ways to evaluate a site to determine whether it is reliable. There is no certain test to determine from afar the quality of a product or the trustworthiness of a site. However, some factors can signal falsehood or identify misrepresentations.

A competent computer science student who has been insulted, been ill-served, lost money or had his or her affections dismissed can easily put up a Web site illustrating how the previously beloved, the ex's business or the ex's employer is an enemy of good -- and with some small skill can make that site look quite believable. On a larger scale, during the 1996 Presidential race an activist (I believe a Forbes supporter) made bogus Clinton, Dole and Buchanon attack sites at www.clinton96.org, www.dole96.org, and www.buchanon96.com, sometimes clearly mocking, sometimes subtle and vicious. The casual Web browser was likely shocked by the policies and quotes found at these sites; understandably, as these fabrications were beautifully and professionally presented. In the 2000 Presidential election the Bush Campaign has requested that the FCC force removal of the www.GWBush.com site on the basis that it has the same look as the official Bush Campaign site and may confuse voters. The URL for searching on domain name ownership is http://www.networksolutions.com/cgi-bin/whois/whois. This should provide contact information for the domain name owner.

Companies and organizations have so far been able to respond to angry disavowals such as www.netscapesucks.com (which includes the Sucks 500). Just as the Web has been the mechanism of angry students in the past, it will increasingly become the mechanism of angry customers and employees in the future. In learning how to deal with this, companies can take a cue from those who were subject to harassment on the Internet long before the explosion in the com domain.

The first thing to consider when evaluating information is the source. Can you determine the source? In my hobby space, Mom's (http://www.momspace.net/), I clearly identify myself as the creator. In the Presidential race examples above, however, only a search of registered domain names would have identified the source of the bogus political pages. Is it really a nonprofit organization fighting for right, or a talented undergraduate at a technical school? Who is the source? Look for the ability to contact a physical person, not through email but with a street address or phone number that connects to an actual human. This information provides jurisdiction information; that is, there is a forum in which to sue should things go wrong.

When evaluating information, look at the domain name. Does it end in edu, com or org? This may identify an irate undergraduate pretending to be an organization. Technical schools are particularly hazardous (cmu.edu, mit.edu, etc.). A real-world non-profit organization, such as the Sierra Club, will have a domain name ending in ".org". A real company domain name will end in ".com", or less likely ".net" (or, even less likely, a geographic name). The absence of an appropriate top-level domain may be a very good indicator of a bogus organization -- but its presence is meaningless.
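The check above is easy to automate. The sketch below pulls the top-level domain out of a URL; as the text warns, this is a heuristic signal only, never proof of legitimacy:

```python
# A sketch of the top-level-domain check described above. The result
# is a hint, not evidence: the right TLD is meaningless, the wrong
# one is merely suspicious.
from urllib.parse import urlparse

def top_level_domain(url: str) -> str:
    """Return the last label of the host name, e.g. 'org'."""
    host = urlparse(url).hostname or ""
    return host.rsplit(".", 1)[-1]

assert top_level_domain("http://www.sierraclub.org/") == "org"
assert top_level_domain("http://www.cmu.edu/~student/") == "edu"
```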

When evaluating an evaluation, consider the tone. Especially look for loaded link names. If a link says "Evil Smith Hobbies is owned by the vile Bob Smith: he who stomps worms, hates flowers" and connects to Bob's page, look for evidence in Bob's own words. That the link naming him as a worm-stomping flower-hater goes to his page is no evidence that he does in fact loathe flora & fauna. Is the "evidence" of Bob's practices fake email from Bob? It is very easy to write text and pretend it is an email, or to edit an actual email. Which Bob Smith is it? Search the Web for Bob's own words. If an organization advocates a truly hateful idea or truly loathsome policy it is probably documented somewhere, if not openly advocated. If a contribution to a hate organization results from a purchase there will likely be mention of it elsewhere, certainly as a tribute or a shopping suggestion on the hate organization's page.

Always look for links to independent sources. Who links to this page? Is this just a page with so many headers that it will be selected by search engines often? Or do verifiable organizations link to this page? If such an organization does link, does that constitute a general endorsement of the information contained therein (e.g. "more information here"), or a specific statement of a single example of cooperation (e.g. "Smith Hobbies Haters & The Society to Loathe Bob joined us in this lawsuit for free roses.")? None of the Clinton, Dole or Buchanon pages linked to other organizations. They were self-referential. They should have linked to political parties and, more important, the political parties should have been linked to them -- and would have been if they were real! Who links to pages that evaluate businesses? Virtual Better Business Bureau stickers should connect to the Better Business Bureau, and references to Consumer Reports positive evaluations should link to the appropriate story in Consumer Reports. The ability of watchdog groups to use bozo filters makes the link far more important than the image (which is trivial to copy).

Of course, consider the content. Information too ridiculous to be believed should not be believed. If the things claimed are too bizarre to be true, they may well be false. The alternative: make claims reasonable and believable. Being outrageous to lure customers is not as effective on the Web. A huge "$20.00 Levi's Jeans" sign with no $20.00 jeans will probably not work as well on the Web as in real life. There is no cost to leaving the virtual store. There are no large plastic "Going out of Business" banners on the Internet, because these would decrease trust and not lure customers.

Beyond evaluating businesses and organizations, this checklist can serve to determine if an irate consumer or angry gadfly is presenting reliable information. Consumers can effectively provide information and companies can respond. There are social and technical mechanisms with which to respond to harassment or mere editorials.

First, the mythical Smith Hobbies can copy the Meta Data of the complaint page so that searches that return the complaint page will also return Smith Hobbies' response page. Meta Data is information about a Web page that helps spiders, search engines, and bots classify pages. To view the Meta Data for any page, select 'View Source' from the browser menu.
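Meta Data can also be read programmatically rather than through 'View Source'. The sketch below uses Python's standard HTML parser; the parenting page and its tags are hypothetical examples of my own:

```python
# A sketch of extracting Meta Data (name/content meta tags) from a
# page. The page below is an invented example of what a parenting
# site's Meta Data might look like.
from html.parser import HTMLParser

class MetaReader(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if "name" in d and "content" in d:
                self.meta[d["name"]] = d["content"]

page = """<html><head>
<meta name="keywords" content="parenting, children, family">
<meta name="description" content="Advice for new parents.">
</head><body>...</body></html>"""

reader = MetaReader()
reader.feed(page)
print(reader.meta["keywords"])   # parenting, children, family
```

Copying the complaint page's keywords into the response page's Meta Data is then a matter of pasting this dictionary back into the response page's head.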

Of course, this also works for those who are unhappy about a business. If a business treats a consumer badly that consumer can take action to ensure that all searches that find the company find the irate consumers page as well.

When a browser hits a page, it is a trivial matter to determine the referring page. If the referring page is a complaint page, simple commands can direct that browser to a response page. This is a way to respond to complaints without pointing them out to individuals who would not otherwise be aware of them.
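The referrer check can be sketched as a single routing decision; the complaint and response URLs below are hypothetical:

```python
# A sketch of the referrer check described above: a visitor arriving
# from a known complaint page is sent to the response page instead.
# All URLs are invented for illustration.
from typing import Optional

COMPLAINT_PAGES = {"http://www.example.org/smith-complaints.html"}
RESPONSE_PAGE = "http://www.smithhobbies.example.com/our-response.html"

def destination(referer: Optional[str], requested: str) -> str:
    """Pick the page to serve, given the browser's Referer header."""
    if referer in COMPLAINT_PAGES:
        return RESPONSE_PAGE
    return requested
```

Visitors who arrive any other way never see the response page, which is exactly the point: the complaint is answered only for those who have seen it.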

Pricing and Quality in Internet Commerce

As discussed in the last section, quality of information is a critical element on the Internet. This is an issue both for customers evaluating merchants and merchants evaluating customers. Customers want to be certain.

Companies that use the Internet are often attempting to capture consumer surplus: the difference between the amount a consumer would pay and the price actually paid. Companies can come closer on the Internet than off to charging every customer the most that customer would conceivably pay for any item the customer purchases. Anyone who has ever found a bargain has experienced the joy of consumer surplus.
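The surplus arithmetic is simple; the sketch below makes it concrete with invented numbers:

```python
# A sketch of the consumer-surplus arithmetic: surplus is what the
# buyer would have paid minus what the buyer actually paid. A
# discriminating seller tries to shrink it. Numbers are illustrative.
def consumer_surplus(willing_to_pay: float, price: float) -> float:
    return willing_to_pay - price

# A flat price leaves some buyers surplus; a perfectly targeted
# price captures all of it.
assert consumer_surplus(30.00, 20.00) == 10.00   # flat price: $10 of joy
assert consumer_surplus(30.00, 30.00) == 0.00    # targeted price: none
```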

A current leader in real-time price discrimination is books.com, which offers customers the ability to easily compare prices. If Barnes & Noble or Amazon offers a lower price on a particular item and the buyer chooses to use books.com's automatic price-comparison feature, then books.com automatically matches the price. On average books.com has a slightly higher price than Amazon or Barnes & Noble. The consumer who does not bother to compare will sometimes pay the higher price. (Often, however, the prices are the same.) The consumer who shops at books.com and always compares pays the lowest price if the other servers are immediately available. Of course sometimes the Barnes & Noble or Amazon sites are not available. In this case books.com charges its usual price, regardless of whether it has searched for the item before and has some knowledge of a lower price. Books.com thus offers price discrimination between those who compare prices and those who do not.
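The pricing rule described above can be sketched in a few lines. This is my reconstruction of the described behavior, not books.com's actual code; prices are invented:

```python
# A sketch of the books.com-style price match as described in the
# text: match a reachable rival's lower price only when the buyer
# asks for a comparison; otherwise, or when rivals are unreachable,
# charge the usual price.
from typing import Optional

def quoted_price(our_price: float,
                 compare_requested: bool,
                 rival_prices: list[Optional[float]]) -> float:
    """rival_prices holds None for any rival site that is unreachable."""
    if not compare_requested:
        return our_price
    reachable = [p for p in rival_prices if p is not None]
    if not reachable:
        return our_price   # no memory of past searches is used
    return min(our_price, min(reachable))

assert quoted_price(22.00, False, [19.00, 20.00]) == 22.00  # no comparison
assert quoted_price(22.00, True, [19.00, 20.00]) == 19.00   # matched
assert quoted_price(22.00, True, [None, None]) == 22.00     # rivals down
```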

There are social as well as business implications to pricing on the Internet. First, discrimination in markets is not inherently bad. For example, in clothing, upscale stores discriminate against bargain buyers by pricing them out of their stores. Bargain buyers go to TJMaxx instead. This better suits both buyers who will pay top dollar for selection, timeliness, and atmosphere and buyers who want lower prices. Similarly, high-feature or brand-name Web sites can charge more, as Amazon's continued success in the face of the books.com strategy illustrates.

Second, pricing discrimination on the Internet cannot be based on the demographics on which socially destructive price discrimination is based. For example, in traditional markets with price discrimination, women pay more for cars. This argues for women to shop on-line for cars. Mortgage approval rates vary based on ethnicity. Yet Web pages offer the same mortgage rates to all. If the perceptions expressed by Fukuyama (in Trust, 1995) are widely shared, the variance in mortgage prices is a function of trust. That is, the lender and seller have less trust in certain demographic groups. This is expressed as higher rates, higher frequency of credit refusal, and higher prices.

The trust of the customer in the merchant is as much an issue in on-line markets as in off-line markets. On-line, however, trust is likely to be based on browsing habits and credit lines, with neither gender nor ethnicity being an issue.

In Internet commerce customer trust is the critical variable. The more a customer trusts a site, the higher the price the site can charge for what it sells. This does not suggest that price discrimination is a matter of customer betrayal; rather, price discrimination is a function of merchant reliability. Customer trust is belief that the merchant will fulfill the terms of the transaction (e.g. deliver quality goods in a timely fashion). Any customer, on or off the Internet, has a price/reliability sensitivity. Thus the lower price of quality second-hand goods. Price discrimination may mean offering a lower price to obtain a sale rather than offering a targeted higher price, based on preferences exhibited by the customer at the site at the time of purchase.

Mistaken attempts to capture consumer surplus can lead to lost sales and sometimes very sour business relationships that can last a lifetime. Consider a real-life example. I went shopping intent on buying a new car but ended up not doing so. I very much wanted a yellow Geo Metro convertible. (This should not reflect upon other recommendations in this book.) I investigated the price. I was willing to pay slightly above dealer cost and sign a long-term maintenance contract. The offer I was making was fair. When I found no takers for my offer, I bought a used VW Beetle. This is an example of imperfect price discrimination based on mistaken trust. The salesmen whom I met had greater trust in the veracity of their gender-based evaluation of me than in my ability to set a price. I had the option of sending in a male friend to make the deal, in fact a possibility I investigated. Yet I decided the business relationship was too sour. This has negative social implications, of course, but also negative business implications for the merchant. I would not seek the friction involved in attempting to buy from a GM dealer again, based on my arguably unrepresentative experience. On the Web it is even easier to leave the lot. On the Web there will be some merchant to take a fair offered price.

Internet users can respond to price discrimination by using the power of the Web to search; thus the market in mortgage rates and offers for mortgages in which there is a perception of discrimination. In any such market the Internet will have a distinct advantage in terms of customer trust over traditional marketing mechanisms. There is similarly a significant market for information on automobiles.

The user who would rather have a listing of selected books than the option of a second price comparison will go to Amazon. The most price-sensitive user, willing to do price searches on every purchase, will go to books.com. Consumers will respond to price discrimination by changing how they use the Internet. Pricing will become increasingly dynamic. With Internet purchases, as with automobiles, there is much competition. If a customer experiences ill-suited price discrimination on the Internet, it is likely that the customer will never return to the site where the error in pricing was encountered. Thus sellers must make price discrimination decisions carefully, on a case-by-case and product-by-product basis.

Convergence and the Internet

Internet commerce is a subset of telephone and mail order commerce. In a few years Internet commerce will be distance commerce because of the technologies of convergence. (Here mail order and telephone order are referred to as "distance" commerce to distinguish them from emerging models of electronic commerce.) Packet telephony, advanced television, and cable modems are all artifacts of digital convergence.

What is convergence? Previously, technology provided policy makers with three distinct platforms for speech: print, air, and wire. This resulted in the creation of four media types: publisher, distributor, common carrier, and broadcaster. These types began to converge with wireless telephony, multi-media services, and television delivered through cables. Now all traditional media types exist on the Internet: the Wall Street Journal is a publisher; Amazon is a distributor; ReMix Radio is a broadcaster; and AT&T MediaOne is a common carrier.

All media types will ultimately converge onto a single network of networks using IP switching. All these media play different roles in distance commerce. Here I compare and contrast the uses of traditional media with the Internet.

Broadcasting is especially useful for advertising and information distribution (e.g. discovery). Obviously its one-source-to-many-recipients model makes it unsuitable for purchasing; rather, broadcasting is used to encourage a purchase. Discovery is supported in multiple modes on the Internet, as previously described. Now nearly everyone has an NTSC television and will slowly adopt digital high-definition television. With high-definition television the television image will be as good as the image on a computer screen, so WebTV will be truly useful and possible.

With "Internet broadcasting" companies need to be cognizant of the recipient's capabilities in a manner that is not necessary with traditional broadcasting. Disney provides an excellent example of a failure to understand the distinctions between traditional broadcasting/advertising and the Internet. On its Web site Disney offers much paid content, and that seems reasonable given its market power. However, viewing even the free part of the Disney site requires fast hardware, a very high-speed connection, and multiple helper applications. The Disney site contains every form of content: video, audio, animation, etc. This probably looks wonderful at Disney Studios. To hazard a guess, based on no information but my experience as a Web surfer, I will say that Disney used its regular internal team to develop its state-of-the-art Web page. But exclusive use of the best high-end graphics, which are excellent for television, is an error on the Web because of the resources they require. Disney's site is so state-of-the-art that it is time-consuming if not impossible to use over a 56.6-kbps modem. Children do not like waiting for downloads and will likely not install multiple helpers. There is little or no easy-to-download content; for example, pages to print and color. The use of video and animation is truly excessive. Clicking around the Disney page is an experience in frustration for users who do not have the latest equipment and at least cable-modem-speed connections. Finally, Disney does not have the dominance on the Web that it has on television, so frustrated users can easily visit the sites of competitors. Thomas the Tank Engine is as easy to locate on the Web as is Disney. In broadcasting, and in movies, Disney can dominate distribution. That is not the case on the Web. There are multiple Thomas sites with free coloring-book pages and simple interfaces that are suitable for a wide range of connections and machines, but nonetheless very entertaining. Disney does not understand that the distribution dominance it has on television does not map perfectly onto the Web. Thus Disney has used a flawed publishing model for its Web site.

Distributors are the category of media most completely replaced by the Internet. In media terms, distributors are those who carry information goods to the consumer. The Internet is likely to greatly reduce the need not only for bookstands and newsstands, but for all types of distributors. Desktop computing reduced the number of middle managers previously needed to watch the books and handle the paperwork. Sales forces will be the population most notably reduced by Internet commerce. As the Disney example illustrates, dominating the distribution channel has to this point been critically important in selling information goods and obtaining mind share. The music and movie industry structures are built upon the assumption of expensive distribution channels that tend to be controlled by a few major players (i.e. natural oligopolies). This will continue to change.

Publishers of material that is not broadcast are the second category to undergo fundamental change due to the Internet. In the case of newsprint, such publishers have monopolies in most cities. The Internet promises to be democratic in that there is no natural monopoly in distribution. Limited competition in newspaper, television, and radio markets results from expensive or exclusive distribution channels. Most towns have only one newspaper because start-up costs are too high for potential competitors. One must have a subscription base and a distribution network before one has a newspaper -- clearly a bootstrapping problem. Only one radio or television station can exist at one wavelength.

In contrast, on the Internet anyone can be a publisher. The network is the distribution channel and it cannot be monopolized. Monopolizing content control would require software at all user endpoints; for example, built into the operating system. IP provides only transport -- only distribution. IP does not distinguish between sources and destinations. Any user can be both.

Traditional publishers, of course, find this unnerving. However, traditional publishers will also dominate on the Internet if they fulfill consumers' trust criteria. Users will go first to established sources of information because they have some preexisting trust in these locations. This fact will serve the interests of established institutions that provide product and advertising information along with the opportunity to purchase. Outrageous claims and ill-considered priorities will, however, decrease this trust and erode this advantage.

The rise of the on-line magazine Salon (http://www.salonmagazine.com/) in the midst of the Starr investigation illustrates the role of trust in the established media outlets. The traditional print and broadcast media had consistently chosen to honor the privacy of Rep. Henry Hyde by not disclosing what they knew of his sexual conduct while simultaneously publishing the details of President Clinton's personal life. Salon broke with the pack by reporting the sad tale of a family destroyed by Rep. Hyde's sexual infidelity. Such reporting increased consumer trust in Salon and decreased trust in the traditional media. Certainly major media players have much trust remaining among consumers, but the vaunted Watergate press corps of the seventies has already become the Lewinsky press corps of the nineties, with a corresponding decrease in trust. This leads to the question: how much of market control is based on trust and how much is based on established distribution patterns? Only long-term Internet use will answer this question.

Common carriers transmit any material, regardless of the message. Telephone companies and the U.S. Postal Service are common carriers. Clearly the Internet can provide the services of common carriage. Currently the mails are used for both transactions and discovery. On the Internet mail can be more tightly targeted, by, for example, asking people to sign up at a Web site for a mailing list. Because of the low cost of sending email, some companies send spam. Spam is as likely to result in impassioned recipients' refusals to be customers as it is to result in customers. Internet commerce enables tightly targeted, requested email.

Digital convergence usually includes broadcast television, radio, telephony, and cable transmissions. But more than traditional communications signals run across the wires -- payment also goes through the Internet. As video, voice, debates, and newspapers converge, payment for these will converge also. Internet commerce illustrates that money itself is also converging onto a digital, Internet-transmitted form.


2: Money

Why are reliable transactions important? What are the properties of a reliable electronic commerce protocol? Who will be trusted as a reliable creator of money on the Internet? To answer these questions, we must first address a more basic issue: What is money? One may say that electronic commerce relies on electronic money. But electronic money may not retain all the properties of money. Thus a careful definition of what money is, and how that definition relates to e-commerce, is in order.

Functions of Money

What is money? As defined by its three elemental functions, money is a store of value, a standard of value, and a medium of exchange. Ensuring that electronic commerce maintains money's functions as store and standard of value is not a trivial matter, but it is certainly manageable. In contrast, ensuring that electronic commerce maintains money's function as a medium of exchange is difficult. The Internet's power lies in its lack of need for physical presence. This creates a difficulty for electronic commerce, however, in that because there is no physical presence there is also no handing over of papers, no tactile examination of goods, and no certainty of receipt.

Money as a store of value requires durable storage. For money to be a store of value, it must not be easily destroyed or created. If money decays or is destroyed in storage, then it obviously does not succeed in storing the value it represents. In contrast, hyperinflation illustrates the failure of money as a store of value when it can be too easily created. Under hyperinflation, entire nations are forced to abandon money and return to barter. Durable storage is a critical factor in electronic commerce, but one that is not difficult to achieve. Unlike physical money, electronic money is merely bits, and thus can be trivially duplicated. Note that duplication of money amounts to creation of money only when the duplicates can be spent. Ease of duplication eases durable storage but can also simplify fraud; thus ease of duplication is a double-edged sword. Durable money storage is necessary for electronic money to fulfill the functions of paper money, but durable storage is simplified in electronic commerce.

Money as a standard of value provides a simple triangulation for all transactions. Consider the sheer number of transactions that may be required to obtain a desired good in a barter economy. An entire series of trades may be required before the final barter can be arranged. That is, if a person who had some good, say flour, wanted software, there would have to be a series of transactions until the person with the flour held the goods desired by the person with the software. Essentially this is how the Internet ran for many years. People wanting software contributed to the common good by working on the software they used, identifying bugs, and offering additional features. This is the freeware culture: a social network of a closed community. Clearly that is not going to work for widespread consumer commerce, because consumer markets are open and consumers follow a wide variety of value systems, valuing coding contributions in substantially different ways.

Exchanges with no Standard

Consider the series above. A has code and wants services. B can provide services but has no need for code. A has to go through a series of trades to get what B wants, in this case a new modem. D has a modem and wants the latest Java shirt. So A has to go to C to get the Java shirt and then proceed through a series of trades to finally obtain the desired services. When A and B can both trade one common item that each wants, the series of exchanges of goods is unnecessary: A can simply pay B money. The comparison of the lines in the boxes with the simple exchange below illustrates why this is called triangulation. Electronic money must provide for this triangulation by setting a standard of value. This is currently being done by pegging electronic moneys to a particular currency or set of currencies, so that electronic mechanisms can be used to trade dollar equivalents over the network.

Triangulation with a Standard of Exchange

Money as a standard of value also requires interoperability;9 that is, to serve as a standard of value, any specific form of money must either be itself widely used (a standard), or readily convertible to another form that is widely used.

Money as a medium of exchange requires special transactional properties. As a medium of exchange, money must have transactional durability; that is, money must be conserved in transactions, not created or destroyed. Monetary transactions must be consistent: the amount received by the seller must be the same amount paid by the buyer, with no change in that amount occurring during the transaction. The transactional properties that enable money to serve as a medium of exchange amount to transactional reliability. Reliable transactions in electronic commerce are necessary to the proper functioning of electronic money as a medium of exchange.

What are the properties of a reliable electronic commerce protocol? The study of distributed databases has defined the characteristics of reliable database transactions as atomicity, consistency, isolation and durability. I will address these in detail in Chapter 9; for the moment consider how physical transfers of money illustrate the properties of a reliable transaction.

Please note that during this, and all future analyses, I take advantage of gender-specific language to simplify my discussion. The customer is assumed female; the merchant male; and the bank neuter. This allows me to use she, he and it without worrying that the reader may confuse the noun referenced by the pronoun.

Consider a customer's physically handing a dollar bill directly to a merchant and how it exemplifies each of the properties referred to above. Atomicity means that the transaction fails or succeeds completely. Consistency means that both parties know the outcome of the transaction. Isolation means two payments do not interfere. Durability means that the transaction cannot be undone without the consent of both parties.

Atomicity : The dollar bill will not be lost as it leaves the customer's hand and is transferred to the merchant. There is always exactly one dollar; it is never duplicated or destroyed. If the dollar is dropped, then the customer can pick it up and return the transaction to its previous state.10

Consistency : After the transaction the merchant knows he has one dollar more; the customer knows she has one dollar less. At no point in the transaction is there ever any confusion over who has the dollar.

Isolation : That dollar bill will not be confused with a previous dollar bill, so the merchant cannot falsely claim failure to have received payment, and the customer cannot escape her obligation to make payment.

Durability : After any party receives the dollar bill, he or she retains the dollar bill until he or she transfers it in another transaction.
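The four properties above can be made concrete in code. The following is a minimal sketch (not any deployed payment system; the account names and amounts are illustrative) showing how a database transaction supplies the atomicity and consistency that handing over a physical dollar bill provides for free: either both balance changes happen or neither does, and the dollar is never duplicated or destroyed.

```python
import sqlite3

# Toy ledger: one customer with a dollar, one merchant with nothing.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, cents INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('customer', 100), ('merchant', 0)")
conn.commit()

def pay(conn, payer, payee, cents):
    """Transfer cents atomically: both updates commit together, or neither does."""
    try:
        with conn:  # transaction: commits on success, rolls back on any exception
            cur = conn.execute(
                "UPDATE accounts SET cents = cents - ? WHERE name = ? AND cents >= ?",
                (cents, payer, cents))
            if cur.rowcount != 1:
                raise ValueError("insufficient funds")
            conn.execute("UPDATE accounts SET cents = cents + ? WHERE name = ?",
                         (cents, payee))
        return True
    except ValueError:
        return False

pay(conn, "customer", "merchant", 100)  # the dollar changes hands exactly once
pay(conn, "customer", "merchant", 100)  # fails cleanly: no money created or destroyed
balances = dict(conn.execute("SELECT name, cents FROM accounts"))
print(sorted(balances.items()))  # [('customer', 0), ('merchant', 100)]
```

Consistency here is the invariant that the total of all balances never changes; durability, in the database sense, would come from committing to disk rather than to an in-memory table.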

None of these simple physical safeguards of reliability necessarily holds in an electronic transaction. Like purchases with a dollar bill, some Internet commerce transactions are anonymous. When a merchant receives an anonymous payment using an anonymous system, it is as if the customer has thrown a dollar bill across a dark room. Whom should the merchant credit with this payment? How can this payment be linked with a specific purchase if there is no customer standing in front of the merchant? Who should receive the goods? In this case, the electronic dollar cannot be identified with a specific purchase or purchaser. So issues that are trivial in a face-to-face anonymous purchase are significant problems in a networked anonymous purchase. Overcoming these problems depends on a cryptographic public key used to verify identity for a promise of payment (as described in detail in Chapters 3 and 4). A public key is a mathematical way of proving identity and signing messages. If a public key is used to sign for a payment, who is at risk if the key was not valid: the verifier, the merchant, or the consumer? Right now the answer to that question depends on the physical location of the transaction. However, questions of jurisdiction are far from simple on the Internet. Knowing that the rule of law varies between Florida and Utah does not determine which law is binding. Furthermore, neither the Florida nor the Utah statute has been tested.

In electronic commerce, the payment message must travel over an open network (that is, a network without security) from the customer to the merchant. Without verifiable acknowledgment as part of the protocol, the customer has no way of knowing whether the merchant received the payment message. Under the Internet's standard transmission control protocol, a payment message may be duplicated if the communications protocol believes the packet containing the payment message has been lost on the network. This happens, for example, when there is congestion and messages are dropped at the congested router. Moreover, network failure may destroy a payment message. If a payment message is lost, delayed, or destroyed, confusion may result. If forced or faked network outages can create confusion profitable to someone, then such outages are sure to be created. (The system analyses in the last chapters include examples of how creating or falsifying network failures may enable fraud and theft; this varies with every system.)
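One standard defense against duplicated payment messages can be sketched briefly. In this illustration (the message format and transaction IDs are hypothetical, not any particular protocol), every payment carries a unique transaction identifier, and the merchant credits each identifier at most once, so a retransmitted copy does no harm:

```python
# Merchant-side record of which payment messages have already been credited.
processed = {}  # transaction ID -> amount credited

def receive_payment(txn_id, amount):
    """Credit a payment exactly once, no matter how many copies arrive."""
    if txn_id in processed:
        return "duplicate ignored"
    processed[txn_id] = amount
    return "credited"

print(receive_payment("txn-0001", 100))  # credited
print(receive_payment("txn-0001", 100))  # duplicate ignored (a retransmitted copy)
print(sum(processed.values()))           # 100, not 200
```

Note that this addresses only duplication; a lost or destroyed message still requires acknowledgments and retries, which is why full transactional protocols are harder than this sketch suggests.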

In sum, ensuring transactional reliability is not a trivial matter in electronic commerce. Thus, the provision of reliable transactions is a critical issue in the analysis of risk in electronic commerce protocols. Difficult technical matters involving reliability assurance may obscure business decisions and risk allocations.

Digital Information Money

In its form money reflects the economy in which it is found. Cowry shells are the money of an inland society that trades with the coast. Tobacco is the money of an agricultural nation, in which producing something that is not food illustrates wealth. Internet or electronic money is the money of an information economy. Those left holding only paper moneys, who do not make the evolutionary leap in business practices and in the consummation of transactions, will be passed by.

Historically, early money in most economies was commodity money. A commodity is something that can be consumed. Commodity money fulfills the role of money as a standard of value but is less useful as a medium of exchange. After all, who could carry around an estate's worth of grain or tobacco? Commodity money is also a poor store of value. Very few commodities can be stored, especially agricultural commodities that will rot or be consumed by rodents. Commodity money is also subject to inflation on the basis of quality of goods. For example, after tobacco became the standard of exchange in colonial times, the streets of Virginia were flooded with quantities of inferior leaf. This dismayed the holders of money as much as the smokers. Leaf or grain may conceal impurities, such as stones, which have relatively great weight and no value.

A return to commodity money usually results from hyper-inflation or economic collapse. A well-documented case of returning to commodity money is the use of cigarettes as money after World War II in Europe. Two decades later public phone tokens were used for money following hyper-inflation in Israel. In this case the commodity was arguably a service - a phone call.

Metal money typically displaces simple commodity money in most economies. Metal money is the interim money, between money which can be consumed and money which has no intrinsic value. It can be argued that metal money is a commodity money because metal can be turned into instruments of warfare (e.g. bronze) or used in decoration (e.g. gold). Metal money can fill all three monetary functions previously identified: it can serve as a store of value, a medium of exchange, and a standard of value. It is more easily transported in large amounts than commodity money. Metal money can be transported over large distances --another improvement over commodity money. In particular, metal money can be transported on ships and through inclement weather in circumstances where grains and other consumables may rot. Metal money was particularly useful for international commerce in the nineteenth century.

Like paper money, metal money is subject to inflation, although certainly with stronger constraints. To wage war and finance empires, rulers throughout history have lowered the purity or weight of the coins of the realm. Yet not even the most creative ruler using only metal currency could produce the hyper-inflation possible with paper moneys.

The recognition that there is no actual need to hold the metal money itself in order to possess the value it represents has given rise in the modern economy to convertible paper money. Convertible paper money began with merchants and banks, who would write out notes of credit for customers declaring that they had adequate deposits to enter into a particular debt. The money specified in the note could then be converted into metal at the trustee institution. Yet the holder of the money still had the certain feel of paper in hand. These paper guarantees of deposit were the first Western trust money. The holder of the money trusted the buyer not to abscond with the gold represented by the paper money, and similarly trusted the guarantor of the deposits to hold sufficient gold in the depositor's account to cover the notes when presented.

The concept of symbolic money was a necessary (and not uniquely Western) invention to go onto the next step - intangible money. Some vendors currently offer forms of electronic money that can be converted into tangible money, or greenbacks. Others offer money that can be changed into notational credits, on credit cards, for example, where changing to greenbacks can have a high overhead.

Fiat money is paper money with no guarantee that it can be converted to any other form. The U.S. dollar has been fiat money since 1971, when President Nixon took America off the gold standard. Fiat money is trust money on a larger scale. Some vendors offer electronic fiat money which works in a closed environment, where it can be exchanged only for goods sold for that currency or for credits within the system of the vendor which issues it. For example, proposals for sharing computer code based on ratings of individual contributions are fiat money. In that case one could use code based upon what one had previously distributed, but on a token system instead of a reputation-based system. As I describe later in detail (see Chapter 11), First Virtual credits are fiat money for a matter of weeks, but there is a guarantee that if both parties (customer and merchant) act in good faith the money will eventually be converted to notational money in the form of credits on a depository account.

Arguably one consistent trait in all these forms of money is their difficulty to produce: the labor associated with creation of the money is appropriate to the economy it serves. Thus when agricultural production was the standard, agricultural goods were standard money. As wealth increased with trade, the standard of harder-to-produce metal came to replace consumable goods. After the Industrial Revolution, when steel could be rolled out and sliced like cookie dough, the creation of detailed paper money was required. Now, anyone can produce photo-quality paper. Paper is too easy to produce, ill-suited for remote commerce, and risky to carry. Thus for moving money in an information economy only bags of bits will do.

Social scientists would argue that all money except immediately consumable commodities constitutes a network of trust, not just fiat money (Coleman, 1990). This argument would suggest that Internet commerce is nothing new. I believe that Internet commerce is something new; never before has there been the need to establish a trusted currency for so many on the basis of such intangible connections. Further, the implications for the supply of money, for a private creator of global money, and for global commerce cannot be foreseen. Yet the historical examples of shipping, private banking, and long-distance commerce offer some guidance. Essentially history offers the lesson of caution. Even the most trusted entities may fail. Early Internet monies may hold their value no better than Dutch tulips. Putting the technical ability to prevent risk in the same hands as the contractual ability to distribute risk calls for careful consideration and oversight. That is, those who can best prevent risk should be the ones to bear it. And as the regulation of credit cards tells us, limiting the ability to exploit the customer may be a precondition for an explosion of the next generation of commercial instruments. So, with that in mind, now consider the risks of selecting a vendor for an electronic commerce protocol.

Money Vendors

Today the creation of money is seen as a national right - an inherent function of the nation state. Yet this was not always the case and there is no reason it should continue to be in the long term. As the phrase "not worth a Continental" reminds us, bankers and state governments were more broadly trusted with the ability to uphold commitments to convert money than the U. S. government in the early days of this republic. The U.S. government was able to successfully declare its national monopoly on coinage only after a century of repeated instances of financial speculation became intolerable to the public at large.

Who will offer electronic money? It appears that the first parties to offer successful electronic money have been multi-national financial services corporations. These companies have several advantages. One of the greatest is that they have already established trust, or at least customer relations, in many nations. Second, they have the diversity of resources to protect their entry into the potentially risky market of electronic commerce: other financial services they provide will remain profitable while the budding Internet commerce market unfolds. They have scale in number of users; that is, there are already many consumers who use their services, and the corporations have the ability to manage all these accounts. And finally, they can offer easy interoperability between national currencies through their current international market relationships.

Another potential player in the setting of standards for electronic money is Microsoft. There is an efficiency argument for integrating the wallet into the operating system, the same argument made for integrating the browser, ftp, and many other functions into the operating system. By integrating the standard for electronic commerce into the operating system, Microsoft can set the default standard for Internet shoppers. Integrating the wallet into the operating system has many potential advantages. The wallet can be seamlessly integrated into the browser, enabling every user to browse and buy. This will allow Microsoft to control the risks as well as set the terms. Of course, if any single company holds the coin of the information realm, consumers may find post-hoc negotiations about the distribution of risk inadequate for self-protection. In any case merchants and consumers must take care in selecting their options and reading their software licensing agreements.

Computer chip manufacturers may also set standards for Internet commerce, or at least influence which standards function optimally. Why is integrating the wallet into the hardware such an obvious next step beyond the browser? First, Moore's Law says that chip density doubles every eighteen months. But what will be the use of all those transistors now that every decent desktop machine can handle multimedia? Security remains computationally intensive and slows even the best desktop machines. Security is therefore the next obvious application for the denser, more powerful chips predicted by Moore's Law. A fast encryption chip could determine which system works fastest, and is therefore most acceptable to the consumer.

Of course there is no reason for a single money with today's financial markets. There will be specialty money and custom currencies. Small firms can offer the quick time-to-market and responsiveness necessary to serve lower-volume niche markets. There are many forms of specialized money today, such as frequent flyer points and discount cards linked to store purchases. It is extremely unlikely that the types of money will decrease with the expansion of Internet commerce and the ease of creating new buying or point systems. Scrip, points, options, and many more as yet unimagined forms of money will proliferate as Internet commerce grows in use and popularity.


3: Basic Cryptography

Cryptography refers to numerical algorithms, implementations of those algorithms, and various mathematical and programming tools used to meet security goals. Cryptanalysis, the breaking of such protections, is an ancient science, certainly thousands of years old.

Cryptography can provide authentication and integrity for electronic transmissions if properly implemented. Information protected using cryptography can be transmitted confidentially, dated reliably, signed verifiably, and be simultaneously private and verifiable.

The following diagram shows analog equivalents of the functions of cryptography. Recall that an Internet message is not unlike a postcard -- a postcard written in pencil, whose data can be easily changed. Continuing with that analogy, imagine that cryptography can provide an envelope to prevent snooping (confidentiality). Cryptography can provide the seal on the envelope to assure the message has not been changed (integrity). Cryptography can provide the signature at the bottom of the letter (nonrepudiation and authentication of the sender). Cryptography can also provide the lock on the envelope -- assuring authentication of the recipient.

Analog Equivalents of Cryptographic Capabilities

There is no single foolproof way to ensure that a cryptographic function is secure and hard to subvert. Schneier's Applied Cryptography describes many algorithms, what they are suited for, and how they may fail. Thus when considering a purchase of a product for encryption of data it is good to have some rules of thumb. (I recommend the Snake Oil FAQ, at http://www.cis.ohio-state.edu/hypertext.faq/usenet/cryptogrpahy-faq/snake-oil/faw.html. Much of this discussion is adapted from that more extensive document.)

Two claims reveal with near certainty that a cryptographic algorithm is not to be trusted: that it is proprietary, and that it is advertised as "proven secure". Owners of proprietary cryptographic algorithms argue that theirs are superior to other algorithms because they are secret. This argument is clearly specious, since every product that uses the algorithm must contain it. Even purveyors of secure hardware should be wary of reverse engineering. Consider the difficulty involved in keeping secret an algorithm widely used in many products, versus the effort required to keep only a company or personal set of cryptographic keys secret. If readers of this book take only one recommendation to heart, let it be never to buy a secret, proprietary cryptographic product.

Algorithms can be proven to be immune from certain specific attacks, but cryptographers are always working to find new methods of attack. For example, there is no attack on full DES more effective than just trying keys over and over until the correct one is found. (This is called a brute force attack.) Proving an algorithm is secure is beyond merely daunting. Consider that the most widely used public key algorithm, RSA, is based on the premise that large numbers are hard to factor. Mathematicians have been working on this question for centuries, and it is still only a hypothesis; it has never been proven.
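The brute force attack can be illustrated with a toy cipher (this is a repeating-XOR cipher with a made-up 16-bit key, not DES; the plaintext and key are invented for the example). With only 65,536 possible keys, trying them all takes a fraction of a second; DES's 56-bit key space fell to exactly this strategy, only with vastly more hardware:

```python
from itertools import product

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy cipher: XOR each byte of data with the repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = b"\x5a\xc3"  # hypothetical 16-bit key -- far too short
ciphertext = xor_encrypt(b"PAY $100 TO BOB", secret_key)

def brute_force(ciphertext: bytes, known_prefix: bytes = b"PAY ") -> bytes:
    """Try every possible two-byte key until the plaintext looks right."""
    for key in (bytes(k) for k in product(range(256), repeat=2)):
        if xor_encrypt(ciphertext, key).startswith(known_prefix):
            return key
    return b""

print(brute_force(ciphertext) == secret_key)  # True
```

The point is not the cipher but the arithmetic: each additional key bit doubles the attacker's work, which is why export-weakened short keys discussed below are breakable by design.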

Other warning signs of cryptographic products to avoid are described in the following paragraphs. The first is the use of technobabble. The use of copyrighted or new terms to describe the system suggests that common terms would be less than flattering. If even the description of the product is veiled in sales rhetoric too dense to understand, how easy can it be to use?

Avoid products that claim revolutionary breakthroughs. The vendor who claims revolutionary breakthrough in cryptography is either lying or ignorant.

Carefully evaluate testimonials from experienced security experts and rave reviews. Ask for biographies of the experts. Television interviews do not a cryptographer make. Hackers who understand telephone systems are not necessarily cryptographers; these are two very different sets of skills. Examine the rave reviews, and ascertain that they were given for the product's cryptographic strength and not for something like the GUI.

One-time pads11 are the strongest possible type of cryptography. Unfortunately, one-time pads are very hard to make. The source of the numbers has to be truly random, like radioactive decay, not some fancy function that tweaks the bits. Humans are a notoriously bad source of randomness, so any implementation that depends on humans for randomness is fundamentally flawed.
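The mechanics of a one-time pad are simple enough to sketch in a few lines. In this illustration the pad is drawn from the operating system's cryptographic randomness source (the message is invented for the example); a real pad must be truly random, as long as the message, kept secret, and never reused:

```python
import secrets

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))  # one random pad byte per message byte

# Encryption and decryption are the same operation: XOR with the pad.
ciphertext = bytes(m ^ p for m, p in zip(message, pad))
recovered  = bytes(c ^ p for c, p in zip(ciphertext, pad))

print(recovered)  # b'attack at dawn'
```

The hard part, as noted above, is not this arithmetic but generating and distributing a pad of truly random bytes as long as all the traffic it will ever protect; reusing even one pad byte destroys the scheme's perfect secrecy.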

Claims of key recovery can point to system flaws. The vendor may have an escrow12 system which obtains and keeps copies of the key. Escrow systems add complexity and require that the vendor be trusted with all personal and business keys generated. If there is no key recovery feature, then the vendor must break the system to recover the key, which means someone else can too. Thus, if key recovery is absolutely required, escrow is preferable to a flawed system, but neither is best.

Occasionally a cryptography system will claim to be "military grade". This is babble. There is no such classification. Again the vendor is either dishonest or ignorant.

There is a joke in some cryptographic circles. Q: How do you prove an algorithm is not secure? A: Export it. There is some truth in this jest. For an implementation of a cryptographic algorithm to be exported, either the keys have to be sufficiently short that they can easily be broken through brute force, or the system has to have some form of key escrow. Make sure any exportable algorithm uses the latter. Since security is not simple, any escrow feature will make a system less secure. (Thus another relevant engineering phrase is KISS: "Keep it simple, stupid.") Of course domestic-strength cryptography from U.S. vendors is widely available, and cryptography with no escrow features and keys of any length can be imported. Thus there is no reason to make this difficult choice between system weaknesses.

In considering selection of a secure system it is important to be able to identify security hoaxes, and in evaluating security systems, not to be fooled by them. The Department of Energy incident response team, the Computer Incident Advisory Capability (http://ciac.llnl.gov/ciac/CIACHoaxes.html), offers valid virus warnings at its site, as well as advisories concerning selected Internet hoaxes. For example, it discusses the most famous of these hoaxes, the bogus "Good Times" virus, which may still be circulating in email.

The CIAC identifies several critical elements of a hoax:

A hoax uses technobabble, which is not correct technical language. The very entertaining technobabble of Star Trek illustrates this fact most nicely.

A hoax claims credibility by association, and such claims are easy to make. Anyone can claim that IBM or Bell Labs are sending out a warning. For any credible source, check with the source before responding to the warning.

A hoax encourages recipients to forward it without thinking. Checking the source will naturally lead to not forwarding hoaxes. It is important not to continue a hoax by forwarding it before checking. Such an action will not only further the hoax, but identify the forwarding party as an unreliable source of information.

Private Key Cryptography

There are two basic types of encryption techniques: private key and public key. Private key encryption uses one key which is shared among various parties. A physical analog of a private key system is a shared lock box; that is, a box with a lock to which a particular set of people have the keys. Those who have the keys can add contents to the box, and the same people can remove contents. Thus the presence of something in the box does not prove that a particular person put it there, only that one of the people with the key put it there. Private key cryptography uses one key for both encryption and decryption and is therefore sometimes called symmetric key cryptography.

Symmetric or Private Key Systems

Recall Figure 3.2. Symmetric key cryptography can be used to provide an envelope -- shielding what would be the Internet postcard from the eyes of observers. This means symmetric key systems provide confidentiality.

Private key systems can be used for authentication between parties who share a key. Thus if Alice, Bob, and Carol share a key, Alice can present the key to Bob and claim to be Alice. However, if Alice, Bob, and Carol all share a key, Bob can also present the key to Carol and claim to be Alice. Thus private key systems create the opportunity for replay attacks. Replay attacks reuse authentication data by literally playing it over again, enabling the person who replays the data to masquerade as another. In this case Bob got the information by talking to Alice. Bob could also have gotten the information by watching Alice's communications across the wire and copying the bits, like listening to a telephone call and recording the conversation.

A Replay Attack
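The replay just described can be sketched with Python's standard hmac module; the shared key and messages are hypothetical. Because Alice, Bob, and Carol all hold the same key, a recorded authentication message is indistinguishable from a fresh one.

```python
import hmac, hashlib

# Hypothetical sketch: Alice, Bob, and Carol share one symmetric key.
shared_key = b"key shared by Alice, Bob, and Carol"

def authenticate(message: bytes) -> bytes:
    # Anyone holding the shared key can produce this tag...
    return hmac.new(shared_key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # ...and anyone holding it can verify (or forge) the tag.
    return hmac.compare_digest(authenticate(message), tag)

msg = b"I am Alice"
tag = authenticate(msg)   # Alice proves herself to Bob
assert verify(msg, tag)

# Replay: Bob (or an eavesdropper who copied the bits) presents the very
# same message and tag to Carol, who has no way to tell it from Alice.
assert verify(msg, tag)   # Carol accepts the replayed credential
```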

Public Key Cryptography

In public key cryptography there are two mathematically related keys. The publication of one key provides no information about the other key. Anything encrypted with one key can be decrypted only with the other key. One key is held secret, shared with no one. The other key is widely publicized. This is why public key cryptography is sometimes called asymmetric cryptography. Information encrypted using the secret key can be decrypted only with the other, public, key. Information encrypted with the public key can only be decrypted with the secret key.

Encryption with the secret key can function as a digital signature in two ways. First, if a document can be decrypted with the published key, this proves that only the person with the corresponding secret key could have made the original encryption. Since anyone can access the publicized key, this verification can be performed by anyone. Second, a signature on an electronic document can be transferred to another document only with the greatest of difficulty. In fact, depending on the use of hash functions, the odds against transferring a signature can be astronomically high.

Information encrypted with the published key can be widely broadcast but remains unreadable to everyone except the holder of the secret key. This characteristic can create a virtual sealed envelope.

Public key cryptography is based on one-way functions. A one-way function is easy to do, but hard to undo. In the physical world, for example, pouring milk from a glass onto the floor is easy to do but impossible to reverse. Although there is no special trick to reverse spillage, cryptographic one-way functions always have a trap door. Opening this trap door requires the cryptographic key. It is much easier to multiply two large numbers (do) than it is to factor one large number (undo). The public key encryption algorithm RSA (Rivest, Shamir and Adleman, 1978) is based on the difficulty of factoring numbers. A second common cryptographic authentication technique (Schnorr, 1990) is based on the discrete logarithm problem. Other techniques (Feige, Fiat and Shamir, 1987; Rabin, 1978) are based on the difficulty of finding square roots, which is a special case of factoring.
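The multiply-versus-factor asymmetry can be felt even at toy scale. The sketch below uses small illustrative primes: multiplying them takes one step, while recovering them by naive trial division takes thousands of steps, and becomes utterly infeasible at real key sizes.

```python
# Multiplying is easy; factoring is hard. Even naive trial division
# illustrates the asymmetry (real RSA moduli are hundreds of digits,
# far beyond any trial division).

def factor(n: int) -> tuple[int, int]:
    """Recover p and q from n = p * q by trial division (slow)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

p, q = 10007, 10009                   # small primes, illustration only
n = p * q                             # the "easy" direction: one multiplication
assert factor(n) == (10007, 10009)    # the "hard" direction: ~10,000 steps
```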

Asymmetric or Public Key Systems

Public key systems have several advantages in providing authentication, one of which is that replay attacks are more difficult with public key systems. Simple replay attacks do not work with public key systems, as shown in the figure above. Here Carol is not fooled because the replayed message addresses its recipient as "Bob." Bob cannot take the message apart as he could with the private key system because he does not have access to the key Alice used to encrypt her message. Bob also cannot break the message into parts and send only the part of the message he wants, because the signature encrypts the entire message as a whole -- not bit by bit. This is because modern cryptography systems act upon the entire message, or on large blocks of a message -- not on easily separated small message blocks.

Simple Replay Attacks Fail with Public Key Cryptography

Public key systems can provide authentication, access control and integrity. Public key systems provide integrity through the use of digital signatures: messages encrypted with the secret key. A signature can be decrypted with the publicized key, so anyone can verify that the accompanying message has not been changed. Anyone who might open and alter the document cannot then re-sign it with the secret key, so integrity and authentication of the initiator are ensured.

Hash Functions

A third type of useful cryptographic function is the hash function. Hash functions are one-way functions with the property that, given the output, it is difficult to determine the input. Hash functions transform information so that it can be used for verification but not read. The output of a hash function is typically much smaller than the input. Think of hash functions as unpredictable compression functions. When data has been transformed by a hash function it can be said to have been hashed. The output of a hash function is called a hash value.
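Python's standard hashlib makes these properties easy to observe; the messages below are made up for illustration. Any input, large or small, yields a fixed-size, unpredictable output.

```python
import hashlib

# SHA-256 maps input of any size to a fixed 256-bit value. A one-character
# change produces an unrelated-looking output ("unpredictable compression").
a = hashlib.sha256(b"pay Alice $100").hexdigest()
b = hashlib.sha256(b"pay Alice $900").hexdigest()

print(len(a))    # 64 hex characters = 256 bits, regardless of input size
print(a == b)    # False: the two hash values share no visible pattern
```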

Hash functions are subject to attacks based on the birthday paradox (Mosteller, 1965). The birthday paradox is counterintuitive, so consider it yourself. Guess how many people must be in a room to have a 50% chance that two people have the same birthday. If you did not already know the answer, it was probably less than you thought: only 23. This holds across all values -- for example, there need be only 100 people in a room for there to be 99.99% certainty that two people share a birthday, although an initial guess might be closer to 365. This is because calculating corresponding birthdays in a room is a special case of sampling with replacement. Thus in order to calculate the probability that out of x people, two have the same birthday, the formula is:

P(x) = 1 - (365/365)(364/365)(363/365)...((365 - x + 1)/365) = 1 - 365! / ((365 - x)! * 365^x).

This same principle applies when trying to find hash value collisions. In a sample of numbers of x bits, a collision occurs when two have the same value. Thus the birthday paradox is an example of a collision of birthdays. In order to find hash collisions by trial and error alone, an attacker must hash exponentially fewer values than might be expected (on the order of 2^(n/2) values, where n is the size of the hash value in bits). Thus, even though hash values compress larger files into smaller amounts in an unpredictable manner, hash values less than 128 bits are not considered secure.
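The birthday probability discussed above is easy to confirm numerically; a short sketch of the sampling-with-replacement calculation:

```python
# Numerically confirming the birthday paradox: the chance that at least
# two of x people share a birthday, treating birthdays as sampling with
# replacement from 365 equally likely days.

def p_shared(x: int) -> float:
    p_all_distinct = 1.0
    for i in range(x):
        p_all_distinct *= (365 - i) / 365
    return 1 - p_all_distinct

print(round(p_shared(23), 3))   # 0.507 -- past even odds at only 23 people
print(p_shared(100) > 0.9999)   # True -- near certainty well before 365
```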

In sum: hash values compress data in an unpredictable way. This makes it possible to verify large files by signing small hash values of the files. Since a hash can take a large file and make a verifiable output, hash values are occasionally called thumbprints. Such verification can be defeated not by running the hash function backwards but by making hash values of so many different alternatives that there is a collision. There are hash functions that are designed so carefully that this approach is remarkably unlikely to work. This class of hash functions is called collision-free.


4: Security Goals

In this chapter I define the goals of computer security and introduce some tools used to meet those goals. Some of these goals cannot be realized without reliable transactions as well. Reliability, and related transactional characteristics, are described in the following chapter. Security, privacy and reliability are not entirely separate. A system without security can reliably consummate users' fraudulent transactions. Yet security and reliability are related: security requires reliability. Security can provide authorization, authentication and integrity. Security and reliability are both required to ensure that a system is available to legitimate users even when under attack. Properties of secure and reliable systems, and the tools used to ensure security and reliability, are defined in these next chapters. For example, consider authentication: authentication is the goal of knowing that a particular user is authorized to take an action, e.g. authorizing a charge to an account. Tools used to meet the authentication goal include passwords, cryptographic keys, and challenge/response mechanisms.

Appreciating the technical analyses in the following chapters and their implications requires an understanding of security, privacy, and reliability in electronic commerce. This chapter will provide the necessary definitions of some fundamental security tools and concepts.

Security is the control of information. In a secure system the ability to view, change, and distribute information is controlled by the technologies that implement the security policy. Usually security is the control of information by the owner of the information; however, as systems become increasingly decentralized, security may mean that the user or the creator of the information can control the information. Intellectual property controls would allow the creators of information to control its distribution.

Security ensures that authorized parties are properly identified and their messages are sent through a network unaltered. A secure system ensures that the origin of a message is as stated and that the intended content is sent only to the intended recipients. The ability to ascertain the validity of a message is clearly necessary when the information transmitted is a promise to pay or deliver merchandise, or a confirmation of payment.

Note that security is not privacy. Privacy means that the subject of information can control the information. Thus privacy requires security, since security is control over information. However, security is not sufficient for privacy, since the owner and the subject of information may have very different interests in and uses for the data. In fact, security may preclude privacy by ensuring that the subjects of information have neither control nor knowledge of the uses of that information.

Later in this book, during the analyses of specific protocols for Internet commerce, I discuss security strengths and flaws in specific protocols. This is not meant to imply that design issues eclipse implementation issues; as in the physical world, a good design does not guarantee a good outcome. However, even the best implementation cannot overcome a design flaw. To learn more about practical approaches to implementation issues I recommend Garfinkle and Spafford, 1986; Denning, 1982; and Pfleeger, 1989.

Threats to Electronic Information Systems

As in the physical world, security in electronic systems is never absolute. In no case is it impossible to undermine the security of a system. It is important when estimating the cost of security in electronic commerce systems to recognize that breaches, once they have occurred, can go undetected for some time. In addition, the physical difficulties and dangers that limit the attraction of repeated robberies and break-ins in the physical world do not exist in the electronic realm.

The difficulty of defeating the security mechanisms in a system is referred to as the work factor. A system's work factor is the processing time and the cost of the processing power necessary to defeat the system. A system is considered strong, or a message verifiable, if the cryptography used to protect it has a prohibitively high work factor.
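To make the notion of a work factor concrete, here is a deliberately weak toy "cipher" (single-byte XOR, not any real algorithm): its key space of 256 gives it a work factor of at most a few hundred trial decryptions, well within anyone's reach.

```python
# A toy illustration: a message "encrypted" by XORing every byte with a
# single-byte key. With only 256 possible keys, brute force recovers the
# key almost instantly -- the fate of any system whose key space is small
# enough to enumerate.

def xor_cipher(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

plaintext = b"PAY $100 TO ALICE"
ciphertext = xor_cipher(plaintext, key=0x5A)

# The attacker tries every possible key and keeps the one that yields a
# recognizable message.
for guess in range(256):
    candidate = xor_cipher(ciphertext, guess)
    if candidate == plaintext:  # in practice: "looks like English"
        print(f"key recovered: {guess:#x}")   # prints "key recovered: 0x5a"
        break
```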

Fundamentally there are three ways to obtain electronic information without authorization: copy it during transmission, access it during storage, or obtain it from an authorized party. Attacks on data transmission include eavesdropping, replay attacks and cryptanalysis. Eavesdropping is the act of surreptitiously monitoring a communication. A common criminal application of eavesdropping is the theft of calling card numbers as they are punched into publicly visible phones. Cellular fraud (which is most often implemented by programming one phone to charge calls to another) depends on the ease of eavesdropping on the identification codes of the targeted victim's phone. Once electronic information has been stolen it can be easily and anonymously transferred over a network. Encrypting transmissions can reduce or eliminate the benefit gained by eavesdropping over a network.

Replay attacks take advantage of the ease of duplication of information. Merchants can attempt to be paid twice by replaying electronic messages that authorize payment. This is similar to the use of a credit card number to make additional, unauthorized charges after an authorized transaction. Similarly, individuals can defraud legitimate users of a system by replaying authentication sequences to authorize illegitimate payments. The general problem of replay attacks can be solved two ways. First, authentication techniques impervious to replay attacks can be used. Authentication techniques which are impervious to replay attacks even from a participant are called zero-knowledge authentication techniques (Feige, Fiat and Shamir, 1987; Tygar and Yee, 1991). Zero-knowledge authentication techniques are mathematically graceful and underused.

The other solution to replay attacks is to add information, analogous to a receipt number, to each message to make it unique. To make each transaction unique, the added information must be unpredictable. Adding predictable information does not prevent replay attacks, because attackers could guess the information to be added and substitute it in the next message. Such added information is usually a random number, called a nonce. Sometimes nonces serve two roles, such as transaction identifiers or identity challenges. Nonces are simple to the point of trivial, as well as multipurpose, and are therefore widely used.
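One possible sketch of a nonce in use, with hypothetical message formats and key: each payment carries a fresh random value, and the verifier remembers values it has already accepted, so a replayed message is refused.

```python
import hmac, hashlib, secrets

key = b"shared payment key"   # illustrative shared key
seen_nonces = set()           # the verifier's memory of accepted nonces

def make_payment(amount: str) -> tuple[bytes, bytes]:
    nonce = secrets.token_bytes(16)          # unpredictable, per-message
    msg = amount.encode() + b"|" + nonce
    return msg, hmac.new(key, msg, hashlib.sha256).digest()

def accept(msg: bytes, tag: bytes) -> bool:
    if not hmac.compare_digest(hmac.new(key, msg, hashlib.sha256).digest(), tag):
        return False                         # tampered or forged message
    nonce = msg.split(b"|", 1)[1]
    if nonce in seen_nonces:                 # seen before: a replay
        return False
    seen_nonces.add(nonce)
    return True

msg, tag = make_payment("$25.00")
print(accept(msg, tag))   # True: first presentation is honored
print(accept(msg, tag))   # False: the replay is rejected
```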

Encrypted transmissions can be attacked using cryptanalysis, which refers to the analysis of encrypted transmissions to break an algorithm or obtain a key. Cryptanalysis can be defeated by using secure algorithms with well-chosen keys. It is not possible to protect against cryptanalysis by using a secret algorithm, because a cryptanalyst can deduce the algorithm by looking at the input and output unless the algorithm is flawless. If the algorithm is flawless then it can be published without being broken. In fact, using a proprietary algorithm can be very risky, since such algorithms are not subject to widespread review and so are more likely to contain a flaw.

Cryptanalysis is also used in attacks on the authentication systems that protect stored data. Such attacks are the electronic equivalent of an attack on a bank's vault. Building a secure server is difficult, and the concentration of valuable data in one virtual location can make the value of a successful assault extremely high. Weaknesses in operating systems and windowing environments can undermine apparently secure applications. If an application is running on an operating system that is not secure, then the files which the application needs to trust can be altered. Finally, unlike the case with a physical vault, a successful attack on a secure server may go undetected.

Secure information is commonly protected by passwords. One form of attack on password protected systems requires that encrypted copies of users' passwords be available to the attacker. Attackers then encrypt popular passwords, such as dictionary words, and compare these values to those in the encrypted password files. By encrypting and looking for a match the attackers do not have to decrypt anything.
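A sketch of such a dictionary attack (the usernames, passwords, and word list are invented, and SHA-256 stands in for whatever password-hashing function the system uses): the attacker compares hashes of common guesses against the stolen file and never decrypts anything.

```python
import hashlib

# Sketch of a dictionary attack against a stolen file of hashed passwords.
# Real systems add per-user salts precisely to slow this attack down.

def store(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

stolen_file = {"alice": store("sunshine"), "bob": store("h7$kQ!9z")}
dictionary = ["password", "123456", "sunshine", "letmein"]

for user, stored_hash in stolen_file.items():
    for word in dictionary:
        if store(word) == stored_hash:       # hash the guess and compare
            print(f"{user}: {word}")         # prints "alice: sunshine"
```

Bob's unusual password is not in the word list, so only Alice's account falls.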

The third method for illicitly obtaining electronic information, the subversion of security through the confidence of a trusted party, is in no way unique to electronic commerce. The most that security can offer in such a case is the ability to track the individual that improperly released the protected information.

Finally, there are denial-of-service attacks. In denying service the attacker does not obtain information but instead prevents anyone else from obtaining information. These attacks limit the availability of a commerce system, denying access to both merchants and customers. Analogous denial-of-service threats exist in the physical realm: that someone will damage your business premises or threaten your customers.

Confidentiality

Confidentiality is secrecy. When a message's confidentiality is preserved, only intended recipients can read it. Eavesdropping is either prohibitively difficult or useless against confidential transmission.

Confidentiality alone is not adequate for security, as illustrated in the following example. There is a classic problem in computer science called the Byzantine Generals problem. (The historically inclined might recognize it as the Spanish Armada problem, which Lord Nelson handily used.) The scenario is this: two generals are camped on opposite sides of a city. If their armies coordinate an attack, then they will be victorious. If they attack at different times the forces of the city will defeat them. One general needs to send a message to the other that is confidential (so the city forces will not be prepared) and correct. Imagine they have illiterate messengers, so that any written message is confidential. Yet the messenger could still alter the message. So if a message saying "Attack Not, Retreat" was altered to read "Attack, Not Retreat", or a one becomes a seven, or the message becomes illegible when the attack is imminent, the communications channel has not functioned securely, because of a loss of message integrity.

Confidentiality is also not privacy. Gossip is a classic example of confidentially communicated but privacy-violating information. The integrity of a message depends on the ability to determine the identity of its initial author. When Carol whispers to Alice that the boss said Bob was a better employee than Alice, Alice knows the soft-spoken conversation behind closed doors is between only the two of them. However, the integrity of the message leaves something to be desired: after hearing it, Alice cannot be sure her boss actually said it. The ability to violate with impunity Bob's or the boss's privacy by repeating their words, or words about them, depends on the cooperation of the listeners to keep the message confidential and thereby not verify the accuracy of the information. This further illustrates the difference between privacy and confidentiality.

The simple case of gossip also provides a nice illustration of security in transmission versus security in storage. Carol's communication to Alice is confidential, but neither party can be sure that the information will be kept confidential. Alice does not know how many people have access to this piece of information in Carol's head; Carol does not know to whom Alice will speak.

Different degrees of confidentiality are possible in electronic transmissions, as confidentiality can depend on simple passwords, complex one-time passwords, secure connections, or more advanced technologies.

Availability

Availability is exactly what it sounds like: keeping a system up and running. Malicious hackers, network failures or commercial espionage can compromise system availability. Denial of service can be costly, whether it results from an attack, a design failure, or an accident. To be useful and marketable, a system must be consistently available.

Availability for the individual merchant or customer is a function of network availability, server availability, and protocol scalability. Availability can be a function of protocol design. The TCP-based attack called SYN flooding is an example of a denial-of-service attack made possible by an inherent requirement for trust in a protocol. TCP has a three-step process for initiating a connection, as described in the previous chapter: the caller calls, the receiver responds and puts aside some resources to deal with the connection, and the sender then confirms the response. This is analogous to interrupting paperwork to answer the phone. There is some overhead in interrupting work (time wasted, concentration interrupted, etc.), and greetings are exchanged before significant communication takes place. If there is no response when the phone is picked up, most people would query the line a few times before hanging up. If nine out of ten phone calls were prank calls, then the subject of the pranks would get no work done. Similarly, the machine that accepts a TCP request for a connection creates a data structure and reserves space for information about the connection. Then the recipient holds this space available and waits for the requester to confirm the connection.

Recipients trust that the sender is honest in desiring a connection. In a flooding attack, the sender takes advantage of this trust. The sender does not acknowledge the open connection and thus leaves a half-open connection. (See Figure 3.1.) Instead of using the connection, the attacker continues to send TCP requests until the server is unable to serve any requests but the bogus ones. (Notice that connections already established are not affected.) Measures to prevent such attacks include refusing many requests from a single domain; increasing the amount of space available to hold half-open connections; refusing requests from obviously bogus domains; and reducing the time a data structure remains reserved while waiting for the message that verifies the connection (Schuba, Krsul, Kuhn, Spafford, Sundaram and Zamboni, 1997). Notice this is an attack against a server; in the network the increased flooding results in lower service rates for everyone, but no one person can be blocked off.

A Three-Way Handshake
A Half-Open Connection
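The server-side bookkeeping that SYN flooding exhausts, along with the timeout countermeasure mentioned above, can be sketched as follows; the table size and timeout values are illustrative, not drawn from any real TCP implementation.

```python
# A bounded table of half-open connections with timeout-based eviction.
MAX_HALF_OPEN = 4
TIMEOUT = 30.0                       # seconds to hold an unconfirmed slot
half_open: dict[str, float] = {}     # requester -> time slot was reserved

def syn_received(addr: str, now: float) -> bool:
    # Evict reservations whose confirmation never arrived.
    for a in [a for a, t in half_open.items() if now - t > TIMEOUT]:
        del half_open[a]
    if len(half_open) >= MAX_HALF_OPEN:
        return False                 # table full: new callers are refused
    half_open[addr] = now
    return True

# An attacker sends requests and never confirms them...
for i in range(4):
    assert syn_received(f"attacker-{i}", now=0.0)

print(syn_received("honest-client", now=1.0))    # False: service denied
print(syn_received("honest-client", now=40.0))   # True: timeouts freed slots
```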

A commerce system may depend on real-time access rather than providing off-line authentication. Certainly consumers expect real-time responses. System availability is a function of the reliability of the network as well as the number and size of messages required by the protocol used for transactions and transmission. Protocols vary in their ability to confirm information without access to a central server. In Digicash, the merchant can confirm the form of an electronic dollar off-line. This means the merchant can be certain that the digital dollar presented was at one time verified by a bank. However, the merchant still has to go to the bank to be certain that the token has not already been spent. A later version of Digicash tries to ensure that anyone who spends money twice will be caught. This provides some assurance to the merchant, so the merchant does not need to contact the bank at every transaction.

Availability requires reliability, but reliability is not sufficient for availability. Availability requires that a system be scalable in the number of users. Availability and scalability are functions of the need for central processing.

Scalability

Scalability in the context of electronic commerce means scalable in the number of connections, the number of transactions, the size of transactions, and the number of users. I address scalability here because availability is a function of scalability, not because scalability is a first-level security goal in its own right.

One way to ensure scalability is through load migration. Migrating processing load away from the server to the customer or merchant can increase scalability. The design of the automated teller machine system migrates load by requiring terminals to verify PINs off-line before making a request to a remote bank. An electronic commerce project at Carnegie Mellon (NetBill) decreased the central server's load by having the merchant sign messages using the Rivest Shamir Adleman (RSA) public key system, whereas the central server uses the Digital Signature Standard (DSS) (Cox, Tygar and Sirbu, 1995). Using both RSA and DSS serves to distribute load because DSS signatures require relatively few CPU cycles to create but are complex to verify. Conversely, RSA signatures are computationally intensive to create but easier to verify. Therefore the merchant server does more work than the central server, ensuring that the central server is available to respond to requests quickly. This illustrates that while a protocol which concentrates the processing in a central server might appear at first glance to be preferable to one which requires more work on the merchant side, the advantage of scalability can outweigh the disadvantages.

A second option in scalability is to offer batching services, so that transactions can be scheduled according to the availability of the central server. The electronic commerce mechanism designed by Visa and Mastercard (SET) offers merchants the option of batching transactions. Thus the on-line bank server (the gateway) may offer a good average response time, even though the time it takes to authorize or capture a single transaction may be quite high. In evaluating a protocol that addresses scalability partially through batching, it is important to consider the variance as well as the average transaction response time.

Authentication

Secure systems limit resource use according to user attributes (usually identity). Authentication establishes user identity or other appropriate user attributes. The appropriate user attribute is then compared against a table of permissions (such as read, write, alter) to determine the functions for which the user is authorized.

Authentication is implemented using either shared information or the ability to prove knowledge of unique information. The former is most simple and requires that one party present information as proof of identity to another party. PINs and passwords are common examples of simple authentication. Authentication techniques which require that one party present identifying information to a verifier require that the presenting party trust the verifier.

In the case of PINs the customer's ability to produce a unique number provides authentication. Since the customer provides that number to the merchant's terminal, this means the customer must trust the terminal. In practical terms, this means that one badly protected or unreliable ATM can harm any bank connected to the network. The requirement for customer trust of ATMs enables attacks such as bogus ATM machines (Davies, 1981; Business Week, 1993; Johnson, 1993), thieves programming ATM cards with others' information (Harrison, 1994), and large losses at badly managed machines (New York Times, 1995a; New York Times, 1995b). A similar weakness in the credit card clearing system allows disbarred merchants to use terminals belonging to dishonest merchants (Van Natta, 1995).

A mutually trusted authority simplifies the problem of authentication. This authority can either be on-line to provide verification of authorization upon request, or provide electronic letters of introduction. (This is done with digitally signed certificates as explained in the next chapter.)

Cryptographic techniques and digital signatures using these techniques enable mutual authentication (Rabin, 1978; Schnorr, 1990; Feige, Fiat and Shamir, 1987; Rivest, Shamir, and Adleman, 1978). With mutual authentication each party can prove authorization to the other and neither party has enough information to later impersonate the other.

Untrustworthy hardware creates problems that can be addressed in three ways: by requiring secure hardware; by requiring merchants and customers to secure their own terminals; and by accepting the cost of fraud. Electronic transaction systems which require secure hardware are called smart card systems. Most on-line systems require customers and merchants to ensure the security of their own hardware. Systems which simply trust the user and accept the corresponding losses are called crypto-less systems.

Even when all parties are honest, networks are not always reliable. Therefore, the reliability of unauthenticated acknowledgments should not be critical to the security and reliability of an electronic commerce system. Some electronic commerce protocols assume reliable acknowledgments. Although higher-layer protocols can provide acknowledgments that packets are delivered, this does not involve acknowledgment of the contents of the packets. Thus the acknowledgments developed for reliable packet transmission are inadequate for verification of electronic commerce transactions. These acknowledgments are not secure; thus they do not provide verifiable information.

Authentication enables access control. Access control allows different levels of access for individual files or data fields. The following table offers a theoretical example of access control for a hypothetical credit record. Note that access control can protect privacy as well as integrity by limiting both read and write access. For example, notice that an employer can write information about disability payments if they are made, but there is no reason for the employer to be able to read past disability payments.

Data:         Mortgage     Income from        Employment    Current
Party         Status       Disability Ins.    History       Debts

Individual    Read         Read               Read          Read
Bank          Read/Write   None               Read          Write
Employer      None         Write              Read/Write    None
IRS           Read         Read               Read          Read/Write

Access Control List
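The access control list above can be rendered directly as a lookup structure; this sketch simply mirrors the hypothetical policy in the table.

```python
# The hypothetical credit-record ACL from the table, as a lookup table.
ACL = {
    "Individual": {"Mortgage Status": "Read", "Income from Disability Ins.": "Read",
                   "Employment History": "Read", "Current Debts": "Read"},
    "Bank":       {"Mortgage Status": "Read/Write", "Income from Disability Ins.": "None",
                   "Employment History": "Read", "Current Debts": "Write"},
    "Employer":   {"Mortgage Status": "None", "Income from Disability Ins.": "Write",
                   "Employment History": "Read/Write", "Current Debts": "None"},
    "IRS":        {"Mortgage Status": "Read", "Income from Disability Ins.": "Read",
                   "Employment History": "Read", "Current Debts": "Read/Write"},
}

def may(party: str, action: str, field: str) -> bool:
    # "Read/Write" grants both actions; "None" grants neither.
    return action in ACL[party][field].split("/")

print(may("Employer", "Write", "Income from Disability Ins."))   # True
print(may("Employer", "Read", "Income from Disability Ins."))    # False
```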

Access control also creates privacy conflicts. Access control can protect privacy by keeping records of who used the data about a specific individual. By increasing individual control of the data, access control has the potential to increase individual privacy. Conversely, access control can limit privacy by keeping track of the data used by a specific individual, in this example a financial services worker. For example, if every record viewed or written by a loan officer has been authenticated by that loan officer, then it would be a trivial matter to track the loan officer's behavior in detail. Thus access control would increase the privacy of everyone who had credit records, but the loan officer's workplace would become a place of constant surveillance, reducing the loan officer's privacy.

Integrity

A recipient of a message with transmitted integrity knows that the contents of the message have not been changed. Integrity alone is not security. For example, if a message that claims to be from an account holder is actually from a thief, integrity can ensure that the message transmitted is the one the thief sent, but integrity does not prevent the theft.

Encryption can provide integrity. A document that is digitally signed is a document that is encrypted. Encrypting a document with a private or symmetric key provides the recipient with some certainty that the document was not altered. Symmetric key encryption provides confidentiality, integrity and possibly authentication. Symmetric key encryption provides confidentiality because only the holders of the symmetric key can read the message. Integrity is provided because when a message protected by cryptography is altered it becomes garbage upon decryption. If the key is shared between only two parties, authentication is provided as well, since the recipient knows the sender must have encrypted it.

Using a symmetric key for verification of transmission requires that the recipient and the signer share trust on the contents of the document. With symmetric key encryption, any holder of the symmetric key can modify the document. For this reason digital signatures usually refer to public-key signatures, which means that the document is encrypted with the secret key of the sender's public key pair. Public key signatures provide integrity and authentication, and therefore irrefutability. (An action is irrefutable if it can be clearly proven to a third party that the action occurred.) Authentication is provided since only the sender could have encrypted the document with his or her secret key. Integrity results from the cryptographic security of the signature. Since the recipient could prove that the document was encrypted only by the possessor of the private key and that the message has not been altered, public key signatures provide irrefutability. Notice that since anyone with the publicized key can decrypt the message, public key signatures do not provide confidentiality.

Clear signing refers to signing a hash of a document, and sending that with the original document in the clear, i.e. not encrypted. This is particularly effective with large documents because it removes the need for multiple encryption operations. The transmission in the clear of the accompanying message means that clear signing does not provide confidentiality. Clear signing provides integrity, authentication and irrefutability.
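The hashing step in clear signing can be sketched in a few lines. This is a minimal illustration, assuming SHA-256 as the hash function (the text does not name one); in a real system the fingerprint below would itself be signed with the sender's secret key.

```python
import hashlib

def digest(document: bytes) -> str:
    """Fixed-length fingerprint of a document of any size."""
    return hashlib.sha256(document).hexdigest()

document = b"I want red size eight shoes what is the price"
sent_hash = digest(document)  # in clear signing, only this hash is signed

# The recipient recomputes the hash over the received document;
# a match shows the document arrived unaltered.
assert digest(document) == sent_hash

# Any alteration to the document produces a different fingerprint.
assert digest(b"I want red size nine shoes what is the price") != sent_hash
```

Because only the short hash is signed, a large document requires just one signing operation, which is what makes clear signing efficient.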

Nonrepudiation

Nonrepudiation means that an individual cannot reasonably claim not to have taken an action. Nonrepudiation means an action is irrefutable. In physical commerce nonrepudiation is obtained through controlled hardware tokens (such as credit cards) and physical attributes (like physical signatures).

In electronic commerce nonrepudiation is obtained through the use of digital signatures. A digital signature is created when a user encrypts a document using his or her secret key. Then anyone with the user's public key can decrypt the encrypted document and thus prove that it could have been encrypted only by the original user.
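The sign-then-verify cycle can be sketched with textbook RSA and deliberately tiny numbers. The primes p = 61 and q = 53 and the exponent choices here are illustration only; real systems sign a hash of the document and use keys hundreds of digits long.

```python
# Toy RSA signature: signing is exponentiation with the secret exponent d;
# verification uses the public exponent e. Anyone can verify, but only
# the holder of d can sign.
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)
e = 17                    # public exponent
d = pow(e, -1, phi)       # secret exponent: modular inverse of e mod phi

def sign(message: int) -> int:
    """Encrypt with the secret key; only the key holder can do this."""
    return pow(message, d, n)

def verify(message: int, signature: int) -> bool:
    """Decrypt with the public key and compare; anyone can do this."""
    return pow(signature, e, n) == message

signature = sign(65)
assert verify(65, signature)      # a genuine signature verifies
assert not verify(66, signature)  # an altered message fails verification
```

The failed check on the altered message is the irrefutability property in miniature: a valid signature could only have come from the holder of d.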

Thus there are many types of cryptography, and all have their prices, in terms of processing power as well as licensing fees. Choosing the right algorithm means picking one to suit your needs, because there is no such thing as the 'best' algorithm, just as there is no such thing as the 'best' paper product. The table below summarizes the uses of the various tools and their relationships to the properties described in this chapter.

 

                           Zero-knowledge   Hash        Asymmetric   Symmetric
                           Protocols        Functions   Encryption   Encryption

Authentication                  X                            X            X
Confidentiality                                              X            X
Integrity                                       X            X            X
Nonrepudiation                                               X
Safe from Replay Attacks        X

Cryptographic Tools & Uses

Nonrepudiation is possible without identity information. In fact identity information should be a fail-safe, for the case that there is a system failure. Linking to identity is a second-order attempt at accounting for liability. Identity linkage allows a party which has not been served or paid to record this fact. Thus what is really necessary is linking a specific action to a key: the right to an item, the right to spend a specific type of money, etc. It is far better to be certain of payment or service than to be able to report failures to law enforcement. Nonrepudiation of action, rather than identity, should be sought in Internet commerce transactions.

Key management is a critical element of risk management in electronic commerce. The loss of a key should be both unlikely and have limited potential for damage. Linking a key to authorization for only a single action (e.g. authorize a charge to a single account) or a set of actions both limits loss and increases reliability.


5: Key Management is Trust Management

Cryptographic key management is trust management. This fifth chapter builds on the understanding of cryptographic technology and describes the trust relationships as implemented through key management. Digital signatures depend upon the trust hierarchy that validates that a particular digital key corresponds to a particular signer. Digital certificates (like electronic driver's licenses) link identity and attribute, but identity is not a Boolean variable. There are degrees of identity, with new mechanisms available for ensuring anonymity and pseudonymity.

A valuable feature of public key systems is the ability to have digital nonrepudiation. However such systems have another critical feature that also makes them remarkably functional in distributed electronic commerce: the simplification of key management.

The management of public keys consists of linking cryptographic keys to identities or rights and keeping the secret key secret. In cryptographic systems key storage is a critical but often weak link. It depends heavily upon correctly implementing not only the commerce software but also the operating system on which it relies. Operating systems (rationally) have usability and speed, rather than security, as their highest goals. For example, it is difficult to run a secure server on Windows NT, since it has a number of security holes. A search of the Web by the wisely wary would reveal tools for attacking, and holes in, any operating system.

Of course no store is perfectly secure either, or the customers would be unable to use it easily. No house can be both livable and perfectly secure, thus residents have safety deposit boxes and make copies of critical information. The same balance between security and usability should be reflected in Internet commerce.

Symmetric Key Management

In symmetric key systems the problem of key management is exacerbated by the need for a unique key for every possible pair of people. Consider the problem of managing keys if a company shared a key with every customer, and each of those keys had to be unique. Now extend that problem to when customers are themselves merchants, so that each one of them must further share a different key with every other party. For a number (call this number k) of customers and merchants, assuming every entity has to communicate with every other using symmetric cryptography, there must be k(k-1)/2 pairs of keys. Not only must these keys be created and organized, to minimize the threat of cryptanalysis or the possible damage from a lost key, they must also be changed at regular intervals.
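The quadratic growth of the pairwise key count is easy to check directly:

```python
def pairwise_keys(k: int) -> int:
    """Number of unique symmetric keys needed so that every pair of
    k communicating parties shares its own key: k*(k-1)/2."""
    return k * (k - 1) // 2

# Ten parties need 45 keys; a thousand parties need nearly half a million.
assert pairwise_keys(10) == 45
assert pairwise_keys(1000) == 499500
```

Doubling the number of parties roughly quadruples the number of keys to create, distribute, and rotate, which is why symmetric keys alone do not scale to open commerce.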

There are many excellent uses for symmetric keys because symmetric encryption is much faster (less processing intensive) than asymmetric encryption. One such use is session keys, which are keys that are used for one conversation or transaction. Instead of using processor-intensive public key operations for all encryption, most systems use the first public-key message to set up a session key, which is then used to protect the transmission from prying eyes.

There are two kinds of symmetric cryptographic protocols: block ciphers and stream ciphers. (Cipher is just an old-fashioned name for a symmetric encryption system.) They are exactly what their names would indicate: one operates on chunks of data and one operates on a continuous flow. Think of the difference between building with bricks and pouring concrete.

Table 5.1 is an example of a simple symmetric cipher:

 

      1     2     3     4     5
6     A     B     C     D     E
7     F     G     H     I     J
8     K     L     M     N     O
9     P     Q     R     S     T
*     U     V     W     X     Y
&     Z     E    (sp)  (sp)  (sp)

Table 5.1: A Simple Substitution Cipher

Thus "A" is 16 and "R" is 39.

So Jean Camp becomes

57561648 3&361638 19

and "I want red size eight shoes what is the price" becomes

474&3*16 48594&39 56464&49 47565&56 47273759 5&493758

56493&3* 3716593& 47494&59 37563&19 39473656

Of course the breaks would not be in the coded information, but adding them makes it somewhat easier to read. There is a single key and the information is encoded block by block, where the block size is one letter. This type of simple substitution, called a digraphic system, was developed in the sixteenth century. Although this kind of simple substitution is far simpler than any Internet commerce encryption, as an example it communicates a sense of what is happening, and some of the issues of key management. In this case the key is Table 5.1 shown above.
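The Table 5.1 substitution can be sketched in a few lines of code: each character is replaced by its column digit followed by its row symbol, so A becomes 16 and R becomes 39.

```python
COLS = "12345"
ROWS = "6789*&"
# The rows of Table 5.1; the final row holds Z, the second E, and the spaces.
GRID = ["ABCDE", "FGHIJ", "KLMNO", "PQRST", "UVWXY", "ZE   "]

def encode(text: str) -> str:
    """Replace each character with column digit + row symbol (first match)."""
    out = []
    for ch in text.upper():
        for r, row in enumerate(GRID):
            c = row.find(ch)
            if c >= 0:
                out.append(COLS[c] + ROWS[r])
                break
    return "".join(out)

assert encode("A") == "16"
assert encode("R") == "39"
assert encode("Jean Camp") == "575616483&36163819"
```

The key is nothing more than the GRID table itself: anyone holding it can both encode and decode, which is what makes this a symmetric system.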

The above is a block cipher. A stream cipher works on information as it streams through the encrypting device. Of course every stream must be acted on at some discrete level, for example, at the bit level or at the letter level. Given the same input twice, a block cipher will produce the same output. Given the same input twice, a stream cipher should not produce the same output, because the stream is not reset for each encryption. With a stream cipher the key varies as the data goes through. Consider again "I want red size eight shoes what is the price". With a stream cipher this message is translated into the appropriate numeric values, as is the key stream, and the two are added together, as shown in Table 5.3. The point of this rather extended exercise is to show that in this case there are two secrets: one a trivial cipher and the other the source or nature of the stream. The primary secret is the content of the stream. The name of the book from which the stream is taken, "The Hobbit", is the secret that one must be careful not to share in this example of a stream cipher.

Consider the simple code where A=1, B=2, ... e.g.

c   d   e   f   g   h   i   j   k   l   m   n   o   p   q   r   s   t   u   v
3   4   5   6   7   8   9   10  11  12  13  14  15  16  17  18  19  20  21  22

w   x   y   z   space
23  24  25  26  30

Table 5.2: Trivial Code for Stream Cipher Example

Now this extremely simple code is added to an equally simple secret, resulting in the cipher text shown in Table 5.3.

 

 

 

 

 

Message   msg. value   Streaming Key   key value   Cipher text
I         9            I               9           18
(space)   30           n               14          44
w         23           (space)         30          53
a         1            a               1           2
n         14           (space)         30          44
t         20           h               8           28
(space)   30           o               15          45
r         18           l               12          30
e         5            e               5           10
d         4            (space)         30          34
(space)   30           i               9           39
s         19           n               14          33
i         9            (space)         30          39
z         26           t               20          46
e         5            h               8           13
(space)   30           e               5           35
e         5            (space)         30          35
i         9            g               7           16
g         7            r               18          25
h         8            o               15          23
t         20           u               21          41
(space)   30           n               14          44
s         19           d               4           23
h         8            (space)         30          38
o         15           l               12          27
e         5            i               9           14
s         19           v               22          41
(space)   30           e               5           35
w         23           d               4           27
h         8            (space)         30          38
a         1            a               1           2
t         20           (space)         30          50
(space)   30           h               8           38
i         9            o               15          24
s         19           b               2           21
(space)   30           b               2           32
t         20           i               9           29
h         8            t               20          28
e         5            (space)         30          35
(space)   30           n               14          44
p         16           o               15          31
r         18           t               20          38
i         9            (space)         30          39
c         3            a               1           4
e         5            (space)         30          35

Table 5.3: Simple Stream Cipher

Were this exact text to be encrypted again, the output would be different because a different portion of the stream would be used in the encryption. The key would continue as "nasty, dirty ...." to be added to "I want ...", resulting in a cipher text beginning "2331". Note that this means not only the text itself but also the starting point in the text must be synchronized.
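The whole scheme of Tables 5.2 and 5.3 can be sketched as follows: the trivial code maps a through z to 1 through 26 and space to 30, and the cipher adds the key stream to the message, value by value.

```python
# Trivial code of Table 5.2: a=1 ... z=26, space=30.
CODE = {chr(ord('a') + i): i + 1 for i in range(26)}
CODE[' '] = 30
INVERSE = {v: c for c, v in CODE.items()}

def to_values(text):
    """Translate text into its numeric values under the trivial code."""
    return [CODE[c] for c in text.lower()]

def stream_encrypt(message, key_stream):
    """Add the key stream to the message, value by value (Table 5.3)."""
    msg = to_values(message)
    key = to_values(key_stream)[:len(msg)]
    if len(key) < len(msg):
        raise ValueError("key stream must be at least as long as the message")
    return [m + k for m, k in zip(msg, key)]

def stream_decrypt(cipher_values, key_stream):
    """Subtract the key stream to recover the message."""
    key = to_values(key_stream)
    return "".join(INVERSE[v - k] for v, k in zip(cipher_values, key))

cipher = stream_encrypt("I want red", "In a hole in the ground")
assert cipher == [18, 44, 53, 2, 44, 28, 45, 30, 10, 34]  # first rows of Table 5.3
assert stream_decrypt(cipher, "In a hole in the ground") == "i want red"
```

Both secrets appear explicitly here: the trivial code (CODE) and the content of the key stream. Anyone who learns the source of the stream can decrypt everything.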

The ideal cipher would be a stream cipher which used a different key for each message, and each key would be as long as the message. In practice the ideal would be a random string of bits, e.g. 100111010100010110, which is combined with the message in a simple technique called exclusive or. Exclusive or is a way that computers combine numbers so that the key and the text can not be guessed from looking at the output.
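This ideal can be sketched directly: a random key as long as the message, combined by exclusive-or, where applying the same key a second time recovers the message. Here os.urandom stands in for the ideal random bit source.

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Combine message and key with exclusive-or, byte by byte."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"I want red size eight shoes"
key = os.urandom(len(message))        # a random key as long as the message

ciphertext = xor_bytes(message, key)
assert xor_bytes(ciphertext, key) == message  # same key inverts the operation
```

Without the key, every possible message of the same length is an equally plausible decryption of the ciphertext, which is why this construction (the one-time pad) is the theoretical ideal.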

In the case of using a book to supply the data stream for encryption, someone has already produced the stream. If a true stream cipher were being used, there would be a random number as long as the message. This key would be extremely large, every message would require its own key, and key management would be correspondingly difficult. Thus, while in theory this is the ideal, it is not remotely practical for price-sensitive communications.

Both the stream and block ciphers here are symmetric key systems. The same key is used to decrypt and encrypt the message.

Asymmetric Key Management

The first asymmetric algorithm (Diffie-Hellman, named after its inventors) was invented for the exchange of symmetric keys between users who share no trusted party. In fact, given any asymmetric key algorithm, key exchange becomes a trivial matter-- the initiator of the conversation encrypts the desired key in the receiver's public key and signs it with her own key. This provides confidentiality and integrity. It can also provide authentication if the symmetric key is linked to the initiator.
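The Diffie-Hellman exchange can be sketched with deliberately small numbers. The prime p = 23 and generator g = 5 below are illustration only; real systems use primes hundreds of digits long.

```python
# Diffie-Hellman key exchange: two parties who share no secret agree on
# a symmetric key over a public channel.
p, g = 23, 5                  # public prime and generator

a_secret, b_secret = 6, 15    # each party's private value, never transmitted

A = pow(g, a_secret, p)       # first party publishes A = g^a mod p
B = pow(g, b_secret, p)       # second party publishes B = g^b mod p

key_a = pow(B, a_secret, p)   # first party computes (g^b)^a mod p
key_b = pow(A, b_secret, p)   # second party computes (g^a)^b mod p
assert key_a == key_b         # both sides derive the same shared key
```

An eavesdropper sees p, g, A, and B, but recovering the shared key from these requires solving the discrete logarithm problem, which is believed to be intractable at realistic key sizes.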

Given the current state of cryptography, the problems of security, authentication, and confidentiality could all be solved in a straightforward manner if the distribution of cryptography keys were elementary (and the related software and hardware perfectly trustworthy). Unfortunately, this is not the case.

Thus with asymmetric keys, distribution of the key itself is a trivial concern, whereas linking a key to an individual is difficult. It is simple for me to post a public key and claim it is mine. Presumably after that only I could read anything sent in that key, for I would have the secret key that decrypts the message sent in the public key. Of course it is equally simple for me to post a key and claim it belongs to Senator Kennedy. Thus the issue in asymmetric key management is linking the public key to the correct individual possessing the secret key. This is done today with digital certificates.

Digital Certificates

One early suggestion for managing public key distribution was that digital certificates be used to link keys and attributes (Kohnfelder, 1978). This suggestion has now been widely adopted. A certificate in electronic commerce links an individual, an attribute, and a public key. For example, a Secure Electronic Transactions (SET) certificate links a consumer with an identity, the right to authorize a charge against a Visa account, and the public key used to verify a payment authorization. The certificate may contain a pseudonymous account number (PAN) instead of the customer's account number, or a pseudonym instead of a name. Visa and Mastercard consider the certificate in SET to be the electronic representation of the bank card.

Certificates can be used to connect an individual to any attribute, such as a person to a public key. Examples of off-line certificates include credit cards, driver's licenses, and club membership cards. Just as one person holds many off-line certificates, one person can hold multiple on-line certificates.

With a certificate, key management concerns are the validity of the attribute/key linkage; the length of the root key; the length of individual keys; the number of roots; and the lifetime of the certificate. For a certificate to be valid, it has to have integrity and authentication must be possible. A signature from the root or any trusted authority can provide both of these, given that the root key is secure and the information in the certificate is still valid at time of use.

An attacker can use an otherwise valid certificate if the associated secret key has been compromised. It is easy to obtain copies of a certificate, just as it is easy to obtain a phone number, since certificates are public affirmations of attributes. It should be nearly impossible to obtain a secret key. Thus for a certificate to be valid, the secret key corresponding to the certificate must be secure. Also, a certificate may be used fraudulently if the information or the attributes attested to in the certificate are incorrect. This may result from fraud when the certificate was issued, or from a change in information after the certificate was issued (such as the loss of credit privileges).

When certificates are renewed, keys should be changed. The lifetime of a certificate is the time between issuance and expiration. In the case of key compromise, the ability to commit fraud ends with the lifetime of the certificate. This suggests that shortening key lifetime could significantly reduce fraud. However, if certificate lifetime is too short, the cost of constant certificate issuing and the inability to cache certificates may outweigh the benefits of fraud reduction. (See Simpson, 1996 for a discussion of the management of caching policies and certificate lifetimes.)

Key length is also an issue in public key systems. When attempting to break asymmetric keys, the attacker attempts to factor the number that is the public part of the key set. 15 However, it is reasonable to compare asymmetric key lengths for systems based on factoring with symmetric key lengths based on the difficulty of brute force attacks. Recall that when an adversary tries to break a key by trying all combinations it is called a brute force attack. Brute force attacks on symmetric keys consist of guessing numbers, trying to find the key. The larger the key, the more numbers one has to guess. Brute force attacks against public key systems consist of trying to divide the key by different numbers. Considering only the difficulty of brute force attacks, 56 and 112 bit DES keys are roughly equivalent in strength to 384 bit and 1792 bit RSA keys, respectively (Schneier, 1995).
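The arithmetic behind these brute force comparisons is simple doubling: each added bit doubles the number of candidate keys an attacker must try.

```python
def keyspace(bits: int) -> int:
    """Number of distinct keys of a given length in bits."""
    return 2 ** bits

assert keyspace(57) == 2 * keyspace(56)    # one more bit, twice the work
assert keyspace(112) == keyspace(56) ** 2  # 112 bits squares the search space
```

This is why doubling the key length from 56 to 112 bits does far more than double the attacker's work: it multiplies it by 2 to the 56th power.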

There are two basic philosophies in the verification of the attribute/key link attested to by a certificate: hierarchical and nonhierarchical. Most designers of electronic commerce systems use a hierarchical approach. Hierarchical key management systems for general use have been proposed by the United States Postal Service (The Economist, 1996), and Verisign (Verisign, 1996). An example of a nonhierarchical system is used in Pretty Good Privacy (Zimmerman, 1995).

With Pretty Good Privacy, a user publishes his or her key, and other users can endorse this key using their own digital signatures. First a user generates a key. Then the user publicizes that key and endorses it with the corresponding secret key. This first signature/endorsement proves that the person claiming the publicized key has the corresponding secret key. This ensures that no other person can claim the key. 16 The user publicizes the appropriately signed key. Other users then endorse the publicized key/attribute claim by using their own digital signatures.

Public key endorsements create a "Web of Trust" that takes advantage of off-line relationships and reputation. There is no single hierarchy that can verify every user for every situation -- only a set of people vouching for one's goodwill. This creates a network in which a person offers her reputation for proof that a key links to an individual. Thus, once someone has established a reputation in a particular electronic community, her endorsement will be meaningful in that community. However, if a reputation has been established on Salon, the endorsement is meaningless on Slashdot. (Interestingly enough, there is no monetary market for Web of Trust endorsements.)

In the special case of the Web of Trust an endorsement means that the endorser believes the holder of the key corresponds to the claimed identity. There is no implication that the endorser supports the endorsed party as being trustworthy on any count other than the identity/key link. There is no implication that the endorsed party likes or approves of the endorsing party or has asked for or appreciates the endorsement. There is no implication of honor, agreement, or trustworthiness. No person can be prevented from endorsing another's key. Thus public figures have generally avoided the Web of Trust. To see why this is the case, imagine your chagrin at finding your key endorsed by the Aryan Nation hate group. Thus in a Web of Trust each person has limited power to state that an individual is linked to a key. Each additional signature increases the probability that the identity claim of the endorsed party is valid. Some signatures are considered more trustworthy than others. Each person evaluating an endorsed key trusts the endorsers differently. There is no single most trusted key.

Conversely a hierarchical system begins with the assumption that a single source has complete power in stating that a key corresponds to an individual. This trusted party, called the root of the hierarchy, may verify others as having the power to connect individuals to keys; however, every key/identity link is based on the trust of the first party. The mutually trusted party in such a system can provide digitally signed electronic credentials suitable for off-line authentication. These credentials verify that ownership of a public key pair corresponds to an attribute, usually identity. Multiple parties are planning to operate public key hierarchies. Competitors in the market for the provision of electronic credentials for electronic commerce include Verisign, Banker's Trust and the United States Postal Service (Verisign, 1995; The Economist, 1996). If there is a single winner in the competition to be the trusted root, this endows one party with the ability to decide who has a valid existence in the digital world. This would also create a single point of catastrophic failure in what is otherwise a highly distributed system.

The relationship between certificates and trust may develop in an apparently arbitrary manner, as with physical certificates. When the state issues a driver's license, for example, the state's intention is to verify that the holder has the right to operate a vehicle. However, in order to assure that this physical token is not transferred between parties, identity information is added in a human-readable manner. That information includes a photo, age, and hair color. Because of the inclusion of age and photo, driver's licenses are used to verify the right to purchase alcohol and tobacco. Because of the inclusion of the photo, driver's licenses are used to verify identity for, among other things, boarding domestic flights. And because of the unique identity number, driver's licenses are used to verify credit-worthiness when the bearer is writing a check, by assuring that the holder has not previously passed bad checks. Likewise, while the relationship between identity, certificate, purpose, and issuer may seem very limited and constrained at issuance, digital certificates can be used for many purposes.

Many businesses may choose to run their own certificate servers or add attributes to consumers' certificates. If a business wants to give out limited frequent-buyer discounts of some value, insecure technologies such as cookies may prove too prone to fraud for such a purpose. The business may add an attribute to a particular key/identity pair as bound by a certificate, although the certificate may have been intended for an entirely different use. A group of businesses may come together, issue certificates, and then share information on consumers in a direct manner. Currently businesses share unverified information at considerable overhead, and with some risk that other businesses may provide altered data, through cooperatives such as Abacus 17. Sharing a customer certificate could provide the same service, enhanced by the existence of customer authentication and the potential for data verification.

X.509 is the dominant standard for certificates. Table 5.4 shows the required fields in X.509. The required fields determine the attributes or relationships on the basis of which the certificate issuer believes the certificate holder should be trusted. Notice credit-worthiness is not a required field. When reading this table recall the purposes of digital certificates. The distribution of certificates allows the trusted third party to provide off-line verification of multiple attributes. The distribution of the certificate also implies distribution of identity and association information.

Field                                     Purpose
Version                                   version 1, 2, or 3
SerialNumber                              serial number unique within the Issuer, assigned by the Issuer
Signature                                 algorithm used to sign the certificate
Issuer                                    trusted entity which signed the certificate
Validity                                  dates between which the certificate is valid
Subject                                   identity of the valid holder of the certificate
SubjectPublicKeyInfo
  .algorithm.algorithmIdentifier          algorithms for which this certificate is valid
SubjectPublicKeyInfo
  .subjectPublicKey                       public key of the holder of the certificate
IssuerUniqueID                            unique identifier of the trusted entity
SubjectUniqueID                           unique identifier of the holder of the certificate, assigned by the trusted entity
Extensions.extnId                         identifies extensions
Extensions.critical                       Boolean, use described in .extnId above
Extensions.extnValue                      extension data

Table 5.4: Information in a Digital Certificate

The loss of security for the secret root key of a certificate chain results in all certificates in that chain becoming suspect, and all bindings lose their trust. If the secret root key is compromised, an attacker can create or alter certificates; for example, an attacker could copy the public key set of an otherwise valid certificate and thereby obtain the ability to authorize payments on some other person's account.

The party that verifies certificates is referred to as the verifier. Yet this definition creates more questions than it answers. Who is the verifier? And how much does one business trust the verifier from another business? This problem can be solved by building on the distinct trust hierarchies. (Recall that key hierarchies are trust hierarchies.) Interactions between trust hierarchies can be formalized.

Two Certificate Hierarchies and their Trust Relationships

Consider the whimsical example above. The US government and a hypothetical multinational, Very-Corp, may have a relationship based on substantial data sharing at the highest level. However, not all the data shared at a higher level will be shared at a lower level. In fact, even work on joint projects might be prohibited at the lower level. The illustration above suggests that although there may be trust at the highest level, it is one thing to trust another party and another to trust that party's discretion.

Key Length

The second trust management issue, after choosing an algorithm for an application, is key length. Key lengths are measured in bits (binary digits). One binary digit, or bit, can be either zero or one, because digital information is represented in base 2, just as one decimal digit (base ten) can be any number from zero to nine. Two bits can represent 0-3, as two decimal digits can represent the numbers 0-99. The largest number that can be represented in a key with n bits is 2^n - 1.

Here are some numbers in binary and base ten.

decimal 10 = 1 x 10            binary 1010  = 1x8 + 1x2
decimal  2 = 2 x 1             binary 10    = 1x2
decimal 16 = 1 x 10 + 6 x 1    binary 10000 = 1x16
decimal  5 = 5 x 1             binary 101   = 1x4 + 1x1

Thus representing a number in binary requires more digits than representing that same number in decimal.
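The examples above can be checked directly; n bits represent the values 0 through 2^n - 1, and the same number needs more binary digits than decimal digits.

```python
# Python's bin() gives the binary representation, prefixed with '0b'.
assert bin(10) == '0b1010'       # 10 = 1x8 + 1x2
assert bin(5) == '0b101'         # 5 = 1x4 + 1x1
assert bin(16) == '0b10000'      # 16 = 1x16
assert 2 ** 4 - 1 == 15          # largest value representable in 4 bits

# 1000 takes ten binary digits but only four decimal digits.
assert len(bin(1000)) - 2 == 10
```

In general a decimal number needs roughly 3.3 times as many binary digits, since each decimal digit carries log2(10) bits of information.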

Breaking a key, like breaking a safe, is a function of time and money. The price/speed ratio for key breaking is linear (Schneier, 1995). Thus, optimal key length depends on how long information must be protected and the value of that information if the key is broken. Key length is not a function of the duration of the transaction, since information may be stored long after the transaction is over. If a consumer wants to encrypt a credit card number and protect the credit card information for ten years, then she should select a key that will withstand a brute force attack funded by an amount equal to ten times her credit limit. The decrease over time in the cost of processing power must be a part of this calculation. For example: the credit limit is $10,000, the card expires in five years, and the consumer is using DES (National Bureau of Standards, 1977). In this case the consumer would want a key large enough that spending $100,000 would not break the encryption in five years: i.e. an eighty bit key. 18 Note that the factor of ten is a result of the decrease in the price of computing power described by Moore's Law 19 (Schneier, 1995). As an example, consider the Data Encryption Standard, or DES. DES is a widely used symmetric encryption algorithm. As a function of the way in which DES is designed, DES keys must be 56 bits or a multiple of 56 bits, so an eighty bit minimum implies a 112 bit key.
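The Moore's Law adjustment in this calculation can be sketched as follows. The eighteen-month doubling period is the conventional assumption (not a figure from the text), and extra_bits gives the key bits needed to offset a given growth in attacker computing power.

```python
import math

def moore_factor(years: float, doubling_period: float = 1.5) -> float:
    """Growth in computing power per dollar over `years`, assuming an
    eighteen-month (1.5 year) doubling period."""
    return 2 ** (years / doubling_period)

def extra_bits(growth: float) -> int:
    """Additional key bits needed to offset a given growth in attacker
    computing power; each extra bit doubles the brute force work."""
    return math.ceil(math.log2(growth))

assert moore_factor(1.5) == 2.0           # one doubling period: twice the power
assert extra_bits(2) == 1                 # offset a doubling with one extra bit
assert extra_bits(moore_factor(15)) == 10  # ten doublings over fifteen years
```

The practical lesson is that a handful of extra key bits buys decades of protection against cheaper hardware, which is why key length recommendations are stated for a protection horizon rather than a transaction.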

Of course in practice there are off-line controls, including laws that assign the burden of security loss. Thus instead of a credit limit, the consumer might want to substitute $50 in the calculation, since under current law the consumer could only lose $50 if her credit card number were stolen. However, some party will take the loss for unauthorized credit card use, and the maximum value of that loss provides a conservative estimate for the key length required to defend against a brute force attack. The fact that any individual credit card transaction will take only minutes is not a guideline for key generation.

Once key length has been determined, how does one distribute the keys? By definition there is not already a shared key. Simply sending the key would provide no security, since any observer of future transactions would have a copy of the symmetric key. This problem has been addressed using a common trusted entity who can generate keys and has already independently authenticated both parties. In practice this would be the bank or financial services provider for a commerce protocol.

Credentials usually link an identity to an attribute. Credentials can also be used pseudonymously. Unlike a pseudonym alone, credentials by definition provide membership information, thereby giving partial identity information. For example, a user of a Carnegie Mellon University discount is one of seven thousand, not just one of millions. A user at the Kennedy School of Government is from an even smaller set.

Pseudonymity & Anonymity

The analysis of microdata offers the possibility of obtaining information about an individual's travels, beliefs, financial status, and any medical conditions. However, this information can be collected only if it is possible to link information from a transaction to the individual taking part in the transaction. Two ways to prevent such linkage are anonymity and pseudonymity. (Much of this overview is based on Froomkin, 1995.)

Despite its critical role in privacy, identity is merely another data field in an electronic information system. Any information such a system collects may be hidden during a transaction. When the identity of the customer is hidden, then that transaction is anonymous. Identity includes any user attribute that can be easily linked to a specific individual: user id and domain name, Social Security number, or IP address of a single-user machine.

Anonymity means that the identity of a party involved in a transaction cannot be determined during or after that transaction. Conditional anonymity means that a party's identity cannot be determined during a transaction, but may be determined afterwards with the cooperation of one or more record-keeping parties. True anonymity is technically feasible for electronic commerce (e.g. one mechanism for anonymous and pseudonymous communication was proposed by Cox, 1994), but for reasons of law enforcement such anonymity may not be desirable. In fact, anonymity is illegal for some transactions in many jurisdictions, including the United States. In the absence of legal protection, anonymity offers consumers the only protection against data surveillance. Unfortunately, widespread availability and use of anonymity has its own dangers. A recipient of an anonymous electronic threat knows this truth too well.

Pseudonyms are aliases. Pseudonyms may provide continuity in an otherwise anonymous environment, or they may be a special case of conditional anonymity. Pseudonymity means that a customer can be identified by a pseudonym during a specific transaction or set of transactions, but the user's actual identity cannot be determined. A pseudonym may provide authorization or identify certain attributes (for a discount for repeated use, for example). A user may choose to have a unique pseudonym for each transaction, to use the same pseudonym for multiple transactions, or to have a pseudonym for each merchant. Without a delivery address, or with an intermediary that hides the delivery address, a pseudonym provides no identity information. Traceable pseudonymity means that the chosen alias can be linked to the user's true identity. Many so-called anonymous remailers 20 really provide traceable pseudonyms, since the records of the remailer can reveal the identity of the user of the service.

The value of a pseudonym in terms of privacy protection is a function of the frequency, duration, and breadth of its use. A pseudonym used many times in multiple situations becomes equivalent to identity. For example, the use of Social Security numbers has become so common that these numbers are now equivalent to identity; and like identity, Social Security numbers are linked to many attributes.

Microdata Security

Identity information can be maintained in separate locations. Any data can be maintained in separate locations, even cryptographic keys. Many of us do this in our daily lives. One person may be a member of a school board, a company, and a beer-making group. A person may be a parent, a partisan, a volunteer for the homeless, a stockbroker, and a member of a religious organization. Everyone who is both a citizen and an employee should be able to easily shield religious and political beliefs from employers, and employer reviews from his or her religious community.

The fragments of information about us that are distributed in different settings and roles are called microdata. In electronic commerce systems, personal data is distributed transaction by transaction. Information on any one transaction usually provides very limited information about a consumer. However, compiled transactional data can provide a detailed summary of a consumer's habits, preferences, income, and possibly beliefs.

Microdata security is the protection of the identity or attributes of individuals, be these individuals citizens, customers, or businesses. Microdata security is compromised if information about a specific entity can be obtained either from data sets where such specific queries are prohibited, or from the correlation of information across data sets. Microdata security is an issue of hiding fragments of information in such a manner and in enough different places so that the big picture is hidden as well.

Microdata security focuses on disclosure. A disclosure of information is not necessarily a violation of microdata security, and a violation of privacy may not always be a disclosure of information. For example, TRW, a credit information agency, collects data about purchasing patterns of individuals both to provide credit references for consumers upon request and to market that information, which may be a violation of privacy. The first purpose is arguably a service to the consumer when the consumer is trying to obtain credit. But both uses of the data can be properly referred to as disclosures of consumer information.

Microdata security is concerned with all types of disclosure without consideration of intent. There are four types of microdata disclosure: identity disclosure, attribute disclosure, inferential disclosure, and population disclosure (Duncan and Lambert, 1989).

Identity disclosure is the release of information clearly associated with an individual. A university's release of the Social Security numbers of its students would constitute identity disclosure. Given that customers can be identified through their digital certificates, sending certificates without confidentiality-protecting encryption is identity disclosure. If an observer can watch a single person browsing and buying, that observer can determine where that browser client connects to a server. If the person directing that browser subsequently sends her certificate, there has been identity disclosure. When an observer watches a particular server, then the observer can identify all the users who send certificates, assuming those certificates are initially sent in the clear. Thus an observer could collect microdata to determine a customer's browsing and shopping habits, or watch a business to determine a merchant's customer base.

Attribute disclosure occurs when linking a record with an individual provides additional information about that individual. As an example, the Social Security numbers of the students mentioned above may allow anyone receiving that information to obtain the students' credit, employment, or medical history. Attribute disclosure has been a primary concern in the widespread use of Social Security numbers and other universal identifiers. When an IP address can be linked to anything from income range to identity, the result is attribute disclosure.

Inferential disclosure is the release of information that does not identify associated individuals. This does not mean the data have no unique identifiers, only that those identifiers cannot be linked to specific people. For example, the records of the New Haven needle exchange program use code names and therefore do not specifically identify a person, yet keep records over time (Kaplan, 1991; Kaylin, 1992). The concern over inferential disclosure is that, given access to a set of attributes, identity disclosure may occur. For example, some databases are made 'anonymous' by clearing the names. Yet this does not make the data set anonymous. Just three data points (current location, date of birth, and location of birth) will uniquely identify a significant subset of the population. Thus simple removal of identifiers is not adequate to protect identity information.
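
The re-identification risk described above can be shown with toy data: a record set made 'anonymous' by clearing the names is joined to a public list (such as a voter roll) on the quasi-identifiers that remain. Every name and value below is invented for illustration.

```python
# Records "anonymized" by clearing names; quasi-identifiers remain.
anonymized = [
    {"zip": "06511", "dob": "1961-07-31", "sex": "F", "diagnosis": "flu"},
    {"zip": "06511", "dob": "1958-03-02", "sex": "M", "diagnosis": "asthma"},
]

# A public list keyed by the same quasi-identifiers.
public = [
    {"name": "A. Smith", "zip": "06511", "dob": "1961-07-31", "sex": "F"},
]

def reidentify(anon_records, public_records):
    """Link 'anonymous' records to names by joining on (zip, dob, sex)."""
    index = {(p["zip"], p["dob"], p["sex"]): p["name"] for p in public_records}
    return [
        (index.get((r["zip"], r["dob"], r["sex"])), r["diagnosis"])
        for r in anon_records
    ]

matches = reidentify(anonymized, public)
# The first 'anonymous' record is now linked to a name and a diagnosis.
```

No field in either data set is an explicit identifier, yet the join restores the link between a person and a sensitive attribute.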

In commerce this is primarily a customer's concern rather than a merchant's concern, as merchants like their identities advertised. Some systems offer pseudonyms for customers who want to mask their identity. Note that a pseudonym is only effective in protecting privacy if the user does not deploy it so widely that inferential disclosure is enabled. For example, politicians and celebrities often seek pseudonyms that replace their names in the minds of consumers and voters. Through frequent use a pseudonym becomes a dominant identifier: Prince, "The Body" Ventura, Vanilla Ice.

Population disclosure is the release of information associated with a defined population. Depending on the size of the population and the information released, population disclosure can result in privacy violations. In a large population privacy violations through information releases are unlikely, but release of information about a sufficiently small population may enable someone who has the information to link it to specific individuals or make specific inferences. For example, release of the mean and standard deviation of the salaries of five employees risks violating privacy because of the sample size. A release of the same information about five hundred employees would pose a much lower risk to privacy. A well-documented example of population disclosure of innately sensitive material was the release of the name of a high school with a high number of HIV-positive students. This left students at the high school open to harassment (McGraw, 1992). Frequency of disclosure is an issue as well; repeated population disclosure can result in attribute or identity disclosure even with large populations. An example of repeated disclosures that can violate privacy is the release of the average salary in an institution immediately before and after one person joins the staff. A curious colleague could use the change in the average salary and the fact that there is one more employee to calculate the salary of the newly hired employee. For a business, an example would be watching for changes in a competitor's delivery time commitments among purchases to infer stock levels for various items.
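
The salary example is simple arithmetic, which the sketch below makes explicit; the figures are invented for illustration.

```python
def inferred_salary(avg_before: float, n_before: int, avg_after: float) -> float:
    """Recover a newcomer's salary from two aggregate releases.

    Each average alone discloses nothing about any individual, but the
    difference between the two implied totals is exactly the new salary.
    """
    total_before = avg_before * n_before
    total_after = avg_after * (n_before + 1)
    return total_after - total_before

# Ten employees average $50,000; after one hire the average is $51,000.
# The newcomer's salary is 11 * 51,000 - 10 * 50,000 = $61,000.
```

Two releases that each protect the group thus combine into an identity-linked disclosure about one person.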

The risk of identity disclosure from the release of data is difficult to calculate. This risk can be reduced, but not eliminated, through masking (Duncan and Lambert, 1986). Methods of masking include adding bogus records that leave aggregate values unchanged, swapping values among different people in the data set, and removing identifiers. However, a significant risk of population and inferential disclosure may remain even after masking. Furthermore, an observer who collects data about personal or business habits through long-term observation certainly has no interest in masking such data. Imagine someone following one person around each and every day and observing all purchases she makes, or a competitor standing at a business's doorway and noting the attributes and purchases of all customers. Such observation of businesses or individuals can be automated with Web commerce. Businesses and individuals can reduce the risks of observation. Businesses can regularly change catalogue items, protect customer identity through use of secure connections, offer guarantees of delivery times only in encrypted form, or encourage customers to use business-specific pseudonyms after the first visit. (These pseudonyms could refer to a certificate on file, reducing the need to send certificates.) Of course the best approach is to ensure that customers, or rather the chosen commerce systems, protect all stages of the transaction through encryption.
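
One of the masking methods mentioned, changing values among different people in the data set, can be sketched as a value swap that leaves every aggregate over the field intact. The records, field names, and seed below are illustrative, and real masking schemes are considerably more careful than this.

```python
import random

def mask_by_swapping(records, field, seed=0):
    """Shuffle one field's values among the records.

    After masking, no individual row necessarily describes a real person,
    but any aggregate over the field (sum, mean, distribution) is unchanged.
    """
    rng = random.Random(seed)  # fixed seed only for reproducibility here
    values = [r[field] for r in records]
    rng.shuffle(values)
    return [{**r, field: v} for r, v in zip(records, values)]

people = [
    {"name": "a", "income": 30},
    {"name": "b", "income": 90},
    {"name": "c", "income": 60},
]
masked = mask_by_swapping(people, "income")
```

As the text warns, inferential risk remains: the full distribution of incomes is still released, and small populations may still be re-identifiable.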

Studies of disclosure offer useful definitions and methods for developing privacy-protecting systems. Microdata security offers insight into the threats to privacy that can result from a data compilation. The microdata security paradigm recognizes the different threats to privacy created by compilations of different types of data and identifies some vulnerabilities of data compilations to privacy-violating misuse. The study of microdata security illustrates that the release of a single element of information must be considered in the context of all other possible data releases and not as an isolated incident. The application to transactional records is clear.


6: Privacy Perspectives

This chapter as well as chapter seven focuses on privacy. One takes the risk of compromising privacy when one trusts another with identity information as well as attribute or action information. Consumers understand that their privacy is not protected on the Internet. In commerce a company as well as a customer can be subject to privacy violations. In fact, privacy is repeatedly identified as a concern that prevents consumers from using the Internet for transactions. The Privacy Protection Commission Study (1977) identified electronic commerce as offering a particularly strong surveillance threat. The report of the Commission stated that a centralized electronic funds transfer system would be "an unparalleled threat to personal privacy" and "a highly effective tool for keeping track of people and enforcing 'correct' behavior." The seventh chapter in particular focuses on the legal dimensions of privacy, and the values that underlie the legal construction of privacy.

This sixth chapter begins with an abbreviated and generic discussion of privacy with respect to information systems, and moves on to a more detailed analysis of what information is available in the specific case of Web browsing. This discussion shows how merchants as well as customers can lose information privacy on the Web. Also discussed in this chapter is the European perspective on privacy: that the issue is one of data protection rather than privacy. The United States and Europe use the same hardware, the same operating systems, the same applications, and sometimes even the same computer science textbooks. Yet fundamentally different assumptions govern the manipulation of user data within the United States and Europe. The United States has a rights-based and property-based concept of privacy. To pick one European country, the Netherlands has a greater respect for privacy and less concern for property rights; to consider another, the United Kingdom has a practical approach based on the goal of data protection. The discussion in this chapter focuses on the European Community's approach to the privacy issue.

This chapter also includes a limited discussion of the perspectives of those who would limit privacy by requiring data collection. The viewpoints of law enforcement, business interests, system designers, and civil libertarians illuminate the conflicts between privacy and data availability. This brief discussion leads into the following chapters by illustrating the need for data collection. Without some data collection there is no accountability: no accountability for payment, no accountability for promises of merchandise delivery, and no accountability for fraud. Thus, there exists a tension with respect to trust in terms of privacy and data availability. Complete data surveillance means an extremely wide extension of trust, as data are easily correlated and searched by all observers. An absence of data also requires an extension of trust, as accountability is limited by the absence of data concerning the identity of those who perform various electronic actions.

The following chapter (eight) discusses data collection. Ready availability of identity-correlated data is the opposite of privacy. For purposes of Internet commerce, the focus is on required governmental data collection for financial transactions. The conflict between privacy and accountability is clear in legal requirements for financial transactions: there exist both constraints on and requirements for disclosure. The entire assortment of statutory and regulatory constraints that can apply to electronic commerce is too immense to discuss here, and the reader is referred to a plethora of publications on the subject of regulatory law. Regulatory compliance is achieved through required technical mechanisms, but laws and regulations typically have underlying social motivations that range from providing capital for preferred purposes to preventing money laundering. The reporting requirements selected for discussion here are classified first in terms of their expressed goals and then in terms of the technical means used to achieve those goals. The discussion focuses on the United States, because it is a leader in consumer protection (although a follower in privacy protection) in the world community.

Governmental data collection for law enforcement purposes is explicitly data collection to ensure accountability. Thus law enforcement data requirements make a good baseline for the information available for governance; yet they are only an approximate substitute. To help close this gap a discussion of alternative methods for achieving accountability follows the discussion of required data collection for governance in the system analyses.

Law Enforcement: Trust Us

Government has a need for information in order to accomplish its legitimate purposes. The range of and reasons for data requirements for governance are detailed in chapter eight. Perhaps the greatest source of conflict between privacy protection and data availability has been in law enforcement, perhaps because accountability for criminal acts is an area where the need for accountability is greatest and the desire for accountability among the participants (meaning the criminals) is least. Law enforcement opposes anonymity. The law enforcement community is charged with ensuring that individuals responsible for specific acts can be identified and held responsible. Increasing data availability makes detecting patterns of illegal activity and pursuing the appropriate parties easier. A range of reporting requirements has been created to serve the needs of law enforcement.

By definition, law enforcement consists of individuals who have committed their professional lives to serving the government by punishing those who commit crimes. Certainly some police become brutal, disillusioned, and corrupt, but few select this line of employment with the goal of becoming corrupt, petty tyrants. Ideally the ones who succeed are honest. Having sworn to risk life and limb for law and order, they see every reason why law enforcement should be trusted with detailed surveillance information.

The law enforcement community has a set of data requirements that are made explicit in carefully crafted requirements for data, as explained in chapter eight. In particular, law enforcement has a clear interest in preventing anonymity. Anonymity and pseudonymity are related to risk because law enforcement views knowledge of identity as a way to reduce risk. Identity information on those who break the law is necessary for punishment or retribution. The commonly known threat "I know where you live" implies that identity ensures culpability. It also illustrates that giving another knowledge about one's identity requires trusting the recipient of the knowledge to use it responsibly and not to mishandle it. Identity information and privacy are inexorably linked.

Basic safeguards such as a prohibition on anonymous bank accounts and limited anonymity in purchases fulfill the needs of law enforcement to preserve accountability. Anonymous electronic funds transfer mechanisms cannot appropriately be evaluated without considering the reality of money laundering. $500 billion annually is laundered in the United States, with 80% of that being drug money (Bickford, 1996). The ease of smurfing 21 makes the traditional simple limits on funds transfers inadequate in the electronic realm (Office of Technology Assessment, 1995).

Although the courts have found general limits on anonymous speech unconstitutional, general limits on anonymous financial transactions have been deemed reasonable. Laws limit the scale of anonymous transactions, impose record-keeping requirements, and require the maintenance of identity information. However, the requirements of law enforcement have not prevented all use of anonymous electronic currency. For example, the anonymous currency Digicash has been offered by Mark Twain Bank for years. The approval of Digicash for use in the United States is based on two factors. First, there exist size limits on anonymous transfers, which include electronic transactions. Second, Digicash can only be used once before deposit. This means that Digicash can go through a single transaction, but cannot go through a chain of transactions. Thus in every transaction Digicash returns to fully traceable banking channels.

The use of anonymous encrypted communication can allow widely distributed individuals to plan and implement illegal activities without any fear of surveillance. Modern porous international borders result in an inability to contain regional conflicts, and separatist conflicts may result in deaths on another continent. Internationally interconnected networks have magnified the ability of one individual to cause harm (Baird, 1996). In contrast, the ability to seek information without governmental oversight is a core principle of democracy. If citizens cannot listen to another's speech without being subject to surveillance, the right to free speech is undermined (Cohen, 1996).

Like legitimate businesses, criminal organizations are becoming leaner and meaner. Because of computing and communications technologies, criminal organizations need fewer people and are therefore more difficult to penetrate (Bickford, 1996). Their ability to move money without a trace, aided by these same technologies, makes observing their actions, or even locating them, difficult.

International criminal organizations may be assisted by criminal governments, thus Americans cannot always depend on foreign governments to protect their interests. Law enforcement and national security are increasingly interdependent, and the lack of coordination and information between these two entities can be costly. Collapsing empires result in the rise of organized crime to enforce property and contract rights that the government cannot enforce. These criminal organizations can then create international corruption (Rodman, 1996).

Criminal governments will use the same tools for built-in surveillance that legitimate law enforcement uses for its purposes. There is no way to build a surveillance infrastructure and prevent its misuse. The most glaring example of this is the use of cryptography by human rights groups in countries with repressive regimes, groups with true valor but no great wealth of computing skills. Groups and organizations monitoring governmental corruption need protection specifically from what is for them local law enforcement.

Conflicts between anonymity and the need for information for law enforcement purposes are inevitable. Perfect surveillance makes any crime easy to solve -- except for those crimes committed by those empowered to watch. Perfect surveillance ensures that criminals are more vulnerable than police. However, in many nations the most violent crimes, including disappearances and torture, are committed by the police state.

Perfect surveillance is an ideal solution for balancing privacy and accountability -- but only if those in charge of surveillance can be perfectly trusted. Mandatory reporting of data requires an extension of trust to the government. One must trust the intentions, the judgment, the technical competence, and the data security competence of the data collectors.

Governmental data surveillance assumes that the information monitored or collected will be used only as intended and will not be used to harm the individual. Because of conflicts between government oversight and privacy, however, this assumption has sometimes been sadly wrong. Consider two cases in which information collected for financial accounting was used for purposes clearly against the interest of the subjects of the information. In Minnesota Medical Associations v. the Catholic Bulletin Publishing Company, the Catholic Bulletin Publishing Company requested the names of all doctors, hospitals, and clinics who had been reimbursed for publicly funded abortions. The Company would not state the purpose for requesting the information or describe how it was to be used; however, it is reasonable to assume that it would be used to harass these women's health services providers, their practices, and their families. In another case, Industrial Foundation of the South v. Texas Industrial Board, the industrial group obtained access to the names of every worker who had filed for worker's compensation, probably for the purpose of employment discrimination. In both cases the data in question had been compiled in compliance with fiscal oversight requirements. In both cases judicial oversight forced the release of the information despite the intent of the recipients of the information to harm the subjects. Information technology made duplicating the information a trivial matter; thus arguments used in past decades against compliance with such demands for information (that record duplication was prohibitively expensive) no longer held. Applications of advanced encryption technology would have made these lapses of judicial wisdom less harmful by obscuring identity information in the data compilations.

Clearly when data is necessary to govern there must be some trust in government. Norms of disclosure that respect privacy conflict with the desire for open government, the "right to know". Even the most reasonable collection of data for billing creates the possibility of misuse in a way that is harmful to the subject. In such cases the common interest in order is in conflict with the common interests in privacy and freedom.

The Business Community: Trust Me

Providing data means trusting the watchers, in both the private and public sectors. Companies collect information for both primary (their own) and secondary (for example, somebody else's) use. The most obvious object of primary data collection is repeat purchasers. Amazon collects detailed purchasing and browsing information about buyers and then offers them books it believes may interest them based on the information it collects. Again a conflict arises, in this case between customers' desire for tailored service and their desire for privacy. There exists a profitable secondary market for consumer information. Companies that gather customer information profit from both its internal use and the ability to market it to others. Both companies and consumers profit when targeted advertising results in a transaction.

Businesses most often use customer data to better serve the needs of customers. Those in direct mail consider their services a benefit to the consumer, with direct marketing identifying opportunities of which customers may be unaware. To choose protecting privacy over the advantages of collecting and reusing customer data, a business must perceive such privacy protection as the more valuable of the two. Respect for privacy has long-term benefits, whereas the market for consumer data offers immediate profit.

Regulatory limits on the use of consumer transactional data would create an economic loss for those who market such data. Thus many merchants, including those that sell financial data, would oppose regulatory limits on the collection, analysis, and disclosure of consumer information. What effect limiting the flow of consumer data would have on an information economy cannot be foreseen, and the uncertainty is greater than usually acknowledged by either side. Neither the order of magnitude nor the direction of the effects of privacy regulation on Internet commerce can be predicted. Consider the effects of the Electronic Funds Transfer Act, which limited consumer losses in electronic transactions to $50 per credit card. Without this regulation the credit card industry would never have expanded to today's levels, and fear of fraud would have smothered Internet commerce at birth. Yet at the time it was enacted the assumption by the banking associations was that such regulation would dampen the credit card market.

The business community has provided, and will continue to provide, anonymity to those willing to purchase such a service. Among the host of credit cards with varying policies for secondary disclosure of consumer transactional information, customers willing to invest time and effort in a search can find those whose policies offer the desired level of privacy or anonymity. On the Internet, those interested in anonymity can use the anonymous token currency Digicash, for the cost of Mark Twain's services.

The market has not uniformly resisted consumer privacy protections. For example, Microsoft has pledged to follow the standards set forth in the European privacy directive, as described later in this chapter. Netscape addressed the possible misuse of anonymous FTP as quickly as possible after becoming aware of it, in the release of Netscape Navigator 2.01. Companies that can offer credible assurances that no party will surreptitiously obtain additional data during transactions could profit from an increase in the use of, and trust in, network services. No network services provider or business profits when its customers are subject to third-party surveillance. Thus companies might cooperate to create a floor of minimum privacy protection, and provide the equivalent of the Better Business Bureau seal to assist their customers in choosing trustworthy merchants. Currently such efforts are under way not only by the Better Business Bureau but also by TRUSTe (http://www.etrust.com/).

Externalities may exist with widespread implementation of anonymity. That is, there may exist a critical mass that must be reached in terms of the number of consumers demanding anonymity before there are profits in distributing anonymous software or providing anonymity through secure intermediaries. Thus new price paradigms that recognize the existence of positive network externalities may be needed. If this is the case, and there is a powerful market for privacy-protecting services, that market may yet be served by market forces. Many customers may be willing to pay a price, but not the premium that Mark Twain would charge, for privacy.

The business community has a fairly uniform perspective concerning the prohibition on the export of encryption hardware or software (United States Council for International Business, 1993). Export of encryption allows producers of software, hardware, and systems to take advantage of a traditional American strength in serving the global market. The prohibition of the export of cryptography without built-in surveillance hurts businesses by preventing them from effectively serving these markets. Thus the business community supports the free export of cryptography, which enables anonymity.

System Designers: Ignore Me

In a computer network the ability to observe and record users' choices depends in large part on the configuration of each system. Implicit assumptions about the value of privacy are made explicit in technical details such as file defaults, Usenet newsgroup selection, and provision of anonymous mail.

Electronic information system providers face a fundamental tension. To preserve the privacy of users, information on system use should be as secret as possible. Yet system administrators need to collect detailed information on usage to tune and improve the system. This tension shows up repeatedly in information systems ranging from national Census records to private medical information networks (Compaine, 1988). System administrators' ability to evaluate and improve software, thus providing user-friendly interfaces, reliable service, and efficient systems, depends on the ability to monitor software use. Because obtaining the necessary information requires observing individual users over time as they adapt to information systems, there apparently exists a Hobson's choice between privacy and access to usage data.

When patrons access a Web page, they reveal information about their preferences and ideas. When used judiciously, usage data can provide information that helps administrators improve the performance of the information system, provide tailored service, and ensure faster response times. But collection of such data also creates the risk of abuse: this information can also be used to spy on or invade the privacy of a Web surfer.

Masking the identity of the user is, unfortunately, not a workable solution; information about the ways in which people change their use of the system over time requires a correlation between current and historical system use. Ensuring that changes in usage patterns result from changes in user habits rather than changes in the user population requires that the behavior of specific users be identified. Masking user identity prevents this type of longitudinal analysis.
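
A partial compromise, offered here only as a sketch, is to log a keyed-hash token in place of the raw identity: the token is stable across sessions, which preserves longitudinal analysis, while the identity itself never enters the usage records. The key, the user identifiers, and the truncation length are all assumptions made for the example.

```python
import hashlib
import hmac

LOG_KEY = b"held-by-the-administrator-only"  # illustrative secret

def log_pseudonym(user_id: str) -> str:
    """Replace a raw identity with a stable opaque token before logging."""
    return hmac.new(LOG_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:12]

# The same user yields the same token across sessions, so usage patterns
# can be correlated over time; different users yield different tokens.
token_1 = log_pseudonym("alice@example.org")
token_2 = log_pseudonym("alice@example.org")
token_3 = log_pseudonym("bob@example.org")
```

Note that whoever holds the key can still recompute tokens from candidate identities, so this is traceable pseudonymity in the sense defined earlier, not anonymity.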

For the special case of information systems providing access to on-line databases, information providers often require user identification to ensure that the licensing agreement is being enforced. For information services providers, in general, user identification is a necessary part of billing.

Identifying individuals also allows people to be tracked by type. This information can be used to determine, for example, whether only members of a particular demographic group are using particular services. The apparent inverse correlation between privacy of personal information and availability of data has been the subject of considerable study from both the technical and sociological perspectives (Herlihy, 1991; Herlihy, 1987; Randell, 1983; Marx, 1986; Pool, 1983; Sproull, 1991). There exists considerable technology that can be used to protect privacy, for example the anonymizer (www.anonymizer.com).

There exist mathematical techniques that can resolve conflicts between user privacy and the legitimate need for access to data in an electronic system within a nation's borders. These same techniques have the potential to prevent international conflict when information collected regularly in one nation is defined as private data by another. The same technology that creates new conflicts if used without consideration of policy implications can instead resolve old conflicts if innovation is combined with respect for and awareness of international differences. Unfortunately these techniques are expensive and difficult to implement, so they are infrequently used.

Social Critics: Trust for the Common Good

Civil libertarians are both strong advocates of privacy and strong supporters of social goals unrelated to privacy. Both the potential for surveillance and the effect of a perception of surveillance concern them. Consider the impact of the proposal put forth by law enforcement that would ensure access to clear text of all communications (which will increasingly be financial transactions) using key escrow. The establishment of a governmental electronic funds transfer service was considered and rejected by the Privacy Protection Commission. The Commission objected that such a system would result in government surveillance and thus enable government to easily prescribe "correct" behavior (Privacy Protection Commission, 1977). Key escrow for access to consumer financial transactions poses the same threat.

Secondary disclosure of information includes disclosure to the government. In Lamont v. Postmaster General 22 the Supreme Court noted that observation by the federal government has a chilling effect on the pursuit of information. There is no reason for this to change as information becomes electronic rather than paper-based. The decision in Lamont v. Postmaster General applied to both free and purchased information. Civil libertarians are fighting for recognition that the principle that freedom under surveillance is not true freedom applies to electronic transactions as well.

From the civil libertarian perspective, law enforcement requirements have served only to limit the availability of security and privacy through constraints on cryptography. The prohibition on exporting encryption technology has had ubiquitous effects. This prohibition effectively prevents strong cryptography for the protection of privacy from being implemented. The ubiquitous use of public key Kerberos, for example, would be prohibited. The advantage of public key Kerberos is that it requires no central key authority -- and it is precisely this lack of a central key authority that causes law enforcement concerns.

Civil libertarians note that with the proliferation of information technology, cryptography is no longer a predominantly military technology. The list of uses for cryptography now more resembles the broad range of applications for internal combustion than the narrow focus of ballistic missile technology. Cryptography is used in every electronic commerce system. Export prohibition has prevented security from being an integral part of operating systems and software for Internet access from desktop machines, and thus limited privacy.

From the perspective of civil libertarians, that law enforcement is constrained from unreasonable search and seizure should not mean that citizens have to live with a network designed to make reasonable search and seizure simple. Citizens still have the right to avoid law enforcement access, according to civil libertarians, without a presumption of guilt.

Civil libertarians are also concerned about use of consumer data by the business community as well as government. Civil libertarians would applaud constraints on secondary use of consumer data. However, they also recognize the need for data to meet social needs, such as preventing discrimination, and thus support some federal oversight which requires some data collection. Civil libertarians also seek to protect consumers' economic rights, so concerns about reliability will affect their support for privacy. Civil libertarians are the most likely supporters of advanced but expensive technical solutions, such as anonymous certified delivery, to problems of privacy and reliability.

Civil libertarians support the removal of constraints on the use of strong cryptography even for international discussions. While the fight for consumer privacy may often result in conflict between civil libertarians and the business community, they are united in their opposition to the prohibition of the export of strong cryptography.

Europeans: Limit Trust

The European Directive on Protection of Personal Data, released on July 25, 1995, was an attempt to unify the laws on data protection within the European Community. The difference in language here is critical: in the United States the debate centers on privacy concerns whereas in the European Community the debate concerns data protection. The conflicts that can arise over data flows between the European Community and the United States may be even more severe than the previous conflicts within Europe if these fundamental differences in perspective aggravate the explicit legal differences. I argue that these differences are not insurmountable; that these conflicts are socially and culturally, if not politically, resolvable.

There is some argument whether the issue involves a consistent regulatory approach to privacy or a trade barrier. I will assume the former, although the opinion that the European approach to data protection is more about the protection of European industry from trade than the protection of consumer data is sufficiently popular that it requires mention.

The European directive has resolved the divergent approaches across the continent in the data protection paradigm. The US still protects consumer privacy but considers data corporate property. What do these different viewpoints on privacy and property mean in practice? Can users expect data about their electronic habits to receive the same respect on both sides of the Atlantic? Do European law and practice offer users more protection in daily life than U.S. law and practice? How do U.S. and European concepts of privacy provide secure access to electronic media? Is data protection merely a trade barrier wolf in civil libertarian sheep's clothing?

One weakness of the rights perspective is that once a person freely chooses to disclose information for a specific purpose, information is no longer considered private and can be disclosed generally. Examples of privacy-threatening secondary use of such data abound. One strength of the rights perspective is that certain questions are prohibited. For example, a store cannot require any customer information for a cash purchase of most goods. How have these strengths and weaknesses extended into and even been compounded by the widespread use of information technology? What policies are needed in this new electronic realm? For example, will the registration information including television viewing, Web browsing habits, and specific daily browsing sent to Microsoft by WebTV be acceptable to the American computer user? Will the appropriation and dissemination of these same data be acceptable to the British computer user? How do attitudes differ?

Even asking these questions suggests that significant philosophical differences exist between European and American approaches to privacy. In actuality, the difference is in the regulation of privacy, not the presumptions of privacy in the two cultures.

European regulation prevents secondary use of data. That is, the regulation requires that data be collected only for a specific purpose and not then used for another purpose, although there is a significant exception for fair use (historical, statistical, or scientific purposes). The Directive requires that information collected be accurate, that only data necessary for the stated explicit purpose be collected, that data be made anonymous when possible, and that information be deleted when no longer useful for the original purpose. The Directive defines the ways in which data can be collected. The Directive defines the characteristics of mechanisms which will fulfill these principles.

The Directive limits the collection or processing of data in the absence of the subject's consent, but presents a large number of exceptions to this requirement: when the data are necessary for the performance of a contract or for entering into a contract; when necessary for compliance with a legal obligation; or when the data are a necessary part of tasks "in the public interest". Data can also be processed for the vital interests of the data subject. "Vital" is not defined in the Directive, however. Is a commercial interest "vital" or does "vital" apply only to health information? The final and by far the largest exception is "processing is necessary for the purposes of the legitimate interests pursued by the controller or by the third party or parties to whom the data are disclosed."

Although the First Amendment is uniquely an American institution, the European Directive recognizes the conflict between speech and privacy. This conflict arises when speech by one person is about another, for then the speech of one may be a privacy violation of another. The Directive states:

"Member States shall provide for exemptions or derogations from the provisions of this Chapter, Chapter IV and Chapter VI for the processing of personal data carried out solely for journalistic purposes or the purpose of artistic or literary expression only if they are necessary to reconcile the right to privacy with the rules governing freedom of expression."

The European approach to the issue of data privacy and protection has been represented as vastly different from the American approach, and this is true in a regulatory sense. In fact the debates themselves, data protection versus privacy, are very different in tone. Yet in practice the regulatory solutions to electronic privacy problems in the United States are very similar to those in the European approach.

The American solutions are usually based on the Code of Fair Information Practice and are piecemeal rather than comprehensive. This Code of Fair Information Practice as developed by the Office of Technology Assessment in response to concerns about the potential for electronic surveillance is close in spirit and function to the European Directive (Office of Technology Assessment, 1985). The Code speaks of compilations and data collections whereas the Directive speaks of privacy; however, there is a common essence in the two documents.

The Code sets forth a minimum set of attributes that all data compilations should share. It states that data compilations should never be secret. For existing data compilations, it requires a mechanism for individuals included in the compilation to find out what information is stored about them and how the information is used. The Code also requires providing to these individuals the ability to audit and correct their information. It requires a mechanism for the individual to prevent disclosure; however, it identifies prevention of disclosure as the responsibility of the organization with possession of or access to the data. Considering this list of requirements, it is clear that the Code, if broadly applied, would parallel the Directive.

The Code has been applied to federal records (Privacy Act of 1974), financial records (Right to Financial Privacy Act, Fair Debt Collection Practices Act), educational records (Family Education Rights and Privacy Act), employee polygraph records (Employee Polygraph Protection Act) and even video records (Video Privacy Protection Act). It offers no protection of medical data because no specific enabling legislation has been passed. In fact a truly ill-considered proposal of having a unique medical identifier, with no corresponding requirements for privacy of the records created, has been proposed, only for the good of the people, of course.

The lack of an over-arching regulatory framework with respect to data protection and privacy invites failures. The plethora of credit card offers in the mail is clear evidence of the failure of the intent of the Right to Financial Privacy Act. The Right to Financial Privacy Act tried to limit the widespread marketing of individual credit data. However, credit offers continue to proliferate because the law was limited in scope, both in the data and the companies to which it applied.

Although the reality of data protection and the surrounding political debate are widely divergent on the two sides of the Atlantic, the basic concepts of how to protect data are the same. It is my belief that there are no fundamental differences in cultural perspective that will aggravate the explicit legal differences between them. The substantive differences are in regulatory reach, not cultural perspective. Thus the distance between the European Directive and American Code of Fair Information Practice is not great in underlying theory. The Directive would affect primarily those businesses that make considerable money from the secondary distribution of data. For companies that plan to observe their own customers and use the data to improve service locally, following the Code has a high probability of meeting the Directive's privacy constraints.

The possibility that the Directive will present a trade barrier has been previously noted. It could also be a positive, however: a trademark and verification of trustworthiness with respect to privacy. Several mechanisms and institutions propose to show consumers that privacy is protected on the Internet. The largest of these is the TRUSTe effort at www.truste.org, which has a children's program in addition to a general consumer trust product. TRUSTe reads and rates the privacy statements of different sites. It does not attempt to ensure that these policies are fulfilled, as the burden of work is too great. The Directive could offer such a mechanism that would have some legal strength behind it, making the privacy directive a competitive advantage. This would require only that the site be acceptable under the Directive. If the logic of TRUSTe is correct, the Directive may in fact prove a trade advantage for any merchant willing to abide by its constraints. Verifiable protections of privacy could create opportunities for merchants around the world.


7: Privacy in Law, Privacy in Practice

Although Americans have long valued privacy, the law has been somewhat slow to recognize a right of privacy as such. In a now famous law review article, Warren and Brandeis (1890) made an eloquent case for recognition of a legal right to privacy. They justified this new legal right by pointing to a number of judicial decisions rendered in different fields of law, finding in these decisions the core idea that privacy is an interest that needed explicit legal protection. Warren and Brandeis advocated the right "to be let alone."

Case by case a new legal right of privacy has been built upon the logical foundation produced by Warren and Brandeis' article. Through common law (that is, case-by-case) developments in state courts, the privacy rights of Americans have slowly been recognized. The result is a patchwork of protections that vary across different states and situations.

Although much of the legal protection of privacy interests remains a matter for state common law, some states have also passed statutes specifically for the protection of privacy. Some statutes, such as New York's, are of a general character; others, such as those that protect the confidentiality of library records, are very specific. A few state constitutions include provisions that protect privacy.

The federal government has generally been less involved in privacy law than state courts and legislatures, in large part because of a general Congressional inclination -- one that has constitutional overtones because of the limited powers the US Constitution confers on Congress -- to leave to state law the legal protection of personal interests. Nevertheless, in furtherance of its powers to promote interstate commerce and communications, Congress has enacted a number of special laws, such as the Electronic Communications Privacy Act and the Right to Financial Privacy Act, that involve protecting privacy.

Furthermore, through a series of cases interpreting the Bill of Rights provisions of the U.S. Constitution, federal courts have come to recognize in the First, Third, Fourth, Fifth, Ninth and Fourteenth Amendments the basis for inferring a more general constitutional right to privacy. The best-known of the Supreme Court decisions is Roe v Wade (Schambelan, 1992), which announced a constitutional right of privacy in relation to decisions about whether to seek an abortion. There are a number of additional constitutional decisions that deal with privacy.

Privacy protections offered under state and federal law arise from two fundamentally different sources: rights of autonomy and rights of solitude.

Rights of autonomy underlie the constitutional protections of privacy. They are necessary to ensure that the citizenry can take full advantage of the rights provided by the Constitution in practice as well as in theory. Rights of solitude underlie the protections of privacy provided at the state level.

A further complication arises in the fact that privacy rights are also not absolute. They often conflict and must be reconciled with other social, economic or legal interests, such as the right to speak freely, even about others. Even the United Nations Universal Declaration of Human Rights (1995) defines a limited right to privacy, recognizing only the right to be free from unwarranted intrusions, rather than all intrusions23. In contrast, more fundamental rights (such as freedom of the press) and the most basic right (the right to life) are subject to no such constraints. Many industry groups have lobbied against legislation that would expand privacy rights, arguing that privacy interests are better protected through more flexible and consensual efforts such as industry adoption of codes of fair information practices.

Thus, to provide a complete overview of privacy I end this chapter with a consideration of codes of ethics that have been offered in the absence of law. The discussion of ethical codes is preceded by sections concerning the more binding state law and federal law. At the federal level the chapter considers statutory and constitutional law separately.

State Law

State law is primarily tort law, which is civil law as opposed to criminal law. In criminal offenses the state is the prosecuting agent; by definition, criminal acts are offenses against the state. In civil cases, on the other hand, two parties argue the case and the state serves as the impartial agent for judgment. Civil law is also distinguished from criminal law in that it concerns only those violations that can be addressed with monetary compensation; the state alone has the right to demand imprisonment.

Trying to make conceptual sense of the disparate rulings in the common law cases on privacy, Prosser in his 1941 treatise on tort law identified four kinds of privacy rights cases: intrusion upon seclusion, appropriation of name and likeness, false light, and public disclosure of private facts. Although some have challenged the appropriateness of Prosser's categorization (Halpern, 1991; Bloustein, 1968; Kalven, 1966), the separation of privacy violations into four separate torts is the judicial standard. (The cases I cite in the following discussion of these four torts come primarily from Alderman and Kennedy, 1995; Trublow, 1991; and Speiser, Krause and Gans, 1991.)

Intrusion upon seclusion is the violation of the right to be let alone. The first judicial definition of privacy clearly singled out the press for intruding into private affairs: "Gossip is no longer the resource of the idle and of the vicious, but has become a trade which is pursued with industry as well as effrontery." (Warren and Brandeis, 1890). But what is seclusion when applied to the electronic realm? Is it one's own electronic mailbox where particular messages are unwelcome?

Appropriation of name and likeness is the use of a person's name, reputation or image without his or her consent. An early and well-known case involved a young woman who found her image distributed throughout the city on bags of flour. She had given no consent and received no compensation. The makers of the flour had thought her face would be commercially useful and that she was owed no compensation for the luck of having such a countenance. The New York courts agreed. Despite the woman's failure in seeking restitution at the time, appropriation of name and likeness has since become universally recognized as requiring compensation when it results in commercial gain. Different states set different limits on the ability to seek restitution in cases involving no commercial gain, but all states recognize at least a limited right to seek redress against those using private information only for commercial gain. Thus far the use of electronic images (such as the Babes on the Web site) for gaining hits (the currency of the Web) has not been tested in court. Furthermore the selling of data images or data profiles of individuals has not been successfully prosecuted. Most use of imagery in the electronic realm has been pursued in the courts under an intellectual property rubric rather than using a name and likeness approach. Thus this privacy tort remains untested in the electronic realm despite the many instances where it might be applicable.

False light involves the publication of information that is misleading and thus shows an individual in a false light. It is similar to libel. The ability to charge another with depicting one in a false light depends on the standing of the victim and the role of the privacy violator. Private persons (as opposed to public figures) need show only that the information presented is false in a case of false light; however, concerns over rights to free speech hinder the pursuit of restitution in such cases. Public figures need to illustrate both that the information was incorrect and that the author of the information acted with malice.

What makes a person a public figure on the Internet? Is everyone with a Web page a public figure? What about people who post to newsgroups? How much privacy does one forfeit when one becomes electronically active? Do Internet posts make one validly subject to other posts that disclose private facts? When everyone can be a reporter on the Internet, is everyone also a public figure? These questions do not yet have a definitive answer.

All fifty states recognize false light under one rubric or another (usually under libel). However, some states do not recognize misrepresentation as a privacy violation per se. False light may arise in an e-commerce context if, for example, someone claims that another shops at a socially disreputable location. False light against a business entity might involve posting false information about its business practices on the Web, for example, lodging a newsgroup complaint falsely accusing the business of violating voluntary industry standards. (Dealing with misinformation posted on the Internet was addressed previously. As noted there, I believe that on-line responses should always be offered to on-line misinformation, but this does not preclude the possibility of legal action.)

Public disclosure of private facts is self-explanatory. Private information is just that -- private -- and publication of such information can give rise to a legitimate civil action. Information deemed "newsworthy," however, can be printed even if publication is a violation of privacy. Some jurisdictions, including New York and South Carolina, treat public disclosure of private facts seriously; however, in others, notably North Carolina and Texas, one cannot bring action under this tort (Alderman & Kennedy, 1995). The existence of private facts in the electronic realm is yet to be established and defined. For an individual, electronic private facts might include browsing habits or bookmarks. Like a sentence taken out of context, an isolated URL to which one has linked at some point can be used to imply habits or affiliations very unlike one's own. Businesses' private facts are much more likely to be along the lines of the customer profiles they maintain, although the browsing habits of employees can clearly be classified as such.

In addition to tort and case law, some states also offer statutory and constitutional protection of privacy. The level of such protection varies widely among states; however, it is worth noting that moving to a state with low levels of privacy protection does not with certainty protect a business from privacy suits. In terms of remote commerce with credit cards, the usury laws have been constructed so that the home state of the offering company, not of the consumers, determines the applicable law. Internet laws, however, have not been solidified; thus it is possible for a customer to bring a company into court with privacy expectations based on that customer's home law. Thus an understanding of the range of state laws involved is useful wherever you may be.

Ten states24 include privacy as an explicit right in their constitutions. Of those, only Louisiana and California provide privacy protection to private sector employees, while the provisions in the other states deal exclusively with the rights of the state to obtain information. How state constitutional law will be applied to electronic information remains undetermined.

State laws vary with respect to the categories of electronic information they protect. Fifteen states25 have laws that offer specific protection of financial transaction information. The laws in Arkansas, Massachusetts and Montana apply only to records of electronic funds transfer. The protections in other states limit disclosure of consumer financial information. In addition, fourteen26 states protect all financial information, not merely transaction-specific data, from state governments. Those laws offer protection similar to the Right to Financial Privacy Act at the Federal level (which is discussed later in this chapter).

Forty-one27 states and the District of Columbia also have specific statutes on the confidentiality of library circulation records. The significance of this protection in the context of electronic privacy is that some of the records of purchases for information goods on the Internet can provide information on a consumer's regular reading habits, much like library records. Though there is clearly a parallel, states have yet to extend confidentiality protections specifically to electronic commerce records.

States also may protect information specifically because of its electronic form. State statutes of interest include both wiretapping statutes and broad computer crime statutes. Computer crime statutes at the state level often focus on manipulation of financial information for fraudulent purposes, and thus resemble the Federal Wire Fraud Act more than the federal Computer Fraud and Abuse Act. (Both of these acts are discussed in detail later in this chapter.) Eleven28 states offer specific protection against abuse of computerized financial information for personal gain. Under wiretap laws states may protect telephone numbers, which provide electronic location information in a manner that might logically be construed as analogous to an IP address on the Internet. For example, in California and Pennsylvania, courts have ruled that telephone numbers as offered by Caller ID have some protection under wiretapping statutes. Yet courts have limited the reach of wiretapping statutes into other electronic realms. Again consider California, where the courts have ruled that intercepting email is not wiretapping.

A particularly problematic issue in the application of state law is the tenuous nature of location in electronic information systems. Suppose a customer has a credit card account in Wisconsin and an ISP in her home state of California and makes a purchase from an electronic merchant in Delaware. If the purchase information is intercepted and compiled for internal use by a company in Utah, where did the interception take place? Did it involve a wiretap? Is it judged under the local jurisdiction of the credit card headquarters, as would be the case if the customer were concerned with usury? Does it matter if the company in Utah is taking a demographic survey of the customers of the Delaware merchant? If the company in Utah makes no money but is trying to make marketing decisions about general Internet purchasing habits, is that wire fraud or legitimate use? None of these questions is simple, and they are further complicated by uncertainty of jurisdiction.

Federal Law

At the federal level there exist special statutes to protect privacy and constitutional guarantees of privacy. At the Federal level privacy concerns are autonomy concerns. Those under surveillance may not act freely even when not otherwise constrained.

Statutory Law

Federal law on at least three subjects can apply to electronic commerce: controls on financial information, controls on electronic information, and laws enabling the regulation of cryptography. The confluence of consumer privacy and cryptography is a recent, technologically driven event and is addressed in a separate section. Here I restrict myself to a discussion of laws concerning financial and electronic information.

Specific protections exist for various classes of personal information in analog forms, including medical, video rental, criminal, and financial records. When information in one of these classes is purchased electronically, the purchase falls not only under the rules governing electronic transactions but also into the category of financial exchange. Laws covering privacy of access to information have a different tradition than laws governing commercial information; the latter are based on the assumption that financial records belong to the bank and not the consumer.

The Right to Financial Privacy Act was enacted in response to a Supreme Court decision that denied rather than defined a right to privacy: United States v Miller (1976). In Miller the Court determined that there are no Fourth Amendment constraints limiting government access to personal financial records. The Act extends the Fourth Amendment and the Code of Fair Information Practice to bank records. (Recall the discussion of the Code of Fair Information Practice with respect to its similarity to the European controls on data protection in the previous chapter.) It limits the conditions under which any institution can disclose customer information to Federal authorities. Yet the Right to Financial Privacy Act is not as encompassing as Fourth Amendment protection because it contains broad exceptions to the protection it offers. The exceptions of the act include when the bank is acting in its own self-interest, for regulatory proceedings, in response to IRS summonses, and in compiling federally required reports. The Right to Financial Privacy Act also does not apply to individual information included in an aggregate listing. The protection offered by the act is limited in other ways; for example, the Court has ruled that financial records which have been stolen from a third party through the contrivances of a government agency are admissible in court (United States v Payner, 1980).

The Fair Credit Reporting Act applies the principles of the Code of Fair Information Practice to credit reporting agencies. Unfortunately, it applies only to those entities that provide credit reports, such as credit bureaus or credit agencies, as their primary business function. This means that financial information given to credit card companies, banks, and other institutions can be freely traded without consumer knowledge since these organizations have primary business functions other than providing credit reporting. However, the Fair Credit Reporting Act has been effective in preventing the proliferation of private credit guides which contain information on individuals. Prior to its enactment, companies sold credit guides without the knowledge or consent of the individuals profiled. These credit guides offered detailed, often unreliable, information on easily identifiable individuals. Now only guides that contain encoded information, making it a nontrivial matter to identify a consumer without the information on a payment instrument (such as a checking account number), are allowed. For example, checking account clearance centers use driver's license numbers and banking information to check the history of a person and an account. The identity of the person who holds the account is not listed. The only information listed is the driver's license number and whether the person has bounced checks. Compare this to previous private credit guides, which listed names and addresses along with assorted unreliable information, for example, neighborhood gossip. The status of credit records in the seventies was very much the status of medical records now: unregulated in the interest of the subject of the record, owned by various parties, often not accessible to the subject, prone to error, and difficult to correct.

The Fair Credit Reporting Act also protects credit agencies from the charge of negligent release in cases involving misrepresentation by the requester. Credit agencies must ask the requester the purpose of a requested information release, but need make no effort to verify the truth of the requester's assertions. In fact, the courts have ruled that "The Act clearly does not provide a remedy for an illicit or abusive use of information about consumers" (Henry v Forbes, 1976). The act also limits the information which can be included in the records maintained by credit agencies. Prior to its enactment, salacious unsubstantiated material could be included, and credit reports often included neighborhood gossip.

The Privacy Act of 1974 also addresses government collection of data. It codifies the principles of the Code of Fair Information Practice and requires that the practices it prescribes be followed for all government databases and the databases of government contractors. It requires that individuals be informed of all government compilations of data of which they may be part, and it limits the sharing of data among federal agencies. The act also limits the use of Social Security numbers as universal identifiers in federal databases by requiring that citizens be able to opt out of SSN use by selecting a different nine-digit number as an identifier.

The Fair Debt Collection Practices Act limits dissemination of information about a consumer's financial transactions. It prevents creditors or their agents from disclosing to a third party the fact that an individual is in debt, although it allows creditors and their agents to attempt to obtain information about a debtor's location.

Information exchanged in Internet commerce will be both financial and electronic. Laws that protect electronic information are therefore as relevant as laws that protect financial information.

Transmission of electronic information is addressed by the Electronic Communications Privacy Act (ECPA), which establishes criminal sanctions for interception of electronic communication. The act calls for imprisonment of not more than five years, a fine, or both for criminal interception of electronic communication. These are strong penalties, but the ECPA also contains broad exceptions -- quite possibly so broad that one could push the entire Internet through them. The act offers exceptions for those who act under the color of law; when the party intercepting the communication is also a party to the communication; or when one party has given prior consent. The first exception refers to law enforcement personnel, who gain access to communications with a warrant. The second exception, when the party intercepting the communication is also a party to the communication, is obvious. A financial agent, such as a bank or credit card company, that is party to a communication has the right to read and reference that communication. In the third exception, prior consent can be explicit or implied. Consent may be implied by an employee agreeing to work in an environment where system use is required, or it may be explicit in a written request for financial services. Thus the ECPA provides limited protection from law enforcement or employer scrutiny, but it does provide legal protection against outside observers.

The two exceptions of the ECPA that make it inadequate for protection on the Internet are the assumption that all parties to a communication are equally at risk of privacy violations and the assumption that any event at a business is business related. All parties are clearly not at equal risk for privacy violations on the Internet -- the server is far more able to collect aggregate information about visiting clients than the client can normally obtain from the server. By using cookies at multiple sites and spreading advertising widely, an organization can correlate a user's visits across sites (St. Laurent, 1998). Second, the assumption that all computer use at a business is business related fails to reflect the change in boundaries created by the Internet. As the Internet marches forward into the new century, its role as a public space for every citizen (as opposed to a space purely for scientists or professionals, for example) is being shaped by two seemingly contradictory characteristics: the Internet is both ubiquitous and personal. Cyberspace, unlike the traditional media types (broadcast, common carrier, publishing, distribution) and many forms of public spaces in the physical world (Boston Common, Logan Airport, the city library, etc.), enables the citizenry to find new ways to interact economically, politically, and socially. The universal connectivity of the Internet is its potential for everyone, everywhere. The ECPA does not address the realities of the ubiquitous personal Internet. Compare this to postal mail: employers have no right to open outgoing US mail even if it appears that company paper may be enclosed in the envelope.
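The cross-site correlation described above can be sketched in a few lines. This is a hypothetical illustration, not any particular ad network's system: the site names and cookie identifiers are invented, and a real tracker would work from server logs rather than an in-memory list.

```python
# Hypothetical sketch: an ad network whose banners appear on many sites sees
# the same cookie ID arrive with each request, and can group the sites one
# browser has visited. All names and IDs below are invented for illustration.
from collections import defaultdict

# Each entry: (cookie_id, referring_site), as the ad server might log it.
ad_server_log = [
    ("cookie-42", "news.example.com"),
    ("cookie-42", "shop.example.com"),
    ("cookie-17", "news.example.com"),
    ("cookie-42", "health.example.com"),
]

def profile_by_cookie(log):
    """Group the sites on which each cookie (i.e., each browser) was seen."""
    profiles = defaultdict(set)
    for cookie_id, site in log:
        profiles[cookie_id].add(site)
    return profiles

profiles = profile_by_cookie(ad_server_log)
# "cookie-42" has now been observed across three otherwise unrelated sites,
# even though no single site shared its visitor data with any other.
```

The point of the sketch is that no cooperation between the visited sites is needed; the shared advertising server alone is enough to assemble the profile.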

The Computer Fraud and Abuse Act of 1986 extended the Counterfeit Access Device and Computer Fraud and Abuse Act of 1984. Together these acts prohibit six types of conduct. Of the six, the three prohibitions which are of most interest here are: intentionally accessing a computer without authorization and obtaining information in the financial record of a financial institution; knowingly and with intent to defraud, accessing a federal-interest computer (see below), and causing damage of more than $1,000; and knowingly and with intent to defraud trafficking in computer passwords.

The definition of federal-interest computers covers more than is readily apparent at first glance --- it is not limited to federal computers. In fact, the limits of the definition are increasingly uncertain given the concern for infrastructure protection at the federal level. For example, a computer of Southwestern Bell at a local exchange office was determined to be a federal-interest computer because of the ubiquitous and critical nature of the telephone system (United States District Court, 1992). As electronic commerce becomes more widespread, the Internet is being moved towards classification as a system of federal interest for the purpose of infrastructure protection (and therefore legal action). The act expressly prohibits theft of financial records, information trespass, theft of services, and removal of data from public sector computers. Furthermore, the act specifically identifies viewing financial information (as opposed to, for example, medical information) as a breach of the law.

Note that only an unauthorized violation of privacy is a matter of concern under the Computer Fraud and Abuse Act. If the owner of electronic information, such as a mortgage company or medical information clearinghouse, sells the information, there is no abuse or fraud under the act. Even if information is obtained for one reason and then sold to another party to be used in a fundamentally different way, there is still no fraud or abuse under the act, further illustrating that security and privacy are not equivalent.

The Wire Fraud Act has a target similar to that of the Computer Fraud and Abuse Act. It prohibits transmitting signals in order to commit fraud. The Wire Fraud Act has been used to prosecute hackers who access computerized phone systems and reprogram them to obtain free long distance services. Presumably the Wire Fraud Act could also be used to prosecute those who use Internet commerce to commit fraud, though not those who use Internet commerce for surveillance. Internet auction fraud can be prosecuted under the Wire Fraud Act, or under acts which address fraud using the mails (since buyers pay through the mail). Like the Computer Fraud and Abuse Act and the Electronic Communications Privacy Act, the Wire Fraud Act focuses on unauthorized viewing or theft of information goods: the use of consumer information is not an issue.

Constitutional Law

States are constrained in matters pertaining to privacy by the U.S. Constitution, as well as by their own constitutions. In 1965 the Supreme Court made the right to privacy explicit in Griswold v Connecticut. The Court found the right to privacy implied in the Constitution in the First, Third, Fourth, Fifth, Ninth, and Fourteenth Amendments (Compaine, 1988). The constitutional right to privacy continues to be recognized by the courts in accordance with that decision, but only with respect to certain classes of information.

The First Amendment states:

"Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances."

The First Amendment's implications for privacy are that people under surveillance are not likely to express views, or go to assemblies or religious meetings of which the agencies of surveillance are likely to disapprove. The Court has ruled that the right to privacy covers the right to read --- unobserved --- material that the federal government finds objectionable. Specifically, in Lamont v Postmaster General the Court stated that "any addressee is likely to feel some inhibition in sending for literature which Federal officials have condemned." The freedom to read is actually the freedom to read without fear of surveillance (Cohen, 1996). The Court has also found a right to privacy in association and political activities. In addition, the right to privacy covers memberships and personal associations (NAACP v Alabama, 1958), confirming the "right of members to pursue their lawful private interests privately and to associate freely with others."

The Third Amendment states:

"No soldier shall, in time of peace be quartered in any house, without the consent of the owner, nor in time of war, but in a manner to be prescribed by law."

The Fourth Amendment states:

"The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."

Together the Third and Fourth Amendments create a region of privacy -- the home -- a space inviolable by the government except in constrained circumstances. These amendments suggest that what one does in one's own home is not the business of the government. Note that in the NAACP case cited above, members were found to have not only the First Amendment right to associate, but also the right to "pursue private interests privately," as one might in one's own home.

The Fifth Amendment states:

"No person shall be held to answer for a capital, or otherwise infamous crime, unless on a presentment or indictment of a grand jury, except in cases arising in the land or naval forces, or in the militia, when in actual service in time of war or public danger; nor shall any person be subject for the same offense to be twice put in jeopardy of life or limb; nor shall be compelled in any criminal case to be a witness against himself, nor be deprived of life, liberty, or property, without due process of law; nor shall private property be taken for public use, without just compensation."

Just as the government cannot imprison citizens without charge, government cannot require that citizens speak. The implication is that the government has no right to hear all that a person could know and might say, thereby intruding into personal thoughts. As the Fourth Amendment limits the government's right to search papers, the Fifth Amendment denies it the right to search thoughts.

The Ninth Amendment states:

"The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people."

Without the Ninth Amendment, the right to privacy could not be found in the Constitution, since the Constitution nowhere identifies a right to privacy explicitly. If the Ninth Amendment did not address the disposition of rights other than those explicitly enumerated, the right to privacy implied by the other Amendments could not exist.

Section 1 of the Fourteenth Amendment states:

"All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the state wherein they reside. No state shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any state deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws."

According to Section 1 of the Fourteenth Amendment, none of the rights set forth in the Constitution can be abridged by the states. If the federal government has no right to your home, speech, or papers, neither do the state governments. The rights that together provide privacy from the federal government provide privacy from state and local governments as well. (Sections 2-5 of the Fourteenth Amendment address apportionment of representatives, Civil War disqualification, and Civil War debt, and thus are not of interest here.)

The constitutional right to privacy differs from state civil laws in that it is focused on individual autonomy rather than on the communications of others. The right to privacy allows individuals to take certain actions without fear of retribution, rather than preventing the publication of certain types of information, as state laws governing privacy do. In fact, privacy rights prohibiting intrusion into seclusion and publication of private information have been limited at the federal level precisely because of the First Amendment's protection of speech rights.

The Supreme Court has determined that there is no constitutional right to privacy or expectation of privacy in financial matters (United States v Miller, 1976). Consumers voluntarily supply financial information to financial institutions, the information is owned by those institutions, and there is no reasonable expectation of privacy for such information because by its nature it must be shared in the course of business. Some advocates of privacy rights propose a property law, whereby individuals would be construed to own information about themselves. Thus far property laws have been used primarily to limit privacy by declaring information about one person to be the property of another and not properly subject to the oversight of the subject.

Constitutional protections of privacy have been applied inconsistently. Often a delay extends between the introduction of new technologies and the extension of privacy rights to the users of those technologies. Consider the case of telephony. In 1928 the Supreme Court determined that no person has a right to privacy in telephone conversations (Olmstead v United States, 1928) ruling that recording telephone conversations was not a search under the Fourth Amendment because the conversation left the defendant's home on lines that could not be secured. The Court stated that since the technology was inherently without security, people knowingly sacrificed privacy when they communicated using the telephone. The Court reasoned that telephone correspondents knew that the signals went outside their homes and only the most naive would expect privacy. Olmstead reads: "There was no searching. There was no seizure. The evidence was secured by the use of the sense of hearing and that only. There was no entry of the houses or offices of the defendants. . . . The language of the amendment cannot be extended and expanded to include telephone wires, reaching to the whole world from the defendant's home or office. The intervening wires are not part of his house or office, any more than are the highways along which they are stretched."

The reasoning in Olmstead applies to the Internet today. Of course, this reasoning remains true for the telephone network as well. For the decades between Olmstead v United States and Katz v United States (1967), the law of access to telephone conversations essentially stated that because the system was open, privacy was not to be expected. In Katz the Supreme Court ruled that individuals have a "reasonable expectation" of privacy during telephone conversations. The Court determined that a court order, and the basis for suspicion that justified the court order, was necessary for listening in on the same global network of wires found open to all in the previous decades. The Court has not determined which judgment applies to the Internet today.

Privacy & Information Technology

The first judicial definition of privacy (Warren and Brandeis, 1890) was written in response to technological threats to privacy. Specifically, Warren and Brandeis were concerned with the press's reporting of scandals (aided by advances in photography, telephony, and printing) and its lack of regard for privacy. These jurists felt that the new technologies upset the previous balance between privacy and the availability of information, thus forcing a reconsideration of the right to privacy. A century later information technology, this time in the form of the Internet and the data processing capacities on the networks it connects, is again changing the balance between privacy and data availability.

Electronic information technology changes the balance between privacy and information availability because electronic information is so easy to compile, correlate, and distribute. Monitoring every keystroke of users of information technology requires a trivial amount of effort. Information, once electronically collected, is easy to analyze and distribute (Turn and Ware, 1976; Pool, 1983; Office of Technology Assessment, 1985, Compaine, 1988; Computer Science and Telecommunications Board, 1994).

Consider the effect of information technology on the four torts discussed earlier in this chapter. In a practical sense, invasions of privacy were once the purview of the press and government by virtue of the difficulty of publication and surveillance, respectively. Because most individuals lacked both the time and resources to conduct the surveillance required to invade another's privacy and the means to disseminate the information uncovered, privacy violations were restricted to those who had them: chiefly the government and the press. Information technology, however, has made eavesdropping and publication easy for all Internet users, thus increasing individuals' opportunity to violate the privacy of others. Currently the lack of means for determining the authenticity or integrity of information makes the dissemination of false information via electronic media much easier because of the potential lack of accountability.

In the case of intrusion upon seclusion, electronic trespass has been defined as a crime at the federal level. Yet electronic intrusion upon seclusion has yet to be defined. One potential electronic case of intrusion upon seclusion was revealed in the beta release of Microsoft's networking software. Early hackers who had obtained beta versions of Microsoft network software noted that the software would have sent consumers' machine capacities and entire directory structures to Microsoft when consumers installed the product. After this fact was publicized, Microsoft reduced the amount of information to be transmitted and offered the consumer a choice of whether to "register" with the company. Recently it was determined that Microsoft keeps the identifying information of those who register Word and Office, and can use identifiers in the software to determine who has produced Word or Office files. If in the electronic world one's own hard disk is electronic seclusion, then Microsoft's practice of availing itself of consumer information without consumer knowledge could be a tort violation.

Certainly the capability of distributing other people's private communications, through forwarding email or building Web pages for general distribution, creates new possibilities for false light in the electronic realm. Email may be edited and displayed as "evidence" that an individual has certain beliefs very unlike his or her own. A combination of loaded labels, unidentified sources, and hidden agendas can be used to present an issue in a false light (FAIR, 1996). Loaded link names, anonymous email, and misleading domain names make these tools of deception available to everyone on the Internet. Loaded link names are link names which incorrectly describe the pages to which they link. For example, a link to Swift's "A Modest Proposal" describing it as "a policy under consideration" would be a loaded link name. Anonymous email can be used to make statements without authenticating them; much like the newspaper's "anonymous source close to the situation," it can say things which cannot be validated. Misleading domain names can be used to present a person with a gripe as an institution on a mission, as described previously.

To determine the effect of information technology on the appropriation of name and likeness, one must ask: what is one's electronic likeness? What value must exist for use of one's electronic likeness to be considered misappropriation? The difficulty of applying appropriation of name and likeness in the electronic realm is illustrated by the search engine Alta Vista (http://www.altavista.com/), which provides its users the ability to search Usenet postings by author or keyword. Alta Vista is using the speech and ideas, the electronic visage, of an Internet user for the purpose of selling attention span to advertisers. Yet this is an offense against no person in particular. The only possible "profit" is an increase in the number of hits to the Alta Vista site, and the corresponding increase in ad revenues. No one person can claim that the unique statements he or she placed on Usenet create that value, so no one person has the necessary standing to sue on economic grounds.

Finally, consider the fourth tort, public disclosure of private facts. Public disclosure becomes very easy when everyone is a publisher. For example, here at Carnegie Mellon University, one student's homosexuality was revealed on an electronic departmental bulletin board. Communicating such a fact to many of the department faculty and all of the student's colleagues was vastly simplified by the use of information technology. Without such technology no doubt the student who publicized the information would have been intercepted in the department and been tutored in the skill of quick exit.

In short, information technology has altered the balance between privacy and data availability by giving many people the power to compile and disclose information, powers previously held almost exclusively by governments and the press. The mere existence of a right to privacy that is universally recognized through the UN, in the national legal structure and in common law, is important. If privacy is a right, those who gather and disseminate data, not the individual, bear the responsibility for maintaining privacy. However, protection of this right has not been implemented in information technology: privacy is part of ethical codes but not consistently part of computer code. Unfortunately privacy protection through codes of ethics has proven inadequate (Office of Technology Assessment, 1985; National Research Council, 1996).

Browsing Information

Any transaction must begin with discovery; information that can be exchanged during discovery is properly classified as transactional information. If a customer uses the Internet for discovery, the merchant can obtain information about the customer as she looks through the merchant's wares. This is the first information exchanged in a transaction, and is common to all Internet commerce transactions.

A customer cannot purchase an item unless she knows of its existence. To sell a product successfully vendors must make their product's existence known to all customers who may want it. This requires information about potential customers. In the analog world, stores obtain information about customer preferences by observing their browsing patterns and set up displays accordingly, or use data on various customers' purchasing habits to target catalogs. Electronic merchants will no doubt do the same and can obtain electronic browsing information easily.

The customer must also know the location of the merchant in order to make a purchase from that merchant. With Internet commerce this does not require knowing the exact identity of the merchant, merely the domain name.

The amount of information a merchant can obtain during discovery via the Internet depends on the policies, practices and physical configuration of the customer's ISP. Other factors that can affect the information available to the merchant include the configuration of the customer's system, the services provided by the customer's ISP, and the software the customer uses to access the Internet. I discuss examples of each of these in this section.

When the customer's client contacts a merchant's server, whether by ftp or Web browser, the merchant can capture the client's IP address. Connecting through intermediaries (remailers or anonymizers) can prevent the release of consumer information. However, most consumers either do not know about such intermediaries, do not bother with them, or are even unaware that simply by connecting to a Web site they are transmitting their IP address. Thus the merchant usually captures the customer's IP address.

Recall that from an IP address, the merchant can obtain the name of the customer's host using the Domain Name System (DNS), which provides a mapping between domain names (e.g., miami.epp.cmu.edu) and the corresponding network addresses (e.g., 128.2.58.26).
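The reverse lookup described above can be sketched with Python's standard socket library. This is a minimal illustration: the sample address is the one from the text, and whether any given address still resolves is a matter of live DNS state, so the sketch treats lookup failure as a normal outcome.

```python
# Minimal sketch of reverse DNS: map a captured client IP address back to a
# hostname, as a merchant's server might. Lookup failure returns None rather
# than raising, since many addresses have no reverse record.
import socket

def name_from_address(ip):
    """Return the DNS name for an IP address, or None if it cannot be found."""
    try:
        hostname, _aliases, _addresses = socket.gethostbyaddr(ip)
        return hostname
    except OSError:  # covers socket.herror and socket.gaierror
        return None

# A merchant's server would call this with the captured client address, e.g.
# name_from_address("128.2.58.26") -- the result depends on current DNS data.
```

A forward lookup (name to address) is the mirror image, via `socket.gethostbyname`; together the two directions implement the mapping the text describes.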

If the customer's system is protected by her employer's firewall, the IP address may identify only the employer. If the customer is going through the shared IP address of an Internet service provider (ISP), the information available to the merchant depends on the practices of the ISP. An ISP's configuration may prevent any information but the identity of the ISP from being made available -- for example, if an ISP dynamically assigns IP addresses as customers access the Internet, as does MCIMail, then only the identity of the ISP is known. On the other hand, if the customer's ISP is MediaOne, then the customer's machine name identifies the user. (A machine name includes a third and often a fourth level domain name.) This is because MediaOne uses the userID as the third-level domain name, so for a user with the name "userName", that user's machine-specific name would be "userName.mediaone.com". Thus the minimum information the merchant will have as a result of the contact by a customer is the name of the customer's Internet access provider, whether that provider be an employer, a place of learning, or a commercial ISP.
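The inference in the MediaOne example is mechanical enough to sketch: if an ISP uses the userID as the third-level domain name, the merchant can read the user straight out of the hostname. The naming scheme is the one given in the text; the function name is of course invented for the sketch.

```python
# Sketch: recover a userID from a hostname when the ISP's naming scheme puts
# the userID in the third-level domain, as the text describes for MediaOne.
def user_from_hostname(hostname, isp_domain="mediaone.com"):
    """Return the inferred userID, or None if the host is not under the ISP."""
    suffix = "." + isp_domain
    if not hostname.endswith(suffix):
        return None
    return hostname[:-len(suffix)]

# user_from_hostname("userName.mediaone.com") yields "userName";
# a host under some other domain yields None.
```

The same pattern applies to the university example later in the text: splitting cs.cmu.edu from epp.cmu.edu is just reading a different label out of the same dotted name.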

Some ISPs provide automatic user identification services, the most widely used being finger services and AOL's profile services. In addition, as described for MediaOne above, ISPs may have a configuration that results in the user's name being the machine's name. In these cases the merchant may be able to identify the customer by accessing a common process, such as the finger daemon, on the customer's machine. Depending on the naming scheme, various user attributes, such as departmental affiliation (if the machine is in a university setting), will be available (for example, cs.cmu.edu vs. epp.cmu.edu). In fact, if many users from one company contact a merchant and the company's network transmits the information, a merchant can build a map of the company's internal network and corresponding users. In summary, the maximum information passed could be the customer's identity, employer, and business area. Depending on configuration, business area data might not necessarily impart corporate role or job title information with much certainty. Secretaries and senior officers alike have personal machines in the corporate world.
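The finger protocol mentioned above is simple enough to sketch in full: per RFC 1288, the client opens TCP port 79 on the target host, sends a username followed by CRLF, and reads back a free-form reply. The sketch below separates query construction from the network call, since few hosts still run a finger daemon; the host and username in the comments are placeholders.

```python
# Sketch of a finger client (RFC 1288). A merchant could use such a query
# against a customer's host to learn the user behind a connection, if that
# host runs a finger daemon on TCP port 79.
import socket

FINGER_PORT = 79

def build_finger_query(username):
    """A finger request is simply the username terminated by CRLF."""
    return username.encode("ascii") + b"\r\n"

def finger(host, username, timeout=5.0):
    """Send a finger query and return the daemon's raw reply as bytes."""
    with socket.create_connection((host, FINGER_PORT), timeout=timeout) as s:
        s.sendall(build_finger_query(username))
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
        return b"".join(chunks)

# e.g. finger("some.host.example.edu", "username") -- placeholder names;
# the call succeeds only against a host actually running a finger daemon.
```

The reply format is unstructured text, which is precisely why the information it leaks (real name, office, login times) was so easy for early Web merchants to harvest.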

A strategically placed observer can also detect that there is traffic between the merchant's and customer's servers. For example, users with cable modems sharing a drop line can observe each other's traffic. Since browsing information is rarely encrypted, a well-placed observer can monitor the merchant's business and compile probabilistic information about a customer's browsing habits. ISPs may also provide aggregate demographic and browsing information about their user base for marketing (e.g., selling ads) or technical (e.g., caching sites) purposes. In such cases the merchant gets probabilistic information about a consumer's attributes from the identity of the ISP.

If the customer is using the World Wide Web, the availability of additional customer information depends upon the type and version of the customer's browser. All browsers send the browser type and version. Also sent are the computer type, the operating system it runs, and the helper applications available. Depending on the version of the browser used, information including email address and previously visited pages can be obtained by the merchant. The purpose of this data exchange is to offer the merchant's server information about helper functions available on the customer's machine, and therefore the types of content the user can accept. This illustrates the trade-off between privacy and service.
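The browser disclosures described above arrive as ordinary HTTP request headers. The sketch below shows illustrative header values (not captured from any real browser) and what a merchant's server can read from them without any further effort.

```python
# Illustrative HTTP request headers of the kind a 1990s browser would send,
# and a sketch of what a merchant's server can infer from them alone.
# The header values are invented for illustration.
example_request_headers = {
    "User-Agent": "Mozilla/4.0 (Windows 95; I)",    # browser type/version, OS
    "Accept": "text/html, image/gif, image/jpeg",   # content types understood
    "Referer": "http://www.example.com/prev-page",  # previously visited page
}

def describe_client(headers):
    """Summarize what the server learns from the headers of one request."""
    return {
        "browser": headers.get("User-Agent", "unknown"),
        "accepts": [t.strip() for t in headers.get("Accept", "").split(",") if t.strip()],
        "came_from": headers.get("Referer"),  # None if the header is absent
    }
```

Note that the Accept header is exactly the privacy/service trade-off in miniature: withholding it protects the customer slightly, but then the server cannot tailor content to what her machine can display.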

For a customer to effectively hide her identity in a Web transaction, the customer must also use an anonymity-providing service to prevent browser-based network information from providing identity information. For an illustration of this, visit http://www.anonymizer.com/. Anonymity-providing services work by placing an anonymizing server between the client and the server. Thus the information that would be available to the server is blocked at the anonymizer, and the anonymizer transmits every message between the client and server. Note that not all transmissions go through the browser; for example, RealAudio opens its own connection, so RealAudio traffic would bypass the anonymizer and go straight to the client's machine. Even with protection from disclosure of browsing information, if the customer is using a single-user machine that supports finger daemons, then the merchant can still obtain the customer's identity.

Clients can send information on available helper applications to merchants for more effective communications. Helper applications offer probabilistic information about the consumer's machine and even interests. For example, the number and variety of helper applications, presence of shareware or freeware applications, and the presence of advanced helper applications together imply the level of user technical sophistication.

After sending a request for Web contents, the customer's client sends an accept command. This acceptance usually includes information on monitor quality (including size and identification of color monitors), the helper applications available, and the quality of the connection. In an early version of HTTP the accept command could simply request "send what you have," and let the server send everything to be sorted at the client machine. This is no longer widely used. Notice that Web surfers still see much content they cannot use. This reflects the fact that merchants obtain information which they cannot use, as discussed later in this chapter.

Many electronic commerce protocols are designed to begin with an exchange of digital certificates to assure identity and exchange keys. Digital certificates will become increasingly common. The customer's certificate includes the customer's identity, the issuer's identifier and its certificate policies. The issuer's identifier can support the customer's claim to be authorized to use the specific payment method she has selected. Any qualifiers on the customer's use of the payment mechanism and the certificate policy identifier provide information on the customer. For example, it may indicate the customer's credit worthiness or identify the customer as a student. The dates that the certificate is valid may indicate the customer's shopping habits or credit-worthiness, based on the policies of the issuer.

Availability & Value of Consumer Information

Much of this text has focused on privacy. Many popular press titles on Internet commerce argue the opposite: that the most important thing is obtaining as much data as possible, and privacy is essentially not an issue. To pick on one text that advocates aggressive customer surveillance, consider NetGain (Hagel & Armstrong, 1997), which presents a linear mechanism for building a profitable Web presence. The building process is based heavily upon the ability to identify and track users, so that they may be classified in various ways. The process is presented as linear and applicable to all firms. Among the various texts in this vein, NetGain appears to advocate most aggressively the tracking of consumers. Given that this viewpoint is in direct contrast to the premise of this text, I will now address the arguments presented in NetGain directly, as some are commonly accepted.

First, the intense tracking assumes that a single business model exists: that of providing intermediation between businesses and customers. This is the model for many of the overnight wonders one hears of on the Web -- including Yahoo. It is the model of AOL, which bundles Internet access with provision of selected content and selection of business partners. This model obviously cannot be applied to the majority of sites on the Web -- by definition, only a minority of providers can be intermediaries for all the other providers.

Second, it is generally recognized that any community created must have some consumer loyalty to be long-lasting and viable. By collecting data, sites are being built to be sticky -- but stickiness through constant intense surveillance and marketing will inevitably have an effect on consumer loyalty. There is a price to refusing customers privacy, and the collection of data must be considered with respect to that price. Collecting data is not without costs. If indeed the electronic marketplace shifts power from consumers to vendors and intermediaries, then constant surveillance of consumers may prove unwise in the long run, because consumers may react to that power by changing their participation. However, when data gathered in an electronic transaction clearly and directly serve the consumer, they are useful to both parties. The temptation to resell the data must be judged against the desire not to alienate customers.

NetGain advocates that fully 20% of a Net business's initial funds go into tracking the consumer mouse-click by mouse-click. Is that truly the best way for a business selling a product, as opposed to, say, a start-up intermediary, to spend 20% of its funds for an Internet presence? If the business's goal is to sell attention span, how much information is necessary? The standard business assumption currently is that users should be tracked in as much detail as possible. I have not seen this assumption validated by careful financial analysis.

The tracking of Internet users and concentration of customer information is reminiscent of nothing so much as the command-and-control heady days of Vietnam. The White House during the Vietnam War practiced centralized control, in which staffers thousands of miles from the actual site used data to direct military operations. White House staffers had minute-by-minute information on the sounds and movement along the Ho Chi Minh trail that they believed enabled them to determine precisely where a bomb should be dropped to destroy trucks. And, as a result, they could claim to have devastated traffic on the trail whether or not that was true. In fact, the Air Force claimed at one point to have destroyed a number of trucks greater than the total number of trucks believed to be in North Vietnam at the time!

The White House believed it understood the war, and collected and analyzed data exhaustively. Because the White House was certain that the data described the real situation, it was effectively blinded by its data. By scientifically managing the data, it failed to manage the war. Similarly, merchants collect data exhaustively in order to understand their customers. Ironically, the same business plan that recommends exhaustive data collection usually claims already to understand the customer. Data are not information. Too much data and a focus on careful management of customers can lead to a lack of information and a failure to respond to customers' actual (often stated) needs and requests.

Why else is this war comparison reasonable? Exhaustive collection provides reams of information that may or may not reflect reality, leading to what are termed 'information pathologies' (Edwards, 1997), often referred to outside of academia as the inability to see the forest for the trees. It is redolent of centralized, remote operations justified on a statistical basis with no feedback from the ground, as when the Air Force used data to claim destruction of more trucks than were in existence.

Such information pathologies are all too evident as firms enter the information marketplace today. Microsoft was going to dominate the Internet by limiting PC-compatible users to the Microsoft Network and obtaining detailed information about the users' machines and habits. Clearly this failed. Yet with Yahoo, a pair of graduate students who treated everyone as interested and interesting saw great success. Yahoo may capture customers, but its focus is on customer service and customer cooperation. Even the superpower of computing, Microsoft, can push too hard by focusing on consumer data instead of consumer needs.

Scientific techniques of command and control failed in warfare, as innovation was throttled by statistical arguments that created a tremendous amount of data and no real information. Scientific management mechanisms that rely on data in the absence of context have failed to motivate workers and to create the nimble, productive businesses that advocates claimed would inevitably result from their application. Is it plausible for Internet businesses to think that applying the same failed principles to customers will create loyalty and long-term success, when these techniques have failed so utterly with soldiers and workers?

In addition to the inherent fallibility of data and its interpretation, the observation of consumers results in information pathology because so much information is collected that is not used. The most important experience on the Web site is the customer's: it matters to the customer, to the sale, and to the merchant. Recall from the previous section that it requires trivial effort to capture data on connection speed along with monitor size and the existence of helper functions. However, there are few pages optimized for this particularly useful information. Though there are many pages customized to respond to data about a customer's past purchases, I have yet to come across a Web page customized to respond to information collected about modem speed, helper functions, and screen size.

This can be quite a problem, as shown by two examples from my personal experience. First, the A2ZToys page does not fit a smaller monitor well. Consider the image in Figure 7.1.

Figure 7.1

When I was attempting to order at the A2Z site (and I did order), this was the third screen that popped up during an A2Z order. Given where the screen is truncated, that it was the third screen, and my trend-setting ADHD adult shopping style, I considered my order to have been placed and went to the next store on my list. Because A2Z is bookmarked, I went back later to query about my order. Of course it had not been filled. Figure 7.2 shows the entire page.

This page no doubt has wonderful flow and feel on the developer's monitor. The generous white space must be pleasing to the eye. But given the amount of information that is available about the customer's machine once the customer links to the A2Z site, there is no reason not to tailor the screen presented to the customer's monitor. There is no reason to squander the customer's attention span by requiring her to hit the PAGE DOWN button four times.
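The tailoring argued for here need not be elaborate. The sketch below chooses a page layout from the screen height the client reports; the template names and pixel thresholds are invented for illustration.

```python
# A sketch of layout tailoring: given the screen height a client
# reports, pick a template that fits without scrolling.
# Template names and thresholds are hypothetical.

def choose_layout(screen_height_px):
    if screen_height_px >= 1024:
        return "full-page-with-whitespace"   # room for generous design
    if screen_height_px >= 600:
        return "compact-single-screen"       # trim white space to fit
    return "multi-step-short-pages"          # split the form into short screens

print(choose_layout(480))   # multi-step-short-pages
print(choose_layout(1200))  # full-page-with-whitespace
```

Had the order form been split into screens that fit the reported monitor, the truncated third screen described above could not have masqueraded as a completed order.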

My second example, of how information that is collected and analyzed can be far less useful than information that is ignored, concerns helper functions. As discussed in chapter one, Disney is a fine example of excessive use of helper functions. Disney has the most time-constrained set of surfers on the Net: young children. Children are in a hurry. Yet the site makes no alteration of presentation or content for the at-home browser, despite having information about the connection speed. For example, Disney Radio could be offered in multiple formats, with the site automatically choosing the format suited to the user's browser and connection speed. Instead, the browser is expected to download at least five helper functions, while (this being Disney) a toddler sits expectantly before the machine. (The toddler actually leaves before the first function is installed.) Simple use of the consumer data already available would provide a far better site from the consumer standpoint than the compilation of further data. Which, then, represents the better use of the vendor's resources?
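The format selection suggested above can be sketched in a few lines. The format names and bitrates below are illustrative assumptions, not Disney's actual offerings.

```python
# A sketch of automatic format selection: offer the same audio in
# several encodings and pick the best one the reported connection
# speed can sustain. Names and bitrates are hypothetical.

FORMATS = [                 # (name, required kilobits per second), best first
    ("stereo-high", 128),
    ("mono-medium", 56),
    ("mono-low", 16),
]

def pick_format(connection_kbps):
    for name, required_kbps in FORMATS:
        if connection_kbps >= required_kbps:
            return name
    return "text-only"  # fall back rather than make the toddler wait

print(pick_format(33))    # mono-low
print(pick_format(144))   # stereo-high
```

The point is not the particular thresholds but that the datum being consulted, connection speed, is one the site already receives and currently discards.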

I cannot say that Disney is the finest example of oblivious Web page design I have yet experienced, in terms of lack of responsiveness to information about customers' varying connection speeds, because the Star Trek Insurrection site has Disney beat. Perhaps it is reasonable to assume that every single person viewing the Insurrection page has a series of tasks around the computer on which to work while the page laboriously downloads. Perhaps Star Trek is unique in the commitment level of its fans. In any case, there are no very convincing arguments for collecting even more data when the most basic data already being collected -- connection speed, helper functions, monitor size -- are not even being used.

The environment of information pathology has vendors collecting (and advertisers demanding) detailed data on consumers while the readily available data most critical to the consumers themselves -- those that might facilitate access and improve the experience for surfers -- are ignored. Web pages are built to be astounding on the designer's desktop, and to bring a 56K (or less) user's machine to a screeching halt. Basic data are not used, yet more data are collected, and the collection of still more is advocated as sound Internet business strategy.

The fundamental reason for customers to insist that merchants choose privacy-enhancing systems is the recognition that we are building a global infrastructure for generations to come. It is a time of heady dreams of millions of dollars, but it is also a time of quiet responsibility. Privacy matters because it does. We can choose as individuals to build a surveillance system that shames the simple video technology of Orwell's 1984, or we can choose to continue to build a democratic infrastructure that addresses the range of needs of customers as citizens, parents, workers, and consumers.

There is a fundamental conflict in every information system, including electronic commerce systems: privacy versus data availability. This conflict also exists in the law. Currently there are legal requirements which both protect and prohibit privacy. This chapter has identified specific laws and general principles that affect privacy and are likely to affect electronic commerce.

The concept of privacy has a broad philosophical base. On that base are built state laws, federal laws, and constitutional prohibitions. The applications of these laws to information technology, and to digital commerce in particular, are as yet undetermined. Information technology enhances the availability of data, making information easier to collect, analyze, and disclose. It will therefore require a new balance between information privacy and information disclosure that is not reflected in current law.

At times, the laws and regulations on privacy appear to be a maze, constructed of varying and potentially conflicting laws for each category of information. Under which category or categories the Internet will fall is far from clear. Thus, only an outline of the structure of privacy law has been included in this text. Examples from the most relevant categories have been considered. This chapter began with a brief overview of state laws, because on the Internet one can be doing business in any state. The state laws are based on tort law, which is the earliest privacy protection in the American tradition, drawing heavily on common law. After the discussion of state laws, I touched on federal statutory law and, finally, constitutional law.

Having discussed privacy in this chapter, I turn to data availability in the next one. Federal law on information technology and financial transactions, as well as federal cryptography policy, will be enumerated. Special attention will be given to reporting requirements for financial transactions and limitations on strong cryptography, as these are the regulations that most affect the design of Internet commerce systems.


8: Data Reporting: Trusting the Government

A conflict exists between protecting consumer privacy and ensuring that information is available to government so that it can perform its legitimate duties. Thus, in addition to laws protecting privacy, there are laws requiring data reporting and disclosure.

New technologies for electronic payment present new risks and require new regulatory approaches, but the basic social motivations behind the regulations remain unchanged. Technology changes, but the core roles of government continue to be security, justice, and general welfare. To consider regulatory requirements for electronic commerce, I strip away existing conventions derived from paper models to categorize the underlying social goals and then suggest new regulatory approaches based on the new mix of concerns electronic commerce is raising. The suggested changes are revisited in the discussions of systems in the last chapters, where information availability for governance is considered.

The fundamental choices for reporting economic data remain when the data becomes electronic. However, financial information is more prone to privacy violations than other information, not only because financial information is innately commercially valuable, but also because the Fourth Amendment does not apply to financial information.

I close the chapter with general policy suggestions. I refer to these suggestions as appropriate in the system analyses in the later chapters.

Required Information Reporting

Reporting requirements are a tangential and possibly minuscule part of funds transfer and electronic banking laws. (For the purpose of this chapter, reporting requirements are defined as requirements that information about a transaction or a set of transactions be available to either party of a transaction or to government.) Yet the laws requiring information availability for government can have a tremendous impact on the design of electronic commerce systems --- not least by unnecessarily prohibiting the provision of consumer privacy in electronic commerce. Current reporting requirements are based on the assumption of a paper currency model, in which transaction information is either documented on paper or potentially unavailable.

Laws requiring information in the private sector and those involving the public sector are both of interest here, especially considering the decreasing distance between public-sector and private-sector data repositories. For example, to track tax debtors to the federal government and parents delinquent in providing child support, the federal government has weakened the Privacy Act and the Social Security Act by expanding the use of Social Security numbers as universal identifiers. The laws that require the use of Social Security numbers as universal financial identifiers are the Debt Collection Act and the Child Support Enforcement Act, respectively.

Sometimes the biases against privacy in reporting requirements are implemented on purpose, as with laws that prevent anonymous payments above a certain size. Sometimes, however, there is simply a mismatch between the paper model on which these laws were based and the strengths and weaknesses of electronic currency systems. I illustrate here that within each category of electronic currency system there are technical enhancements that can alleviate the need for the trade-offs that have been necessary in paper currency systems.

Similarly, every company and consumer has practices for record keeping and risk mitigation in the paper world. The discussion of appropriate updates for record keeping in government might apply to these private-sector practices as well. In what ways is the information required appropriate, and in what ways may that information be inadequate for the electronic realm? The previous explanations of security goals can serve in part to answer these questions. Consider these questions from the perspective of the federal government.

There are two classes of businesses that should be specifically addressed in this section for the purpose of exploring trust in the government with respect to electronic commerce: depository institutions and consumer credit reporting agencies. These two business types maintain most consumer financial records, and reporting requirements are often specific to these institutions. Businesses that currently handle the most cash --- credit unions, savings and loans, banks, and thrifts --- have specific record keeping and reporting requirements. These businesses will be discussed together here as depository institutions. (Note that other regulatory changes, especially the loosening of controls on line-of-business and reduction of marketing restrictions, are allowing these businesses to merge and converge.) The data kept by these institutions are primarily for risk management and dispute resolution.

To consider reporting requirements without the biases created by the assumption of paper currency, I separate the reporting requirements along two dimensions: system requirements and social goals. First considered is the range of system requirements inherent in regulatory requirements; for example, data must be captured during a transaction, or data must be stored and searchable by account number. The second variable considered is the underlying social motivation of the financial reporting requirement. A single unifying principle in regulations on commerce is lacking. Instead there is a set of underlying reasons that together motivate most reporting requirements.

After having categorized reporting requirements according to underlying social goals and data availability requirements, I construct a matrix that spans the range of goals and techniques. The general alternatives for reporting in the electronic realm are considered. Finally, specific suggestions are included for making the techniques more compatible with the capacities of electronic commerce, so that the goals can be better fulfilled and unnecessary violations of privacy can be reduced.

Techniques for Fulfilling Regulatory Information Requirements

The specific requirements for information found in the laws that require certain types of record-keeping are manifestations of social goals. Within electronic information systems a wide range of techniques exists for ensuring information availability, including anonymous updates to aggregates and distributed escrow. Within regulations based on a paper model, however, this same range does not exist -- four basic techniques are used for obtaining the data necessary for the government to fulfill its legitimate purposes:

immediate reporting,

periodic reporting,

periodic aggregate reporting, and

data storage requirements for later access.

Digital information is more easily subject to secondary use, and if data are required to be stored it makes economic sense to try to find other uses for them. Data requirements increase the burden of trust on the subject of the data when information becomes digital. Additional protections of the information are required to re-create the balance of trust that existed when data storage was on paper.

Immediate reporting means that documentation on a specific transaction or event must be reported immediately. An example of immediate reporting (not further addressed here) is a police report after a burglary. For the insurer to respond, there must be a record of the crime filed with law enforcement.

Periodic reporting requires that data be compiled and reported at intervals. An example of this is the annual report which wage-earners make to the IRS, usually with a 1040.

Periodic aggregate reporting requires that aggregates -- means, distributions, and trends -- of data be reported. For example, companies may report aggregate warehouse numbers for insurance purposes without having to report the specifics of a single day.

Data storage requirements are implicit in all of the previous categories; however, sometimes data storage is required without reporting. In particular there are guidelines for information of many types which must be kept on file but need not be reported. A trivial example which most see every day is the safety and capacity record of an elevator. It may be in the elevator or in a file; no copy need be mailed to any level of government. Yet it is required that such an inspection occur and be documented.

Each type of reporting has different system requirements. It is unusual to approach data reporting from this perspective; the standard approach is to consider the goal and then decide on the reporting. The purpose of this examination is to reconsider how the relationship between goals and requirements is altered by electronic rather than paper records. The examination is based on how much trust is required by the regulatory mechanism chosen and how this is altered as information becomes digital; a change will be considered an enhancement if the regulatory purpose can be met with a reduction of trust.

To implement immediate reporting, a commerce system can examine each transaction for conditions that trigger immediate reporting, such as purchase amount or item purchased, and then initiate a reporting action (e.g. email to fcc.gov, printing a form) when those conditions are met. Alternatively, the commerce system can prohibit transactions of a given type to avoid immediate reporting requirements, or assume that such reporting is the merchant's responsibility.
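The per-transaction check just described can be sketched as follows. The $10,000 cash threshold comes from the Money Laundering Act discussion later in this chapter; the record format and the queue standing in for a reporting action are hypothetical.

```python
# A sketch of an immediate-reporting check: examine each transaction
# as it commits, and queue a report when the trigger conditions hold.
# The record format and report destination are hypothetical.

CASH_REPORT_THRESHOLD = 10_000  # dollars; cash transactions at or above this are reported

def process_transaction(amount, instrument, report_queue):
    # examine the committed transaction...
    if instrument == "cash" and amount >= CASH_REPORT_THRESHOLD:
        # ...and initiate a reporting action when conditions are met
        report_queue.append({"amount": amount, "instrument": instrument})

queue = []
process_transaction(12_500.00, "cash", queue)   # triggers a report
process_transaction(9_999.99, "cash", queue)    # just under the threshold
process_transaction(12_500.00, "wire", queue)   # not a cash instrument
print(len(queue))  # 1
```

Because the check runs at commit time, the report can ride on the same event that finalizes the transaction, rather than being reconstructed from records afterward.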

Periodic reporting requires that collected data be compiled and reported. The types of data and the circumstances of reporting vary with the justification of the reporting requirement. Gun dealers make certain reports to the FBI or the ATF, while depository institutions provide data to the Federal Reserve. In all cases here I am concerned with federal requirements that information be compiled and stored. (The local reporting requirements above were simply examples.) With periodic data reporting, the data may be disposed of after the required report is submitted, or deleted from general-access computers so that the potential for internal misuse is minimized.

Periodic aggregate reporting in the paper world requires that all records be kept for verification of the information reported. For example, behind one page of a tax return there may be a shoebox filled with check stubs and receipts. Again, the types of data, the circumstances of reporting, and the particular agency which has the requirements vary. The commonality which I am addressing here is the manner in which the digital form of records changes the trust requirements. Because of the paper model, the practice of keeping long-term records for verification is followed in the electronic realm as well. However, there are greater options for data reporting in the electronic realm. For example, individual records could be encrypted for auditing, policies against secondary use could be adopted, and close tracking of internal use of data could decrease the risks of misuse. Alternatively, electronic escrow is far simpler than escrow of paper documents. Data storage options are also greater in the electronic realm than in the paper arena. Data may be stored encrypted, or in such a way that subversion of a single depository provides no useful data. In contrast, paper records are either readable, identifiable, and whole, or unavailable. With paper records the common risks are that the records are lost, destroyed, or unavailable. Digital records shift the risk away from the entity needing the data (that the records are unavailable) toward the subject of the data (that the records are misused).
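The storage option in which subverting a single depository yields nothing can be sketched with a simple secret-splitting scheme: a record is split into two shares by a one-time pad, and only the combination of both shares reconstructs it. This is a minimal illustration, not a production escrow design.

```python
# A sketch of split storage: the record is XORed with a random pad.
# Either share alone is indistinguishable from random noise; both
# together reconstruct the record exactly.
import os

def split(record):
    pad = os.urandom(len(record))                        # share for depository A
    masked = bytes(a ^ b for a, b in zip(record, pad))   # share for depository B
    return pad, masked

def combine(share_a, share_b):
    return bytes(a ^ b for a, b in zip(share_a, share_b))

record = b"acct 1234: $250 on 1999-04-15"
share_a, share_b = split(record)
assert combine(share_a, share_b) == record
```

An auditor holding a subpoena could compel both depositories to produce their shares, while an insider at either depository alone learns nothing, which is exactly the shift in trust discussed above.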

Note that where an information requirement fits within these categories is sometimes a matter of interpretation. For example, filing taxes is aggregate periodic reporting in that sources of income are aggregated over the year. It is not aggregate, but still periodic, reporting in that each individual or couple provides their own report. With this caveat in mind, consider an example for each of the four reporting categories.

In the next paragraphs I go from the general case to the specific. I discuss particular reporting requirements which are likely to apply to Internet transactions.

The 1988 Money Laundering Act extended the provisions of the Bank Secrecy Act and empowered the Treasury to require that all suspicious transactions be recorded. The Treasury interpreted this to institute an immediate reporting requirement, requiring depository institutions to report all cash transactions above $10,000 and all purchases of financial instruments (such as traveler's checks) over $3,000. All transactions greater than $9,999.99 must be reported by all merchants, using the appropriate forms, to the Treasury. The Money Laundering Act of 1994 expanded this requirement to include all money transmitters, such as Western Union, American Express, and currency exchange houses. Given the extension in 1994, it appears the Money Laundering Act will also apply to transaction processors in electronic commerce systems.

The most common periodic data reporting requirement is the annual individual federal tax filing on April 15. Wages, tips, and other forms of income must be reported to the federal, state and local governments as necessary for tax purposes. The details of expenditures can be reported according to taxpayer preference. The increased record keeping possible in electronic currency systems allows for greater detail in records of buyers and sellers. Compare auditing the income of a home business conducted over Ebay and a home business conducted over paper advertisements. That such record-keeping would be effective in preventing tax fraud is suggested by the 800,000 "dependents" who disappeared from tax returns as soon as their Social Security numbers were required (Davis, 1995).

The Community Reinvestment Act requires periodic aggregate data reporting. It requires financial institutions to make credit and depository services available to all neighborhoods in their service area on an equitable basis. Typically this means loan application aggregates sorted by ethnicity of the borrower, neighborhood, or loan amount must be reported. Specific data requirements vary over time, among states, and even among institutions.
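Periodic aggregate reporting of this kind can be computed so that only the aggregates, and no individual applications, leave the institution. The sketch below groups loan applications by neighborhood; the sample records are invented.

```python
# A sketch of periodic aggregate reporting: reduce individual loan
# applications to (approved, total) counts per neighborhood before
# any report is released. The sample records are hypothetical.

applications = [
    {"neighborhood": "Eastside", "approved": True},
    {"neighborhood": "Eastside", "approved": False},
    {"neighborhood": "Westside", "approved": True},
]

def aggregate_report(records):
    report = {}
    for r in records:
        key = r["neighborhood"]
        approved, total = report.get(key, (0, 0))
        report[key] = (approved + (1 if r["approved"] else 0), total + 1)
    return report  # only aggregates; no individual applicants

print(aggregate_report(applications))
# {'Eastside': (1, 2), 'Westside': (1, 1)}
```

A regulator receiving only such aggregates can still detect a neighborhood with an anomalous rejection rate, which is the purpose of the requirement, without holding records on any individual borrower.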

The Bank Secrecy Act, despite its name, actually limits consumer privacy by requiring detailed record keeping. It was passed to ensure law enforcement access, under subpoena, to detailed records of personal financial transactions. It requires that financial institutions maintain records of all transactions over $100 for at least five years. Note that the act requires not only that the records be kept but also that the record consist of an image of the bank's record of the transaction, such as a copy of the check. Although the act applies only to cash and to cash-like instruments, and not to wire transfers at this time, it is reasonable to include it in this analysis, given the number of electronic systems that use cash and checks as the basis for their model, as illustrated in later chapters. (Of course, recording an image of an electronic check is nonsense.)

Many of these reporting requirements are based in part on the Know Your Customer regulations, which prohibit anonymous or pseudonymous bank accounts.

Motivation of Regulatory Information Requirements

In order to determine if a technological alternative, such as escrowed data or pseudonymous reports, can be optimized to fulfill the requirements for reporting data, the motivation for the reporting requirements must be considered.

It is neither reasonable nor necessary to go through every reporting requirement to ensure that certain alternatives in the electronic realm can accomplish the same objectives. It is sufficient to show, for each possible combination of technique and motivation, that changes can be made in certain reporting requirements without decreasing the effectiveness of the data in fulfilling the requirement; this illustrates that regulatory flexibility can enable satisfactory auditing to take place without surveillance.

Traditionally four basic reasons have been identified for the existence of reporting requirements (Heggestad, 1981):

law enforcement

tax collection

optimization of social welfare

risk management of the financial system

Law enforcement in this case includes the Federal Bureau of Investigation, the Internal Revenue Service, the Drug Enforcement Administration, and the Customs Service. Data obtained by law enforcement through periodic reporting are made available through FinCEN to other agencies, including Interpol, the Postal Inspection Service, and the Immigration Service (Office of Technology Assessment, 1995).

There is some correlation between the reason a reporting requirement exists and the technique selected for compliance in that law enforcement cannot require periodic individual or aggregate reporting. This is partially a result of the Fourth Amendment: absent a warrant, the government cannot require that individuals report private activities periodically to law enforcement. Data for use in criminal investigations must be obtained with a warrant. Therefore the following examples of motivation and technique do not include examples for law enforcement in the periodic reporting categories.

Reporting Examples

In this section I construct a conceptual matrix by providing examples, for each of the sixteen possible combinations of motivations and techniques previously delineated, of how reporting requirements could be modified for the electronic realm without compromising the purpose for which the data are required. Only statutory reporting examples are considered, not the regulations written to implement these laws. Regulations are more fluid than statutes, and thus less of a long-term concern. Regulation E provides an excellent example of this fluidity. Regulation E provides specific requirements for the implementation of the Electronic Funds Transfer Act. It can be and has been revised not infrequently, most recently to better suit the capabilities of stored-value cards (Federal Reserve Bank of New York, 1996). Since regulations are bound by the statutes to which they apply, easier to alter to meet current technological realities than the statutes themselves, and generally subject to change, the focus here is on the less tractable and more stable statutory requirements.

Immediate Reporting

Consider an immediate reporting requirement for each of these categories: tax collection, optimization of social welfare, and risk management of the financial system. In the case of paper-based information systems, immediate reporting implies reporting within hours or days of when the transaction takes place, immediacy being an increasingly shorter window of response. (Immediate reporting for the purposes of law enforcement, as created in the Money Laundering Act, was discussed previously.)

One immediate reporting requirement designed at least partially for the purpose of tax collection is the requirement that exchanges of title to a house be immediately reported. This allows property tax, liens, and other appropriate fines and taxes to be levied. One legally cannot own a house until the title has changed hands in the public record; thus, in this case, the reporting finalizes the transaction. In an Internet transaction the reporting would be the end of the entire transaction, that is, the global commit. The same message which confirms the commitment of all parties to complete the transaction (the global commit) can include the necessary reporting when electronic reporting is supported.

An example of an immediate reporting requirement for the optimization of social welfare is the requirement that any officer of a company selling or buying stock in that company must report the transaction. This regulation, combined with enforcement of insider trading laws, prevents officers from taking advantage of information about their own companies before it is released, thereby preventing manipulation of the stock market.

Immediate reporting requirements in risk management of financial systems do not exist per se, because the parameters of acceptable risk are set forth in general in bank regulation and actions defined by these parameters as risk-seeking require approval in advance. However, a close approximation to an immediate reporting requirement exists in the requirement under the Truth in Lending Act that any changes in interest rates paid by customers be public and that banks not offer discriminatory rates. Not only does this limit (for social good) discriminatory pricing, it also limits banks in their ability to use discriminatory pricing to compete for the same few high-return, frequently high-risk customers. Most banks fulfill this requirement by mailing interest rate information to customers in their monthly statements.

Periodic Reporting

Now consider periodic reporting requirements for the purpose of tax collection, optimization of social welfare, and risk management of the financial system. (A periodic requirement for tax enforcement purposes, i.e. filing for tax payments or refunds, was previously discussed.) An example of a periodic reporting requirement to assure the stability of the financial system (i.e. risk management) is the requirement that all stock trades be reported to the Securities and Exchange Commission (SEC). The SEC cannot prevent actions such as insider trading and speculation that could weaken the market. However, full data on trades are necessary to detect insider trading, so that the criminal penalties can serve as a meaningful deterrent (Ziegler, Brodsky, and Sanchez, 1993; Zuckerman, 1994). Furthermore, SEC regulations and detection of insider trading require not only the identities of those trading stock but also some attributes --- for example, employer and position in the organization --- to ensure that senior executives are not taking advantage of privileged information.

An example of periodic reporting requirements for optimizing social welfare is contained in the Home Mortgage Act (HMA), which requires that depository institutions make available to the federal government data on the specific mortgage requests they accept and reject. This provides the government with an opportunity to identify, and therefore rectify, discriminatory practices. The HMA requires that the lending institution keep records of the applicant's age, gender, and race along with his or her application. It does not then prevent the institution from keeping these records stored and linked to the applicant for future interactions, although the Equal Credit Opportunity Act forbids considering any of these factors in decisions about awarding credit. This illustrates a potential opportunity for advanced techniques in information technology to remove this apparent conflict, perhaps by requiring encrypted storage of or highly limited access to HMA records.

Aside from tax collection, there is no periodic aggregate reporting for the purposes of law enforcement as noted earlier. Reporting aggregate financial information to law enforcement would mean that groups that law enforcement has no reason to suspect and who have acted in no suspicious manner must periodically report to the police, which would violate the Fourth Amendment.

Periodic Aggregate Reporting

An example of periodic aggregate reporting for social equity, as created in the Community Reinvestment Act, was previously discussed.

Periodic aggregate reporting would appear to offer the least threat of data intrusion to the subjects of the data compilation. The periodic aggregate reporting requirements that do create a threat to privacy do so by virtue of the data storage required for supporting documentation; that is, the detailed data requirements necessary to make across-the-board reports can be intrusive. An example is the requirement, included in the HMA, that ethnicity and gender be reported for each issuance of credit. To ensure that aggregate reporting does not become intrusive would appear only to require a limitation on secondary use, including resale and data analysis for unrelated internal use. Thus without further consideration I move on to data storage requirements.

Data Storage

Consider an example of record keeping requirements for each motivation: law enforcement, tax collection, optimization of social welfare, and risk management of the financial system. (A requirement for maintaining data on certain transactions for five years, part of the Bank Secrecy Act, is an example of data storage requirements for law enforcement purposes, as was discussed previously.)

As an example of record-keeping requirements for the purposes of tax collection, storage of all data relevant for purposes of taxation is required for any item or deduction that appears on a tax return until the period of limitation is over. For reported income or deductions, this is three years after filing or two years after paying, whichever is later. The period of limitation for unreported income is six years. If no return is filed, the Internal Revenue Service can demand documentation at any time. Thus the granularity of data storage requirements is controlled by the consumer's willingness to itemize.

The Truth in Lending Act was designed to prevent discriminatory and unsafe lending practices by depository institutions. It requires that issuers of credit include in their reports to consumers, for any purchase for which the consumer is charged, the name of the merchant and the date and location of the purchase. Furthermore, if there is some relationship between merchant and creditor, such as a common parent or shared ownership of a subsidiary, then the item purchased must be reported as well.

To limit the exposure of the banking system, banks are required to keep track of all outstanding loans. Banks are not allowed to delete data concerning loans that fail. Interactions with directors and companies that have seats on a bank's board can be traced with these data. Also, individual votes by directors are required to be recorded in order to enable regulators to detect, and hopefully avoid, conflicts of interest. This helps investigators act to prevent a bank failure, or in the worst case, trace the cause after a failure37.

These examples illustrate that a myriad of disclosure and reporting requirements serve a wide variety of purposes using the same set of technical requirements. Keeping this set of purposes and techniques in mind, consider the options in an electronic system.

Reconsidering Requirements

The previous set of alternative electronic approaches to obtaining necessary information suggests interesting possibilities. Now we will revisit the set of examples and consider ways that the adoption of technical solutions can be encouraged. Note that as electronic currency evolves not only are the possible methods of data reporting and compilation different, the market's capabilities and desires may differ as well. Governance requires data reporting and compilation under the following circumstances:

The government needs the data to perform legitimate functions.

The market has been unable to provide the required data adequately without regulation.

There is no less intrusive reporting requirement under which the market could provide the data.

The need for data is sufficient to justify the costs.

The costs to meet the need referred to in the last bullet include the risks of decreased personal privacy, the monetary cost to those required to report, and the administrative costs of the government in collecting the data and regulating and enforcing its collection. With this in mind, consider those cases where immediate reporting has been deemed necessary.

Immediate Reporting

Of the types of reporting required, immediate reporting may be the least changed by electronic capabilities, with data transmission simply replacing the US Postal Service. Yet the quantity of data reported may be reduced, since electronically reported data can be analyzed at the time of the report. More efficient use of information may require less information to obtain the same result. Reporting data via electronic transmission offers certain options not available in a paper-based reporting system, including masking the identity of participants except as necessary for an investigation and using specialized software to analyze the data for suspicious activities. Immediate data reported on individuals are not usually interesting to the market or to government. Only when a particular case out of multiple data records is identified as worthy of investigation does the information become valuable. This suggests that constant pseudonyms could be provided for reporting on the activities of individuals -- thus the activities of those who are not acting in a suspicious manner are not recorded in an easily searched manner, while those whose actions suggest prohibited activity can be identified after the actions of their pseudonyms have been identified as suspicious.
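One way to realize such constant pseudonyms is a keyed hash over the identity: the same person always maps to the same tag, so patterns remain detectable, while the tag alone reveals nothing without the key. The following is a minimal sketch only, assuming a hypothetical agency-held secret key; HMAC-SHA256 is one suitable keyed hash, and all names and amounts are invented.

```python
import hashlib
import hmac

# Hypothetical secret held by the agency that can reverse the mapping.
SECRET_KEY = b"agency-escrowed-key"

def pseudonym(identity: str) -> str:
    """Derive a constant pseudonym from an identity with a keyed hash."""
    return hmac.new(SECRET_KEY, identity.encode(), hashlib.sha256).hexdigest()[:16]

# Invented reports: (identity, transaction amount).
reports = [("alice@example.com", 9500),
           ("bob@example.com", 120),
           ("alice@example.com", 9800)]

# Store only (pseudonym, amount); the same person always yields the same tag.
stored = [(pseudonym(who), amount) for who, amount in reports]

assert stored[0][0] == stored[2][0]   # same person, same pseudonym
assert stored[0][0] != stored[1][0]   # different people remain distinct
```

Because the pseudonym is stable, software scanning the stored reports can flag a pattern of suspicious activity under one tag, and only then would the keyholder be asked to reveal the corresponding identity.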

Immediate reporting requires data compilation. Reports are not sent in and discarded; the data in the reports are kept. The data become valuable compilations to be used for direct marketing, business siting, and a myriad of other secondary purposes. Thus the major issue of immediate reporting is the major issue of all four categories of required reporting: secondary use of data. In the next paragraphs I discuss alternatives that would allow use of the aggregate data compilations that result from immediate reporting requirements while maintaining personal privacy.

Consider the promise of pseudonyms in the cases discussed previously. Immediate reporting of title exchange is required when a home is purchased, allowing the local government to levy taxes and identify the correct individuals to pursue when violations of building codes are discovered. What information needs to be accessible on-line to provide those with legitimate needs with easy access while not providing easy access to the price of your home and the sum of your holdings? This is a case where limits to disclosure or conditional pseudonymity should apply. If a home owner pays the bills and maintains his building up to code, then there is no reason for anyone to easily access information about the owners of the property for purely business purposes. For example, to decide how to value my property it is not necessary to know my identity. Secondary uses of aggregate information would still be allowed, with privacy protected by pseudonyms. In cases of community need for individual information, for example when a building is a neighborhood hazard, concerned individuals (e.g. the neighbors or community groups) could request identity information. For example, listings of buildings that owners do not maintain at code would allow community groups and others with an interest in contacting the owner of a specific property to identify that person. Similar arguments hold for making available automobile ownership records.

Conversely, the identity of the individual and the data on the property he owns could be stored with separate agencies so that the agencies must cooperate to link an identity with a purchase. This would allow identities to be made available under special circumstances while preventing the widespread dissemination of information for further analysis, which, as shown in the examples in chapter 6, is not always in the interest of the government or the governed.
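The split between agencies can be sketched as a simple secret-sharing scheme: the identity is divided into two random-looking shares, one held by each agency, and only their cooperation recovers it. This is a toy illustration under invented names, not any agency's actual practice.

```python
import secrets

def split(identity: bytes):
    """Split an identity into two shares; neither share alone reveals it."""
    share_a = secrets.token_bytes(len(identity))            # uniform random pad
    share_b = bytes(x ^ y for x, y in zip(identity, share_a))
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    """Recombine the shares held by the two cooperating agencies."""
    return bytes(x ^ y for x, y in zip(share_a, share_b))

owner = b"Jane Q. Public"                 # invented identity
agency_one, agency_two = split(owner)     # stored at separate agencies

# Only cooperation between the agencies links identity to purchase.
assert combine(agency_one, agency_two) == owner
```

Each share on its own is indistinguishable from random bytes, so a compromise of one agency's database alone reveals nothing about the owners.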

As discussed earlier, the only immediate reporting requirements currently in place for financial transactions are based on a threshold, for example the size of the transaction. The techniques for reporting such transactions can be made less intrusive through electronic transmission since electronically reported data can be immediately analyzed for suspicious activity. Thus, while the same information would be reported, the compilation of stored data resulting from the reports would be more limited and therefore have less potential to violate privacy. Information transmitted that reveals no suspicious activity can then be deleted almost immediately by the receiving agency. Thus, it is possible that while less information is compiled more suspicious activity may actually be identified.

The purchase of stock in a company by one of its directors is a rare case where reporting the identity of the person making a transaction is important, because the position of the person must also be identified in order to establish the transaction's significance. A stock purchase in any of the companies involved by an individual in charge of mergers and acquisitions would be of interest to regulators, as would a sale by counsel of stock in a company she represents when litigation is pending. Here individuals limit their privacy by choosing a position of responsibility that subjects them to a higher level of scrutiny. A similar argument could be made in the case of examination of financial decisions made by high-level officials in government. In summary, there are cases where the identity of the participant is a critical reason for the reporting requirement. In these cases advanced electronic techniques for protecting privacy may not be useful, as the identity information is a core element of the information.

The reporting required in the Truth in Lending Act is an excellent model for data for institutional oversight without privacy violation. Under that act banks are not required to report the identity of their customers to the state to prove that all customers have been notified of changes in rates. The banks need only show that a policy exists for such notification and that the institution has followed the procedures established in the policy.

Periodic Reporting

Consider the cases of periodic reporting previously discussed. The HMA is intended to ensure that individuals are not discriminated against on the basis of race or gender. The Community Reinvestment Act (CRA) is similar in that the CRA is intended to ensure that communities, rather than individuals, are not subject to discrimination in lending policies. The reporting requirements in the CRA and the HMA can be unified using the capabilities of electronic data reporting, with loans approved and denied being identified by ethnic origin of requester, ZIP code, minimal financial data, and gender. Financial institutions would thus be required to avoid the statistical appearance of discrimination. To avoid fraud in reporting, the agencies could use a cut-and-choose technique38 to identify fraud statistically. Increasing the number of records checked would increase the certainty that no fraud has been committed, and would remove the veil of privacy only from those individuals whose records are so chosen. Notice that the selection of records for verification would be made randomly; the point would be to detect fraud on the part of the lending institution, not the individual requesting a loan. Mortgage information is already maintained on-line, and is in fact sorted so that information can be sold to other providers of financial services (Fenner, 1993). Thus, using a cut-and-choose technique is not as costly a proposal as it might initially appear.
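The statistical power of such a random audit can be sketched as follows. This is an illustrative calculation only; the fraud rate, sample size, and records are all invented, and no statute prescribes these numbers. If a fraction f of an institution's records is fraudulent and k records are opened at random, the chance of catching at least one bad record is 1 - (1 - f)^k.

```python
import random

def detection_probability(fraud_fraction: float, audited: int) -> float:
    """Chance that at least one fraudulent record appears among `audited` random picks."""
    return 1 - (1 - fraud_fraction) ** audited

# Opening 50 records when 5% are fraudulent already gives better than 90% detection.
p = detection_probability(0.05, 50)
assert p > 0.9

# A simulated audit: draw a random sample of records to check against
# source documents; only these records lose their veil of privacy.
records = ["ok"] * 95 + ["fraud"] * 5     # invented record population
random.seed(1)                            # fixed seed for a repeatable example
sample = random.sample(records, 50)
```

The institution cannot predict which records will be opened, so systematic misreporting is likely to be caught even though most individual records are never examined.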

Periodic Aggregate Reporting

Periodic aggregate reporting is problematic in that it implies storage of individual consumer data in order to obtain the aggregate data required for reporting. Thus, while periodic aggregate reporting may appear much less of a risk than immediate reporting, in fact the risk in both cases results more from the supporting compilation of data than the reporting itself.

If consumers had smart cards39, periodic reporting without identity information could be possible with anonymous updates of aggregate information (Camp & Tygar, 1994). The creation of smart cards offers one solution for the need to keep specific data to support aggregate reporting; however, some individuals will inevitably lose such cards. This implies that those with data they might wish the bank would forget could simply lose the information. Mandatory back-ups at a trusted facility could mitigate this problem, but selecting widely trusted machines for back-up, and solving disputes about transactions that have not been backed up, is no trivial matter in technical or political terms. In technical terms it may not be simple to correlate the data across the databases in different institutions. In political terms, deciding who is at risk for losing money is rarely easy.
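One form such anonymous aggregate updates might take: the card accumulates category totals locally and periodically submits only the totals, with no identity attached. This is a toy sketch with invented categories, not a real card interface or the scheme of Camp & Tygar (1994).

```python
from collections import Counter

class Card:
    """A consumer-held card that reports only aggregates, never identity."""

    def __init__(self):
        self.tallies = Counter()

    def spend(self, category: str, amount: int):
        self.tallies[category] += amount      # recorded only on the card

    def periodic_report(self) -> dict:
        report = dict(self.tallies)           # aggregate totals, no identity field
        self.tallies.clear()                  # the card forgets after reporting
        return report

card = Card()
card.spend("groceries", 40)
card.spend("fuel", 25)
card.spend("groceries", 10)

assert card.periodic_report() == {"groceries": 50, "fuel": 25}
assert card.periodic_report() == {}           # nothing retained between periods
```

Note that the card clearing its tallies after each report is exactly why a lost card loses data, which is the back-up problem discussed above.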

Data Storage

A fundamental problem with reporting data is that it requires that these data be captured. Once data have been captured, internal disclosure, external disclosure, and the resulting creation of privacy-threatening compilations all present further complications. Thus the techniques to mitigate the risk to privacy for those data compilations which are required to support various types of reporting will also apply here. However, since no reporting is required for data compilations, it is possible that regulatory, as opposed to technical, mechanisms may be more feasible. This is because private institutions need not be concerned about forced judicial release of information under sunshine laws; thus hiding identity information is less critical. However, misuse of the data remains an important issue.

Current limitations on and requirements for disclosure apply only to the government, not to the institutions required by the federal government to store information. Furthermore, financial data have only limited protection under the Fourth Amendment. Thus, only weak limits exist on disclosure of information to government.

Internal disclosure refers to the use of required data compilations within the financial services institution for purposes other than those for which they were collected. When financial regulation requires investment in data compilation, allowing institutions to use the data collected internally can soften the financial blow.

When concentration or monopolistic powers are involved, however, the use of such data can become more problematic. As the type of transactional information collected becomes more detailed, the use of this information in decisions about internal hiring, promotion, and consumer credit provision becomes an increasingly important issue. For example, can a bank look through consumer records to evaluate applicants for positions within the bank? Currently the bank has the right to do so, since this is considered valid internal use under current law.

What of internal disclosure for purposes of prosecution? A hypothetical example that is not too extreme is the possibility that Microsoft could use information obtained through its network services to identify possible violators of electronic copyright. There is currently no prohibition against Microsoft using its own internal data to assist law enforcement in identifying possible thefts of software. Legally, it does not matter that data used in this way were obtained without the consumer's knowledge. An example of this use of Microsoft's data compilations can be found with the Melissa virus. Melissa used MSWord macros to subvert MS mail programs and servers. Microsoft used registration information which linked the identity of the user to the copy of MSWord used to create the macro. This was secondary use of data in order to cooperate with government prosecution and investigation. (A difficult and expensive alternative to cooperating with detecting misuse of badly designed features would be for Microsoft to build secure, reliable software.)

Consider now external disclosure. Any data collected that can be used for internal decisions may be sold under current law to other organizations for similar decision-making purposes. Data sold include house purchase records, medical records, records of grocery store purchases, and records of on-line buying habits. There are currently no constraints on the commercial trade of such data. In fact, for most data the government requires, once the government has obtained the data it must release it to any organization that requests it, regardless of that organization's motivation. Two examples of this were included in the previous discussion of privacy with respect to billing information for women's health services and worker's compensation.

In light of these motivations and the options that electronic information systems create, existing information storage requirements seem unnecessarily intrusive. The Truth in Lending Act storage requirement for documentation of each consumer's transactions could be changed so that the customer need only have a valid signed agreement for every transaction, and upon presenting that receipt can obtain a full refund from either the merchant or the financial intermediary. Current electronic commerce protocols can be designed so that the credit-grantor need not store records of items bought in order to provide receipts -- encrypted signed receipts or transfer of purchase orders provide nonrepudiation. (Recall the discussion of public key cryptography and the capacity of digital signature techniques to provide nonrepudiation.) The practical requirement that information about purchases be recorded in a format that can easily be searched, correlated, and reproduced exacerbates the threat of data surveillance. Reporting requirements, the implicit data storage requirements, and explicit data storage requirements should be evaluated in part on the basis of this risk to privacy.
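The receipt check can be sketched as follows, with one loud caveat: Python's standard library has no asymmetric signatures, so a shared-key MAC stands in for the merchant's signature here. A real nonrepudiation scheme requires a public-key digital signature that only the merchant could have produced but anyone can verify. All names, keys, and values below are invented.

```python
import hashlib
import hmac
import json

# Stand-in for the merchant's signing capability (see caveat above).
MERCHANT_KEY = b"merchant-signing-key"

def issue_receipt(order: dict) -> dict:
    """Merchant issues a tamper-evident receipt; the creditor stores nothing."""
    body = json.dumps(order, sort_keys=True).encode()
    tag = hmac.new(MERCHANT_KEY, body, hashlib.sha256).hexdigest()
    return {"order": order, "tag": tag}

def verify_receipt(receipt: dict) -> bool:
    """Check that the receipt matches what was originally authorized."""
    body = json.dumps(receipt["order"], sort_keys=True).encode()
    expected = hmac.new(MERCHANT_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["tag"])

# The customer keeps the receipt; presenting it later suffices for a refund.
receipt = issue_receipt({"merchant": "Acme", "amount": 42.0, "date": "1999-06-01"})
assert verify_receipt(receipt)

# Altering any field invalidates the receipt, so neither side can repudiate it.
tampered = {"order": dict(receipt["order"], amount=420.0), "tag": receipt["tag"]}
assert not verify_receipt(tampered)
```

The design point is that the customer, not the credit-grantor, holds the record: the institution need only be able to verify receipts, not archive every purchase.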

Before requirements for reporting of transactional data are created, the threat of possible surveillance should be balanced against the wrong being addressed by the requirement being imposed. This suggests that any consumer whose data are compiled in fulfillment of a government requirement should have her privacy protected by law.

Cryptography Policy

Although information technology has increased threats to and breaches of privacy by increasing data availability, it has also increased the power of individuals to maintain their privacy --- particularly through cryptography. However, the federal prohibition of the export of cryptographic technology (discussed in the following section) has effectively prevented the widespread implementation of strong cryptography40 in operating systems and communications packages intended for the global market (Froomkin, 1996). This prohibition has affected the design of electronic commerce systems intended for export (including the Secure Electronic Transactions and the Secure Sockets Layer systems described in detail later in this text).

Cryptographic technology has historically been the purview of governments. In the United States the now defunct Coordinating Committee for Multilateral Export Controls explicitly classified cryptographic technologies as exclusively military technologies shortly after the committee's creation in 1949. Control of exports of cryptographic technology falls under the Export Administration Act, the Arms Control Act, and the International Traffic in Arms Regulations (ITAR). The view of cryptography as war technology is expressed in the Export Administration Act, which prohibits "the export of goods and technology which would make a significant contribution to the military potential of any other country." Thus cryptography has been controlled as Auxiliary Military Equipment under ITAR. Note that any cryptographic technology that can be used by civilians can be used by the military. There is, as explained earlier, no such thing as military grade cryptography.

Under ITAR, cryptographic technology can be exported if restricted to the following purposes: copy protection, authentication, financial information, integrity without confidentiality, compression, or prevention of theft of information services (such as pay television). Thus without a specific license to do so, the export of any device that provides strong cryptography for protecting privacy in an electronic system is prohibited because it falls outside of the categories listed above (and such licenses are rarely forthcoming).

Note that the key lengths recommended in the previous chapter would be allowed specifically for the encryption of payment information in systems intended for international use. However, a general-use system, such as the Secure Sockets Layer, cannot incorporate strong encryption technology (for export) because that encryption technology could also be used to provide general communications privacy.

Current national cryptographic standards are not adequate for protection of privacy and security in Internet commerce (National Research Council, 1996; Schneier, 1995). In 1988, the National Institute of Standards and Technology (NIST) determined that the Federal Information Processing Standards (FIPS) on encryption needed to be replaced, and that any replacement should be "public, unclassified, implementable in both hardware and software, usable by Federal agencies and US based multinationals." The criteria developed by NIST with public input reflected the needs of Internet commerce: portability across operating systems, exportability across national borders, the need to provide privacy as well as financial security, and the capacity to optimize for specialized applications.

The standards NIST actually adopted fell short of accomplishing these goals, as have the subsequent proposals. The first proposal, which included a requirement for a flawed escrow system as defined in the Escrowed Encryption Standard (National Institute of Standards and Technology, 1994), was classified (not public) and could therefore be implemented only in hardware. Because of strong objections41, the requirement that the Federal government maintain databases for key escrow has since been removed. The algorithm has been declassified. This Escrowed Encryption Standard has been followed by multiple additional proposals for escrowed systems, and governmental implementation of escrow systems plods determinedly along.

Cryptography policy was subjected to a complete review by the National Research Council. The resulting report, Cryptography's Role in Securing the Information Society (National Research Council, 1996), recommended a new approach to cryptography. The report's recommendations most relevant to electronic commerce are:

National cryptography policy should be developed by the executive and legislative branches on the basis of open public discussion and should be governed by the rule of law.

National cryptography policy affecting the development of commercial cryptography should be more closely aligned with market forces.

Export controls on cryptography should be progressively relaxed but not eliminated.

The US Government should take steps to assist law enforcement and national security to adjust to new technical realities of the information age.

Cryptography policy has managed to appear to be constantly in flux for several years, without many significant changes actually occurring. Even with the possibility that this book may be in print for a number of years, it is nevertheless safe to write with the assumption that it will be true at any point in the near future when a reader might pick up this book, "Currently the Senate is considering a bill to lift controls on export of cryptography, and there is a case under the First Amendment. The current Administration has offered a new key escrow proposal." In 1992 it was widely assumed that any White House containing now-Vice President Al Gore would be an advocate for freedom to export cryptography, yet export controls remain in place today. Thus although it may appear from the foregoing discussion that this problem will be addressed shortly, there is historical basis for believing that this will not be the case.

Cryptography policy is a case of conflict, with data availability on one side and privacy and security on the other. On the one hand, cryptography is currently restricted for the purposes of law enforcement and national security. On the other hand, removing constraints on cryptography would serve commercial, security, and privacy interests. Controls on cryptography forcefully illustrate that there are reasons other than reliability for providing identity information.

Disclosure Summary

In this chapter I have illustrated that current policies, including cryptography policy, immediate reporting requirements, and data storage requirements, reflect an awareness of the utility of electronic information to the exclusion of an understanding of the threats posed by emerging technology. The juxtaposition of limited availability of true anonymity and a legal regime that arguably promotes privacy violations hinders the development of secure, reliable, and private commerce systems.

One clear conclusion from this examination is that laws written for rapidly changing areas, such as funds transfer and consumer lending, should be written in the least technologically restrictive language possible, so regulators may allow or require innovative solutions without waiting for an act of Congress.

The response by the legal and regulatory communities to privacy-threatening innovations, and to new technologies in general, has been the development of technology-specific rulings after these technologies have been dispersed through the marketplace. This approach is becoming progressively less successful as the rate of technical change increases. It may not be possible to respond to information technologies after they have reached some critical mass; privacy protections may need to be included in the hardware (Morgan, 1992). Arguably, in this information age, there needs to exist a system of laws recognizing that the right to privacy is technology independent.

In the conflict between privacy rights and data reporting, many problems associated with paper currency remain with electronic currency: law enforcement concerns, tax collection, auditing and fraud detection, prevention of discrimination in the provision of financial services, financial privacy, the assurance of funds for socially desirable goals, and the balance between risk-reducing regulation and productive but risk-seeking free market behavior just to name a few. There is no reason to abandon the goal of solving these problems as currency becomes increasingly electronic. But as the nature of currency and commerce change, previously chosen methods of advancing regulatory interests are increasingly ill-suited to the progressively more electronic environment in which they operate. Just as different regulatory techniques are appropriate for different forms of paper currency, different regulatory techniques are appropriate for different electronic applications. This presents a particularly difficult problem in the case of transactional data. From the perspective of customers, information about their purchases is clearly personal information. For merchants and electronic commerce providers, it is critical business information about consumer preference, as well as a product in its own right.

The current debate over customer information in the increasingly competitive voice telephony market may provide a glimpse of the conflicts to come. Data about whom a customer calls, and when, are Customer Proprietary Information (CPI). The collection and use of this information has tremendous privacy implications. It also has increasing market value, especially as local telephone markets become competitive. Knowledge of the calling patterns of a region, a neighborhood, or an individual is a powerful weapon in the competitive marketplace because such information can inform infrastructure development, pricing, and marketing decisions. In order to provide a level playing field for the incumbent local telephone service provider (the regional Bell operating company) and any new entrants into the local telephone business, all possible entrants into the local communications market should have the same information. However, this implies that all information about the location and duration of every phone call a consumer makes should be available to anyone who can claim a possible competitive interest.

CPI is both important commercial information and private personal information. Customers expect privacy concerning the recipients and the contents of their phone calls, and there is a tradition of providing such privacy in the Bell System. Yet there is also a public interest in fair competition that would compel the release of such information. This type of conflict will become increasingly common as information technology proliferates. The likely solution to the CPI debate -- widespread information availability -- is not entirely promising as a model for resolving the conflict. The intelligence in the hardware at the endpoints of Internet commerce, however, offers a broader range of possibilities for resolution than are available in telephony.

The discussions of privacy and security are now complete. The next chapter addresses transactional reliability. Then specific systems are examined, before the book concludes with some basic risk-avoiding practices.


9: Transactions

The relationship between reliability and anonymity is this: with high levels of reliability, anonymity is not a threat to accountability. With no reliability, anonymity is a license to steal for one party or another, as an anonymous party can neither demand compensation nor be found and subjected to the demands of others. Identity is often part of the information about transaction participants that is exchanged during a transaction, so that disputes can be resolved in the case that a transaction fails.

Transactions with transactional reliability have atomicity. Transactional reliability originated in the study of fault tolerance. A system's fault tolerance is its ability to provide reliable service despite some classes of failures. For example, one would not want a computer to fail because a single transistor fails. Therefore computer hardware design includes fault tolerance to ensure functionality in the event of such small failures. Similarly, a transaction should not fail because a single message is lost; for example, there should be no confusion between the parties as to the status of the transaction. A transaction should be reliable despite certain network failures. Some systems, e.g. First Virtual, have limited transactional reliability that nevertheless provides sufficient information for governance (as defined by the information requirements enumerated in the previous chapter). Additionally, some systems with transactional reliability would nevertheless not meet consumer protection or law enforcement requirements for transactional information. In this chapter I explain how this is possible, how to evaluate the technical reliability of a system, and how to recognize when dispute resolution is based exclusively on policy frameworks and when such resolution is supported by reliable design.

Reliability

Reliability is the ability to recover from failures to a consistent, isolated, and durable state. Reliable electronic commerce protocols provide certainty in the face of network failures, memory losses, and attackers: the more reliable a transaction, the less the uncertainty that comes with extensions of trust. An unreliable electronic commerce system cannot distinguish a communications failure from an attack on the system. If a failure can be used effectively to commit theft, then such attacks will certainly occur.

Reliability and security are interdependent, but reliability is not security. Reliable protocols on servers that are not secure will provide reliable services to attackers as well as authentic users. As noted above, attackers can exploit a lack of reliability in an electronic commerce system to commit theft.

Reliability in electronic commerce requires security to provide authentication, integrity and irrefutability. Reliable electronic commerce systems provide fail-proof transactions. This fundamental requirement implies other technical requirements. It is widely agreed that providing fail-proof transactions in an electronic currency system requires divisibility, scalability in number of users, conservation of money, exchangeability or interoperability, and availability (Cross Industry Working Group, 1995; Okamoto and Ohta, 1991; Medvinsky and Neuman, 1993; Low, Maxemchuk and Paul, 1993; Brands, 1993). The properties that characterize a fail-proof transaction are described in this section.

ACID Properties

The acronym ACID refers to transactions that are atomic, consistent, isolated and durable as defined in Chapter 2, and further explained below. (Recall the concepts were introduced in the discussion of cash transactions.) ACID transactions are robust, meaning they can prevail in the face of network outages, replay attacks, failures of local hardware and errors of human users (Gray and Reuter, 1993).

Atomic with respect to transactions has a Newtonian sense. That is, atomic transactions cannot be split into discrete parts: the customer's payment, the merchant's receipt of payment, and the merchant's delivery of receipt or goods. An atomic transaction either fails completely or succeeds completely. An atomic transaction conserves funds - money is neither created nor destroyed during the transaction. For example, consider what happens when a customer transfers funds from a savings account to a checking account. Either the checking account is credited and the savings account is debited, or neither account balance changes. In an atomic transaction, there is no case where money either disappears from both accounts or is credited to both accounts.
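
The all-or-nothing behavior of the savings-to-checking example can be sketched in a few lines of code. This is a toy illustration, not any deployed payment system; the account class, balances, and amounts are invented for the example:

```python
# A minimal sketch of an atomic transfer: either both the debit and the
# credit occur, or neither does, so money is conserved.

class Account:
    def __init__(self, balance):
        self.balance = balance

def atomic_transfer(source, destination, amount):
    """Move `amount` from source to destination, all or nothing."""
    if amount <= 0 or source.balance < amount:
        return False                   # fail completely: nothing changes
    source.balance -= amount           # debit ...
    destination.balance += amount      # ... and credit, as one unit
    return True                        # succeed completely

savings = Account(100)
checking = Account(20)

assert atomic_transfer(savings, checking, 30) is True
assert (savings.balance, checking.balance) == (70, 50)

# An overdraft attempt fails completely; no money is created or destroyed.
assert atomic_transfer(savings, checking, 500) is False
assert savings.balance + checking.balance == 120
```

The final assertion is the conservation property: the sum of the two balances is the same before and after any sequence of transfers, successful or failed.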

A transaction is said to be consistent if all parties relevant to the transaction agree on critical facts of the exchange. If a customer makes a $1 purchase from a merchant then the transaction is consistent if the merchant, the customer and the bank (if it is involved) all agree that the customer has $1 less and the merchant has $1 more.

Transactions that do not interfere with each other are termed isolated. The result of a set of overlapping transactions must be equivalent to some sequence of those transactions executed in a non-concurrent, serial order. Transactions may be overlapping because they are made at the same time and place, or near the same time by the same parties. If a customer makes two $1 transactions, then the two payments should not be confused by the customer, merchant, or bank. The customer should not end up being charged twice for one item, nor should one of the payments be counted twice to give the customer $2 of goods for a single (miscounted) $1 payment.
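
The serial-order requirement can be illustrated with two overlapping $1 payments. The sketch below (an invented example, not a real commerce protocol) uses a lock to force the two payments into some serial order, so neither is lost or double-counted:

```python
# A sketch of isolation: a lock serializes two overlapping transactions,
# making their combined result equivalent to running them one after another.
import threading

balance = 0
lock = threading.Lock()

def pay_one_dollar():
    global balance
    with lock:                 # no other transaction may interleave here
        current = balance      # read the ledger ...
        balance = current + 1  # ... then update it, as one isolated step

threads = [threading.Thread(target=pay_one_dollar) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert balance == 2  # both $1 payments counted exactly once
```

Without the lock, the two reads could both see the same starting balance and one payment would overwrite the other, the electronic equivalent of the miscounted $1 payment described above.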

When a transaction can recover to its last consistent state, it is durable. A transaction recovers when uncertainty is removed and there is a clear state, i.e. everyone knows where they are, what message they are expecting, and the status of the transaction. For example, if a customer physically drops a dollar when making a purchase, that dollar does not disappear. When the customer retrieves the dollar, it is restored to its last consistent state. Similarly, money available in a computer before it crashes should not disappear during the crash but should still be available when the machine reboots.
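
One common way to obtain durability is to record each completed step on stable storage before acknowledging it, so the ledger can be rebuilt after a crash. The sketch below is an invented, minimal log format, not any particular system's design:

```python
# A sketch of durability via a simple append-only log: every completed
# transaction is forced to disk, so after a "crash" the balance can be
# recovered to its last consistent state.
import os
import tempfile

log_path = os.path.join(tempfile.mkdtemp(), "ledger.log")

def record(entry):
    """Append one transaction and force it onto stable storage."""
    with open(log_path, "a") as log:
        log.write(entry + "\n")
        log.flush()
        os.fsync(log.fileno())

def recover():
    """Rebuild the balance from the log, e.g. after a reboot."""
    balance = 0
    with open(log_path) as log:
        for line in log:
            balance += int(line)
    return balance

record("+100")   # deposit
record("-30")    # payment
# ... the machine crashes here; all in-memory state is lost ...
assert recover() == 70   # the money is still there after reboot
```

The point of the `fsync` call is that an entry counts as durable only once it has survived to disk; anything acknowledged before the crash is still recoverable afterward.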

Atomicity, consistency, durability, and isolation in a transaction make nonrepudiation possible in electronic commerce. Suppose, for example, that a customer wants to make a purchase from the local software store. The customer must pay, or promise to pay, the purchase price for the item she wishes to purchase. The merchant either gets payment (cash) or proof of intent to pay (a standard purchase order or check). The customer gets a receipt from the merchant indicating that she has paid and expects certain merchandise to be delivered. When it is delivered, the customer signs a receipt for the merchant indicating that delivery has occurred. Each action is linked with some verification of the action so both parties have some proof in case the other party attempts fraud or fails to perform. Linking an action with the proof of the action provides nonrepudiation for that specific action. When all the steps of a transaction are bound together (atomic), consistent, durable, and isolated from other transactions, the nonrepudiation of the steps creates nonrepudiation in the transaction as a whole.
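
The linking of each action to proof of that action can be sketched with keyed signatures. The sketch below uses HMAC for brevity; a real system would use public-key signatures so that a third party (say, an arbiter) could verify each step, and the keys and messages here are invented for illustration:

```python
# A sketch of linking actions to proofs: each step of the purchase is
# accompanied by a signature over the step, so neither party can later
# deny what they did. (HMAC stands in for true digital signatures.)
import hashlib
import hmac

def sign(key, message):
    return hmac.new(key, message.encode(), hashlib.sha256).hexdigest()

def verify(key, message, tag):
    return hmac.compare_digest(sign(key, message), tag)

customer_key = b"customer-secret"
merchant_key = b"merchant-secret"

payment = "pay $25 for item #7"
payment_proof = sign(customer_key, payment)    # customer cannot deny paying

receipt = "received $25; will deliver item #7"
receipt_proof = sign(merchant_key, receipt)    # merchant cannot deny the promise

assert verify(customer_key, payment, payment_proof)
assert verify(merchant_key, receipt, receipt_proof)
# A tampered claim fails verification, so attempted fraud is detectable.
assert not verify(customer_key, "pay $1 for item #7", payment_proof)
```

Each proof binds one step; it is the ACID binding of all the steps into a single transaction that turns these per-step proofs into nonrepudiation of the transaction as a whole.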

Degrees of Atomicity

Electronic commerce systems have widely varying scopes, some covering only payment and others addressing everything from negotiation to delivery. Different electronic commerce systems offer different degrees of atomicity to address the problems of remote purchases: money atomicity, goods atomicity and certified delivery.

Of course, electronic transactions may have no atomicity. The absence of atomicity requires mutual trust among participants. The physical equivalent is sending cash or goods in the mail to a Post Office box. Sending cash to the Post Office box is a bad idea because the recipient can claim never to have received the cash. The recipient then provides no receipt and delivers no goods. Among the electronic currency systems considered here, Digicash has no atomicity. This means that the merchant can receive payment and claim never to have had it (Yee, 1994). It can be easy for a customer or merchant to commit fraud in systems with no atomicity.

Electronic transactions may have money atomicity. The physical equivalent of money atomicity is paying cash in person. Money-atomic systems have no mechanism for certifying that merchandise has been delivered. If used for remote purchase with accepted techniques for the delivery of physical goods, money atomicity is quite adequate. But fraud, through a customer's theft of goods or a merchant's refusal to deliver goods after payment, may be a trivial matter when systems with only money atomicity are used for transactions involving goods with on-line delivery, such as software. Among the systems discussed here, the Secure Electronic Transactions system provides money atomicity (Mastercard, 1996).

A higher level of atomicity in an electronic transaction is goods atomicity. Goods atomicity corresponds (in a physical transaction) to using a certifiable payment mechanism with certified delivery. Goods atomicity provides high reliability and reduces the opportunity for merchant fraud. Goods atomicity is the electronic equivalent of Cash on Delivery: the merchant is not paid unless the delivery is made, and the customer does not pay unless there is a delivery.

The highest level of atomicity possible in an electronic commerce system is certified delivery. With certified delivery the customer pays only if the item delivered matches the description of the item promised. Although this may seem a merely semantic distinction, it is powerful nonetheless. Cybercash owns many of the relevant patents on certified delivery.

Atomicity depends on the design and implementation of the electronic commerce system in use, as well as the business assumptions on which the design is based. Atomicity depends on funds-available policies because of rollback. Rollback is a technique in which all steps in a transaction are recorded and then reversed until the most recent consistent state of the transaction is reached. For example, if a customer's attempt to transfer funds from checking to savings fails, funds withdrawn from the customer's checking account are placed back into the customer's checking account, restoring the transaction to the last consistent state.
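
The record-then-reverse technique can be sketched directly. In this invented example, each completed step is logged along with how to undo it; on failure, the log is replayed in reverse to restore the last consistent state:

```python
# A sketch of rollback: each step of a transfer is recorded as it is made,
# and on failure the recorded steps are reversed in order.

def transfer_with_rollback(accounts, source, destination, amount, fail=False):
    undo_log = []
    try:
        accounts[source] -= amount
        undo_log.append((source, +amount))       # how to reverse the debit
        if fail:
            raise RuntimeError("network failure mid-transaction")
        accounts[destination] += amount
        undo_log.append((destination, -amount))  # how to reverse the credit
    except RuntimeError:
        for name, correction in reversed(undo_log):  # roll back
            accounts[name] += correction
        return False
    return True

accounts = {"checking": 100, "savings": 50}

# A mid-transaction failure is rolled back; nothing changes.
assert transfer_with_rollback(accounts, "checking", "savings", 40, fail=True) is False
assert accounts == {"checking": 100, "savings": 50}
```

Note that the undo log must itself be durable (as in the logging sketch earlier in the chapter) for rollback to survive a crash of the machine performing it.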

Rollback becomes more complicated as financial transactions involve transactions in multiple databases. For example, suppose a customer orders, as a frequent flyer award, a free ticket and supplies a credit card number to pay for the courier charge. If the entire fare is mistakenly charged to the card, rollback is obviously possible, but requires coordinating three databases: the airline frequent flyer database, the airline billing database, and the billing database of the credit card company. This is obviously a bit more complex than simply re-depositing unused funds at a single institution. As transactions become still more complex and involve even more databases, rollback becomes progressively more complex.

Superficially, electronic transactions are just exchanges of bits, and if such an exchange can be reversed, then the transaction can be made atomic. Yet for Internet commerce to expand, there must be some interoperability not only between different Internet commerce systems but also between Internet currency and traditional forms of money. Therefore, if the rollback for collecting a fraudulent transaction takes too long, the fraudulent party could abscond with unrecoverable cash prior to rollback, making the later acquisition of bits meaningless. This implies that a transaction that implements atomicity using rollback, and that is theoretically atomic, may not truly be atomic. Two-phase commit addresses this: the records or funds involved are locked until all parties commit to the transaction. (The point at which all parties agree that the transaction has been completed is called the global commit, or global commitment.) This implies that, for rollback to be possible for long periods after a transaction has been entered into, funds should remain locked until commit, so that the money cannot be withdrawn or moved in the interim.
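
The prepare-then-commit structure can be sketched as follows. The participant logic here is invented for illustration; in a real system each participant would be a separate institution's database, and the messages would cross a network:

```python
# A sketch of two-phase commit: in the prepare phase each participant
# locks the funds involved; only if every participant votes yes does the
# coordinator issue the global commit. Otherwise every lock is released.

class Participant:
    def __init__(self, balance, required):
        self.balance = balance
        self.required = required
        self.locked = 0

    def prepare(self):
        """Phase 1: lock the funds so they cannot move in the interim."""
        if self.balance >= self.required:
            self.balance -= self.required
            self.locked = self.required
            return True   # vote yes
        return False      # vote no

    def commit(self):
        """Phase 2, global commit: the locked funds are spent."""
        self.locked = 0

    def abort(self):
        """Phase 2, abort: the locked funds are restored unchanged."""
        self.balance += self.locked
        self.locked = 0

def two_phase_commit(participants):
    if all(p.prepare() for p in participants):   # phase 1
        for p in participants:                   # phase 2: global commit
            p.commit()
        return True
    for p in participants:                       # phase 2: abort
        p.abort()
    return False

payer = Participant(balance=100, required=25)
broke = Participant(balance=10, required=25)

# One participant cannot prepare, so the whole transaction aborts
# and the other participant's locked funds are restored.
assert two_phase_commit([payer, broke]) is False
assert payer.balance == 100
```

Because the funds are locked from prepare until the global commit or abort, no party can withdraw or move them mid-transaction, which is exactly the property the paragraph above requires for meaningful rollback.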

Money atomicity, consistency, durability and isolation provide conservation of money: money is neither created nor destroyed in a transaction.

Scalability

Scalability sounds exactly like what it is: a system's ability to scale transactions. Transactions can scale in the size of a transaction (very small to very large) or in the number of parties (upwards to millions).

Electronic transactions were initially adopted for business-to-business relationships because the amount of the transaction can be scaled upward. Because of the difficulty of securing large amounts of physical cash, paper cash is troublesome for large-scale transactions, such as those over hundreds of thousands of dollars. Electronic fund transfers started with large-scale transactions made through private networks between large institutions. These large-scale transactions are sufficiently valuable that they make the cost of extremely careful management of single-use cryptographic keys an eminently reasonable expenditure. (Of course, these transactions often also occur on a proprietary network, but with careful key management a proprietary network is unnecessary, and without careful key management a proprietary network is inadequate.) Obvious examples of such high-value networks include Fedwire and CHIPS.

Microtransactions are at the far end of the same scale. Microtransactions are valued at fractions of a cent. Microtransactions enable entirely new markets. For example, consider banner ads, an inchoate form of micro-currency. There is now a market in single clicks on an ad and in sets of eyeballs viewing an ad. An as-yet-undeveloped market is that for temporally valuable or specialized publications. For example, consider the number of people who might have paid to read some number of words in the Wall Street Journal on "Black Monday", October 19, 1987, compared to the number who would want to buy the Journal on a daily basis. The difference between the daily purchasers and the potential purchasers on October 19 is large enough to be of interest, but not sufficiently large that the cost of building the distribution infrastructure to reach all potential customers on October 19 can be justified. The Internet can serve these tiny temporal information markets by reducing transaction costs, providing instant connectivity, and offering widespread distribution.

Microtransactions on the Internet can be implemented using techniques that are inherently suitable to small transactions (e.g. Millicent) or by spreading the cost of multiple transactions over many transactions (e.g. MicroMint). (Both of these systems are described in detail in Chapter 11.)

Divisibility

Divisibility is essentially the ability to make change, to divide a currency into discrete parts so that any sum might be exchanged. This is not the trivial matter it might seem at first. The ability of instruments like checks to create instructions with exact amounts, such as $435.68, removes the need to make change. But once created, the amount designated on such an instruction cannot be divided. A check for $435.68 cannot simply be torn and used to make two purchases of $217.84. It cannot be divided; it must be converted (through interoperability) to another form that is divisible.

There are two basic types of currency: notational and token. Token currencies are currencies where the value is bound to the currency, like commodities, gold, and bills. During transactions the tokens are exchanged. Notational currency exists as a notation in a ledger and during transactions instructions to alter those notations are exchanged. (This distinction is further explained in Chapter 11.)

Notational currencies address divisibility by requiring an exact transactional amount for each purchase. Notational currencies require that notes be validated (deposited, in the case of checks, and authorized, with credit cards) for each transaction.

Some token currencies for microtransactions avoid the need to make change by effectively making every payment in pennies, so that the division of 43,568 pennies into two equal amounts is not difficult. Thus divisibility is not an issue in token systems for microtransactions.

Note that availability is a security issue as well as a reliability issue, as described in Chapter 4. Systems where the money cannot be accessed or spent do not provide reliability, in that money is effectively destroyed.

Interoperability

A rainy day. A check. A wallet of credit cards. No cash. No taxi.

The taxi takes cash only. The potential passenger is wet, and the driver has lost a sale because of the lack of interoperability. As illustrated by this example, interoperability is the ability to exchange money in one form for money of another form. Cash and credit cards do not have real-time interoperability, so the potential passenger is all wet. While credit cards can be used for cash advances, and checks can be turned into demand deposits, cash and credit cards are not perfectly interoperable. Yet for money to serve as a standard of value does require interoperability. As the wet-traveler example illustrates, interoperability is not absolute; there are degrees of interoperability.

The ideal market goal of every purveyor of an electronic commerce system is that its offering become the sole standard. The ideal of all customers and merchants is that they get to choose among a suite of commerce mechanisms that are widely accepted and can interact.

The best way to prevent competition among systems is to establish a system that is not interoperable. The best way to assure competition among systems is to require that all systems be interoperable. Thus it is in the interest of those searching for a monopoly to create commerce systems that lack interoperability. The vendor of such a commerce system can then extract payment for every transaction.

In the electronic environment, interoperability of a protocol in terms of wide use also means that it can be implemented on many and diverse platforms. Open standards encourage this type of interoperability. Low requirements (e.g. have good credit, have a bank account) for participation in electronic commerce also encourage interoperability through wide use, by expanding the base of possible customers. Restrictions on participation have the reverse effect. For example, an electronic commerce system that requires that customers have a credit card (Mastercard, 1996) prohibits the participation of anyone without a credit history and significant income, as well as anyone who simply chooses not to possess a credit card. Credit card requirements may exclude some of the heaviest Internet users: students.

Interoperability of a protocol in terms of convertibility means that different vendors' software can exchange data; in electronic commerce, converting money requires the ability to exchange data. Agreements to exchange funds can be handled within the business community, as shown by the evolution of the check clearing system, without regulatory requirements for interoperability, but this requires informed consumers. Interoperability is important for the expansion of Internet commerce; without it the number of customers who can be reached may be dramatically decreased.

Interoperability is not a critical research issue in the theoretical study of secure electronic commerce protocols, since even systems that are not secure (First Virtual, 1995a) can provide interoperability. However, understanding the details of interoperability means understanding some of the risks involved. In particular, if two systems are interoperable, a failure of one system can cause problems with another system. A cynical definition of a distributed system -- "one where a computer you have never seen, that is not part of your organization, and that is miles away can prevent you from getting your work done" -- echoes this problem. When a system failure causes a failure in a connected but not necessarily related system, the effect is called cascading. Interoperability can create the possibility of financial cascading. Conversely, a lack of interoperability means that one system must eventually emerge as dominant, and then the discovery of one weakness will infect the entire system. That is, just like biological communities, information networks resist viruses more effectively if there is heterogeneity. Heterogeneity requires interoperability.

Fully interoperable systems can interact seamlessly. Systems can also have limited interoperability. Examples of limited interoperability today can be found with cash, credit cards, and checks. That not all merchants who accept cash can accept credit cards illustrates the limits of interoperability (recall the empty taxi). A credit card bill may be paid with a check. Cash can be obtained using a credit card. A check can be exchanged for cash at a bank, and cash can be deposited into a checking account. Another example of limited interoperability is the credit approval system: although a merchant must have a relationship with both Visa and American Express to take both cards, the same hardware can be used to check multiple credit cards. The acceptance of one card does not enable the acceptance of the other. However, there is a shared interoperable standard, which means that a merchant need not purchase new hardware to handle transactions involving one card if he already has the hardware to handle another.

Let's return for a moment to the example of the taxi. The driver of the taxi does not accept other mechanisms of payment for three reasons: specialized equipment, trust, and overhead.

Consider the specialized equipment required to accept payment mechanisms other than cash. In the case of the credit card, the driver needs to have at least one piece of unconnected hardware that he can use to record the information on the credit card. Ideally the driver would also have a telephone-based link to the credit card provider to verify that the card is valid. Though a wireless phone does not qualify as specialized equipment, the hardware to read the magnetic strip on the card and communicate with the card issuer certainly falls into that category. For owner-operated cabs, each driver would also have to have a merchant account with the company that issued the credit card the customer is presenting.

The second issue precluding the driver from accepting payments other than cash is trust. To accept a check, the taxi driver would have to extend trust to the passenger with respect to her creditworthiness. To accept currency, on the other hand, the driver needs only trust his ability to detect counterfeit bills and the stability of the coin of the United States.

Overhead is a function of trust and specialized equipment, yet it deserves consideration on its own as an obstacle to the driver's accepting means of payment other than cash. In the case of credit cards and checks, a significant part of the overhead cost is incurred in establishing a relationship with the purveyor of the commerce mechanism. Note that one does not have to have a relationship with the U.S. government to use its currency. In fact, many users of U.S. currency, drug dealers for example, would very much like to avoid any type of relationship with the federal government. Yet to accept credit cards or a check, it is necessary for the merchant to have an established relationship with a depository institution approved by the Federal Deposit Insurance Corporation. (Of course many of the aforementioned government-avoiding drug dealers manage to have an arm's-length relationship with the FDIC by using approved banks, despite the best efforts of law enforcement.) With the bank account there is also the issue that the check must be deposited. In both the check and credit card cases there is the cost of money over time. That is, the funds are not available immediately after the transaction. These same constraints can exist in Internet commerce, as discussed in Chapter 2, although there the concern is the ability to reach all users.

Because conducting transactions in hard cash is all but impossible, Internet commerce faces the same three issues: need for specialized equipment, trust issues, and overhead. In the case of Internet commerce, the equipment required is specialized software, usually free to the consumer but at some cost to the merchant. Trust is a ubiquitous issue, as discussed throughout this text. Overhead is a function of the network bandwidth required and the processing on the consumer's computer.

Open Systems, Standards, and Protocols

A relevant example of electronic interoperability is Web clients (browsers) and servers -- any browser can be used with any server. A contrast is the use of a proprietary system, such as Lotus Notes or Microsoft groupware. If a company chooses a Microsoft groupware product, then its ability to offer controlled but seamless access to the Web is limited. An example of limited interoperability is the Macintosh and Microsoft operating systems. A program written for Windows will operate on a Macintosh (using SoftWindows), but the reverse is not true. A Macintosh can read an IBM-compatible disk, but an IBM compatible is just that, and cannot access information on a Macintosh-formatted disk. Yet none of the operating systems -- MacOS or any member of the Windows family -- is open. So any interoperability that exists can be removed as soon as the market is considered sufficiently captured (not likely with MacOS).

In Chapter 1 I noted that the Internet is built on open standards. Increasingly today there is a buzz over open systems and open source. Open standards enable interoperability. Open source ensures interoperability is possible. Interoperability does not require open code. Open source code is open to review and reuse. This has fundamental trust implications, but consider first the issue of interoperability. Since the code is open, it can be wrapped -- that is, other code can be added so that the input to and output from the code can be altered to any format. Since the code can be altered, it can be made compatible with any system.
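
The idea of wrapping can be sketched concretely. In this invented example, an open payment routine that speaks in cents is wrapped so that a system that speaks in dollar strings can use it unchanged; both interfaces are hypothetical:

```python
# A sketch of "wrapping" open code: the original routine is reused as-is,
# and a thin wrapper converts its input and output to the formats another
# system expects.

def pay_cents(amount_in_cents):
    """The open code, reused without modification; it speaks in cents."""
    return {"status": "ok", "cents": amount_in_cents}

def pay_dollars(amount_as_string):
    """The wrapper: converts a dollar string in, and a dollar string out."""
    cents = round(float(amount_as_string) * 100)   # adapt the input ...
    result = pay_cents(cents)
    return "paid $%.2f" % (result["cents"] / 100)  # ... and the output

assert pay_dollars("4.35") == "paid $4.35"
```

Because the wrapped routine is never altered, the same technique works for any data format, which is why openness of the code is enough to make it compatible with any system.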

Consider the drivers for vendors, merchants, and customers. Many vendors selling electronic commerce solutions offer open standards. This means that the stated trust relationships between the parties can be understood with a close analysis of the protocol. This book has all the information necessary to read an open standard. Standards which are not open, for example Mondex, ask all users to take on faith that Mondex has the judgement to set correct trust relationships. Since the proliferation of physical monetary mechanisms suggests that different trust relationships are appropriate in different transactions, it is unlikely (bordering on impossible) that Mondex can make such judgements correctly for all parties. In fact it is extremely unlikely, or impossible, for any vendor to make such judgements.

Open standards require less trust in the vendor than closed standards. There are no security advantages to closed standards, as explained in Chapters 3 and 4. Open standards are more likely to offer interoperability, and allow the trust implications of interactions with other systems to be examined.

Standards are documents which define protocols. Protocols are communications standards which define a series of messages and the syntax for those messages. Code is the implementation of the protocol (which should be compatible with the standard). Code is the actual software, the detailed language description of how the standard actually works in practice. Standards are the ideals of how the code should work.

Open source code, also known as open code, requires the least trust in the vendor. With open code there is no lock-in. Customers and merchants can move between code suppliers, service suppliers, and thus probably between Internet commerce mechanisms as they evolve. Open source can be viewed and examined. After open source is purchased and installed, any competent programmer can alter the programs to add interoperability. Notice that this does not imply a lack of security. Linux, the open source operating system, is more stable, more powerful, and has fewer documented security flaws than Windows NT. (In addition, there is no version of MSWord for Linux, so the MSWord macro viruses that exploded onto the Internet in 1999 cannot affect Linux.)

If the code for a commerce protocol is open, that code can be examined. Any shortcuts that have trust implications, for example shortcuts in key generation, or deviations from the open standard implemented in the code, can be seen.

Selecting a proprietary standard is an extreme extension of trust. Trust that there will be support for critical functions in future upgrades. Trust that security failures will be addressed quickly. Trust that the level of security selected by the vendor remains appropriate for the merchant and the customer. Trust in the efficiency and efficacy of the vendor. Trust that the vendor is solvent and will remain solvent and available to provide services indefinitely.

This trust can go badly wrong. Trust in proprietary code has already gone badly wrong. The Y2K problem would be trivial with open source: the source could be examined, improved, and re-installed. As it is, problems are myriad. The source for much code is not available. It cannot be upgraded without reverse engineering. Vendors have gone out of business, or no longer offer support for an old package (which may remain the core of a business built upon it).

Open code supports much of the Internet. Building a business on proprietary code is clearly not insane, else why would it remain so popular? It is simply a great leap of faith, an extension of long-term trust, and an acceptance of long-term risk.


10: Examination of Internet Commerce Systems

Reliability, security, and privacy are critical in commerce systems. Current commerce systems include examples as diverse as paper cash, Internet commerce, and hardware-based commerce. Credit card verification systems, point-of-sale transfers, lines of credit, billing servers, secure co-processors, and systems based entirely on software all co-exist, competing for consumer and market interest. There are also special-purpose systems such as electronic postal metering systems and copyright collection services.

Electronic currency systems are as widespread as they are diverse. In the 1970s electronic currency systems such as electronic funds transfer (EFT) began to be widely used (Reid and Madam, 1989). In 1990 more than 40% of the $500 billion in federal benefits and state-administered programs were paid using some form of electronic funds transfer (Wood and Smith, 1991). In 1988 physical rather than electronic currency, including cash, credit cards, or financial instruments, accounted for more than 99% of all transactions (Newberg, 1989). The same statistics reflect transaction patterns today, but the total value of all electronic transactions nevertheless dominates the total value of the vastly more numerous cash transactions. The net value of all electronic transfers exceeds the total value of all cash used, has for many years, and will continue to increase its domination.

In addition to the widely used ATM systems and the Fedwire, there are private networks and products that provide automatic budgeting, check writing, and invoice creation. The sheer number of electronic currency and invoice systems available to private institutions overwhelms the possibility of an exhaustive report. Furthermore, most of those systems are proprietary, and therefore examination of their trust assumptions is extremely difficult.

Given that an exhaustive report is not feasible, which systems should be considered here? First, only systems based on open (or at least published) standards can be properly evaluated. Separating systems into categories removes the need for detailed analysis of every system without unreasonably limiting the scope of this work. Recall that the only concern here is with systems for general Internet commerce. These systems require no specialized hardware on the part of the user.

Distinctions Among Commerce Systems

Electronic commerce systems are separated into token and notational systems. In token currency the strings of bits transferred in a transaction are themselves legitimately valuable. For example, a dollar has value in and of itself; it is not a promissory note for a particular transaction from a specific account, as a credit card purchase slip is.

The implementation of anonymity in token systems underlies their trust requirements. Thus token systems differ from one another with respect to the existence of anonymity. This is because of the relationship between anonymity and accountability: an anonymous party cannot be subject to penalties. Anonymity in token systems determines who is able to take untraceable, and therefore potentially deniable, financial action. Note that untraceability implies that the steps cannot be linked to identity, not that the steps are immune to rollback.

In a notational currency system the information transferred is an instruction to change notations in a ledger, such as a bank's records. In notational currency the value is held in the records, not the instruction. Notational systems are further subdivided by the business model on which they are based. The business model encompasses the underlying commerce model, the distribution of risk among the parties to the transaction, the distribution of liability among those parties, and therefore the distribution of trust as well.

Notational money exists as notations in the ledgers of an institution. Electronic commerce systems based on notational currency differ from one another in the role of the institution that holds these ledgers. Notational currency systems require customers and merchants to trust, to some degree, the holder of the ledgers. However the degree of trust required and the concentration of trust vary widely from system to system.

Notational systems are based on three different models, in which customers and merchants pay different fees and take different risks: the checking model, the debit card model, and the credit card model. The checking model places the most risk on the merchant, and requires that the merchant trust the customer. The debit card model places risk on the customer and requires that the customer trust the merchant to deliver the goods -- the merchant is certain to be paid. The credit card model distributes trust, as described in the example analyses in this chapter. Some systems involve additional financial intermediaries that alter the traditional assumptions of interaction among merchants, customers, and their respective financial institutions. The existence and roles of these intermediaries may change the distribution of liability, and therefore trust, in the financial transactions that the systems require. I select systems based on their distribution of risk, rather than the designer's intellectual model.

Other characteristics may be incidental to a system (either token or notational) and can differ in a given implementation. This fact enables me to collapse the systems into classes for the purposes of discussion and analysis. Then, within each class, one system is selected as an example for more detailed analysis.

The token systems analyzed here and their respective levels of anonymity are:

Digicash: token, complete anonymity

MicroMint: token, both anonymous and identifiable implementations

Millicent: token, no anonymity

The notational systems analyzed, and the banking models are:

First Virtual: transactions without security, merchant trusts customer (checks)

Secure Sockets Layer: secure transactions, merchant/customer trust (credit cards)

Secure Electronic Transaction Specifications: multiple acquirers with on-line presence and mutually respected certificates, customer trusts merchant (debit cards)

Analyses of Various Systems

Having selected examples from each trust category, I analyze these systems along legal, market and technological axes. The discussion focuses on the system's ability to meet market and legal constraints.

Transactional Reliability

Each analysis begins with an overview of the vendor's business model. Both the reliability and security of a system depend on the business plan on which it is based. If the business plan is flawed, a lucrative security hole may exist as a result. Thus security is not as straightforward as algorithm choice and key management (not that those are particularly simple) but also involves an understanding of the business implications of any system failures. Similarly, a system's atomicity also depends on design, implementation, and business policy. Atomicity depends on funds-available policies because of rollback.

The second step in the analysis is a detailed description of a transaction. This description makes clear which parties are considered trustworthy, and for what, and thereby reflects the trust model of the system. The bank, for example, may be trusted to differing degrees based on how much documentation is necessary for the bank to take durable action. The business perspective may explain why a technical failure is not unacceptable, since the realization that such failures occur is addressed explicitly in the business plan.

There are three encryption functions commonly used in electronic commerce protocols: a one-way secure hash algorithm, asymmetric encryption, and symmetric encryption. (Recall the definitions in Chapter 3.) The following notation is used, where the variable or message being encrypted or hashed is x.

h(x)	the hash of x
Ek(x)	x, encrypted with symmetric key k
(x)i	x, encrypted with the secret key of i's asymmetric keys
(x)I	x, encrypted with the public key of i's asymmetric keys

The public and private halves of a public key pair are identified as I and i, where i is the first initial of the party or item to which a key corresponds. For example, values of I might include B, C, and M; these correspond, respectively, to the bank, the customer, and the merchant.
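The notation above can be made concrete with a toy sketch. Every primitive here is an illustrative stand-in of my own choosing, not one specified by any system in this book: SHA-256 stands in for the unspecified hash h, a repeating-key XOR stands in for Ek, and textbook small-prime RSA stands in for the asymmetric pair (x)i and (x)I.

```python
import hashlib

# Toy stand-ins for the three primitives; all are far too weak for real use.
def h(x: bytes) -> bytes:
    """h(x): the one-way hash of x (SHA-256 standing in)."""
    return hashlib.sha256(x).digest()

def E(k: bytes, x: bytes) -> bytes:
    """Ek(x): x under symmetric key k (repeating-key XOR; illustration only)."""
    return bytes(b ^ k[i % len(k)] for i, b in enumerate(x))

# A textbook RSA pair for party i: I (public) and i (secret) share modulus n.
n, I_pub, i_sec = 3233, 17, 2753   # 3233 = 61 * 53; far too small for real use

def enc_I(x: int) -> int:
    """(x)I: x encrypted with the public key of i's asymmetric keys."""
    return pow(x, I_pub, n)

def enc_i(x: int) -> int:
    """(x)i: x encrypted with the secret key of i's asymmetric keys."""
    return pow(x, i_sec, n)

m = 65
assert enc_i(enc_I(m)) == m            # the two halves invert each other
assert enc_I(enc_i(m)) == m            # hence (h(x))i can serve as a signature
assert E(b"k", E(b"k", b"pay $5")) == b"pay $5"   # same key encrypts and decrypts
```

The last two assertions show why the notation distinguishes i from I: anything sealed with one half of the pair opens only with the other, while a symmetric key plays both roles itself.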

Each transaction in the analysis is taken apart and considered step by step. The analysis focuses on determining the reliability of a transaction. A step-by-step examination allows me to illustrate how the failure of a single message might put the system in an inconsistent state. Each protocol considered is classified as providing no atomicity, money atomicity, goods atomicity, or certified delivery. As part of such a classification, the specific messages that provide or fail to provide atomicity are identified.
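A minimal sketch can show how the failure of a single message breaks money atomicity. The two-message ledger model below is my own illustration, not drawn from any system analyzed here.

```python
# A minimal ledger model of money atomicity; the message names and failure
# mode are illustrative, not drawn from any system analyzed in this book.
ledger = {"customer": 100, "merchant": 0}

def transfer_nonatomic(amount, lose_credit_msg=False):
    ledger["customer"] -= amount       # message 1: debit the customer
    if lose_credit_msg:
        return                         # message 2 is lost in transit
    ledger["merchant"] += amount       # message 2: credit the merchant

transfer_nonatomic(30, lose_credit_msg=True)
assert sum(ledger.values()) == 70      # $30 has vanished: not money atomic

def transfer_atomic(amount, lose_credit_msg=False):
    snapshot = dict(ledger)
    ledger["customer"] -= amount
    if lose_credit_msg:                # failure detected: roll back the debit
        ledger.clear()
        ledger.update(snapshot)
        return False
    ledger["merchant"] += amount
    return True

transfer_atomic(30, lose_credit_msg=True)
assert ledger == {"customer": 70, "merchant": 0}   # no partial state survives
transfer_atomic(30)
assert ledger == {"customer": 40, "merchant": 30}  # both postings or neither
```

The non-atomic version leaves the system in an inconsistent state when the second message is lost; the money-atomic version applies both postings or neither, which is exactly what the classification above tests for in each protocol.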

Security

Every system makes certain security assumptions, whether the existence of separate communications channels or that secret keys are indeed secure.

For each system analyzed the worst-case results of the failure of each security assumption are enumerated. Of course not every possible combination of security failures is considered. If the simultaneous failure of two security assumptions creates a different possible outcome than the separate failures of the two assumptions, then that combination is assessed as well. For example, if both a secret key and an account number together would enable an attacker to make unauthorized changes in an account, but access to either instrument alone would not enable such an attack, this combination of security failures is noted.

Certain fundamental, widely accepted cryptographic assumptions are assumed to hold throughout the discussion. For example, though it has not been proven that there is no way to factor numbers in polynomial time, I make the standard cryptographic assumption that this is the case (Baker, 1984; Schneier, 1995). (Recall the discussion of one-way functions in Chapter 3. The assumption that factoring is difficult, more difficult than multiplication, is a way of saying that I assume cryptography works and that no stunning mathematical advances will change this fact.)

Privacy

The next step in my analysis is to evaluate the level of privacy offered by each system: the information available to each party during each transaction is listed. This allows for simple comparison of different systems without trying to place a quantitative measure on the intrinsically qualitative issue of privacy.

There is broad agreement that privacy is a question of what information is exposed. To consider the efficacy of the method used here to evaluate the privacy offered by various systems, consider the specific methodology for evaluating system security recommended in (National Computer Security Center, 1990) and the related series of publications that followed (the rainbow series). The elements of this methodology applicable to the analysis of privacy levels in a system are: a general description of all information that is to be transmitted through and stored in the system; a summary of the expectation of the security of data contained in each subsystem and system; the assignment of final responsibility to a single individual to ensure that security is maintained; the use of mechanisms to ensure security; a description of the entire user community, including those with the lowest level of access; and the types of access permitted in each subsystem and system. The simple matrix technique used for assessing privacy in the following system analyses includes many of these elements.

An example matrix, which shows the information available to various parties in a checking transaction, is shown in Table 10.1 below. Each row shows the information available to the party named in the leftmost column. Each column lists a datum of interest: the identities of the parties, the date of the transaction, the amount of the purchase, and the item purchased. Any party can have "full", "partial", or "no" information about a particular datum.

Party                       Seller   Buyer   Date   Amount   Item
Seller                      Full     Full    Full   Full     Full
Buyer                       Full     Full    Full   Full     Full
Law enforcement w/ warrant  Full     Full    Full   Full     No
Bank                        Full     Full    Full   Full     No
Observer                    Full     Full    Full   Full     Full

Table 10.1: Information Available to the Parties in a Transaction

The row labeled "Law enforcement w/ warrant" identifies the information that would be available to the government. This row provides a basis on which to compare each system's ability to provide information desirable for social welfare. These entries are derived from reporting requirements as discussed in Chapter 8. Law enforcement can obtain records from banks and merchants. However, in the case of a specific purchase, the merchant may not keep detailed records. In this and later tables, if law enforcement depends on merchant records to obtain item information, this is denoted as "Full" in italic type.

For the privacy analysis, we consider an observer who is electronically well placed, i.e., the observer can monitor transmissions between the customer and merchant. The observer cannot read encrypted information but can read all other information transmitted in making the transaction. Using an information matrix as described above as a basis for comparison, each system is roughly classified as providing a high, medium, or low level of privacy.
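The matrix comparison can be mechanized. Below, the checking-transaction matrix of Table 10.1 is encoded as data, and a crude classification heuristic sorts systems into high, medium, and low privacy. The scoring rule, counting how many data the observer row exposes in full, is my own device for illustration, not a methodology from the text.

```python
# Table 10.1 (a checking transaction) as data, plus a crude privacy classifier.
# The scoring heuristic is illustrative: it counts how many data of interest
# an electronically well-placed observer learns in full.
FULL5 = {"seller": "Full", "buyer": "Full", "date": "Full",
         "amount": "Full", "item": "Full"}

checking = {
    "Seller":                     dict(FULL5),
    "Buyer":                      dict(FULL5),
    "Law enforcement w/ warrant": dict(FULL5, item="No"),
    "Bank":                       dict(FULL5, item="No"),
    "Observer":                   dict(FULL5),
}

def observer_exposure(matrix):
    # Number of data the observer sees in full.
    return sum(v == "Full" for v in matrix["Observer"].values())

def privacy_level(matrix):
    n = observer_exposure(matrix)
    return "high" if n <= 1 else ("medium" if n <= 3 else "low")

assert privacy_level(checking) == "low"  # a check exposes everything to an observer
```

Encoding each system's matrix this way keeps the comparison mechanical: only the table entries change from system to system, never the comparison itself.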

How does one determine the information made available in a particular protocol? One necessary assumption for comparison (of both privacy and atomicity) is that all transactions have the same scope: that is, all transactions begin at discovery and end at merchandise delivery. This ensures consistency in the comparison of the protocols. Information transmitted during the discovery process is available with every purchase. Thus those protocols that claim that no identity information is transmitted in a transaction assume a communications exchange or remailer. Removing all identity information during communication is not a trivial matter. For two-way communications it requires a series of remailers capable of encryption. Even then, partial identity information can be reconstructed with the cooperation of all involved parties; however, such a reconstruction has such a high work factor that it is reasonable to say such services provide privacy.

Keep in mind while reading the privacy analysis of each system that partial or full identity information can sometimes be obtained from IP addresses and commonly available network services as was previously described in Chapter 7.

Governance

Finally, I consider the ability of the system to fulfill governmental needs for data. I consider only the information provided to the government about a transaction itself. Most transfers of value today use auditable channels, meaning channels where the regulatory oversight of transactions is both well-established in law and physically possible. For example, any depository institution is auditable while cash under the mattress is not.

Both reporting requirements and possible improvements in terms of data compilation were suggested in Chapter 8. These suggestions are revisited in the governance section of the protocol analyses presented here and in Chapter 11. In each case the trade-offs created by the current requirement and the way that the suggested solution would alleviate these trade-offs are identified. Technical suggestions are made as to how each system could be enhanced to resolve or eliminate trade-offs for merchants, financial services providers, and customers. Techniques used in this section are described or referenced in the earlier description of security. The general implication of the governance analyses presented is that current constraints on the anonymity of data provided for governance should be relaxed.

Example Analyses

The two example analyses presented in this chapter, those for credit card and cash transactions, are meant to clarify the subsequent discussion of specific Internet commerce systems in Chapter 11. Rather than launching immediately into analyses of potentially unfamiliar systems, I begin with two common examples: credit cards (a familiar notational system) and cash (a familiar token system).

Both examples illustrate concepts of atomicity, security failures, and the use of data availability to value a system according to privacy considerations.

Credit Cards: A Notational System

With credit (and debit) cards the instruction to debit or credit an account is made electronically, as opposed to checks, where the instruction is written on paper. When a purchase is made over the telephone, for example, the information printed on the card is sufficient to authorize a charge. This differs from point-of-sale (POS) systems, in that the physical presence of the card is not necessary in a telephone order. Thus credit card orders can be authorized using information only.

Remote credit card transactions are a form of electronic commerce; the critical transaction information is delivered electronically by voice, from one human to another. Orders are entered into billing computers, processed electronically, and delivered physically.

The credit card market developed because checks have limited interoperability (Rubin and Cooter, 1994). Merchants who accept checks are at risk of having a check returned for insufficient funds, so accepting a check requires an extension of trust to the customer. Customers who paid merchants by check were therefore limited in accessing funds, because merchants had to extend trust to each customer who wrote a check. The creators of credit cards, or "entertainment cards" as they were originally called, addressed that weakness by assuming the risk involved if customers turned out not to be credit-worthy.

Today automatic teller machines offer international interoperability for checking accounts by providing customers with immediate access to deposits in the form of cash. That the original impetus for the creation of credit cards is gone does not imply that the credit card market will decrease, since the original entertainment cards have evolved far beyond the original market niche.

Although the settlement process for credit cards depicted in this discussion (and in the accompanying figure) specifically considers Visa, it is representative. Visa was chosen for the discussion (and the figure) by virtue of its wonderfully brief name.

The credit settlement system is similar to the check settlement system. However, credit cards developed in a more orderly fashion than checks, and have evolved a clearance system with less need for governmental support. (The Federal Reserve Board handles regional and national settlements of checks. Check law has been built upon tort law while credit card clearing was developed through private contracts.)

Notice that the clearing system for credit cards is based on regional hierarchies. Each region has a clearing bank. If the customer and the merchant share the same bank Visa does not process the charge. This means that the national Visa office does not have to process every charge. Each clearing bank has a network of local banks that recruit merchants. Card-issuing banks may further subcontract credit checks for individual transactions to third parties.

When a bank recruits a merchant as a credit-accepting account the bank accepts some risk. If a merchant defrauds a customer, or a customer stops payment on the basis of fraud, the bank that recruited the merchant may be liable for the fraudulent costs the customer is not required to pay (based on the merchant's contract with the merchant's issuer bank). The merchant may be guaranteed payment from the merchant bank regardless of the claims, and the substance of such claims, of fraud made by the customer. This liability on the bank that recruits a merchant to accept credit card transactions is the control mechanism Visa uses to prevent unethical merchants from entering the system and taking advantage of the assured payment mechanism. Although banks penalized under these payment guarantees may pass the penalty on to the merchants, doing so still controls the entry of untrustworthy merchants into the clearance system.

Payment on a credit card transaction is assured by distributing the losses (accrued when customers refuse payment) across all purchases in the form of fees. The fees are assessed by various entities as customers' payments make their way along the chain from customer to merchant. Each entity receives a percentage of each purchase as the payment fee. This is illustrated by the decrements of funds shown returning to the merchant in the outer loop of the diagram.

When a customer is defrauded in a credit card transaction she can refuse charges. Whether the fraud is a result of physical theft or electronic attack, the customer's losses are limited by law to $50 per card. Customers have sixty days to refuse to pay a charge, much longer than they have to challenge the authenticity of a check presented for payment on their account. Therefore customers do not feel the need to trust merchants as much as in the case of checks. Thus customers expect less documentation in a credit card transaction. This allows the processing of credit charge slips to be radically truncated; that is, the merchant sends the data from the slip, not the slip itself, to the bank. (Although it is legal for check processing to be radically truncated, customers have generally demanded that their checks be returned.)

A Transaction

The following discussion delineates the steps in a credit card transaction as depicted in Figure 10.1. The steps in the transaction form two concentric semi-circles: the billing records for the customer travel counter-clockwise in the inner circle, and payment to the merchant moves clockwise along the outer circle. The inner circle depicts the movement of the order information; the outer circle shows the movement of payment in return. The merchant pays a percentage of the amount the customer charges to the merchant bank on every charge. As noted above, depending on the specific authorization and the contract with the merchant bank, the merchant may be guaranteed payment on a credit card transaction even if the customer does not pay the charge. However, in remote credit card orders the merchant usually is not paid if the customer denies the charge.

The difference between the amount paid by the customer for the purchase ($100 in the figure) and the amount received by the merchant ($98.295, minus the fee charged by the merchant's bank) goes to Visa's and the associated banks' profit, overhead, and risk management.
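The decrements along the outer loop can be sketched as per-hop percentages of the original charge. The hop names and rates below are hypothetical, chosen only so that the arithmetic reproduces the figure's decrement from $100.00 to $98.295; the actual rates are set by contract and are not given in the text.

```python
# The chain of fees, modeled as per-hop percentages of the original charge.
# The intermediaries and their rates are hypothetical, chosen only so the
# arithmetic reproduces the figure's decrement from $100.00 to $98.295.
charge = 100.00
hops = [("card-issuing bank", 0.01400),   # hypothetical rate
        ("Visa clearing",     0.00305)]   # hypothetical rate

amount = charge
for name, rate in hops:
    amount -= charge * rate    # each entity keeps a percentage of the purchase

assert round(amount, 3) == 98.295
# The merchant's own bank then deducts its fee from this amount as well.
```

Because every hop takes its cut from every purchase, the losses from refused payments are spread across all transactions rather than borne by any single party.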

Note that because a customer may receive goods and deny payment, or a merchant may receive payment and deny goods, the system lacks atomicity. The lack of atomicity in credit card purchases is managed through this systemic debiting as the payment instructions move through the system.

A credit card transaction is not money atomic, although it can appear atomic to the merchant if the merchant is guaranteed payment. From the customer's perspective, credit card purchases have a period in which payment can be canceled, either by explicit cancellation request or by a refusal to pay for an item when billed. Thus customers have the ability to generate rollback.

Credit card transactions are consistent in that the customer and merchant agree on the amount paid and whether there has been a reversal of payment in the case that the merchant is not guaranteed payment.

Credit card transactions are not isolated. In some cases a merchant obtains a block on user credit that is not promptly erased. These blocks can lead to a failure of isolation. For example, a hotel may block enough on a customer's credit card to cover possible damage done during the customer's stay, thus preventing the customer from accessing that portion of her credit line to make a later, unrelated purchase. The practice of obtaining authorization (blocking) for more than the final charge (settlement) is common among hotels and car rental agencies.
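The isolation failure can be modeled in a few lines. The CreditLine class and its amounts are illustrative, not any bank's actual policy: an authorization block reduces the available credit line until settlement, so an unrelated purchase can be declined in the meantime.

```python
# A toy model of the isolation failure: an authorization block reduces the
# available credit line until settlement. The class and amounts are
# illustrative, not any bank's actual policy.
class CreditLine:
    def __init__(self, limit):
        self.limit, self.posted, self.blocked = limit, 0.0, 0.0

    def available(self):
        return self.limit - self.posted - self.blocked

    def authorize(self, amount):
        # A merchant blocks funds against a possible future charge.
        if amount > self.available():
            raise ValueError("declined")
        self.blocked += amount

    def settle(self, blocked, final):
        # The final charge may be far less than the block.
        self.blocked -= blocked
        self.posted += final

card = CreditLine(1000)
card.authorize(800)        # hotel blocks $800 against possible damages
try:
    card.authorize(300)    # a later, unrelated $300 purchase is declined
except ValueError:
    pass                   # the transactions are not isolated
card.settle(800, 150)      # the actual bill is only $150
assert card.available() == 850
```

Isolated transactions would not interact at all; here the hotel's pending authorization visibly changes the outcome of an unrelated purchase.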

Credit card transactions are durable. However, it may take weeks for a credit transaction to become final.

A Credit Card Transaction

The size of a credit card transaction is limited only by the customer's credit limit. ATMs, in contrast, impose a size limit on an individual transaction regardless of the total available balance in the customer's depository accounts. This limits risk, just as the limit on currency denominations (there are no $5,000 bills) limits risk by increasing the difficulty of counterfeiting money -- in both cases the limits increase the number of false payment instruments needed for large-scale fraud. However, even these efforts to limit risk sometimes fail, as the theft of over $300,000 with a single card and access code illustrates (New York Times, 1995a; New York Times, 1995b). Since credit card transactions cost the merchant (in terms of the fees deducted by Visa and various banks in the Visa network) there is a limit to size at the lower end of the scale. Few merchants will accept a credit card for a $0.25 purchase. Like checks, credit cards are scalable in terms of number of users.

There are limits to interoperability between credit and debit card systems. A consumer cannot pay her American Express card bill with her Visa credit card except by first obtaining cash. (This statement has been experimentally verified by the author.)

Security

Opportunities for credit card fraud vary depending on the payment policies. With remote credit card purchases there are two options for dealing with potential fraud: the acquiring bank guarantees payment or the merchant accepts risk.

If the bank guarantees payment, regulating merchant fraud is a straightforward task. Merchants can fail to deliver goods and still demand payment, but banks can limit this type of fraud by tracking complaints against merchants and revoking the accounts of merchants who abuse guaranteed payment. Merchants can, however, incorporate in a new guise and request new accounts under these new identities. And some merchants allow other disbarred merchants to use their accounts, so this system itself has weaknesses (Van Natta, 1995).

Whichever system is chosen for dealing with fraud risk, it is a straightforward matter for a customer in a remote transaction to commit fraud to obtain goods. Customers may simply claim not to have received goods. The lack of a verifiable physical delivery system constrains security for all remote purchases of physical goods. Banks address this in the same way that they address merchant fraud: customers are tracked and rated, and those found to engage in this type of fraud are subject to the removal of credit privileges.

If the merchant has a physical presence, that is, an imprint of the card used in the transaction, the merchant is guaranteed payment by Visa or the other credit card companies involved. If the merchant is offering telephone purchases, the certainty of receiving payment for a credit card transaction depends on the type of merchant, merchant characteristics, and transactional characteristics, including merchant credit history, the market served by the merchant, the item(s) purchased, and the amount of the transaction.

Arguably the greatest weakness in the telephone order protocol is its vulnerability to replay attacks. Only the credit card number, expiration date, and sometimes the billing address are needed to authorize a purchase. Thus, any person who obtains a complete receipt from a credit card purchase can authorize telephone purchases using the number on that receipt.
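A toy model makes the replay weakness concrete. The authorizer and card data below are fabricated for illustration; the point is only that an authorizer checking static, reusable fields cannot distinguish the cardholder's order from a copy of those fields read off a discarded receipt.

```python
# Toy authorizer for a telephone order: it checks only static, reusable
# fields, so any captured copy of them replays successfully. The card data
# here are fabricated for illustration.
on_file = ("4111111111111111", "12/99", "02139")   # issuer's records

def authorize(card_number, expiry, billing_zip):
    # Nothing in the request is fresh: no nonce, no challenge, no signature.
    return (card_number, expiry, billing_zip) == on_file

legitimate = on_file                               # the cardholder's order
captured = ("4111111111111111", "12/99", "02139")  # copied from a receipt

assert authorize(*legitimate)
assert authorize(*captured)   # the replay is indistinguishable and succeeds
```

A replay-resistant protocol would bind each authorization to something fresh, such as a per-transaction challenge, so that a captured credential could not be reused.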

In the next paragraphs I compare the risk of the two transmission systems. With telephone purchases, critical authorization information must be transmitted over the public telephone networks. In comparison, Internet purchases are transmitted over the networks that together form the Internet (sometimes including the phone wires). Transmissions over the Internet are encrypted -- credit card information should never be transmitted in the clear. Public phone networks are more difficult to monitor than the Internet for several reasons. First, the telephone networks transmit voice data for a telephone-order credit card purchase. Filtering and searching voice data after it has been trapped requires decoding the data as transmitted over the phone network and then using voice recognition technology to obtain the actual content of the conversation, and thus the credit card number. (Alternatively one could listen endlessly and hope to eavesdrop on an order, but this does not seem efficient or likely to be successful given the volume of information on the phone wires which is not related to credit card purchases.) Voice recognition is orders of magnitude more difficult than identifying information already optimized for digital content-based manipulation. (Internet content is digital.) After the data have been analyzed to determine content using voice recognition, the data must still be searched, just as in the case of data on the Internet. Conversely, information on the Internet is typically in a form that is simple to intercept and filter. Finally, the sheer scale of the two systems makes monitoring phone calls harder. There are 28 million (Hoffman, Kalsbeek and Novak, 1996) to 37 million (CommerceNet, 1995) Internet users in the United States, while there were 172.844 million households (Bureau of Census, 1995) subscribing to telephony services in 1995.
Telephone service is common to almost all households, with 96% of households having telephone service in 1994 (Federal Communications Commission, 1995). Computers will not reach this penetration rate for some time.

Privacy

Credit card transactions create machine-readable records. Identities of the parties to a transaction, the amount involved, the date, and some content information (such as the items purchased) may also be recorded with credit card purchases. The ease with which this information is analyzed and distributed compromises consumer privacy.

Credit card purchases provide detailed information about the transaction to merchants and associated financial institutions. Such transactions can also leak information through electronic surveillance by an observer. Because the processing banks obtain information about a transaction, it can be available to law enforcement. The merchant may record detailed content information, and content information in machine-readable format may be obtained by the bank as part of billing. Information distribution in a remote credit card transaction is delineated in Table 10.2.

In this discussion of privacy the banks involved in the billing and payment process have been combined, for two reasons. First and foremost, the information about the purchase is passed through the billing system. Second, there is widespread marketing and sharing of data between banks. For the purpose of examining privacy these banks have been collapsed into one.

Party                       Merchant   Customer   Date   Amount   Item
Merchant                    Full       Full       Full   Full     Full
Customer                    Full       Full       Full   Full     Full
Law enforcement w/ warrant  Full       Full       Full   Full     Full
Bank                        Full       Full       Full   Full     None
Observer                    None       None       Full   None     None

Table 10.2: Information Available in a Credit Card Transaction

Credit card companies' policies vary widely with respect to privacy. American Express offers consumers options on the use of personal information. MasterCard and Visa allow card-issuing banks to set policies on consumer privacy. That one company offers a privacy-protecting option suggests that the market can serve the privacy interests of those with financial resources. Thus secondary use of data by the participating banks depends on the policies of the card-issuing association (e.g. Visa, American Express) and the banks involved.

Governance

Since credit cards have by now become widespread, many of the regulatory issues involved in their use have already been considered. In fact, two specific regulations of interest were enacted at least partially with consideration of credit cards: the Electronic Funds Transfer Act and the financial information provision of the Computer Fraud and Abuse Act. Thus a short consideration of lessons learned with credit cards may be fruitful for later discussions.

The Electronic Funds Transfer Act was passed because of the government's recognition that a customer has neither the ability to manage the risks of the payment system as a whole nor the ability to prevent use of a financial instrument once stolen. The initial assumption that a customer would bear the cost of charges made in the event her card is lost or stolen is reflected in the assumption by many electronic commerce systems that the customer will simply bear the cost of charges if her cryptographic key is lost or stolen. In fact, under the Digital Signature Law as originally passed in Utah, if an attacker obtained a consumer's secret key, the attacker could enter into contracts requiring the customer to continue to pay for many years. This results from the law's failure to consider key loss -- the law treats keys as if they were unalterably linked to an individual, like the signature on which keys are modeled. (Notice that proposals to use biometrics for identification are potentially even more hazardous to consumers. If the data describing biometrics, e.g. a fingerprint or retina print, are stolen from a database, the consumer will have difficulty stopping the resulting fraud. And of course, it is simply not possible to replace a fingerprint when data are stolen.)

Cash: A Token System

Cash is token currency, as defined earlier in this Chapter. The examination of legal tender provides a model and basis for later comparison with electronic token currency.

In the United States federal law ensures the interoperability of cash. It is "legal tender for all debts public and private." The business model of cash is interoperability and availability assured by government action to enable and encourage commerce. Cash is interoperable because it is legal tender. The lack of interoperability of bank and state currencies was a driving force behind the creation of a national currency. Internationally, interoperability is provided by currency exchange services which convert one currency to another (and charge a price for doing so). The ability of American legal tender to serve as a store of value and a standard of exchange has resulted in its having global interoperability.

The subsequent section analyzing a cash transaction discusses the importance of the various attributes of cash, as well as the problems that arise in trying to use cash to make remote purchases. One fundamental problem remains with electronic cash -- how can a customer prove payment for a remote anonymous purchase? The privacy and security sections below illustrate the strengths and weaknesses of physical cash along those dimensions.

There are no limits to scale in the number of users of cash, except those imposed by limits on the number of bills printed and/or available. Not only are individual transactions isolated, but the system is also free from bottlenecks.

The availability of cash has proven critically important for economic and social reasons. Some scholars argue that the oppression of sharecropping for blacks in the American South was very much predicated on the return to the gold standard and the resulting currency shortage.45 This dramatic example illustrates that should a single standard for Internet commerce emerge, the private control of the currency, the potential lack of availability, and any lack of interoperability may have unforeseen implications.

Cash is divisible in that it comes in many denominations; a single high-value token can be exchanged for many low-value tokens, and many smaller tokens can be exchanged for a single high denomination token.

A Transaction

Consider a remote cash purchase -- sending a dollar through the mail to a merchant to purchase a particular good or product. Assume that delivery of the goods is to a Post Office box, so that the customer need not offer the merchant any identity information.

A Cash Transaction

First, the customer requests a dollar from the bank as shown in Figure 10.2. The bank decrements the customer's account by the appropriate amount and provides the dollar to the customer. The customer then sends the dollar in the mail to the merchant, requesting an item in return. The merchant verifies the dollar through visual examination -- this is analogous to off-line verification. The merchant can prove that he has the right to spend that dollar by virtue of having the dollar. Thus the merchant has no need to provide authentication to spend the dollar. The merchant then sends the goods requested to the customer.

When the merchant deposits the dollar in the bank, the bank does not link that dollar to the one given previously. In theory the bank could keep track of the serial numbers of all the dollars it gives out and to whom it gives these dollars, but this would be an extremely costly method of surveillance. This type of surveillance also requires that the customer and merchant use the same bank, further decreasing the likelihood of any attempt at surveillance along these lines.

In a cash transaction the customer cannot prove that the merchant received payment; this is not an atomic transaction. If the customer uses registered mail there is still no proof of the contents of the envelopes; however, this would give the customer some claim. The merchant can simply take the money to the bank for deposit. The customer cannot prove previous ownership of the dollar or that the merchant has made a commitment to deliver merchandise in exchange for the dollar.

What would happen if banks kept records of the identities of all those who withdraw dollars linked to the serial number of the dollars that were withdrawn, and dollars could be spent only once before being returned to a bank? (Notice the electronic analog is not so unlikely or impossible when money is on-line, digitally signed, and bank-specific.) The customer would then sacrifice anonymity (because her name would be linked to the serial number) but could verify her claim to have paid the merchant (whose identity would also be linked to the serial number through the deposit). This example illustrates one type of conflict between anonymity and atomicity present in the electronic systems examined in this text.

Back now to the cash example: this transaction is isolated. Regardless of what occurs in any other transactions, the merchant can deposit the dollar.

The transaction may or may not be consistent. If the dollar is lost in the mail then the customer may believe that she has been defrauded and the merchant will not know a transaction has been attempted. The dollar may simply disappear.

Finally, the transaction is durable. The merchant will have the dollar; the customer will not. The customer cannot arbitrarily reverse the transaction.

Security

Cash does not require trust between users. If a bill is determined to be counterfeit, the holder of the bill is not compensated. However, the mechanisms for evaluating a bill when accepting it require trust only in one's own abilities. Compare this with the impossibility of verifying a check, which would require knowledge of the account status and intentions of the check-writer. The validity of a bill can be verified during the transaction by visual inspection. By accepting cash, merchants imply only that they trust their own ability to detect counterfeits, as opposed to trusting a credit card association or a customer's creditworthiness.

Clearly there are security failures in the form of counterfeit notes, but security in cash transactions is generally maintained by a time-tested work factor. The design of the bills is periodically updated to discourage counterfeiting. Systems-level failures in the paper currency system are prevented by risk-limiting regulation, federal depository insurance, limits on denominations, and the sheer magnitude of the task of passing enough counterfeit currency to upset the entire system. However, once the hurdle of printing a single counterfeit bill is overcome, the marginal cost of printing each additional dollar approaches zero.

The dollar in the example could be taken from the USPS since it is unprotected, except of course by law. In 1994 there were roughly 20,976,000,000 pieces of mail delivered (Bureau of Census, 1995). Thus the sheer magnitude of the effort of searching the mail, combined with the relative rarity of finding cash in such an endeavor, provides a high work factor that essentially prevents theft by observers. There is no advantage to scale in this sort of theft: searching for the nth dollar will be as hard as searching for the first. Of course, if sending money through the mail were common, so that half of all envelopes contained cash, then the search would prove worthwhile.
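The work-factor argument can be made concrete with back-of-the-envelope arithmetic. All the figures below (the rarity of cash in the mail, the inspection time, the value of a hit) are assumptions chosen for illustration, not measured data:

```python
# Rough work factor for theft by searching the mail. The rarity of cash,
# inspection time, and bill value are all assumed for illustration.
pieces_per_hit = 100_000     # assume 1 envelope in 100,000 holds cash
seconds_per_piece = 5        # time to open and inspect one envelope
value_per_hit = 20.0         # assume a $20 bill per hit
hours_per_hit = pieces_per_hit * seconds_per_piece / 3600
hourly_take = value_per_hit / hours_per_hit
# roughly 139 hours of searching per $20 found -- about 14 cents an hour
assert hourly_take < 0.25
```

Under these assumptions theft by mail search pays far below any legitimate wage, which is the sense in which the work factor "prevents" the attack.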

Privacy

Cash offers both privacy and anonymity because a dollar contains no information that can be used to determine its transaction history. Nor does the exchange of cash necessarily create a record that includes the identities of those involved. Cash transactions usually provide anonymity to the customer but not the merchant. The privacy afforded by a cash transaction is limited by the potential for physical observation of the customer by the merchant. Yet it is unlikely that the merchant keeps records of customer attributes. The information available to different parties in a cash transaction is shown in the following table.

Party       Merchant   Customer   Date   Amount    Item
Merchant    Full       Partial    Full   Full      Full
Customer    Full       Full       Full   Full      Full
Law Enf     No         No         No     Partial   No
Bank        No         No         No     Partial   No
Observer    Full       Full       Full   No        No

Information Available in a Cash Transaction

A cash transaction produces no bank or law enforcement records. It is reasonable to assume that no bank employee or law enforcement officer observes most cash transactions. Therefore the information available to a bank or to law enforcement is limited by what it can obtain from written records of the transaction. Reporting of some transactions is required by law, but these reports depend on the active cooperation of the parties involved. Use of a bank imposes an upper limit on the size of any transaction, since the bank knows the amount of any resulting deposit and must comply with government reporting requirements.

In a remote transaction the customer can choose to have materials delivered to a Post Office box, so only the customer's region of residence is known to the merchant. With a warrant law enforcement could obtain the identity of the box holder if given access to the merchant's records and if the PO Box was not rented under a pseudonym.

Consider an observer who is physically well placed: for example, one standing beside the customer in the post office when the customer places the envelope containing the dollar bill in the mail. The observer can watch the item being placed in the post, but cannot open the envelope to discern what it contains. Again, the work factor in searching the mail makes it unlikely that any given letter will be intercepted.

Governance

The current regulatory structure has been built over time to deal with issues involved in transactions in money. The regulatory structure has as a fundamental goal that risk be placed on the party in the transaction who is more able to prevent loss. For example, if the merchant in this example steals the money (receives it but does not send the requested merchandise) the customer absorbs the resulting loss. This is because in this case the customer is the only person empowered to choose to send her cash in the mail. Similarly, merchants and banks lose if they accept counterfeit cash because merchants and banks are in the best position to prevent counterfeiting.

This principle of assigning risk to the party more able to prevent a loss has not yet been widely applied to electronic commerce, in part because ability and responsibility for keeping information secure have not yet been culturally determined, and in part because there has thus far been no market failure compelling risk to be allocated in a way that is acceptable to consumers.

Summary

In this chapter I have explained the core differences between token and notational commerce systems. I have also developed a method for examining the placement of risk in an electronic commerce transaction. This method, examining the reliability, security, and privacy in a transaction, should be applicable to the plethora of commerce systems on the market, as well as those which are emerging. Every commerce system sounds ideal from the vendor's perspective, and one of the basic aims of this method is to provide a common framing for all commerce systems.

In the next chapter I apply the analysis methods described and illustrated here to Internet commerce systems. The following examples serve not only to examine popular and proposed Internet commerce systems but also to further illustrate the use of the step-by-step method for evaluating trust in an Internet commerce system.


11: Internet Currencies

I separate the currency systems discussed in this chapter into notational currencies and token currencies, according to the classification scheme described in Chapter 10.

Notational Currencies

In notational currency, the information transferred consists of instructions for payment as I described in the last chapter. The value in this currency is stored as notations in the ledger of a trusted institution. Transactions made using this currency include instructions that these notations be changed.

The advantage of notational currency is that record keeping is an inherent part of the system. This simplifies recovery from failure. If a single ledger is used the transaction is certain to be serialized and as a result implementing ACID transactions is straightforward. That is, if all the steps are recorded in order in one place then dispute resolution should be simple -- all that is necessary is that a central ledger be queried.
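The single-ledger model can be sketched in a few lines of code. This is an illustrative toy with invented names; a real system adds authentication, concurrency control, and persistent storage:

```python
# Minimal sketch of a notational currency: value exists only as entries
# in one trusted ledger, and a payment is an instruction to change two
# entries together. Names and amounts are invented for illustration.
class Ledger:
    def __init__(self, balances):
        self.balances = dict(balances)
        self.log = []   # every transfer recorded in serial order

    def transfer(self, payer, payee, amount):
        # the whole instruction succeeds or fails: both notations
        # change or neither does (atomicity)
        if self.balances.get(payer, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[payer] -= amount
        self.balances[payee] = self.balances.get(payee, 0) + amount
        self.log.append((payer, payee, amount))   # ordered, durable record

bank = Ledger({"customer": 100, "merchant": 0})
bank.transfer("customer", "merchant", 30)
assert bank.balances == {"customer": 70, "merchant": 30}
assert bank.log == [("customer", "merchant", 30)]
```

Dispute resolution reduces to replaying or querying `log`, which is exactly the simplicity the single-ledger design buys.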

Here I analyze three Internet commerce systems that use notational currency -- First Virtual, Netscape's Secure Sockets Layer, and Mastercard's Secure Electronic Transactions -- using the method described and illustrated in Chapter 10.

First Virtual

First Virtual is a protocol for the first generation of Internet commerce. As do all on-line systems, First Virtual offers automated customer support, promotion, administration and processing. Some First Virtual transactions are large enough that aggregating them to meet the economic threshold for a credit card transaction is unnecessary; however, First Virtual can aggregate small transactions, and thus can overcome the lower limit to scale in credit card transactions. The goal of First Virtual is not to decrease transaction costs but rather to provide immediate access to customers on the Internet for medium-priced information goods.

First Virtual is based on the theory that the provision of information goods over the Internet is practically free and that the Internet itself is inherently without security. First Virtual aggregates Internet transactions, filters Internet transactions, provides billing for Internet transactions, and resolves disputes about transactions over the Internet (First Virtual, 1995a). First Virtual is an account acquirer from the perspective of consumers and merchants; in contrast, First Virtual is a single merchant from the perspective of the Visa-associated merchant acquiring bank.

First Virtual filters transactions and resolves billing disputes about these transactions by maintaining that the customer is always right. First Virtual limits customers' abuse of this policy by limiting the customer's total number of refusals -- after a given number of refusals of payment a customer's account privileges are terminated.

First Virtual's protection against fraud is based on three business practices:

Commerce without security has limited application. The size of a purchase using First Virtual is limited by the tolerance for fraud of the merchants involved. Merchants with high-cost goods for which there is a high demand are unlikely to accept the potentially high levels of fraud possible in First Virtual transactions. First Virtual works well, however, for low-priced goods with a small to medium market, or high-priced goods with a specialized market. The acceptance of First Virtual is also subject to how often attacks on customer account identifiers occur and customer tolerance for the time and effort in addressing these attacks.

First Virtual is a useful means of transacting for information goods delivered over the Internet. The fact that many on-line information goods are widely distributed and often have very low value hampers the market for these goods. Many on-line information merchants are not large enough to make merchant accounts with credit card companies practical. In addition to the number and small size of many information providers, the market is problematic for current Internet commerce protocols because the value of these merchants' items is low, consumption happens soon after delivery, there is no standard for proof of delivery on-line, and there is no physical presence of merchant or customer at the other's location. Because First Virtual's business model is based on negligible merchant losses, First Virtual is not well suited for orders for physical goods -- the losses of a merchant who is unpaid for physical goods are not negligible, unlike those of one unpaid for a copy of digital goods.

Becoming a First Virtual merchant requires a credit card, email, data storage capacities and Web access. Notice that an Internet user can be a First Virtual merchant with a standard credit card account, while other systems require that merchants have merchant accounts. First Virtual's approach vastly expands the number of possible merchants who could use the system and therefore the probability that there will be goods of interest to a customer.

To obtain a First Virtual account a customer must have email and a credit card. The prospective customer sends email to First Virtual that includes a customer-selected password. The customer then calls First Virtual and provides credit card information over the telephone. The credit card information itself is never sent over the Internet. The password and user name (which First Virtual calls an account identifier) are used by customers to authorize charges against their accounts. First Virtual charges a low initial fee to become a First Virtual customer.

Presumably, First Virtual also profits from the redistribution of the email addresses of its customers. Customers are allowed to opt out of this program when they sign up for an account; however, if they do not ask to be excluded the default value is to have their email addresses available for redistribution.

A Transaction

Figure 11.1 shows a First Virtual transaction. Note that the bank involved in the transaction is actually off-line, and is contacted by First Virtual after a transaction or a series of transactions have been completed.

A First Virtual Transaction

The transaction begins when a customer selects an item and requests a price quote from the merchant. The customer then requests the item with a message to the merchant that includes her First Virtual account identifier and the associated password. The merchant can authenticate the customer's claim to be a valid First Virtual customer at this point, or wait until after the goods are delivered (as shown in the figure above). First Virtual verifies the password and account identifier supplied by the customer at the merchant's request. If the customer is a valid First Virtual customer, the merchant is contractually obligated to deliver the requested items.

The merchant sends the goods to the customer. Then the merchant transmits a request for the customer's payment authorization to First Virtual and requests payment, as shown in step 6. If he has not sent the customer's authentication information previously in step 4 he can do so now; however, in any case the merchant is required to send the merchandise requested before asking for payment. First Virtual then sends an email message to the customer requesting final authorization of the charge. The customer is charged only if the customer verifies the charges. Finally, First Virtual notifies the merchant of the result. (After some number or amount of charges, First Virtual charges the customer's credit card off-line, as would a traditional merchant as described in Chapter 10.)
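The verification and confirmation steps above can be sketched as code. The class, the account details, and the refusal limit are all invented for illustration; the real protocol runs over ordinary email:

```python
# Sketch of the First Virtual flow described above. Class, method, and
# account names are invented for illustration, and the refusal limit
# of three is an assumption, not First Virtual's actual policy.
class FirstVirtual:
    def __init__(self):
        self.accounts = {"jane-doe": "hunter2"}   # identifier -> password
        self.refusals = {}

    def verify(self, ident, password):
        # steps 4-5: the merchant asks First Virtual to validate the
        # customer's account identifier and password
        return self.accounts.get(ident) == password

    def confirm_charge(self, ident, approved):
        # steps 7-8: First Virtual emails the customer, who approves
        # or refuses the charge after receiving the goods
        if not approved:
            self.refusals[ident] = self.refusals.get(ident, 0) + 1
            if self.refusals[ident] >= 3:          # assumed refusal limit
                self.accounts.pop(ident, None)     # terminate the account
        return approved

fv = FirstVirtual()
assert fv.verify("jane-doe", "hunter2")        # merchant may ship the goods
fv.confirm_charge("jane-doe", approved=True)   # customer agrees to pay
```

Note that refusing too many charges removes the account entirely, after which `verify` fails and merchants stop shipping, which is the business-practice substitute for cryptographic security.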

The merchant may choose to validate the customer at the price request, or before or after delivering the goods. Validation at the price request means that the merchant never serves a request with an invalid password. There is a trade-off in this choice: a merchant saves one message on every valid transaction or saves processing invalid requests. The appropriate choice depends on the ratio of valid purchases to fraudulent requests as well as the relative costs of communications and processing.
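The trade-off can be stated as an expected cost per incoming request. The cost figures and the valid-request rate below are illustrative assumptions, not First Virtual's actual numbers:

```python
# Toy model of the validate-early vs. validate-late choice, as an
# expected cost per incoming request. All figures are assumptions.
def cost_validate_first(p_valid, c_msg, c_proc):
    # pay one verification message on every request, but process
    # (i.e., serve) only the valid ones
    return c_msg + p_valid * c_proc

def cost_validate_last(p_valid, c_msg, c_proc):
    # serve every request, valid or not; verification happens later
    # as part of the payment request in any case
    return c_proc

early = cost_validate_first(0.95, c_msg=0.01, c_proc=1.0)
late = cost_validate_last(0.95, c_msg=0.01, c_proc=1.0)
# early validation wins whenever c_msg < (1 - p_valid) * c_proc,
# i.e. when the extra message is cheap relative to wasted processing
assert early < late
```

With cheap messages and even a few percent of fraudulent requests, early validation wins; if nearly every request is valid and messages are costly, the inequality reverses.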

Regardless of the merchant's timing of verification, the customer has the right to refuse to pay for an item after having received it. This prevents conflicts based on quality and deceptive advertising. First Virtual reserves the right to limit the number of times a consumer may choose not to pay for an item received; but a merchant cannot choose to refuse to send an item to a valid First Virtual customer. This means the merchant must accept First Virtual's definition of acceptable risk.

First Virtual's email to the customer in step 7 and the request in step 1 travel to their destinations through different parts of the Internet, just as a telephone call from Boston to Tokyo and a fax from Boston to New York travel through different parts of the telephone network. Therefore First Virtual considers these independent channels. It is simple to obtain a packet containing ordering information from First Virtual: there are locations where the majority of the traffic consists of First Virtual identifiers or purchased goods, so fishing for a First Virtual identifier there would be profitable and would require searching through relatively few email messages. Intercepting the authorization request message to the customer is more difficult. It would require either filtering every message received by the customer or sent by First Virtual, or alternatively breaking into the customer's home email account to respond. More importantly, an attacker positioned to trap such packets has already obtained the goods, so there would be no gain in completing the second, more difficult, part of the process. So it is likely that the email sent to the customer results in a valid reply in step 8.

First Virtual transactions use off-line billing. First Virtual does not provide money atomicity. The actual transfer of funds in the First Virtual system is implemented off-line using standard payment mechanisms (like any merchant who accepts credit cards unrelated to the Internet). While First Virtual looks like a bank to the on-line consumer, First Virtual is a single merchant from the perspective of the financial infrastructure. Like a card-issuing bank, First Virtual can cancel a customer's First Virtual account if First Virtual makes a payment to a merchant on the customer's behalf and then the customer refuses payment to First Virtual using their card-issuing bank's dispute resolution mechanisms. However, canceling this customer's account will not make the previous transactions money-atomic.

First Virtual does not provide goods atomicity or certified delivery. The customer can receive goods and refuse payment.

Successful First Virtual transactions - those in which the customer chooses to pay for the requested goods - are isolated. Unsuccessful transactions are not. Because First Virtual tracks consumer refusal, the result of a customer refusal on one transaction depends on the outcome of her previous transactions. Too many refused transactions eventually result in a refusal of service to the customer -- that is, First Virtual no longer validates the customer's account identifier and password when a merchant presents them, so merchants presumably no longer send merchandise to her. Thus the lack of isolation is not a flaw in this system, since it is a result of a considered business strategy.

First Virtual transactions are consistent. Both the merchant and customer know whether the merchant has been paid. Note that there is not consistency, however, with respect to goods delivery. The merchant may believe the customer has the goods and expect payment, but the customer may not pay. If the protocol were goods atomic, goods consistency would also be expected.

After the final email from the customer to First Virtual confirming her willingness to pay, First Virtual transactions are durable. The customer cannot change her mind about the quality of merchandise after having approved payment of a charge.

Security

First Virtual assumes that the Internet is inherently without security and thus does not send credit card information itself over the Internet.

First Virtual is not secure. An attacker need only trap a packet that has the account identifier of a First Virtual account holder to be able to use the customer's account information to make purchases. Since there are well-known locations that receive many of these packets (for example, the First Virtual Infohaus), finding such a packet is unlikely to be difficult. Thus the very lack of widespread interoperability between forms of network commerce is an advantage for First Virtual, since a First Virtual account authorization cannot be traded for any other financial instrument; it is good only for purchases made through First Virtual. Even that use will eventually run out, after the customer has refused to verify the charges and the theft is discovered. Since a customer's credit card number is never transmitted through the Internet, obtaining a First Virtual account identifier does not provide the attacker with access to the customer's line of credit.

Merchants likewise can get customer First Virtual account information but not customer credit card information. Thus, merchants do get the information necessary to authorize further purchases within the First Virtual system, including charging their own customers for items they did not select. Merchants themselves will not profit from padding charges; however, they can use this information to illegitimately obtain information goods from another merchant.

Merchants cannot protect the information they sell as it travels over the Internet. Attackers may steal information goods by trapping and copying information goods as they are sent to legitimate First Virtual customers.

In sum, First Virtual mitigates risk by limiting interoperability. Although the First Virtual system is not secure, it isolates and limits security failures through business practices.

Privacy

Table 11.1 shows the information available to various parties in a First Virtual transaction. In First Virtual transactions the merchant gets the customer identification information immediately upon the request for goods, so merchants can easily build detailed consumer profiles. In fact, First Virtual requires merchants to keep detailed transaction records for at least three years after the transaction (First Virtual, 1995b).

A customer can choose a pseudonym for her First Virtual account identifier. If the customer takes advantage of this option, a merchant can identify serial purchases by the same customer but cannot link that information to any non-First Virtual transaction data.

Party                 Merchant   Customer   Date   Amount   Item
Merchant              Full       Partial    Full   Full     Full
Customer              Full       Full       Full   Full     Full
Law Enf (w/warrant)   Full       Full       Full   Full     Full
First Virtual         Full       Full       Full   Full     Full
Observer              Full       Full       Full   Full     Full

Information Available in a First Virtual Transaction

Since messages transmitted in a First Virtual transaction are not encrypted an observer could easily develop a detailed profile of consumer habits. Observers can even more easily profile a merchant's on-line business by watching only one server location.

Governance

First Virtual can provide all the information necessary for any regulatory purposes concerning any transaction made through its system. In fact, First Virtual maintains more information than it would legally be required to provide.

Banking laws cover neither First Virtual merchants nor First Virtual itself, so legal requirements for maintaining customer transactional data do not apply. The multi-year retention of transactional data that First Virtual requires of its merchants reflects the time frame of interest to law enforcement rather than to businesses, since for a given charge the customer's right to dispute is contractually limited to weeks.

First Virtual makes no attempt to control the use of data about customers maintained by First Virtual merchants. The system's crypto-free nature means that consumers and merchants who use it to transact business have no privacy from even casual observers. This makes the careful choice of and frequent changes in customer pseudonyms important. First Virtual supports such changes.

First Virtual's requirements on merchant data retention reflect the need for broader controls on consumer records. Since the provision of consumer credit reporting is not the primary business function of either First Virtual or its merchants, their customer records are not covered under the Fair Credit Reporting Act. Thus First Virtual and associated merchants can resell detailed data on customer preferences.

Secure Sockets Layer

There are multiple versions of secure protocols for use on the World Wide Web and with many browsers. These include S-HTTP, encrypted telnet, encrypted ftp, and the Secure Sockets Layer. These protocols can be used for Internet commerce, and in fact, Netscape has long advertised (Netscape, 1996) Secure Sockets Layer as an Internet commerce tool. The option addressed here is that offered by Netscape for use with its own browser: Secure Sockets Layer (Freier, Karlton and Kocher, 1996). Version 3.0 is the focus here, as described in the appropriate Internet Draft.46

The Secure Sockets Layer is built to enable secure peer-to-peer communication over the Internet, not to enable electronic commerce per se. Electronic commerce is, however, the most obvious, and possibly most frequently used, application of SSL, although it is not an electronic commerce protocol. Rather SSL is a handshake protocol47 for establishing a secure channel that can then be used for commerce. Possible uses include confidential email, real-time contract negotiation, and transmissions of sensitive data within or between institutions. The Secure Sockets Layer can be combined with other protocols that would be strengthened by the use of an encrypted channel, such as First Virtual. SET assumes the use of the Secure Sockets Layer for customer address information (Lewis, 1996).

The Secure Sockets Layer replaces the telephone line in the credit card transactions described in Chapter 10 with an encrypted Internet connection.

The Secure Sockets Layer has an extremely limited scope: it offers only an encrypted tunnel through the Internet that enables the secure delivery of financial information.

The Secure Sockets Layer enables traditional credit card transactions over the Internet. It is most useful for charges sufficiently large that they do not need to be aggregated. Thus the lower bound on transaction scalability in terms of transaction size that exists with traditional credit card purchases also applies to transactions using the Secure Sockets Layer. (Recall the discussion of the costs of credit card transactions.) The Secure Sockets Layer requires that merchants be credit card merchants in the traditional sense: each merchant that uses Secure Sockets Layer must have a merchant credit card account with an acquiring bank.

A Transaction

The Secure Sockets Layer protocol begins with an exchange of certificates and ends with an exchange of keys. The information transmitted in further exchanges has no relationship with the SSL handshake, just as the development of a human relationship is not determined by the introduction; the content is completely transparent to the protocol. The figure below illustrates a Secure Sockets Layer transaction. The bank is not shown because communication with the bank takes place according to the pre-determined arrangement between the merchant and the bank. It may take place off-line, i.e. not on the Internet but over a private leased line, or over the Internet in an encrypted connection. The connection between the bank and the merchant is not in any way determined by the Secure Sockets Layer.

A Secure Sockets Layer Transaction

There are options within Secure Sockets Layer for authentication and key exchange for users with and without certificates. For consistency across protocols in the discussion here the customer and merchants are assumed to have certificates.

In the first two steps the customer and merchant authenticate themselves to one another and generate a shared key. Although authentication and key generation require more than two messages, this exchange can reasonably be modeled as two functional steps. There is a message for use by the customer for requesting the merchant's certificate, and thus the customer is assumed to have the certificate. The customer uses the merchant's public key to initiate a transaction. The merchant replies and may request the customer's certificate. The customer and merchant use the public keys contained within their certificates to authenticate their respective identities and to generate a symmetric key. Secure Sockets Layer as implemented in Netscape's Navigator assumes that the customer has a certificate from one of a given number of public-key providers, the first and earliest being Verisign and RSA (Verisign, 1996).

In the third step, using the protection provided by symmetric encryption, the customer sends her credit card number. In the fourth step the merchant delivers the goods.

Notice that after step three the merchant has the information needed to obtain authorization from the customer's credit card provider, and therefore could authorize the amount of the transaction through the bank at any time after that step. Since the communication with the bank is not included in the Secure Sockets Layer, it is not shown in the figure.

The Secure Sockets Layer provides a handshake for authentication and the generation of a shared key. Thus it clearly cannot provide atomicity. Since credit card companies treat Internet purchases as telephone orders, the customer can refuse payment to the merchant. Thus the lack of money atomicity in the off-line financial system suggests that there is no money atomicity in a transaction using the Secure Sockets Layer, either. There is also neither goods atomicity nor certified delivery.

Consistency, durability, and isolation are as in a telephone order, as described in the first example in Chapter 10.

Security

The greatest security threat with the Secure Sockets Layer is that merchants who use the protocol must keep their servers secure in order for credit card numbers to remain secure. Thus, the customer must trust not only the merchant and his employees, but also the merchant's technical acumen in computer security. The theft of 20,000 credit card numbers from Netcom in the early nineties illustrates that extending this trust is a problematic proposition. If a merchant's employees are dishonest, his organizational security procedures inadequate, or his software installation faulty, the consumer is at risk for credit card fraud.

Even if the merchant is honest, his employees may present a security problem. Replay attacks are a trivial matter for a dishonest employee with access to the information provided to the merchant.

The effective regulatory limitation of key length to forty bits is a weakness, since the payment authorization information is not transaction-specific. (Recall the discussion of cryptography policy in Chapter 8.) Forty bits does not provide adequate cryptographic protection against today's processing power. Thus observers could obtain credit card authorization information, using attacks as described in the introductory chapters on security.
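The inadequacy of forty bits is a matter of simple arithmetic. The attack rate below is a hypothetical figure chosen for illustration, not a measured one:

```python
# Size of a 40-bit keyspace versus a modern 128-bit one.
keys_40 = 2 ** 40
keys_128 = 2 ** 128

# A hypothetical attacker testing 10 million keys per second
# exhausts the entire 40-bit space in little more than a day.
seconds = keys_40 / 10_000_000
print(keys_40)            # 1099511627776
print(seconds / 86400)    # about 1.27 (days)
```

The same attacker would need on the order of 10^24 ages of the universe for the 128-bit space, which is why key length, not algorithm, was the binding constraint.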

Privacy

The Secure Sockets Layer provides not financial services, but rather software to create an encryption-secured connection through the Internet. The off-line bank is the financial service provider. Netscape provides transmission security software only: it issues no cryptographic certificates and initiates no financial transaction authorization. Netscape neither receives nor maintains any information about any transaction conducted using the Secure Sockets Layer.

Party               Merchant   Customer   Date    Amount   Item
Merchant            Full       Full       Full    Full     Full
Customer            Full       Full       Full    Full     Full
Law Enf w/warrant   Full       Full       Full    Full     Full
Netscape            No         No         No      No       No
Bank                Full       Full       Full    No       Full
Observer            Full       Full       Full*   No       No

Table 11.2: Information Available in a Transaction Using Secure Sockets Layer

Table 11.2 shows information available to various parties in a transaction using the Secure Sockets Layer. Identity information is available as shown in the table because the certificates for customer and merchant authentication are sent without encryption.

An asterisk marks the observer as being uncertain about the date of the transaction. This is because the observer cannot determine if a transaction actually took place -- only that there was communication between the customer and the merchant.

Information concerning transactions is concentrated at the off-line financial services provider, the acquirer bank. The bank has the ability to correlate and distribute this information as it would for information from any other kind of transaction.

Governance

The Secure Sockets Layer sets up secure connections through an open network. Netscape as an entity does not have any information about what data have passed through a Secure Sockets Layer connection, so there is no central repository of information for governance.

In a Secure Sockets Layer transaction the merchant retains the authorization information it receives when processing the customer's credit card payment. Thus the merchant has responsibility for protecting all customer information. This has proven problematic in the off-line payment world, with disbarred merchants and dishonest employees using credit card information to make unauthorized charges to a customer's credit card.

Finally, the amount of consumer information held by the merchant and financial services providers after a customer participates in a transaction that involves the Secure Sockets Layer further strengthens the argument that limits on secondary use of financial information need to be expanded. As Internet commerce becomes increasingly common, long-standing merchants will build extensive computerized records of consumer purchases. A clear business opportunity exists for merchants in selling these records.

Secure Electronic Transactions

The Secure Electronic Transaction protocol (Mastercard, 1996) is a combination of Mastercard's Secure Electronic Payment Protocol (SEPP) (Mastercard, 1995) and Visa's Secure Transaction Technology (STT) (Visa, 1995) protocol.

The Secure Electronic Transaction protocol does not necessarily aggregate purchases, although a merchant may choose to send requests for verification and payment in batches. This is feasible for the obvious reason that there is no more need to aggregate large purchases made over the Internet than large purchases made anywhere else. The same customer support, order processing, administration, and promotion savings that can be obtained by other purveyors of electronic commerce can be obtained by traditional credit card acquirers. The Secure Electronic Transaction protocol may not so much compete with as complement the approaches of the previously mentioned Internet commerce providers.

Internet commerce using the Secure Sockets Layer is modeled, in terms of risk, as mail order and telephone commerce. The merchant, rather than the acquirer, takes the risk for invalid purchases, as in mail and telephone orders, because a physical card is not presented to the merchant in the transaction; only the information on the card is used. However, Internet commerce using the Secure Electronic Transaction protocol is modeled, in terms of risk, as a card-present transaction. In card-present transactions the merchant is guaranteed payment by virtue of having a physical imprint of the customer's card. In the case of the Secure Electronic Transaction protocol the merchant has a digitally signed record that guarantees payment. Thus although the Secure Sockets Layer and the Secure Electronic Transaction protocol use the same credit card clearing system, the risk allocation is fundamentally different.

The Secure Electronic Transaction protocol allows only traditional merchants to sell goods. This means that small publishers, small manufacturers, independent programmers, and professionals working at home cannot use the Secure Electronic Transaction protocol, since they are not likely to have merchant accounts with an acquirer authorized to clear the customer's credit card purchases.

The Secure Electronic Transaction protocol was developed using an open process of issuing drafts and requesting comments. Originally, Visa proposed a proprietary system with Microsoft, possibly in an attempt to leverage the dominance of the Microsoft operating system to popularize Visa's proposed proprietary technology. Visa's decision to pursue an open process is significant in terms of the promise of future interoperability. However, in contrast to open source, in this case the ownership of the open Internet standards is retained by Visa and Mastercard.

The weakness of the Secure Electronic Transaction protocol in terms of interoperability is that the requirement for credit card ownership sharply limits the pool of consumers for Internet commerce. In addition, the requirement that merchants have traditional merchant accounts in order to accept funds seriously limits its potential for Internet commerce by limiting the merchant population to traditional merchants. Contrast this with the low barriers to being a merchant at Ebay, and consider the rate at which Ebay has grown in comparison with the far slower adoption of the Secure Electronic Transaction protocol.

The probability that the Secure Electronic Transaction protocol will eventually emerge as a common standard for Internet commerce is supported by simple observation of the financial strength of its founders, Visa and Mastercard. CyberCash, American Express, and Europay have adopted the Secure Electronic Transaction protocol as a standard.

A Transaction

The Secure Electronic Transaction protocol offers multiple protocols for electronic commerce that reflect the different types of Internet access available. Transactions are possible for customers with email connectivity and with Web connectivity. Transactions can be implemented by customers with or without certificates. In the discussion here I consider transactions in which a customer has public key certificates and Web access. This is the appropriate model for maintaining consistency across comparisons of different protocols. An alternate version assumes that customers can only calculate hash values of payment information. This protects payment information from merchants.

The Secure Electronic Transaction protocol uses the standard language of the credit card industry. I use slightly different language in the discussion here in order to be consistent with other descriptions. Normally credit card verification is referred to as authorization and payment is referred to as capture. Since the words "authorization" and "capture" have other, specific meanings in computer security, however, the terms verification and payment, respectively, are used here instead. The bank as shown in Figure 11.3 is an acquirer gateway, a service provider for acquirer banks. That is, the gateway is the Internet presence of the bank or of a set of banks. With these changes in terminology the figure corresponds to the description in the Secure Electronic Transaction protocol specifications.

A Secure Electronic Transaction

Since the Secure Electronic Transaction protocol specifications permit batching of verification and payment of multiple transactions, the contents of a specific message may vary slightly from the single-transaction model shown here. In fact, steps 5, 6, 9, and 10 can precede steps 4, 7, and 8. However, this varies the distribution of risk, so I illustrate a single transaction in which verification and payment both occur at the time of the transaction.

Figure 11.3 shows a transaction using the Secure Electronic Transaction protocol for an interactive medium. Notice that browsing and price negotiation are not included in the Secure Electronic Transaction protocol. A corresponding diagram can be found on page 133 of the Secure Electronic Transaction Technical Specifications (Mastercard, 1996).

In step 1 the customer identifies her desire to make a purchase to the merchant. This first message includes a customer-specific message identifier (LID_C), a corresponding nonce (Chall_C), the customer-selected payment method (BrandID), and a list of certificates with the appropriate hashes for verification. (Recall from the description of cryptography that a nonce is a random number included in a message to prevent replay attacks.) No encryption is used to protect this information; however, one presumes that only the cardholder has any interest in sending the cardholder's certificate. (This is because only the cardholder has the secret key corresponding to the public key to which the certificate attests.) Thus after step 1 the merchant (and any observer that may be lurking) knows the customer's identity, the merchant to whom the message was directed, the item the customer has requested, and the item's price. The merchant now knows the customer's credit card type, limits on the customer's account (including credit limit), and any customer attributes implied by this credit information.
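The shape of this first message can be sketched as follows. The field names follow the text (LID_C, Chall_C, BrandID); the dictionary layout is illustrative, not the actual wire format of the specification:

```python
import secrets

def make_initiation(brand: str) -> dict:
    """Sketch of the customer's step-1 message (illustrative layout)."""
    return {
        "LID_C": secrets.token_hex(8),     # customer message identifier
        "Chall_C": secrets.token_hex(16),  # fresh nonce, defeats replay
        "BrandID": brand,                  # selected payment method
        # certificates and their hashes would follow, in the clear
    }

m1 = make_initiation("Visa")
m2 = make_initiation("Visa")

# A fresh nonce per message is what makes replay detectable:
# two otherwise identical initiations are never the same bits.
assert m1["Chall_C"] != m2["Chall_C"]
```

Note that nothing here is encrypted, which is precisely why the observer of the previous paragraph learns so much.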

In step 2 the merchant acknowledges the customer's request to begin a transaction. The merchant begins a record in the database that includes the customer's transaction identifier (a unique number assigned to the transaction so that it is not confused with other simultaneous transactions) and brand (credit card type). Presumably the customer's email address (for responses) is also included, although this is not noted in the specifications. According to the Secure Electronic Transaction documentation, the merchant is supposed to obtain the customer's billing address out of band. Thus the message that contains this information is not specified, although the Secure Sockets Layer is an obvious choice, since it provides confidentiality and is ubiquitous. The merchant must know the customer's billing address for verification.

The message in step 2 includes the shared transaction identifier, a response to the first challenge, a time stamp, and a new nonce from merchant to customer. The merchant also sends his digital certificate with this second message, so that after this message the customer has the merchant's certificate.

Step 3 is the customer's purchase request. This is the customer's conditional commitment to completing the transaction. Note that from the customer's perspective the payment is not durable, and the customer is not committed, until some weeks after this step. However, as soon as the customer commits, the merchant is ensured payment. The customer maintains the right to dispute the transaction until after she reviews her monthly credit card account summary.

The purchase request is the most complex message: it includes payment and order information. The payment information is encrypted so that the merchant cannot read it but the bank can. Notice that the customer's digital certificate proved that the customer had a credit card and provided information about the customer's credit limit; in this message the customer provides the information necessary to actually authorize a charge. The order information and purchase amount themselves are sent in verifiable but unreadable form, i.e. they are hashed. The merchant obtains the order information external to the protocol, again using an out-of-band technique. The purchase request is digitally signed by the customer. The message includes a general description of the goods ordered, the transaction amount, and nonces in the clear. The payment information includes the account number, transaction identifier, transaction amount, and card expiration date, encrypted for the bank.
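The "verifiable but unreadable" property can be illustrated with a hash linkage between order and payment information. This is a simplified stand-in for the protocol's dual-signature mechanism; the sample data, hash choice, and function names are mine, not the specification's:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

order_info = b"1 copy of 'Internet Commerce', $24.00"
payment_info = b"card=4111111111111111; exp=01/99; amount=24.00"  # bank only

# The customer sends the order information to the merchant, the payment
# information encrypted for the bank, and a hash linking the two.
linkage = h(h(order_info) + h(payment_info))

# The merchant can verify that the order he sees is the one the customer
# committed to, given only the hash of the payment data -- he never
# reads the account number itself.
def merchant_verifies(order: bytes, payment_hash: bytes, link: bytes) -> bool:
    return h(h(order) + payment_hash) == link

assert merchant_verifies(order_info, h(payment_info), linkage)
assert not merchant_verifies(b"tampered order", h(payment_info), linkage)
```

In the real protocol the linkage hash is additionally signed by the customer, so neither party can later substitute a different order or payment.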

In step 4 the merchant sends a message verifying the receipt of the purchase order sent in step 3. The fourth message is from the merchant to the customer. Note that the merchant may choose to obtain verification first, or to respond to the customer immediately and batch verification later with other transactions. The former is assumed both in the figure and the discussion. In this case the merchant sends the customer the results of his attempt to obtain payment verification.

The next message is the merchant's indication to the customer that the merchant will complete the transaction, contingent on verification and possibly payment. This message is signed by the merchant. This message includes the transaction identifier, the customer's transaction identifier, and the status of the transaction. The status indicates whether the merchant has requested verification, has requested payment, or has made no request to the bank. If credit card verification has been completed previously, then the verified amount is included. If the transfer of funds, i.e. payment, has been completed, the payment status, payment amount, and the ratio of amount paid to purchase price are included.

In step 5 the merchant requests verification from the bank. (Recall that the bank as represented here is actually a gateway to the clearance system.) This message is encrypted so that only the bank can read it. The message is digitally signed for authentication, then encrypted using a one-time DES key. The DES key itself is then encrypted in the bank's public key.

The verification request includes transaction-specific and merchant-specific data. Transaction-specific data include the transaction identifier, the date of the transaction, and the order information. The order description, the transaction amount, and a nonce are hashed together and also included. The merchant's transaction identifier, the customer's transaction identifier, the date, merchant identifying information, and the brand identifier are hashed together for inclusion. (The brand identifier is a code for the credit card brand, e.g. Diner's Club, American Express.) The merchant-generated data transmitted include the amount, the merchant's business area, and one byte identifying a specific purchase area. This single byte is referred to as the MarketSpecData and identifies the industry involved -- hotel, auto, etc. The customer's billing address is included in the verification request, and there is also an option for requesting additional verification, above the purchase amount, called AdditionalAmount. Finally there is a flag to identify the message as part of a batch, and fields for associated batch information.

Step 6 is the bank's response to the merchant's verification request. Before responding, the bank authenticates the customer using her digital signature and verifies that the hash values signed by the customer match those sent by the merchant. The verification response is encrypted and signed by the bank.

In step 7 the customer may contact the merchant to determine the status of the transaction. Only the merchant's transaction identifier and the customer's transaction identifier are included in this message.

In step 8 the merchant responds with a signed message of the same form as the message in step 4. That is, the merchant reiterates his commitment to completing the transaction and notifies the customer of status of the transaction.

In step 9 the merchant requests payment from the bank. Payment is not equivalent to verification. In verification a certain amount is reserved on the credit line of the customer. In payment, a lesser or equal amount is transferred to the merchant. Payment is reversible in telephone and mail order transactions from the perspective of the merchant, but not in transactions using the Secure Electronic Transaction protocol. The payment messages are both signed and encrypted using a one-time DES key, which is then protected using the recipient's public key.

In step 9 the merchant sends the transaction identifier, the transaction date, transaction-specific data (from the verification request), and the amount of the transaction. Data are added for ease of processing if the order is batched. This message is the merchant's commitment to the bank to complete the transaction.

In step 10 the bank confirms payment.

Consider now the transactional characteristics of the Secure Electronic Transaction protocol. The Secure Electronic Transaction protocol does not assure isolation because of the inclusion of an AdditionalAmount field in the verification request from the merchant to the bank. The customer neither approves nor has knowledge of this field. This fact has proven problematic with physical card transactions as described in Chapter 10.

The ability of electronic customers to travel between merchants at a much higher rate than physical customers may exacerbate this lack of isolation. If a consumer visits many Web pages, making a purchase request at each one, and each merchant blocks off an amount through the verification process that assures maximum possible payment (using the AdditionalAmount field), the consumer may quickly be drained of available credit.

Credit card transactions using the Secure Electronic Transaction protocol are normally consistent in that the customer and merchant agree on the amount paid. They are also durable.

The Secure Electronic Transaction protocol provides money atomicity, but does not provide either goods atomicity or certified delivery. It could be strengthened by the addition of certified delivery, which would increase the level of atomicity, particularly for information goods.

Security

The most dramatic improvement of the Secure Electronic Transaction protocol over the mail order and telephone order model is that the merchant gets enough information for only one purchase. Merchants cannot use Secure Electronic Transaction protocol information for replay attacks. Not only are transaction identifiers unique to a transaction (and never repeated), but the date and time of the transaction are included in the verification. A merchant cannot produce a purchase order signed with the customer's private key with a different time and transaction identifier than the one the customer originally transmitted.
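The effect of unique, never-repeated transaction identifiers can be sketched with a gateway that simply refuses any identifier it has already processed. The class and return strings are illustrative, not part of the specification:

```python
class Gateway:
    """Sketch of a clearing gateway that rejects replayed identifiers."""

    def __init__(self):
        self.seen = set()  # every transaction identifier ever processed

    def verify(self, transaction_id: str, amount: float) -> str:
        if transaction_id in self.seen:
            return "rejected: replay"
        self.seen.add(transaction_id)
        return "verified"

bank = Gateway()
assert bank.verify("TID-0001", 24.00) == "verified"
# A dishonest employee resubmitting the same signed purchase order fails,
# and forging a new identifier would invalidate the customer's signature.
assert bank.verify("TID-0001", 24.00) == "rejected: replay"
```

The signature binds the identifier to the order, so the employee can neither reuse the old identifier nor substitute a fresh one.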

The Secure Electronic Transaction protocol does not include negotiation or verification of delivery of information goods. A customer can claim not to have received goods already consumed, and a merchant can claim to have provided goods not sent. Therefore the security of the Secure Electronic Transaction protocol depends upon the delivery mechanism used. Nonrepudiation has limited strength when the promise can be verified, but the fulfillment of the promise cannot be. (The delivery would be verified for information goods if the Secure Electronic Transaction protocol were expanded to include certified delivery.)

The Secure Electronic Transaction protocol's lack of goods atomicity creates the potential for fraud. The addition of certified delivery could address this for information goods.

The Secure Electronic Transaction protocol includes the possibility of using a pseudonym in terms of the account number. That is, the customer can choose to use a fake account number rather than her real one. Since the possession of an account number creates the possibility for fraud, there is no reason that this should be an option rather than a standard feature. Requiring that all account numbers be pseudonyms is a low-cost technique for increasing security. Having pseudonymous identities linked to the pseudonymous account numbers would increase privacy as well as security.

Customer address and order data are provided to the merchants in a separate channel from the Secure Electronic Transaction protocol by the customers. Thus this information is potentially available to observers. How problematic such an information leak would be depends upon the importance of customer base information to the merchant and the importance of transactional information to the customer.

Customer address information is used for verification of the credit card used in the transaction. Thus, one element of verification information is sent in a way that is neither secure nor private. This is similar to the separate channels used for purchase and verification in First Virtual.

Privacy

Table 11.3 shows the information available in a transaction using the Secure Electronic Transaction protocol.

Party                 Merchant   Customer   Date   Amount   Item
Merchant              Full       Partial    Full   Full     Full
Customer              Full       Full       Full   Full     Full
Law Enf w/warrant     Full       Full       Full   Full     Full
Bank                  Full       Full       Full   Full     Full
Electronic Observer   Full       Full       Full   Full     Full

Table 11.3: Information Available in a Transaction Using the Secure Electronic Transaction Protocol

The Secure Electronic Transaction protocol provides more privacy than standard credit card transactions outside the Internet, since the customer can choose a pseudonymous account number. This implies that the capacity for using pseudonyms is built into the Secure Electronic Transaction protocol, although it is not currently explicit. Note that the fact that financial information is hidden from the merchant increases security, not privacy.

An electronic observer can obtain complete knowledge about a transaction using the Secure Electronic Transaction protocol because the certificates containing identity information of the transaction parties are transmitted in the clear. Encryption is used to obscure payment information, not order information. Messages in the Secure Electronic Transaction protocol could be sent over a connection protected from observers using the Secure Sockets Layer.

Recall that the merchant in a transaction using the Secure Electronic Transaction protocol knows not only the customer's identity but also other customer attributes, including address. This information has the potential to be more than just a privacy violation. The availability of this information, and the ability to correlate it in real time with other ethnographic and economic data, create the potential for electronic red-lining. (Red-lining is the denial of services to certain neighborhoods. It is so called because of the practice of drawing a red line on a map to identify 'unacceptable' regions, and is traditionally associated with racial discrimination.)

The Secure Electronic Transaction protocol offers a medium level of privacy, because the bank (through the acquirer gateway) knows the item(s) purchased. It offers more privacy than First Virtual, since the merchant is apparently not required to maintain records of customer purchases for any length of time after payment, as opposed to First Virtual's requirement that records be kept for three years. The nonrepudiation enabled by public key cryptography, the contractual limits on merchant loss, and the statutory limits on customer loss make such retention of customer data unnecessary.

Governance

The Secure Electronic Transaction protocol is an open standard that provides all information necessary for regulatory purposes. The Secure Electronic Transaction protocol offers very little privacy, primarily because the customer's name and address are required from the merchant for verification. With the use of certificates and public keys, the security advantage gained by requiring the inclusion of such information is questionable for items not requiring physical delivery. In fact, the use of a pseudonymous certificate with no physical customer information would require no change in the protocols and would offer a vast improvement in consumer privacy.

The concentration at the bank, actually an acquirer gateway, of information obtained in transactions using the Secure Electronic Transactions protocol reinforces the need to extend the legal constraints on secondary use of information beyond credit reporting agencies.

According to the Secure Electronic Transaction Business Strategy documentation, privacy is limited in the Secure Electronic Transaction protocol because of the desire to export the protocol under the current controls on the export of cryptography. That is, the amount of privacy the protocol currently offers is limited by the credit card companies' desire to export the technology and by the legal restrictions on the encryption used to protect private information. This regulatory-driven limit on privacy reflects a need for regulatory guidelines to recognize that transactional information correlated with identity is itself valuable and worth protecting. Limiting exported cryptography to the purely financial weakens not only the privacy but also the security of the Secure Electronic Transaction protocol, since information which is not strictly financial but which would be useful for identity theft is sent in the clear.

Token Currencies

In token currency the strings of bits transferred in a transaction are themselves legitimately valuable, in contrast to the notational examples, where value is recorded in ledgers and the string of bits transferred in a transaction is an instruction to alter the ledger. A dollar has value in and of itself; it is not a promissory note for a particular transaction from a specific account, like a credit card purchase slip. Because of this independence in value, token currency need not be linked to specific transactions or identities.

Digicash was the first and remains the canonical token currency system for Internet commerce. Digicash introduced the concept of blind signatures, which allow the bank to verify currency for users without being able to identify that currency when it is later spent. (Currency is verified as valid and not yet spent.) Prior to the invention of blind signatures, any token currency provided by a bank to a customer could be identified by the bank at the time it returned for deposit. The invention of blind signatures created the possibility of anonymous electronic token currency.

Electronic token currency is particularly interesting in that each new token proposal presents a novel mathematical technique or a novel application of a known technique. The most difficult problem remains the prevention of double-spending of coins. A bitstring requires trivial effort to duplicate, and when a bitstring is itself token currency there is universal motivation to do so. In the absence of secure hardware, the dominant approaches to solving the problem of double-spending have involved limits on anonymity and on-line clearing.

The issue of double-spending is related to the issue of isolation. If a token can be spent more than once then the transactions involving that token are not isolated. This creates a race condition among those who received the token in transactions: whoever gets to the bank first is paid; the second to arrive is unpaid, along with any subsequent arrivals. With on-line clearing the payee can clear the token before accepting it. (Clearing refers to a payee's ability to confirm the validity of a token before accepting it and attempting to deposit it. It is comparable to the verification process in a credit card transaction.) Clearing enables something akin to two-phase commit to occur: clearing is the customer's commitment nested through the merchant, deposit is the merchant's commitment, and acceptance of the deposit is the bank's global commitment. A token thus cleared is locked and cannot be used again until the transaction for which it was cleared is complete, so no race condition is possible.
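The clearing discipline can be sketched as a two-phase protocol at the bank. The class and token names are illustrative, not drawn from any particular system:

```python
class Bank:
    """Sketch of on-line clearing: tokens are locked before acceptance."""

    def __init__(self, valid_tokens):
        self.unspent = set(valid_tokens)
        self.locked = set()

    def clear(self, token) -> bool:
        # Phase one: lock the token for a single pending transaction.
        if token in self.unspent and token not in self.locked:
            self.locked.add(token)
            return True
        return False

    def deposit(self, token) -> bool:
        # Phase two: the merchant's deposit completes the transaction
        # and retires the token permanently.
        if token in self.locked:
            self.locked.discard(token)
            self.unspent.discard(token)
            return True
        return False

bank = Bank({"token-A"})
assert bank.clear("token-A")      # first merchant clears the token
assert not bank.clear("token-A")  # a second spend of the same token fails
assert bank.deposit("token-A")    # the cleared transaction completes
```

Because the lock is taken before any goods change hands, the race among payees described above cannot arise.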

Atomicity in token currencies is complicated by both the nature of token currency and its anonymity. In a transaction involving token currency, restoration of the previous consistent state can be quite difficult precisely because tokens are not necessarily linked to a specific transaction. Anonymous restoration of a previous state is particularly problematic: what would happen if any anonymous individual could present a claim to the money in another person's wallet?

Consider the three classes of atomicity with respect to token currency. For a token transaction to be money atomic, a customer's payment must be linked with the merchant's payment. That is, it must be the case that if the customer loses the value of a token then the merchant gains the value of the token, and that the merchant gains the value of a token only if the customer loses it. For a token transaction to be goods atomic, the merchant must obtain the token from the customer only if some merchandise has been delivered to the customer. For a token transaction to provide certified delivery, the merchant must obtain the token from the customer only if the specifically promised merchandise has been delivered to the customer.

To illustrate the issues of security, reliability and privacy in electronic token currency chapter 10 included a discussion of a remote transaction with physical token currency: sending cash through the mail.

Here we consider, along these same lines, several electronic versions of token currency: the original Digicash proposal, followed by a description of MicroMint, and finally Millicent.

Digicash

In Digicash (Chaum, 1985) customers hold the monetary value in the form of electronic tokens. Customers and merchants exchange tokens, and these tokens are validated by a bank. The bank validates that the signature on the token is valid (i.e. that it was approved by the bank) and that the token has not been previously spent.

Digicash provides only a mechanism for electronic payment. Digicash protocols do not provide mechanisms for discovery, negotiation, delivery, or conflict resolution. The narrow scope of Digicash is both its strength and its weakness. The advantage is that Digicash can provide an elegant and simple protocol. The disadvantage is that Digicash cannot offer to decrease the costs associated with collection and dispute resolution. In fact, Digicash is specifically designed to mimic cash, so that only the purchase itself and the detection of counterfeits are properly the business of Digicash.

A Transaction

Figure 11.4 shows the steps in a Digicash purchase. This digital cash protocol was the first use of blinded tokens for electronic cash. Recall that with blinded tokens any party X has an asymmetric key pair with public key K and secret key k.

A Digicash Transaction

Before a Digicash transaction begins the customer selects a random number (r) and constructs a token (t). The customer then encrypts the random number with the bank's public key, multiplies the result by the token, and sends the product to the bank for validation: r^B·t. (Recall from the table of notation in chapter 10 that this means the random number r is encrypted with the bank's public key B and multiplied by the token t.) The bank signs the product with its corresponding secret key and returns: (r^B·t)^b = (r^B)^b·t^b = r·t^b. (Recall from chapter 3 that anything encrypted with a public key is decrypted by the private key. Thus the random number is recovered when the bank signs with its secret key.) In the third step the customer divides by the random number she originally selected and obtains a signed token: (r·t^b)/r = t^b. This token has been signed by the bank and therefore will be recognized by the bank as valid. However, the bank has never seen the token itself and cannot distinguish it as the token given to this customer.
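The blinding arithmetic above can be traced with toy RSA numbers. This is a sketch for illustration only: the modulus, exponents, token, and blinding factor are assumed toy values, far too small to be secure.

```python
# Toy RSA blind-signature walkthrough (illustrative, insecure parameters).
p, q = 61, 53
n = p * q            # 3233, the bank's modulus
e = 17               # bank's public exponent (the "B" above)
d = 2753             # bank's secret exponent (the "b" above); e*d = 1 mod phi(n)
t = 123              # the customer's token
r = 99               # random blinding factor, coprime to n

blinded = (pow(r, e, n) * t) % n          # r^B * t, sent to the bank
signed_blinded = pow(blinded, d, n)       # (r^B * t)^b = r * t^b
r_inv = pow(r, -1, n)                     # modular inverse of r ("dividing by r")
token_sig = (signed_blinded * r_inv) % n  # t^b, the unblinded signed token

assert token_sig == pow(t, d, n)          # same as if the bank had signed t directly
assert pow(token_sig, e, n) == t          # anyone can verify with the public key
```

Note that "division" by r is multiplication by the modular inverse of r, which is why r must be chosen coprime to the modulus.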

In the fourth step the customer sends the request for an item and the token (as payment) to the merchant. In the fifth step the merchant confirms that the signature of the bank is valid. The token is expected to have some specific form and validating the bank's signature using the bank's public key verifies that the token has been signed by the bank.

In the sixth step the merchant deposits the token and the bank confirms the deposit. That the token has a valid signature does not mean that the token inherently has value, because it is a trivial matter to duplicate a digital token. Thus it is possible that the token has already been redeemed, which would make the merchant's copy of the token worthless.

In the seventh step the merchant delivers the items which were requested and paid for by the customer in step four.

Note that the token received by the bank at deposit cannot be identified as the same token sent out in step two. This is the critical element that makes Digicash anonymous. Also note that descriptions of many forms of digital cash, including the one referenced here, end at the point where the bank confirms the deposit.

Digicash as originally designed does not provide the information necessary for conflict resolution or dispute prevention.

In fact, if the protocol is interrupted between step four and the delivery of goods to the customer, then the customer has effectively been defrauded. Since the customer is anonymous, he or she cannot simply contact the merchant and ask for the goods to be resent. Otherwise it would be reasonable for strangers simply to show up and demand goods and money on the presumption that they belong to them. Imagine a person demanding a dollar from your wallet on the basis that he had once held it. (The partial loss of anonymity resulting from location information has a positive effect here, in that merchants could send lost information goods a second time to the same IP address.) The merchant could also claim not to have received a token while cashing the token in at the bank, because the bank has no way of tracing the token to the customer. In this case the customer is again defrauded.

Security

Digicash assumes the privacy of cryptographic keys: the bank's and the customer's. Consider the results if either of these assumptions is invalid.

An adversary who gains access to a bank's private key can generate counterfeit tokens that are indistinguishable from valid tokens. These tokens can be generated in any amount desired, so compromise of the key compromises all tokens in circulation.

An adversary who gains access to a customer's private key can create tokens until he drains the customer's account. Including a challenge-and-response series, where the bank keeps only the hash values of the answers to a set of questions, could mitigate the problem. Since digitally signing a token is four orders of magnitude more processor intensive than verifying a hash value (an unavoidable result of the mathematics involved), this appears to be a reasonable addition to the total processor load required of the bank for generating a token.
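The challenge-and-response mitigation can be sketched as follows, under the assumption that the bank stores only a salted hash of each enrolled answer; the function names and the use of SHA-256 are illustrative choices, not part of Digicash.

```python
import hashlib
import os

# Sketch: the bank enrolls an answer by storing a salted hash, never the
# answer itself, and verifies responses with one cheap hash computation.
def store_answer(answer):
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + answer.encode()).digest()

def check_answer(stored, attempt):
    salt, digest = stored
    return hashlib.sha256(salt + attempt.encode()).digest() == digest

record = store_answer("blue heron")        # enrolled once; bank keeps only the hash
assert check_answer(record, "blue heron")  # correct response verifies cheaply
assert not check_answer(record, "osprey")  # wrong response is rejected
```

A hash verification like this is far cheaper than a digital signature, which is why the scheme adds little to the bank's per-token workload.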

Two tokens can be multiplied to construct a third, counterfeit token, as follows: (n1)^b · (n2)^b = (n1·n2)^b. Two signed documents multiplied together result in a signed document. In most cases such a multiplication results in signed gibberish with no meaning. In the case of the original token design, however, the result could be to print money. Notice that the counterfeiter still has possession of the original tokens. Because tokens can be multiplied to form new valid tokens, consumers and merchants can trivially manufacture cash.
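The multiplicative property can be demonstrated with the same kind of toy RSA numbers; the parameters and token values below are illustrative assumptions, far too small to be secure.

```python
# Toy demonstration of multiplicative forgery: two signed tokens multiply
# into a valid signature on a third value, with no key required.
n, e, d = 3233, 17, 2753   # toy modulus and bank key pair (e*d = 1 mod phi(n))
n1, n2 = 50, 61            # two legitimately signed tokens

sig1 = pow(n1, d, n)       # bank's signature on n1
sig2 = pow(n2, d, n)       # bank's signature on n2
forged = (sig1 * sig2) % n # the forger needs no secret key for this step

assert forged == pow((n1 * n2) % n, d, n)   # a valid signature on n1*n2
assert pow(forged, e, n) == (n1 * n2) % n   # it verifies under the public key
```

Whether the forgery is useful depends on whether n1·n2 happens to have the form of a valid token, which is exactly the weakness in the original token design noted above.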

Digicash transfers are not money-atomic (Yee, 1994). The customer may attempt to resolve this problem by canceling a token (by cashing it in), but if the merchant who has received the token also does this, the result is a race condition. (This also violates consistency and isolation.) Since the bank has no means of determining where a token originated or the agreement between merchant and customer, dispute resolution can be a problem.

One option for addressing issues of customer double-spending and merchant fraud would be to assume that in a dispute between a merchant and a customer, the customer is always right. This would require keeping only the names of customers who complain and the merchants who are the subjects of their complaints. This would maintain the anonymity of a successful Digicash transaction while reducing the risk to consumers. An aggressive technique of disallowing merchants suspected of fraud might limit the popularity of the system, since consumers and merchants are drawn to popular systems.

A second option would be to assume that the customer is always the fraudulent party in a dispute. This is the option chosen in a later alternative system, also designed by Chaum, where the detection of double-spending is enabled by embedding identity information in the token.

Privacy

Table 11.4 shows the information available to the parties in a transaction using the form of digital cash considered here. Recall that the partial identity information available to an observer and a seller results from location information, as described in the section on browsing information in chapter 7.

                       Information
Party                 Merchant   Customer   Date   Amount   Item
Merchant              Full       Partial    Full   Full     Full
Customer              Full       Full       Full   Full     Full
Law Enf w/warrant     Full       No         No     No       No
Bank                  Full       No         Full   Full     No
Electronic Observer   Partial    Partial    Full   No       No

Table 11.4: Information Available in a Digicash Transaction

Digicash is a high privacy system. The merchant in a transaction using Digicash has only the information necessary to ensure payment, and the bank in the transaction has only the information necessary to credit or debit an account.

Governance

Digicash does not provide any information to law enforcement about the customer in a transaction. This implies that Digicash would be an excellent instrument for money laundering or other illicit purchases. However, this potential drawback is mitigated by the fact that tokens must be verified on-line, and therefore banks can identify someone who makes large deposits or transfers. The Know Your Customer regulations (31 CFR 103) apply no matter what type of currency a customer deposits, and this ensures that bank transactions above a certain size remain accessible for auditing. This suggests that limits on anonymous account transfers apply as much to Digicash as they do to analog cash. That Mark Twain Bank in the U.S. has encountered no regulatory resistance to its offering Digicash accounts supports the conclusion that regulators are willing to accept anonymous currency so long as it enters and exits the electronic realm through channels that are as easy to audit as regular channels.

A customer who loses her private key currently has unlimited liability for the fraudulent Digicash transactions that result, and this may violate the Electronic Funds Transfer Act: any cost of fraud is transferred to the customer. The Electronic Funds Transfer Act specifically limits consumer loss in electronic funds transfers to $50 per lost instrument. It is not certain whether a Digicash account meets the definition of an instrument under the act.

The lack of any receipt and the ease of merchant fraud seem to create problems with the Truth in Lending Act and Electronic Funds Transfer Act requirements for receipts and billing, as implemented in Regulations Z and E, respectively. A technique exists for providing receipts and certification of a merchant's commitment in an anonymous system (Camp, Harkavy, Tygar, and Yee, 1996). However, this technique adds significantly to the complexity of anonymous exchange of digital tokens. Furthermore it significantly extends the scope of the transaction beyond that currently considered by many electronic token mechanisms, including the one under consideration here.

Digicash-based banks can provide aggregate information on deposits and withdrawals, and are certainly capable of storing records on individual deposits and withdrawals. Anonymity in Digicash means that the bank cannot link the deposits to the withdrawals. It also means that coins cannot be traced along their path if they change hands more than once. However, transferring a coin more than once creates the risk that previous possessors of the coin will return it before the subsequent owners do, depriving the subsequent owners of payment.

MicroMint

MicroMint and PayWord are a pair of electronic commerce protocols (Rivest and Shamir, 1996) that use the difficulty of finding hash collisions and the birthday paradox to provide electronic currency. Despite their mathematical similarities, however, PayWord is a notational, credit-based scheme and MicroMint is a token, debit-based scheme.

MicroMint is problematic because it assumes a solution to the problem it sets out to solve; that is, there is a bootstrap problem. Once the coins are established and consumers hold them, the customer purchases with the coins and the merchant redeems them. However, the distribution of the first-generation coins -- how the coins get established in the first place -- remains unspecified.

MicroMint calls the bank involved in MicroMint transactions a "broker". Instead of holding deposits, the broker generates coins and exchanges them with customers. (The broker has the float -- the time value of money, i.e., the interest on investing the money while the customer holds it -- until the coin is spent.) To defeat double-spending of the coins it generates, the broker must know the identity of the customer to whom it gives the coins. This knowledge requirement, the consumer's need to draw coins, the merchant's ability to deposit them, and the resulting requirement for customer and merchant accounts all suggest that the broker is a bank in all but semantic terms.

The bank mints MicroMint coins by using k-way hash collisions and the birthday paradox, as discussed in chapter 3. Rivest and Shamir (1996) calculate that obtaining one k-way collision on an n-bit hash requires computing an expected 2^(n(k-1)/k) hash values; obtaining c k-way collisions, however, requires far fewer than c times that many. Rivest and Shamir compare this to the initial investment in a mint (or, for a counterfeiting operation, an illicit facility), with high initial costs and a low marginal cost for each additional bill printed. A MicroMint coin is a set of k numbers (x_1, x_2, x_3, ..., x_k) that all have the same hash value: h(x_1) = h(x_2) = h(x_3) = ... = h(x_k).
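Minting by collision search can be sketched with a deliberately truncated hash. This is an illustrative toy, not the MicroMint parameters: the 12-bit hash and the bucketing strategy are assumptions chosen so the search finishes instantly.

```python
import hashlib

def h(x, bits=12):
    # Toy hash: the low `bits` bits of SHA-256 of the integer's text form.
    digest = hashlib.sha256(str(x).encode()).digest()
    return int.from_bytes(digest, "big") % (1 << bits)

def mint_coin(k=4, bits=12):
    # Hash successive x values into buckets until some bucket holds k
    # preimages: that k-way collision is one coin.
    buckets = {}
    x = 0
    while True:
        bucket = buckets.setdefault(h(x, bits), [])
        bucket.append(x)
        if len(bucket) == k:
            return bucket
        x += 1

coin = mint_coin()
assert len(coin) == 4
assert len({h(x) for x in coin}) == 1   # all k values share one hash value
```

Shrinking the hash makes the first coin cheap; widening it by even a few bits multiplies the up-front search cost, which is exactly the economics Rivest and Shamir rely on.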

Using hash values, which are publicly known after the coins have been hashed and released, merchants can verify off-line that currency sent by customers is of the valid form. To prevent double spending, the customer's identity is embedded in the hash values sold to a customer, like so:

h(coin) = h(x_1, x_2, x_3, ..., x_k) = h(customer identity).

Thus any customer who double-spends would be detected and identified when the tokens were deposited. Using a lower hash value makes this computationally feasible; a 16-bit hash value is recommended given the state of computing at the turn of the century. Table 11.5 (excerpted from Rivest and Shamir, 1996) shows the cost of generating coins. The table illustrates that small-scale attacks are not feasible because of the large number of hashes required to produce the first coin. It presents the case where a coin consists of four numbers, i.e., a coin requires a four-way collision, and the numbers are 36 bits long. Thus there is a tremendous initial investment in preparing the first coin, but as the number of coins created increases, the cost per coin decreases.

Number of Hashes   Coins Produced   Hashes/Coin
2^0 ... 2^26       0                --
2^27               1                2^27
2^29               2^8              2^21
2^32               2^20             2^12
2^36               2^32             2^3

Table 11.5: Returns to Scale in Minting Money through Hashing

The potential for large-scale attacks is addressed by changing the hash function monthly. (This requires customers to submit unused coins for updated coins each month.) This renders all forged coins invalid at the beginning of the month, and the forger cannot begin generating hash values for the new coins until the hash function for the month has been announced. Thus any forged coins have value for only a short time, and the huge initial investment required to mint the first coin must be undertaken not just once but every month. Furthermore, the broker can detect forged coins, announce a new hash function at any time, and use hidden predicates for daily updating. (A hidden predicate is a special characteristic of a number that is not apparent upon simple examination. To generate a number with hidden predicates, some of the bits in the number are made a function of other, randomly chosen bits.)
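A hidden predicate can be sketched as follows. The specific rule here, that the low bit must equal the parity of the remaining bits, is an invented illustration, not a predicate from MicroMint; the point is only that the constraint is invisible without knowing the rule.

```python
# Sketch of a hidden predicate: the broker's secret rule is that a
# candidate's low bit must equal the parity of its remaining bits.
def satisfies_predicate(x):
    return (x & 1) == (bin(x >> 1).count("1") & 1)

def make_candidate(random_bits):
    # Derive the constrained bit from the other, random bits.
    parity = bin(random_bits).count("1") & 1
    return (random_bits << 1) | parity

c = make_candidate(0b1011)            # three 1-bits, so the low bit is set to 1
assert satisfies_predicate(c)         # passes the broker's secret check
assert not satisfies_predicate(c ^ 1) # flipping the hidden bit fails the check
```

A forger who learns the month's hash function but not the predicate still produces numbers that the broker can reject on inspection.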

A Transaction

A MicroMint transaction begins when the customer obtains a coin from the broker. Figure 11.5 shows the steps in a MicroMint transaction.

A MicroMint Transaction

In step 1 the customer requests a coin. The customer must authenticate her identity in order to obtain the coin. Exactly how the customer authenticates her identity for purposes of coin generation is not specified in the MicroMint protocol, so exactly what information is exchanged at this step cannot be determined. Note that merchant authentication and broker authentication are similarly unspecified in the protocol. Of course there are many techniques for authentication, but these range from techniques where much information is exchanged (e.g., digital certificates) to techniques where no information but the authentication itself is exchanged (e.g., zero-knowledge authentication).

In step 2 the bank, having obtained the customer's identity in step 1, constructs a provably valid coin linked to the customer's identity, as described in the previous section. Notice that the bank maintains an extremely large database of known hashes from which to construct coins -- it does not begin hashing anew at every customer request.

In step 3 the bank delivers the requested coins to the customer. After the third step the customer can prove that she has valid MicroMint tokens, and can prove her ownership of those tokens. Proving ownership, however, requires exposing her identity.

In step 4 the customer transmits to the merchant the information necessary for the transaction: the item, price, token(s), and her identity. At the time of the order the merchant will also have any information transmitted while the customer was browsing. Notice that the information the customer transmits includes the customer's identity, which verifies her ownership of the coins she presents.

In step 5 the merchant verifies the tokens and the customer's ownership of those coins. In step 6 the merchant delivers the item to the customer. The protocol does not include this step explicitly; however, it is assumed to be part of the protocol since delivery is assumed to be off-line. The deposits can be batched.

In step 7 the merchant verifies that the token has not already been spent by the customer, by depositing it.

However, the cost of verifying every coin (in terms of processing power, connectivity, and overhead) may not be worthwhile so a merchant may prefer to batch transactions. Given that MicroMint is designed for small transactions the assumption of batching is reasonable.

The MicroMint protocol assumes that only users double spend. As noted above the delivery of merchandise is not included in the protocol. Thus the protocol cannot be money atomic or goods atomic.

Off-line transactions are not isolated. If a merchant presents for redemption a coin that has been previously spent, he does not receive payment and is not reimbursed. Thus the outcome of one transaction depends on the existence of another.

On-line transactions in MicroMint are consistent. Since MicroMint coins are not anonymous, the customer can inquire whether a merchant to whom she has sent coins has deposited those coins, and such inquiries do not require the broker to allow anonymous inquiries into merchant records. The merchant can also verify a coin before accepting it. (Given that the customer has no verification of delivery, the option described above in which the merchant delivers goods first distributes the risk somewhat more evenly. In the case where the merchant deposits the coins first, the customer has only her claim not to have received merchandise, and the merchant takes no risk at all.) The customer may not have received any merchandise; however, the money transfer will be consistent.

MicroMint transactions are durable after coin clearance with the broker but not before. If a customer spends the same coins in two locations, this creates a race condition. Again, the party that deposits the coin second, and any subsequent parties, will not be credited at the broker.

Security

Security parameters in the MicroMint protocol include the strength of the hash value used to construct the coins, the secrecy of a hash value before it is released, and the number of collisions required to create a coin.

If the broker does in fact hold deposits (like a bank), then the issue of customer authentication to the broker needs to be addressed. MicroMint recommends that the broker share a DES key with each customer for the purpose of keeping the transmission of coins from the broker to the customer secure. This DES key could be used for authentication as well.

If the hash function chosen for a particular month is leaked to an attacker in the month before it goes into effect, then the attacker can create coins as quickly as the broker. Since the attacker has lower costs than a legitimate broker, the attacker can presumably invest as much as the broker in processing power. (Notice that an attacker does not actually have to redeem his coins -- he just spends them. A broker must create only valid coins and must redeem them.) If hidden predicates are used in coin generation, then an attacker would need both the predicates and the hash parameters to commit forgery. Thus the use of predicates addresses the security issues of hash values in a cost-effective way.

An increase in the number of collisions required for the creation of a coin increases the cost of manufacturing coins for the broker and the cost of verifying coins for the merchant. Increasing the number of collisions required also increases the security of the coins. In this way the number of collisions required for coin creation parallels certificate lifetime. (Recall the discussion of managing risk by altering certificate attributes; the same principles apply here.)

Observation of traditional paper systems suggests that some proposed security measures against large-scale fraud may be ineffective. The broker can recognize false coins, just as in the physical world the bank can recognize bad checks, but this has not proven effective in preventing check fraud, precisely because checks are verified at the bank, not at the merchant. The merchant in an off-line MicroMint transaction takes the risk of fraud while the broker has the ability to detect and prevent the fraud.

The broker can also combat fraud by declaring the current hash period to be over and recalling all coins. Attackers can invest in computing power equal to the broker's, have no customer overhead, and obtain all goods purchased with false coins at no cost. Consumers may pay an attacker for coins at a discount rate, and the attacker does not have to reimburse merchants for coins redeemed or goods purchased with those counterfeit coins. However, the use of daily predicates, the ability of the broker to select the hash function, frequent changes of hash function, and the computational overhead required to produce the first coin each time the hash function changes all provide strong barriers to potential attackers.

Privacy

The information available to various parties in the MicroMint system is shown in table 11.6 below. Here again the ability of law enforcement to obtain information about purchases depends on the merchant's record keeping.

                       Information
Party                 Merchant   Customer   Date   Amount   Item
Merchant              Full       Full       Full   Full     Full
Customer              Full       Full       Full   Full     Full
Law Enf w/warrant     Full       Full       Full   Full     Full
Bank/Broker           Full       Full       Full   No       Full
Electronic Observer   Full       Full       Full   Full     Full

Table 11.6: Information Available in a MicroMint Transaction

MicroMint is a low-privacy system. Since the creator of coins is modeled as simply a broker, it has only a limited ability to provide pseudonymous services. If the broker were in fact an account holder for the various consumers, then the broker could easily offer pseudonymous coins. The cost of pseudonymity would be one search of the consumer database. Consumers could change pseudonyms whenever no coins were held under the previous pseudonym. Because of re-spending, the broker would store pseudonyms until the hash function for the pseudonymously released coins was invalidated.

Consumers can spend only their own coins. This means that there is no threat of loss if one person copies another's coins during a transaction. Because of this security, the MicroMint protocol offers no extension for encrypting negotiation and payment. Thus an observer can obtain all transactional information about a purchase made using the MicroMint protocol. To eliminate this possibility, MicroMint could be combined with any product or protocol that provides encrypted peer-to-peer communication on the Internet.

By offering pseudonyms and protecting merchant-to-consumer communication, the information matrix for MicroMint would change as shown in table 11.7; the changes from table 11.6 are marked. These changes would make MicroMint a medium-privacy system.

                       Information
Party                 Merchant   Customer   Date   Amount   Item
Merchant              Full       Partial*   Full   Full     Full
Customer              Full       Full       Full   Full     Full
Law Enf w/warrant     Full       Full       Full   Full     Full
Bank/Broker           Full       Full       Full   No       Full
Electronic Observer   Partial*   Partial*   No*    Full     No*

(* = changed from table 11.6)

Table 11.7: Information Available in an Enhanced MicroMint Transaction

Notice that offering pseudonyms would require some changes in authentication for the MicroMint protocol. The broker could either sign pseudonymous keys or provide pseudonymous certificates. Presumably the latter would be preferable because the protocol could then remain off-line from the perspective of the merchant.

Governance

MicroMint offers inexpensive transactions at the cost of anonymity and individual security. It can fulfill all the requirements for information for regulatory purposes.

Individual security is lost in MicroMint in that there is limited protection against malicious framing. The argument against providing such security is that "the known mechanisms for protecting against such behavior are too cumbersome for a light-weight payment system." Given the motivation individuals have to harm one another, as clearly illustrated in the records of law enforcement in every community, I would argue that such harm presents a significant hazard in the MicroMint system.

The case of malicious framing presents a clear case where risks are taken by customers and merchants to save effort on the part of the bank. This is not a system that appears to be prohibited by current regulation: the consumer cannot lose more than $50 for a lost instrument. Yet this is a system that clearly violates a basic principle of public policy: the risk of loss should fall upon the party most able to prevent the loss. The merchants can lose money; the customers can lose commerce privileges; yet only the broker can prevent such losses, by implementing and requiring extremely strong proof of customer authentication from merchants depositing coins.

In this case there is a clear policy principle at stake. The market has not yet failed to address this issue; but neither has this system been adopted. Presumably the broker is actually a bank, since there must be some deposits against which the customer draws; either that, or the system is interoperable with other electronic commerce systems. If it is the former, regulators can examine the system and decide if it is acceptable. If it is the latter, then other providers of electronic commerce will decide. If the other providers of electronic commerce give the user the ability to contest charges, the market may in fact push the final cost of lost money onto the broker, as customers object to denial of service or charges for stolen coins. I would advocate including authentication in the protocol.

Millicent

Millicent is an electronic ode to the days of independent banking. It calls the token coins it creates scrip, because the coins are specific to a broker or merchant. Millicent vendors create their own currency.

Millicent provides merchants with the ability to create their own coins by using hash values to create low-cost digital certificates. Digital certificates are usually associated with public key certificates, although in fact a digital certificate need not be based on public key cryptography. To prevent confusion, I refer to digital certificates based on the cryptographic security of hashing, rather than on public key cryptography, as hash certificates.

Tokens in the form of Millicent scrip have the following form:

[merchant | value | token-specific ID# | customer ID | expiration date | properties][certificate]

where the certificate is:

hash certificate = H(token, merchant_secret)

Thus a Millicent token is a string of text, validated at issue via hashing with a secret known only to the issuer. Since only the issuer accepts the scrip, managing this "merchant secret" does not require the key management techniques of public key cryptography.
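The hash-certificate construction can be sketched as follows. The field encoding and the use of SHA-256 are illustrative assumptions; Millicent specifies the structure H(token, merchant_secret), not these particular choices.

```python
import hashlib

# Sketch of a Millicent-style hash certificate: the scrip's fields are
# bound to a secret known only to the issuer by a single hash.
def make_scrip(merchant, value, token_id, customer_id, expiry, secret):
    token = f"{merchant}|{value}|{token_id}|{customer_id}|{expiry}"
    cert = hashlib.sha256((token + secret).encode()).hexdigest()
    return token, cert

def verify_scrip(token, cert, secret):
    # Only the issuer, who knows the secret, can recompute the certificate.
    return hashlib.sha256((token + secret).encode()).hexdigest() == cert

secret = "merchant-secret"                       # known only to the issuer
token, cert = make_scrip("shop", "0.05", "42", "c7", "1999-12", secret)
assert verify_scrip(token, cert, secret)         # issuer accepts its own scrip
tampered = token.replace("0.05", "5.00")
assert not verify_scrip(tampered, cert, secret)  # an altered value is rejected
```

One hash computation replaces a public key signature and its key management, which is what makes the scheme cheap enough for sub-cent payments.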

A Transaction

Figure 11.6 shows a Millicent transaction using a broker, or issuer of scrip, as presented in the Millicent documentation.

A Transaction Using Millicent Tokens

In step 1 the customer requests merchant-specific scrip from the broker. In general it is reasonable to assume that the broker already holds such scrip; however, periodically the broker must obtain additional scrip from the merchant. Step 2 shows the broker requesting additional scrip from the merchant to fulfill the customer's request. In step 3 the merchant provides scrip to the broker. In step 4 part of the merchant scrip is forwarded from the broker to the customer.

At this point the customer has the appropriate scrip and can transact with the merchant. In step 5 the customer requests items from the merchant. In step 6 the merchant records the scrip as having been spent (to prevent double-spending). In step 7 the merchant delivers to the customer the items requested in step 5.

In Millicent the customer must trust both the broker and the merchant. An explicit decision is made and documented that the most trustworthy party in the transaction is the broker, and the least trustworthy is the customer. This is borne out by an examination of the transaction: the transaction lacks atomicity.

Millicent is not interoperable. As with First Virtual, the lack of interoperability is a strength. When using Millicent on-line, customers, merchants, and brokers can all determine the level of risk they will take. Each user determines his or her own risk exposure rather than having it set at a cut-off by some third party (e.g., a bank or credit card association).

Consider now the transactional characteristics of Millicent.

If the message in step 1 fails there is no transaction -- the transaction fails completely. If the message in step 2 fails, the broker may have already debited the customer's account to cover the tokens she has requested. In this case the customer may lose funds by beginning again with a new request.

If the message in step 3 fails, the merchant has debited the broker's account for scrip that never reaches the customer and therefore cannot be spent. Thus the vendor has been paid; the broker has been paid by the customer and has paid the vendor; the customer loses money and the merchant gains it. Alternatively, there may be an arrangement by which the merchant is not credited until the broker receives the funds; in this case the broker, rather than the merchant, comes out ahead.

If the message in step 4 fails, again it is the customer who loses funds.

If the message in step 5 fails, the customer must return the scrip to the broker in order to avoid a race condition with a potential thief; thus the only loss is the cost of scrip re-issue. If the merchant errs in recording the scrip (step 6) so that the sixth step fails, then either the customer loses funds or the customer is able to spend the scrip again.

If the message in step 7 fails, then the customer has no goods.

Clearly the Millicent system lacks atomicity in all its forms. If the customer believes the message in step 5 has failed when it has not, the result is a race condition -- which will be recorded first, the purchase or the scrip reissue? Thus the system lacks isolation. Since, when step 7 fails, the customer can lose funds while the merchant nevertheless believes the transaction is complete, the protocol lacks consistency. However, once committed (step 6), funds are durable.

Millicent could be made atomic with the addition of an atomicity-generating layer consisting of additional messages to ensure transactional reliability. (An atomicity-generating layer adds a signed contract, a merchant receipt of payment, and a customer receipt in the case of digital goods.)

Security

Brokers that create their own Millicent scrip must keep a secret that, if compromised, will lead to losses for the merchant. That is, if an attacker obtains the merchant's secret, the attacker can generate merchant-specific funds. The merchant keeps records of what scrip has been spent, not necessarily what scrip is outstanding.
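A keyed hash illustrates why the secret is the whole of scrip security. This is a sketch only, not the Millicent implementation; the scrip fields and the use of HMAC are assumptions for illustration.

```python
import hashlib
import hmac

def mint_scrip(merchant_secret: bytes, scrip_id: bytes, value_cents: int):
    # The certificate is a keyed hash of the scrip body; only a holder
    # of the merchant secret can produce a valid certificate.
    body = scrip_id + b"|" + str(value_cents).encode()
    certificate = hmac.new(merchant_secret, body, hashlib.sha256).hexdigest()
    return body, certificate

def validate_scrip(merchant_secret: bytes, body: bytes, certificate: str) -> bool:
    # The merchant revalidates by recomputing; an attacker holding the
    # secret can therefore mint scrip indistinguishable from the real thing.
    expected = hmac.new(merchant_secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, certificate)

body, cert = mint_scrip(b"merchant-secret", b"scrip-001", 5)
```

Because the merchant records only spent scrip, forged scrip minted with a stolen secret passes validation until the fraud is noticed in aggregate.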

Customers are at the greatest risk and have the least control over the security parameters of the system. Merchants are at the least risk and have the most control over those system parameters that ensure security. This design encourages under-investment in security.

Privacy

Table 11.8 depicts the information available to various parties in a transaction using Millicent scrip. Millicent offers little or no privacy. What Millicent refers to as a private system is in fact merely a system in which not all information is known to observers. Compare this with Digicash, where the customer is anonymous to both the merchant and the bank.

Party \ Information    Merchant   Customer   Date       Amount     Item
Merchant               Full       Full       Full       Full       Full
Customer               Full       Full       Full       Full       Full
Law Enf (w/warrant)    Full       Full       Full       Full       Full
Bank/Broker            Full       Full       Full       Full       Partial
Electronic Observer    Full       Full       Full       Partial    Partial

Table 11.8: Information Available in a Millicent Transaction.

Observers of a Millicent transaction can obtain identity information from the domain name of the customer, as explained in the section on browsing information in chapter 7. Transaction amounts can be determined by observing the site and noting that Millicent is a low-value transaction system, providing an observer with probabilistic information about the range of a particular transaction. Similarly, a bank can observe transaction amounts and the merchants involved and obtain partial information about a purchase. Law enforcement access depends on merchant record keeping.

Governance

Since Millicent is a low privacy system, it would initially appear that law enforcement information needs would be met. However, Millicent, like cash sent through the mail, provides opportunities for fraud, and fraud prevention is also a law enforcement need. Millicent is not adequate on this count.

Millicent's design for low-value purchases means that the record-keeping requirements for large transactions would not be applicable.

Summary

Money takes two basic forms: token and notational. These forms are fundamentally different from one another and imply different trust requirements. Not all forms of money fulfill all the possible functions of money. Differences in scope, duration, and interoperability can increase or decrease risk, depending on the implementation. For example, a long term notational exchange such as a credit card includes the ability for the customer to dispute merchandise quality after purchase. A cash transaction, on the other hand, ends at the exchange of goods for money.

Internet Notational Currencies

First Virtual offers a low security system for Internet commerce. First Virtual assumes the Internet will remain without security, and addresses that lack of security through risk management and loss allocation. Unfortunately, this loss allocation (merchant losses) limits the goods suitable for sale using First Virtual. First Virtual is also a low privacy system that requires merchants to keep extensive records on customers. This reinforces the argument that the controls created on consumer financial data under the Fair Credit Reporting Act should be expanded to cover compilations of non-bank institutions like First Virtual and its associated merchants that gather detailed consumer records.

The Secure Sockets Layer is a first generation Internet commerce protocol that has taken an approach opposite to that of First Virtual. First Virtual assumes that the Internet is without security and merchant losses are negligible; the Secure Sockets Layer assumes that the Internet can be made secure and limits merchant losses by off-line financial management. It does not attempt to provide atomicity, and thus does not do so. The Secure Sockets Layer is a medium privacy system. Its level of security is limited by the constraints on exporting strong cryptography. This provides an argument for the removal of restraints on the export of such cryptography.

The Secure Sockets Layer may create risk as a side-effect in that many merchants keep records of customers' credit card information on machines connected to the Internet and subject to remote attack. There are at least three possible solutions to this problem: security, including cryptography, could be embedded in popular operating systems (and those operating systems could be redesigned to be secure); computer operators with inadequate security practices could be held liable for all losses caused by their negligence; or data could be deleted as soon as possible. Clearly the third solution would be the easiest to implement, and the only option for a single merchant.

Secure Electronic Transactions is a payment protocol that considers all steps in an electronic transaction excluding account acquisition. Secure Electronic Transactions offers a low privacy system, since purchase information is transmitted in the clear. It provides a high level of security by design, as it removes the opportunities for replay attacks and shared merchant terminals that are problematic in the current credit card systems.

The Secure Electronic Transactions standard uses the Secure Sockets Layer for information to be transmitted out of band. Thus the regulatory requirement that limits software for export to using weak cryptography has affected the design of electronic commerce systems. This illustrates the ubiquitous effects of constraints on cryptographic exports on electronic commerce and offers an additional argument for removing these constraints.

The analysis of this set of protocols for Internet commerce illustrates that with notational currency, reliability can be simplified by creating a single ledger where all accounts are finally settled. Creation of a single ledger means concentrated information -- thus a threat to privacy. One way to address this threat is to accept increased complexity as the cost of protecting privacy. However, the relationship between distribution of information and provision of privacy does not always hold: increased centralization does not always imply decreased privacy.

Internet Token Currencies

Digicash is graceful in its simplicity and offers complete anonymity to the customer. Yet Digicash offers this complete privacy at the cost of low reliability. Further it offers neither money nor goods atomicity.

In the later version of Digicash (not detailed in this book), Chaum attempted to prevent double spending, thereby increasing system reliability, through encoding identity into each token to be spent. Encoding identity allows double-spenders to be identified, thereby resolving the conflict between anonymity and accountability in the case of double spending. This addition of integrity provides sufficient information for dispute resolution in issues of payment, but not enough information to resolve disputes over goods delivery.
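The identity-encoding idea can be illustrated with a simplified secret split. Chaum's actual construction uses blind signatures and a cut-and-choose protocol; this sketch keeps only the essential property that a single spend reveals nothing about the customer while two spends with different challenges reveal everything.

```python
import secrets

def make_coin(identity: bytes) -> dict:
    # The identity is split into two shares: a random pad, and the
    # identity XORed with the pad. Either share alone reveals nothing.
    pad = secrets.token_bytes(len(identity))
    masked = bytes(i ^ p for i, p in zip(identity, pad))
    return {"shares": (pad, masked), "revealed": {}}

def spend(coin: dict, challenge_bit: int) -> bytes:
    # Each spend answers the merchant's challenge by revealing one share.
    share = coin["shares"][challenge_bit]
    coin["revealed"][challenge_bit] = share
    return share

def identify_double_spender(coin: dict):
    # One spend reveals one share (anonymity preserved); two spends with
    # differing challenges reveal both, and their XOR is the identity.
    if len(coin["revealed"]) < 2:
        return None
    a, b = coin["revealed"][0], coin["revealed"][1]
    return bytes(x ^ y for x, y in zip(a, b))

coin = make_coin(b"alice")
```

The bank, on receiving the same coin twice, recovers the double-spender's identity; an honest customer who spends once remains anonymous.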

MicroMint has the potential to create anonymous currency economically for a large number of users. By creating digital currency using a process with decreasing marginal cost, MicroMint can provide anonymous token currency to a large number of consumers. MicroMint would be economical for micro-transactions, which are too small for billing or collection using current techniques.
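The decreasing marginal cost comes from minting coins as k-way hash collisions. The following toy minting loop, with an invented truncation length and a plain SHA-256 in place of MicroMint's parameters, shows how one large batch of hashing yields many coins at once.

```python
import hashlib
from collections import defaultdict

def mint_micromint_coins(n_tries: int, k: int = 2, bits: int = 16):
    # A coin is a k-way collision in a truncated hash. Hashing many
    # candidate preimages is a large fixed cost, but every full bucket
    # yields a coin, so the marginal cost per coin falls with volume.
    buckets = defaultdict(list)
    for x in range(n_tries):
        preimage = x.to_bytes(8, "big")
        digest = int.from_bytes(hashlib.sha256(preimage).digest(), "big")
        buckets[digest >> (256 - bits)].append(preimage)  # keep top `bits` bits
    return [tuple(group[:k]) for group in buckets.values() if len(group) >= k]

coins = mint_micromint_coins(200_000)
```

A few early preimages yield almost no collisions; by 200,000 tries against 65,536 buckets, most buckets are full and coins are cheap -- the economy of scale the text describes.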

MicroMint in its most simple form offers no money atomicity. To provide money atomicity, MicroMint can be extended so that customer identity is included in every coin. Thus the extension of MicroMint to preclude double spending depends on the requirement that every consumer identify herself to the merchant to verify the right to spend any coin she sends. Millicent, by contrast, creates a lightweight digital token system meant for low-value transactions.

MicroMint, along with the two versions of Digicash, illustrates the trade-off between atomicity and anonymity.

All three token systems meet their design criteria: to create mechanisms for digital token commerce. However, all fail at remote commerce in the networked world, just as physical cash does. Thus although all three systems as currently described are suitable for secure hardware or smart card systems, they do not meet the criteria to excel as Internet commerce systems.

The problems with token currency are inherent in all token currency. There are three ways to address them: use secure hardware, add identity information, or add atomicity-ensuring steps.

Secure hardware would provide parties to the transaction (at least customers) with records of every transaction that cannot be falsified. Thus customers would no longer have only their word to back their claims: cryptographically verifiable records would support legitimate claims of fraud. This would require physically secure hardware using cryptographic protocols. Although there is no doubt much research to be done in secure hardware, IBM currently offers a secure co-processor card that is safe from even the most James Bond-type exotic attacks. (The processor cannot be attacked with lasers, eaten with acid, or probed in any way to obtain cryptographic key information. Of course it can be destroyed.) Thus secure hardware is a viable option in theory; however, it will only be viable in practice when the infrastructure (including card readers) is widely available. The Internet is the opportunity that it is because it grew organically, from consensus. Adoption of secure hardware may not follow so naturally.

A second option for improving token currency systems is to add identity information to the token. If customer and merchant identity information were embedded in the token, this would at least prevent other customers from spending it. Of course, tokens would then no longer be anonymous.

Alternatively, vendors of token currency systems could add a layer to provide transactional atomicity and a document trail to better detect the source of fraud (e.g. Camp, Harkavy, Tygar, and Yee, 1996). This is simply a set of steps that provides certain types of documentation. Money atomicity would require a step that guarantees the customer a receipt for funds provided. Goods atomicity would require steps that provide the merchant with documentation from the customer of receipt of the goods. Certified delivery would require a receipt that includes a description of the goods promised and of the goods received. All this documentation would need to be secured with strong cryptography, meaning its validity could not be denied assuming no cryptographic keys were lost. This is both possible and processor-intensive.
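The certified-delivery receipt can be sketched with digests standing in for cryptographically signed descriptions. This is an illustration under simplifying assumptions, not the protocol of Camp, Harkavy, Tygar, and Yee; the function names are invented.

```python
import hashlib

def describe(goods: bytes) -> str:
    # A cryptographic digest stands in for a signed description of the goods;
    # a real protocol would sign this digest with a private key.
    return hashlib.sha256(goods).hexdigest()

def certified_delivery_receipt(promised: bytes, received: bytes) -> dict:
    # The receipt binds the description of the goods promised to the
    # description of the goods received; a mismatch documents the dispute.
    return {
        "promised": describe(promised),
        "received": describe(received),
        "match": describe(promised) == describe(received),
    }

receipt = certified_delivery_receipt(b"song.mp3 v1", b"song.mp3 v1")
```

A customer who receives noise instead of the promised goods then holds a receipt whose mismatched digests support the claim, rather than only her word.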

A final approach would be changing the placement of fraud risks so that brokers and merchants would have no incentive to produce bogus funds. Widespread fraud will drive adoption of additional measures to prevent transactional fraud -- even if the widespread fraud is from the loss of a system secret.


12: The Coming Collapse of Internet Commerce

Every currency must come to an end. Some currencies end more spectacularly than others. It is a reasonable assumption that there will be some failures of currencies used for Internet Commerce.

Internet commerce will see failures and falls, but electronic commerce on a packetized stupid network53 will happen. That paper money has had many collapses has not killed it. (This may be small solace, of course, to someone holding the wrong pieces of paper.) Every type of paper commerce was subject to its own unique form of collapse. Consider all the monetary mechanisms enabled by paper and printing: standardized bookkeeping, derivatives, checks, paper money, stocks, and bonds. There were multiple collapses in paper currencies. Entire nations have been thrown into disarray. Hyperinflation was not possible before paper money. Paper money has proven to be too powerful, too important, and too critical to abandon. Similarly digital moneys are too powerful to reject.

What some may view as failures or collapses I would argue are not: they are simply neglected offerings, undesired products. As an analogy, consider the Susan B. Anthony dollar. Because of design flaws this dollar looked and felt much like a quarter. It is still legal tender, but it was never adopted. Consumer hostility or uncertainty can be identified as a key ingredient in the failure of the Susan B. Anthony dollar. This does not constitute a collapse. The "vendor" still stands behind the value of these dollars, but few want to use the coins. Although these dollar coins have never been embraced by the populace, they still have value, and that value can be transferred. This is analogous to an offering from an established vendor and not a collapse.

That something is a failure and not a collapse does not make it unworthy of mention. The trick in avoiding electronic currencies that will not be adopted is determining which elements will prove unacceptable to the consumer. Given the discussion in the foregoing pages, in which I argue that trust and risk are the critical variables, it will come as no surprise that mechanisms that may seem ideal to the merchant may not suit the customer. It is the merchant's goal to place the risk inherent in a transaction on the consumer, and the consumer's desire to place that risk on the merchant. Markets have proven quite effective in striking the balance between these opposing desires, although consumer protection is required beyond what the market itself may offer. The point in all this is that a commerce system need not be subject to collapse to be one to avoid.

Predicting the when, why, and where of commerce failures is beyond bold, and therefore beyond me. However, the examination of systems in this book has made clear that failures have various possible sources. At least two of those failure modes are new to Internet commerce. The possible modes of failure for Internet commerce systems include:

corporate or vendor failure54,

failure of the integrity of a cryptographic secret,

widespread fraud resulting in a loss of trust, and

network failure resulting in a loss of trust.

Consider each of these four modes of failure. Corporate or vendor failure in Internet commerce will be not unlike previous failures of money vendors. With those electronic systems using credit cards one trusts that the managers of risks in banks can avoid the cost of a sudden collapse due to insolvency. The organization offering the cash, the vendor, should be the organization at risk should the commerce mechanism suffer technical failures. Vendors offering Internet commerce would of course prefer failures to be paid for by the banks, who view these vendors as specialized merchants.

The second and third modes of failures -- failure of the integrity of a cryptographic secret and widespread fraud resulting in a loss of trust -- may be indistinguishable, if the loss of a cryptographic secret allows widespread but not ubiquitous fraud. Consider the systems examined here in which loss of cryptographic integrity would not result in collapse, but rather widespread failure. A collapse occurs when valid currency cannot be distinguished from bogus currency or when assets backing a currency system fail. The failure modes of the systems analyzed here should provide a guide for analyzing those of the other commerce systems being offered. There are more than one hundred commerce mechanisms currently developed to the point of being accepted to peer reviewed publication, being implemented, or being presented as a corporate standard.

In determining how a commerce system may fail one should evaluate whether there is a distributed failure mode as well as a catastrophic failure mode. To consider an engineering example, elevators fail by ceasing to move, not by crashing into the ground, by virtue of safety designs that depend only on the laws of physics. Ideally a commerce system would fail by grinding to a halt in the face of bogus currencies and their transactions, rather than by accepting bogus transactions as valid. Some systems are built so that there exists some category of failure other than catastrophic. Systems with distributed failure modes that have been discussed in this document include Millicent, MicroMint and the Secure Sockets Layer. Digicash may also have a distributed failure mode -- if there are multiple root keys. SET has both catastrophic and distributed failure modes. First Virtual has only a catastrophic failure mode, but the implications of such a failure would be limited.

How might these systems fail? The loss of a "secret" --- whether this is a root key, a password file, or a seed value for a hash function --- can cause failure. Such a failure would enable multiple copies to be made of false instruments, indistinguishable from valid currency. This type of failure would cascade through the different systems in different ways. In some systems (such as centralized token-based systems) failure would come either from fiscal collapse of the currency supplier or, if possible, from hyperinflation when the generation of fraudulent new instruments causes the value of the extant instruments to collapse. In other systems the fraud may be widely distributed but immediately detectable (at high levels), thereby leading to a distribution of risk across the system, as when a rash of fraud forces merchants as well as banks to write off large numbers of losses.

One researcher found a quick way to break the key generating mechanism in an early version of the Secure Sockets Layer55. The encryption keys had not been generated entirely from random numbers but from more predictable values (and could be affordably and easily broken). Suppose a criminal rather than a researcher had found this flaw and used it to obtain customer credit card information. In this case, the failure would be detected by credit card issuers and merchants, who would pay for the resulting increase in fraud.
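The attack pattern can be demonstrated in miniature. This sketch uses Python's general-purpose (non-cryptographic) PRNG and an invented timestamp value; it is not the actual key-generation code, only an illustration of why a low-entropy seed dooms the key.

```python
import random

def weak_session_key(seed: int) -> int:
    # A 128-bit "session key" derived deterministically from a
    # low-entropy seed such as a clock value.
    return random.Random(seed).getrandbits(128)

def recover_seed(observed_key: int, seed_space: range):
    # An attacker who knows the seed was a recent timestamp simply
    # tries every plausible seed until the derived key matches.
    for seed in seed_space:
        if weak_session_key(seed) == observed_key:
            return seed
    return None

victim_seed = 867_530_900                       # pretend clock value
key = weak_session_key(victim_seed)
recovered = recover_seed(key, range(867_530_000, 867_531_000))
```

Searching a thousand candidate seeds is instantaneous; the nominal key length is irrelevant when the seed space is this small.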

Now consider SET. For SET to be successful the acquirers must spend significant funds to build a public key infrastructure. Webmasters offering electronic commerce must support more processor-intensive transactions. SET-ready Web sites will cost more to merchants than those that do not support the comprehensive payment mechanism. As a result of these expenditures the acquirer will have responsibility for fraud. What could drive the expenditure necessary to adopt any heavy electronic mechanism but widespread fraud?

Thus, I do boldly predict that if there is a failure of SSL there will be a transition period of very high fraud followed by widespread adoption of acquirer safeguards, such as adoption of SET or other heavy duty transaction mechanisms. An alternative may be the development of secure hardware, so that additional encryption in the application would be less necessary. Alternatively, secure hardware might complement encryption at the application level. (For more on this possibility see the arguments for "end to end" encryption.) In general when a system begins to have high fraud levels there will be a move to secure hardware, or to more carefully engineered software, or to a more processor-intensive version of the system.

However, should SET fail consumers would be responsible for payment. SET could fail if the root keys are compromised, i.e. if someone obtained them (other than those authorized to have them). (The root keys are extremely well guarded and well distributed, so this is unlikely.) A criminal who obtained the root keys could assume the identity of a bank and collect funds until discovered. This could result in extremely large institutional losses and be the equivalent of a major bank failure. Conversely acquiring banks could continue the process of signing up merchants with questionable records. In this case customers would bear the cost. This could lead to a decline in the trust necessary for Internet commerce to thrive.

Should an attacker obtain less trusted SET keys, he could assume the identity of a merchant. Customers and merchants could suffer losses, so it is uncertain how long this would remain undetected -- possibly long enough to be profitable for the criminal. However, this criminal could lend his key to others as easily as he lends his credit card privileges in real life. The problem of merchant fraud is not uncommon, and can therefore be assumed to be manageable by the charge clearance system. Since replay attacks would not be useful under SET, this would limit the efficiency of merchant-based fraud.

Consider a failure in First Virtual. The effects of a failure in First Virtual would be limited by its lack of interoperability. First Virtual would not be required to pay merchants when consumers refused payment. Merchants would suffer the losses and move to a different commerce system.

With some systems a failure would result in detectable fraud, and there would be an option to adopt other mechanisms to correct the problems with trust. Thus, despite the novelty of the source of the failure, these cryptographic and trust failures would be of an evolutionary sort, not of a catastrophic sort.

Now consider the possibility that collapse would lead to catastrophic loss of a commerce system. Recall the previous examples. In systems without atomicity and adequate receipts fraud would not be traceable, and therefore the levels of fraud could quickly rise to the intractable. This is not a function of whether a system is notational or token, although token systems are more prone to fail with respect to atomicity than notational systems, as a result of token systems' concentration of trust in a single or a few frequently used cryptographic keys.

The existence and circulation of bogus moneys would cause merchants to lose funds, or banks would be called upon to honor funds that they did not possess. Three obvious possibilities suggest themselves for a merchant in the case of currency failure: the first is to pass the cost of fraud on to customers; the second is for the merchant to accept the fraud costs; the third is to place the cost on a third-party provider.

If the cost is passed on to the customer, merchants will lose customers. Customers will not return to the same commerce system after detecting fraudulent charges and having to pay for them nonetheless. Unlike the case of rude service at the corner store, customers will not return to a virtual merchant who mistreats them because it has no geographic advantages. Hopefully the prospect of such fraud will inspire customers to appreciate the value of investment in secure hardware.

If the second option is chosen and the cost is passed on to the merchant, the merchant may be subject to fraudulent charges as well as forced to drop the currency mechanism. This will result in lost funds and loss of some customers.

Should the cost of the bogus funds be passed on to the bank or acquirer (i.e. the third option)? There are yet again two possibilities (this repeated branching of failure possibilities is indeed not unlike a decision tree): the acquirer either itself fails or the system is abandoned. In either case, customer and merchant will know the source of failure. The wise choice is to limit exposure to any one system; i.e. consumers should use only one credit card on the Internet and not use debit cards. For merchants, this implies embracing many systems to ensure customer service after a failure and to prevent a one-to-one correspondence between all profit and a single mechanism. For customers, it means embracing fewer systems to limit exposure.

The final mode of system failure is a large scale collapse. This would require the collapse of the infrastructure or a failure of the infrastructure resulting in a collapse of trust, rather than the loss of a single instrument. In such a case, information on the end points of the network as well as in the network would be corrupted and denial of service may be the order of the day.

Incidents that foreshadow an event of this magnitude are the Morris worm incident56 and the recent explosion of macro viruses. The Morris worm incident was instigated by a computer science student. On the more theoretical end of the scale, information warfare scenarios are frequently played out as an exercise in paranoia or preparedness, or perhaps both. Information warfare scenarios offer a few useful suggestions.

The major suggestion gleaned from information warfare survival scenarios is that electronic safe havens or subnetworks should be created. This translates into protecting data, maintaining internal connectivity, and, at the most extreme, temporary disconnection from the external net. This might work for large organizations, but it is not useful for individuals and small businesses, who will not know of widespread attacks until their machines are cut off. However, there is an important concept here: save the core business information in the case of collapse. Make back-ups, make back-ups, and BACK UP ANY CRITICAL DATA.

The ability to segregate systems is important for the small computer as well as the large. Making inventory systems so that purchases can be read without updating the inventory is one way to segregate, i.e. to be clear in separating the authority to read and write data. Another is to provide inventory information on a daily basis, while leaving the core machines separate. For small businesses this may mean purchasing two machines -- consider it a one-time insurance payment. One, the web server, can be seen in the near term as a single cash register. A cash register would not be used for accounting, inventory, etc. The second machine would be connected only occasionally.
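The read/write split can be made concrete in a few lines. This is a hypothetical sketch with invented names: the web server is handed a read-only daily snapshot, while write authority stays with the master inventory on the separate machine.

```python
class InventorySnapshot:
    # A read-only daily snapshot for the web server; the authoritative
    # inventory lives on a separate, only-occasionally-connected machine.
    def __init__(self, stock: dict):
        self._stock = dict(stock)          # private copy, never the master

    def available(self, item: str) -> int:
        # Read access is all the public-facing machine needs.
        return self._stock.get(item, 0)

    def record_sale(self, item: str):
        # Write authority is deliberately absent: sales are reconciled
        # against the master inventory off-line.
        raise PermissionError("snapshot is read-only")

snapshot = InventorySnapshot({"widget": 12})
```

An attacker who compromises the web server can then read yesterday's stock levels but cannot corrupt the books, which is exactly the cash-register property the text describes.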

A difference between a paper collapse and an electronic collapse would be the speed at which each happens. The Morris worm incident took perhaps five days. Yet the worm incident identified a critical need for centralized support for systems under siege -- what has become an international network of incident response teams. These incident response teams are the best source of information on the status of any network with respect to security. At the Computer Emergency Response Team site, current information about the latest attacks and defenses is available at no charge. Note that attacks can be reported to an incident response team and confidentiality will be respected.

Having listed all the myriad ways a radical collapse could occur, it is important to note that reactionary paranoia to the threat is not only a waste of time, but also bad security policy. Should there be a crippling attack on the infrastructure it will be critical to act fast to take countermeasures. It will also be important not to act unless necessary or there will be a ridiculous amount of unnecessary thrashing. Overreaction can create a denial of service attack as effectively as genuine hostilities. Hoaxes must be identified and ignored while attempts at attack must be recognized.

Electronic commerce will be as critical to business in the next century as paper has been in this century (and the previous three). At this point the risks in transacting over the Internet may seem high, but adoption of some sort of Internet commerce by society is inevitable. This book has focused only on the risks in these first years of Internet commerce.

Internet commerce is happening, and will continue to happen, until it is so varied and interwoven with life that the phrase "Internet commerce" will seem as academic as the phrase "paper commerce."

What part will you play?


Bibliography

5 USC 552 Privacy Act

12 USC 1829 Money Laundering Act

12 USC 2903 Community Reinvestment Act

12 USC 3403 Financial Privacy Act

15 USC 1601 Truth In Lending Act

15 USC 1691 Equal Credit Opportunity Act

15 USC 1692 Fair Debt Collection Practices Act

15 USC 1694 Electronic Funds Transfer Act

18 USC 1029 Computer Fraud and Abuse Act

22 CFR 121 International Traffic in Arms Regulation

22 USC 2571 Arms Control Act

26 USC 6103, 31 USC 3711 Debt Collection Act

31 CFR 103 Know Your Customer Requirements

35 USC 3401 Right to Financial Privacy Act

42 USCS 3608, 15 USC 1681, 12 USCS 1708 Fair Credit Reporting Act

49 USC 1666 Fair Credit Billing Act

50 USC 2401 Export Administration Act

Alderman, E. and Kennedy, C. 1995. The right to privacy. New York: Alfred A Knopf.

Anderson, R. E., Johnson, D. G., Gotterbarn, D. and Perrolle, J. 1993. Using the ACM code of ethics in decision making. Communications of the ACM. 36: 98-107.

Anderson, R. H., and Hearn, A. C. 1996. An exploration of cyberspace security R & D investment strategies for DARPA: The Day After . . . in Cyberspace II, MR-797-DARPA. [Online: web]: URL: http://www.rand.org/publications/MR/MR797/summary.html

Baird, Z. 1996. How have other nations balanced legal and national security threats and responded to a changed world? American Bar Association Standing Committee on Law and National Security Law Enforcement and Intelligence Conference. 19 September.

Baker, A. 1994. A concise introduction to the theory of numbers. New York: Cambridge University Press.

Berman, J. 1991. Establishing a legal framework for freedom and privacy on the electronic frontier. Conference on Computers, Freedom and Privacy. Washington D.C.

Bickford, D. 1996. The changed threat to U.S. national security -- new problems and priorities. American Bar Association Standing Committee on Law and National Security Law Enforcement and Intelligence Conference. 19 September.

Bloustein, E. 1968. Privacy as an aspect of human dignity: An answer to Dean Prosser. New York University Law Review. 39: 962-970.

Brands, S. 1993. Untraceable off-line cash in wallet with observers. In Advances in cryptology--CRYPTO '93. 302-318. Berlin: Springer-Verlag.

Brennan, J. 1989. Florida v. Riley. 488 U.S. 445, 466 (J. Brennan, dissenting).

Brickell, E., Gemmell, P., and Kravitz D. 1995. Trustee-based tracing extensions to anonymous cash and the making of anonymous change. Proceedings of the Sixth Annual ACM-SIAM Symposium on Discrete Algorithms. San Francisco. 22-24 January. 457-466.

Britt, P. 1994. Moving forward with smart cards. Savings and Community Banker. 3.11: 6-7.

Business Week. 1993. ATM shouldn't stand for 'artfully taken money.' Business Week (Industrial/Technology Edition), 31 May, 1994: 110.

Camp, L. J., Harkavy, M., Tygar, J. D. and Yee, B. 1996. Anonymous atomic transactions. 2nd Annual Usenix Workshop on Electronic Commerce. Oakland, CA. November, 1996.

Camp, L. J., Sirbu M. and Tygar, J. D. 1995. Token and notational money in electronic commerce. Usenix Workshop on Electronic Commerce. New York, NY. July, 1995.

Camp, L. J. and Tygar, J. D. 1994. Providing auditing while protecting privacy. The Information Society. 10: 59-71.

Cerf, V. 1993. How the Internet came to be. In B. Aboba, ed. The on-line user's encyclopedia. New York: Addison-Wesley.

Cerf, V. and Kahn, R. E. 1974. A protocol for packet network interconnection. IEEE Transactions on Communications. 5: 637-648.

Clark, G. and Acey, M. 1995. Mondex blows users anonymity. Network Week (U.K.). 1.8: Col. 1.

Chaum, D. 1985. Security without identification: Transaction systems to make big brother obsolete. Communications of the ACM. 28: 1030-1044.

Chaum, D. 1989. On-line cash checks. In Advances in cryptology - EUROCRYPT '89. 288-293. Berlin: Springer-Verlag.

Chaum, D. 1992. Achieving electronic privacy. Scientific American. 267: 76-81.

Chaum, D. 1994. Prepaid smart card techniques: A brief introduction and comparison. Holland: Digicash.

Chaves, C. 1992. The death of personal privacy. Computerworld. January, 1992: 25-27.

Cohen, J. 1996. The right to read anonymously. Connecticut Law Review. 28.4: 981-1039.

Coleman, J. S. 1990. Foundations of social theory. Cambridge, MA: Harvard University Press.

CommerceNet. 1995. The CommerceNet/Nielsen Internet demographics survey: Executive summary. [Online: web]. Cited 30 October, 1995. URL: http://www.commerce.net/information/surveys/toc.html

Compaine, B. J. 1988. Issues in new information technology. Norwood, NJ: Ablex Publishing.

Computer Science and Telecommunications Board. 1994. Rights and responsibilities of participants in networked communities. Washington: National Academy Press.

Cox, B. 1994. Maintaining privacy in electronic transactions. Pittsburgh: Information Networking Institute, Carnegie Mellon University.

Cox, B., Tygar, J. D. and Sirbu, M. 1995. NetBill security and transaction protocol. Usenix Workshop on Electronic Commerce. New York, NY. July, 1995.

Crosby, A. W. 1997. The measure of reality: Quantification and Western society, 1250-1600. New York: Cambridge University Press.

Cross Industry Working Group. 1995. Electronic cash, tokens and payments in the national information infrastructure. [Online: web]. Cited September, 1995. URL: http://www.cnri.reston.va.us:3000/XIWT/documents/dig_cash_doc/ToC.html

Davies, D. 1981. The security of data in networks. Los Angeles: IEEE Computer Society Press.

Davis, P. 1995. Senate Republicans say the Earned Income Tax Credit is becoming too expensive. Broadcast on National Public Radio Morning Edition, 17 August. Figure quoted by Margaret Richardson, Commissioner of the Internal Revenue Service. Also [Online: web]. URL: http://www.realaudio.com/contentp/npr/nb0817.html

Denning, D. 1982. Cryptography and data security. Reading, MA: Addison-Wesley Publishing.

Diffie, W. and Hellman, M. E. 1976. New directions in cryptography. IEEE Transactions on Information Theory. 22: 644-654.

Diffie, W. and Hellman, M. E. 1979. Privacy and authentication: An introduction to cryptography. Proceedings of the IEEE. 67: 18-48.

Douglas, J. 1974. California Bankers Association v. Schultz. 416 U.S. 21, 85; 94 S. Ct. 1494, 1529; 39 L. Ed. 2d 812 (Douglas, J., dissenting).

Draper, S. 1989. Security aspects of smart cards. In Caelli, ed. Computer security in the age of information. Amsterdam: Elsevier Science Publishers B.V.

Duncan, G. and Lambert, D. 1986. Disclosure-limited data dissemination. Journal of the American Statistical Association. 81: 10-27.

Duncan, G. and Lambert, D. 1989. The risk of disclosure for microdata. Journal of Business and Economic Statistics. 7: 207-217.

Echikson, W. 1994. French risk it all on a smart card. Boston Globe. 28 February, 1994: 17:2.

The Economist. 1996. Who's who on the Internet. The Economist. 340.7976.

Edwards, P. N. 1997. The closed world: Computers and the politics of discourse in cold war America. Cambridge MA: MIT Press.

Eisenstein, E. L. 1979. The printing press as an agent of change: Communications and cultural transformations in early-modern Europe. Vols. 1 and 2. New York: Cambridge University Press.

FAIR. 1996. FAIR media bias detector. [Online: web]. Cited 15 April, 1996. URL: http://www.igc.apc.org/fair/media-bias-detector.html

National Bureau of Standards. 1977. Federal information processing standards publication 46: Announcing the data encryption standard. Washington: U.S. Government Printing Office.

Federal Communications Commission. 1995. Telephone subscribership in the United States. Washington: U.S. Government Printing Office.

Federal Reserve Bank of New York. 1996. Regulation E - electronic funds transfer - revisions to regulation and official staff commentary. Federal Register 61.86.

Feige, U., Fiat, A. and Shamir, A. 1987. Zero knowledge proofs of identity. In Proceedings of the 19th ACM Symposium on Theory of Computing. 210-217.

Fenner, E. 1993. How mortgage lenders can peek into your files. Money. April, 1993: 44-48.

Financial Services Technology Consortium. 1995. Electronic payments infrastructure: Design considerations. [Online: web]. Cited November, 1995. URL: http://www.llnl.gov/fstc/projects/commerce/public/epaydes.htm

First Virtual. 1995a. Information about First Virtual [Online: web]. Cited 8 October, 1995. URL: http://www.fv.com/info

First Virtual, 1995b, The fine print. [Online: web]. Cited 24 June, 1995. URL: http://www.fv.com/info/terms.html

Fischer, M. J. 1988. Focus on industry. Journal of Accountancy. 130-134.

Freier, A., Karlton, P. and Kocher, P. C. 1996. The SSL protocol. Version 3. Mountain View, CA: Netscape Communications Corporation. Also [Online: web] URL: ftp://ietf.cnri.reston.va.us/internet-drafts/draft-freier-ssl-version3-01.txt

Froomkin, A. M. 1995. Anonymity and its enmities. Journal of Online Law. 1.1.

Froomkin, A. M. 1996. Addressing law enforcement concerns in a constitutional framework. SAFE: Security And Freedom through Encryption Forum; Palo Alto, CA. 1 July, 1996.

Fukuyama, F. 1995. Trust: The social virtues and creation of prosperity. New York: Simon and Schuster.

Garfinkel, S. and Spafford, G. 1986. Practical UNIX security. 2nd ed. Sebastopol, CA: O'Reilly and Associates.

Goradia, V., Kang, P., Lowe, D., Magruder, P., McNeil, D., Mowry, B., Panjwani, M., Somogyi, A., Wagner, T. and Yang, C. 1994. NetBill: 1994 prototype. Pittsburgh: Carnegie Mellon University. Also INI technical report INI TR 1994-11.

Gray, J. and Reuter, A. 1993. Transaction processing: Concepts and techniques. San Francisco: Morgan Kaufmann Publishers.

Griswold v. Connecticut. 1965. 380 U.S. 947; 85 S. Ct. 1081.

Hagel, J. and Armstrong, A. G. 1997. net.gain: Expanding markets through virtual communities. Boston: Harvard Business School Press.

Halpern, S. W. 1991. Rethinking the right of privacy: Dignity, decency and the law's limitations. Rutgers Law Review. 43.3: 539-563.

Hansell, S. 1995. Mastercard joins banks to plan card that works like cash. The New York Times. 17 August, 1995: D2.

Hanushevsky, A. 1995. Electronic commerce page. [Online: web]. Cited November, 1995. URL: http://abh.cit.cornell.edu/ecom.html

Harrison, C. 1994. Shoppers urged to guard against credit card fraud. Atlanta Constitution. 27 December, 1994: C4.

Hart, A. S. 1996. Personal communication via email. 16 May, 1996.

Harvard Law Review. 1991. Addressing the new hazards of the high technology workplace. Harvard Law Review. 104: 1898-1916.

Heggestad, A. 1981. Regulation of consumer financial services. Cambridge, MA: Abt Books.

Henry v. Forbes. 1976. 433 F. Supp. 5.

Herlihy, M. P. and Tygar, J. D. 1987. How to make replicated data secure. In Pomerance, C., ed. Advances in cryptology - CRYPTO '87. Berlin: Springer-Verlag.

Herlihy, M. P. and Tygar, J. D. 1991. Implementing distributed capabilities without a trusted kernel. In Avizienis, A. and Laprie, J. C., eds. Dependable computing for critical applications. Berlin: Springer-Verlag.

Hodges, A. 1983. Alan Turing: The enigma. New York: Simon and Schuster.

Hoffman, L. and Clark, P. 1991. Imminent policy considerations in the design and management of national and international computer networks. IEEE Communications Magazine. February, 1991: 68-74.

Hoffman, D. L., Kalsbeek, W. D. and Novak, T. P. 1996. Internet use in the United States: 1995 baseline estimates and preliminary market segments. Project 2000 Working Paper. Also [Online: web]. URL: http://www2000.ogsm.vanderbilt.edu/baseline/1995.Internet.estimates.html

Ingramham, D. G. 1991. Coming of age in cyberspace. Conference on Computers, Freedom and Privacy. Washington, DC.

Internet Domain Survey. 1998. Connected to the Internet. [Online: web]. Cited March, 1998. URL: http://www.nw.com/zone/WWW//top.html

Steiner, J. G., Neuman, B. C. and Schiller, J. I. 1988. Kerberos: An authentication service for open network systems. In Proceedings of the USENIX Winter Conference. 191-202.

Johnson, B. S. 1989. A more co-operative clerk: The confidentiality of library records. Law Library Journal. 81: 769-804.

Johnson, D. 1989. Documents disclose FBI investigations of some librarians. New York Times. 7 November, 1989: A1.

Johnson, K. 1993. One less thing to believe in: Fraud at fake cash machine. New York Times. 13 May, 1993: A1.

Kailar, R. 1995. Reasoning about accountability in protocols for electronic commerce. In Proceedings of the IEEE Symposium on Security and Privacy. Oakland, CA. May, 1995.

Kalven, H. 1966. Privacy in tort law: Were Warren and Brandeis wrong? Law and Contemporary Problems. 31: 326-332.

Kaplan, E. H. 1991. Needles that kill: Modeling human immunodeficiency virus transmission via shared drug injection equipment in shooting galleries. Reviews of Infectious Diseases. 11: 289-298.

Karasik, E. 1990. A normative analysis of disclosure, privacy and computers: The state cases. Computer Law Journal. 10: 603-634.

Katz v. United States. 1967. 389 U.S. 347, 369 F2d 130 (9th Cir).

Kaylin, J. 1992. When the needles do the talking. Yale. April, 1992: 34-37.

Kohnfelder, L. M. 1978. Towards a practical public-key cryptosystem. Bachelor's thesis. MIT.

Lamont v. Postmaster General. 1965. 381 U.S. 301, 301.

LaPlante, A. 1994. Citibank's smart move. Information Week. 492.12: 42.

Lewis, T. 1996. Personal communication.

Low, S., Maxemchuk, N. F. and Paul, S. 1993. Anonymous credit cards. First ACM Conference on Computer and Communications Security. Fairfax, VA. 3-5 November, 1993.

Madsen, W. 1992. Handbook of personal data protection. New York: Stockton Press.

Markoff, J. 1995. Security flaw is discovered in software used in shopping. The New York Times. 19 September 1995: A1, D21.

Marx, G. 1986. The iron fist and the velvet glove. In Short, J. E., ed. The social fabric: Dimensions and issues. 135-162. Beverly Hills, CA: Sage Publications.

Mastercard. 1995. Secure electronic payment protocol specification draft. Version 1.1. Pt. 2. [Online: web]. Cited November, 1995. URL: http://www.mastercard.com/Sepp/sepptoc.htm

Mastercard. 1996. Secure electronic transaction technology, draft. [Online: web]. URL: http://www.mastercard.com/SETT

Mayland, P. F. 1993. EFT network risk begs CEO attention. Bank Management. 69.10: 42-46.

McClellan, D. 1995. Desktop counterfeiting. Technology Review. [Online: web]. Cited February/March, 1995. URL: http://web.mit.edu/afs/athena/org/techreview/www/articles/feb95/mcclellan.html

McGraw, D. 1992. Facing the specter of AIDS. Boston Globe. 13 March 1992: 3-5.

McKnight, L. W. and Bailey, J. P., eds. 1997. Internet economics. Cambridge, MA: MIT Press.

Medvinski, G. and Neuman, B. C. 1993. NetCash: A design for practical electronic currency on the Internet. First ACM Conference on Computer and Communications Security. Fairfax, VA. 3-5 November, 1993.

Miller, S. P., Neuman, B. C., Schiller, J. I. and Saltzer, J. H. 1987. Section E.2.1: Kerberos authentication and authorization system. Project Athena. Cambridge, MA: MIT.

Miller, M. W. 1992. Data tap: Patients' records are treasure trove for budding industry. Wall Street Journal. 27 February, 1992: A1.

Morgan, G. 1992. Balancing national interest. The Institute. 16.

Mosteller, F. 1965. Fifty challenging problems in probability with solutions. Toronto: General Publishing Company, Ltd.

Mundt, K. H. 1992. New dimensions in data security. In Proceedings of the 15th National Computer Security Conference. Baltimore, MD. 438-447.

NAACP v. Alabama. 1958. 357 U.S. 449.

National Bureau of Standards. 1977. Federal information processing standards publication 46: Specifications for the data encryption standard. Gaithersburg, MD: U.S. Government Printing Office.

National Center for Supercomputing Applications. 1995. NCSA mosaic web index. [Online: web]. Cited November, 1995. URL: http://www.ncsa.uiuc.edu/SDG/Software/Mosaic/Docs/web-index.html

National Computer Security Center. 1985. Trusted systems evaluation criteria DOD-5200.28-STD. Gaithersburg, MD: U.S. Government Printing Office.

National Computer Security Center. 1990. Trusted network interpretation environments guideline NCSC-TG-011. Gaithersburg, MD: U.S. Government Printing Office.

National Institute of Standards and Technology. 1991. Proposed federal information processing standard for digital signatures. Federal Register. 56: 42980-42982.

National Institute of Standards and Technology. 1994. Federal information processing standards publications 185: Escrowed encryption standard. Gaithersburg, MD: U.S. Government Printing Office.

National Research Council. 1996. Cryptography's role in securing the information society. Washington: National Academy Press.

Netscape. 1996. Netscape commerce server. [Online: web]. Cited May, 1996. URL: http://home.netscape.com/comprod/netscape_commerce.html

Newberg, P. 1989. New directions in telecommunications policy. Durham, NC: Duke University Press.

New York Times. 1995a. Woman missing bank card finds she is overdrawn $346,770. New York Times. 12 February, 1995: 1, 36.

New York Times. 1995b. Credit union's error is thieves' delight. New York Times. 9 February, 1995: B9.

Nimmer, R. T. 1992. The law of computer technology. Boston: Warren, Gorham and Lamont.

Office of Technology Assessment. 1985. Electronic surveillance and civil liberties. OTA-CIT-293. Gaithersburg, MD: U.S. Government Printing Office.

Office of Technology Assessment. 1986. Management, security and congressional oversight. OTA-CIT-297. Gaithersburg, MD: U.S. Government Printing Office.

Office of Technology Assessment. 1995. Information technologies for control of money laundering. OTA-ITC-630. Gaithersburg, MD: U.S. Government Printing Office.

Okamoto, T. and Ohta, K. 1991. Universal electronic cash. In Advances in Cryptology- CRYPTO '91. 324-336. Berlin: Springer-Verlag.

O'Keefe, M. 1994. Portable POS debit terminals mean greater convenience. Bank Systems and Technology. 31.11: 35-37.

Olmstead v. United States. 1928. 277 U.S. 438, 48 SCt 564, 72 LEd2d 944.

Pfleeger, C. P. 1989. Security in computing. Carmel, IN: Prentice-Hall.

Pool, I. 1983. Technologies of freedom. Cambridge, MA: Harvard University Press.

Privacy Protection Commission Study. 1977. Personal privacy in an information society. Washington: U.S. Government Printing Office.

Prosser, W. L. 1941. Handbook of the law of torts. St. Paul, MN: West Publishing Co.

Rabin, M. O. 1978. Digitalized signatures. In Foundations of secure computation. 155-168. New York: Academic Press.

Randell, B. 1983. Recursively structured distributed computing systems. In Proceedings of the Third Symposium on Reliability in Distributed Software and Database Systems.

Randell, B. and Dobson, J. 1986. Reliability and security issues in distributed computing systems. In Proceedings of the Fifth Symposium on Reliability in Distributed Software and Database Systems.

Reid, M. A. and Madam, M. S. 1989. IC card design: Technology issues. Information Age. 11.4: 211-216.

Rivest, R. L. and Shamir, A. 1996. PayWord and MicroMint: Two simple micropayment schemes. Submitted to EUROCRYPT '96.

Rivest, R. L., Shamir, A. and Adleman, L. 1978. A method for obtaining digital signatures and public-key cryptosystems. Communications of the ACM. 21: 120-126.

Rodman, P. 1996. Loss of national sovereignty and control by nation states. American Bar Association Standing Committee on Law and National Security Law Enforcement and Intelligence Conference. 19 September.

Rubin, L. and Cooter, R. 1994. The payment system: Cases, materials and issues. St. Paul, MN: West Publishing Co.

Sandberg, J. 1995. Netscape software for cruising Internet is found to have another security flaw. The Wall Street Journal. 25 September, 1995: B12.

Schambelan, B., ed. 1992. Roe v. Wade: The complete text of the official U.S. Supreme Court decision, annotated. Philadelphia: Running Press.

Schlossberg, H. 1993. Victims tired of researchers getting away with murder. Marketing News. 16 August 1993: A16.

Schneier, B. 1995. Applied cryptography. 2nd ed. New York: John Wiley and Sons, Inc.

Schnorr, C. P. 1990. Efficient identification and signatures for smart cards. In Advances in cryptology - CRYPTO '89. 239-252. Berlin: Springer-Verlag.

Schuba, C. L., Krsul, I. V., Kuhn, M. G., Spafford, E. H., Sundaram, A. and Zamboni, D. 1997. Analysis of a denial of service attack on TCP. 1997 IEEE Symposium on Security and Privacy. Oakland, CA. 4-7 May, 1997.

Shamir, A. 1979. How to share a secret. Communications of the ACM. 22: 612-613.

Simpson. 1996. The effects of electronic credentials lifetime on the risks and costs of electronic commerce. Qualifier report, Carnegie Mellon University.

Sirbu, M. and Tygar, J. D. 1995. NetBill: an Internet commerce system optimized for network delivered services. IEEE ComCon. San Francisco, CA. 6 March, 1995.

Smith, S. 1992. A theory of distributed time. Ph.D. thesis, Carnegie Mellon University. Also CMU technical report CMU-CS-92-231.

Spafford, E. H. 1989. The Internet worm: Crisis and aftermath. Communications of the ACM. 32.6: 678-687.

Speiser, S. M., Krause, C. F. and Gans, A. W. 1991. The American law of torts. New York: Clark Boardman Callaghan.

Sproull, L. and Kiesler, S. 1991. Connections: New ways of working in the networked organization. Cambridge, MA: MIT Press.

St. Laurent, S. 1998. Cookies. New York: McGraw-Hill.

Trubow, G., ed. 1991. Privacy law and practice. New York: Times Mirror Books.

Trubow, G. 1992. When is monitoring e-mail really snooping? IEEE Software. 9.2: 97-98.

Tunstall, J. 1989. Electronic currency. In Chaum, D. and Schaumuller-Bichl, I., eds. Smart card 2000: The future of IC Cards: Proceedings of the IFIP. Amsterdam: Elsevier Science Publishers B.V.

Turn, R. and Ware, W. 1976. Privacy and security in information systems. IEEE Transactions on Computers. C-25: 1353-1361.

Tygar, J. D. 1996. Atomicity and electronic commerce. In Proceedings of 1996 Symposium of Principles of Distributed Computing. Philadelphia: ACM Press.

Tygar, J. D. and Yee, B. 1991. Strongbox: A system for self securing programs. In Rashid, R., ed. CMU computer science: A 25th anniversary commemorative. 163-198. New York: Addison-Wesley and ACM Press.

United Nations. 1995. The United Nations and human rights 1945-1995. The United Nations Blue Book Series. 7. New York: United Nations.

United States v. Miller. 1976. 425 U.S. 435.

United States v. Payner. 1980. 447 U.S. 727, 100 S. Ct. 2439, 65 L. Ed. 2d 468.

U.S. Bureau of Census. 1995. Statistical abstracts of the United States 1995. 115th ed. Washington: Department of Commerce.

U.S. Council for International Business. 1993. Statement of the United States Council for International Business on the key escrow chip. New York: U.S. Council for International Business.

U.S. Department of Defense. 1985. Department of Defense trusted computer system evaluation criteria. Fort Meade, MD: National Computer Security Center.

U.S. District Court. 1992. United States v. Julio Fernandez, John Lee, Mark Abene, Elias Ladopoulos, and Paul Stira. Indictment 92 CR S63.

Van Natta, D. 1995. Five phone marketers arrested in credit card sting. New York Times. 15 August, 1995: A14.

Verisign. 1995. Verisign expands digital ID offerings to leading web servers. [Online: web]. Cited November, 1995. URL: http://www.verisign.com/pr/pr_servers.html

Verisign. 1996. Frequently asked questions about digital ID's. [Online: web]. Cited 26 May, 1996. URL: http://digitalid.verisign.com/id_faqs.htm

Visa. 1995. Secure transaction technology specifications. Version 1.1. [Online: web]. Cited November, 1995. URL: http://www.visa.com/visa-stt/index.html

Wacker, J. 1995. Drafting agreements for secure electronic commerce. In Proceedings of the World Wide Electronic Commerce: Law, Policy, Security and Controls Conference. 6.

Walden, I. 1995. Are privacy requirements inhibiting electronic commerce? In Proceedings of the World Wide Electronic Commerce: Law, Policy, Security and Controls Conference. 10.

Warren, S. and Brandeis, L. 1890. The right to privacy. Harvard Law Review. 4: 193-220.

Waters v. Fleetwood. 1956. 91 SE2d 344.

Wood, J. C. and Smith, D. S. 1991. Electronic transfer of government benefits. Federal Reserve Bulletin. 77.4: 203-217.

Woodyard, C. 1991. Lungren joins suit accusing TRW of illegal practices. Los Angeles Times. 9 July, 1991: 1.

Yee, B. 1994. Using secure co-processors. Ph.D. thesis, Carnegie Mellon University. Also CMU technical report CMU-CS-94-149.

Ziegler, R. F., Brodsky, D. E. and Sanchez, C. M. 1993. U.S. securities crime. International Corporate Law. Criminal Investigations Supplement: 69-74.

Zimmermann, P. 1995. The official PGP user's guide. Cambridge, MA: MIT Press.

Zuckerman, G. 1994. Insider trading is back. Investment Dealers Digest. 60.2: 12-15.
