Reference as: FC01 Notes, L. Jean Camp, www.ljean.net/FC01notes.html, March 1, 2001.

Financial Cryptography 2001

These notes are from the morning sessions of Financial Cryptography 2001 in the Cayman Islands in the British West Indies. My sometimes-relevant asides are in italics.

The conference reflected both a field that was coming of age and an academic area that is getting entangled in the nasty interdisciplinary world. Issues of computational efficiency, device capacities, implementation in real-world financial systems, and risk management were all addressed. Although I did not count, I would guess something more than 100 attendees. Because the conference is front-loaded, that is, the morning sessions are academic and the afternoon sessions are commercial, attendance at the earliest sessions was highest.

The afternoon sessions were vendor sessions. You can go to their sites to check out what the vendors say for themselves. The sponsors were nCipher, Bibit, Intertrust, Hush, Zero Knowledge, IBM, CertCo, Certicom, RSA and Microsoft.

As in all conferences, the value is in the dialogue as much as the presentations. I did not review my own paper, for obvious reasons; the slides are at http://www.ljean.org/files/FC01.html. However, I also did not review the paper immediately before mine, as I was quite busy tensing up.

There were two non-crypto papers: one on the analysis of trust from different disciplinary perspectives (mine) and one analyzing the cost of MicroMint. I wonder whether, if financial privacy laws happen in the US, some of the papers next year may be on regulatory compliance.

MONDAY FEB 19

The first paper was Amortized E-cash by Moses Liskov and Silvio Micali at MIT. Moses presented it.

Okamoto was the first to offer a proposal for divisible e-cash, and his scheme is used by the authors as the basis for comparison with their own coinage. In addition to past work on divisible e-cash, there has been much work to define the requirements for divisible e-cash. In this case the authors developed the requirements and worked from there.

The core idea was to amortize expensive crypto operations over multiple coins by dividing the coin into two parts. The goal is to provide cash that is both off-line and anonymous. Off-line operation implies the possibility of double spending, so identity is embedded in the coin: spent once, it stays anonymous; spent twice, the identity is released. The basic idea is that two expenditures create two equations, which can be solved for the identity.
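
As a minimal sketch of that "two equations" idea (toy parameters and names of my own, not the paper's actual construction): the response to a payment challenge is a line through the identity, so one spend reveals nothing while two spends determine it.

import random

q = 2**127 - 1            # a prime modulus, chosen only for illustration
identity = 123456789      # the payer's identity, embedded in the coin at withdrawal
r = random.randrange(q)   # per-coin randomness, fixed when the coin is withdrawn

def spend(challenge):
    # the response revealed to the merchant during one payment
    return (r + challenge * identity) % q

e1, s1 = 17, spend(17)    # one spend: s1 alone says nothing about identity (r acts as a pad)
e2, s2 = 42, spend(42)    # a second spend gives the bank a second linear equation
recovered = (s1 - s2) * pow(e1 - e2, -1, q) % q
assert recovered == identity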

Embedding identity requires a zero-knowledge protocol. However, zero-knowledge protocols are expensive. Consider the cost when $100 = 10,000 cents.

Coins from one wallet can be linked to other coins from the same wallet, but not to coins from other wallets. The identity loss under a single wallet is modeled on the conceptual model of a single ATM transaction. Anonymity is lessened because the wallet becomes a pseudonym. Each wallet has a single wallet-defining subcoin, authenticated by the bank, which includes the user's identity and specifies all the coins.

Wallet is a Merkle hash tree. Root & depth defines coins. Wallet of depth d has 2^d subcoins. Common subcoin is the root.
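
A minimal sketch of such a wallet tree (hash function and byte encoding are my illustrative choices, not the paper's):

import hashlib, os

def h(*parts):
    return hashlib.sha256(b"".join(parts)).digest()

d = 3                                                # wallet depth
leaves = [h(os.urandom(16)) for _ in range(2**d)]    # the 2**d subcoins

def merkle_root(nodes):
    # hash pairs of nodes level by level until only the root remains
    while len(nodes) > 1:
        nodes = [h(nodes[i], nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

root = merkle_root(leaves)   # the common, wallet-defining subcoin
print(root.hex())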

Each coin is an ephemeral key k, or commitment. The common subcoin includes the root, the depth, a public key, and E_k(identity). The core double-spending mechanism is essentially a Schnorr signature: two transactions using the same subcoin expose the subcoin-specific key, which in turn exposes the signed identity carried in the shared wallet/coin.

Q&A: a questioner argued that the contribution of this paper is not primarily efficiency, given the ZK protocol, but rather off-line divisible coins with strict security assumptions. Moses agreed. The questions were rather pointed.

The second paper was Offline Payments without Trusted Hardware by Matt Blaze, John Ioannidis, and Angelos Keromytis (AT&T, AT&T, UPenn), presented by Angelos Keromytis.

The design decisions began with a single observation: avoid hardware because trusted hardware fails catastrophically. When it is broken, all the wallets are broken.

As opposed to concentrating risk, distribute risk in an appropriate per-transaction manner. Tolerate and manage risks of bad transactions. Use credentials to encode risk management. User has credential, which encodes risk profile. Assumes occasional communication for credential validation but does not require constant connectivity.

In order to create a reasonable level of risk management, the maximum value is low (e.g. $1.25) and credentials are short-lived. So there is no CRL; rather, risk management is embedded in credential extension. The authors argue that this is not unlike the credit system.
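
A toy sketch of that idea (my own encoding, not KeyNote's credential language): the merchant can accept offline as long as the credential is fresh and the amount is under its cap, and renewal of the short-lived credential replaces revocation lists.

import time

credential = {
    "payer": "alice",
    "max_value_cents": 125,            # e.g. a $1.25 cap per transaction
    "expires": time.time() + 24*3600,  # short-lived; extension is where risk is re-evaluated
}

def accept_offline(cred, amount_cents, now=None):
    now = now if now is not None else time.time()
    return now < cred["expires"] and amount_cents <= cred["max_value_cents"]

print(accept_offline(credential, 99))    # True: low value, fresh credential
print(accept_offline(credential, 500))   # False: over the per-transaction cap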

This is based on the KeyNote microchecks system developed by UPenn and AT&T, built on a trust-management language. The authors claim that other systems require their own clearance systems, although this was not the case with NetBill or First Virtual, or any early aggregator.

Payments are vendor-specific microchecks, and vendors aggregate them. The vendor determines how much risk to accept; the KeyNote compliance checker is the vendor-specific risk-management mechanism, which requires that the vendor manage his or her own risks.

They compared it to newspaper stands, where there is little theft. It was implemented on a Palm with a Pocari Sweat vending machine, using a Linux PC as the merchant hardware with an SSL port, and GNU utilities on the Palm hardware under Linux.

The billing or clearing service must have a pre-existing relationship with payers. Note that AT&T would be a good party for billing given their micro-billing transactions. To support this, the central party would charge for clearance, party fees, or a percentage of each transaction.

No claims on anonymity.

The work of Ian Simpson isn't referenced but could inform their implementation, IMHO. Ian's paper, Modeling the Risks and Costs of Digitally Signed Certificates in Electronic Commerce, is at http://www.ini.cmu.edu/NETBILL/pubs/certlife/certlife.html.

The next paper was one of the two non-crypto papers. It focused on order-of-magnitude calculations examining the possibility of a trial MicroMint: Practical Problems for Building a MicroMint by Nicko van Someren of www.ncipher.com.

van Someren is the CTO of a company that builds secure hardware, and he argues that the economics of MicroMint make it impossible to introduce because you can scale it up but not down.

As you may recall, using the birthday paradox, MicroMint uses coins that are hash collisions. Minting is cost-effective because of great economies of scale. Extra criteria can be added, so the scheme can use n-way collisions. Coins have a life span based on the work factor of the hash collision. Once spending begins anyone can mint coins, but it takes time, and the bank has a head start.
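
A small-scale sketch of the collision-based minting (a toy 20-bit work factor; the real parameters are vastly larger, which is exactly the scaling issue the talk is about):

import hashlib
from collections import defaultdict

BITS = 20                                   # toy work factor

def tag(x: bytes) -> int:
    # truncated hash; a coin is a pair of preimages with the same tag
    return int.from_bytes(hashlib.sha256(x).digest(), "big") >> (256 - BITS)

buckets = defaultdict(list)
coins = []
counter = 0
while len(coins) < 5:
    x = counter.to_bytes(8, "big")
    t = tag(x)
    for y in buckets[t]:
        coins.append((y, x))                # a 2-way collision = one coin
    buckets[t].append(x)
    counter += 1
print(coins[0], tag(coins[0][0]) == tag(coins[0][1]))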

"It is traditional at this conference to present an e-cash scheme so I thought I would make one up". (but doesn’t offers secure hardware)

Not anonymous. Double spending is tolerable because of very low value transactions and rather traditional risk-management schemes.

Problem: dirty money is cheap. The attacker's business model is spending false money, while the banks live off interest and fees. He argues that attackers may command far more money, and Moore's Law helps the attacker, so a borderline attack becomes a feasible one: the attacker can in fact produce fraudulent cash. He argues that for the system to function effectively the initial investment would be about $100M. MicroMint scales up but not down.

The paper in the proceedings argues that the hash engine needs dozens of floor-to-ceiling racks, uses 0.5 megawatts of electricity, and occupies 10,000 sq. ft.; sorting the hashes then requires 0.25 terabytes of storage. It scales up but not down.

Basically only a government could do it. In general he argues that the assumptions were cryptographically valid at the time (six years ago) given hardware costs, but MicroMint is not feasible today: the need for massive scale makes the system far too expensive. However, he neglects the secret-predicate option. His point is that a system must be able to function as a small prototype; digital signatures allow this. He proposes using secure hardware instead, since it is easier to scale up.

The next session was a panel on Digital Rights Management, chaired by Yair Frankel, who argued that the keys to DRM are efficacy, customer acceptance, and diffusion.

David Kravitz, Wave

Management of conditional Access keys

Wave is concerned with set-top boxes where consumers make multiple content copies. CAM not only determines what consumers watch but must "log attacks". Proposes that only compliant monitors are allowed.

Pirates may not be backward compatible but those who offer consumer hardware must be. (This is a bit ironic given that the speaker advocates only allowing content to go into compliant monitors.)

Consolidated hardware deployment offers the opportunity to replace point solutions by shared solutions and preserve user privacy yet handle revocation.

Barb Fox: Microsoft Web TV

No one has been deterred by the total historical failure of detailed controls. There are at least three competing DRM standards: W3C, CableLabs, and more. Thus DRM is a life form which is amazingly resilient. </humor>

Conditional access currently consists of a massive number of application-specific systems, including digital rights management systems and black-box crypto.

Barbara suggests that the problem with digital information is that it has a completely immature risk model, whereas in physical or analog systems the risk model is mature.

The stakeholders have completely different design parameters. Industry wants authentication, encryption, renewability, and revocability. However, for there to be widespread adoption the system must be coherent, cheap, privacy-preserving, and use rational risk management. Finally, for efficacy the system must be robust, scalable, and support flexible decentralized management.

Jeremy Wyant: NTRU

Device-based systems are what he proposes. He sells a toolkit.

He provided an overview of the possible set of technologies: watermarking, fingerprinting, encryption, authentication, network scanning, fees on blank media, and an honor system. He agreed with/repeated/expanded on what Barb said, in detail, e.g. flexibility in terms of compatibility and ease of use.

Proposes that a minimalist DRM on trusted devices that is supported by a critical mass of content providers would be widely adopted.

Thomas Sander of Intertrust

DRM is dynamic in both business model and technology.

The current goal is convenient anytime, anywhere access to your music.

The ultimate goal is to tie the right to use content to a particular person. Thomas proposes lockers which track what you have rights to access, and argues that portability is the core current technical challenge.

Usability is terrible. However, second generation products are improving. Must be backward compatible.

Strong legal protection will protect DRM investment because there is no way to run a legitimate business using copyrighted content on the net without consent of rights holders. OTOH, technology companies influence content strategies.

NCal: Southern California doesn't get it.

SCal: Yes, but we already have it, and to get content on-line you have to play our way.

Loss management needs to be a critical element. Consider that in pay TV most protection schemes have been broken, but there is still plenty of money to be made. With DVD there was no proven monetary damage; in terms of crypto it was broken, but in terms of business DVD still works.

He suggested that versioning is not like QoS as it exists in the net-access market. If you could combine the ease of use of Napster with the fidelity that a commercial service can provide, people would be willing to pay for it.

Audit vs. privacy is a major research issue. Intertrust has deployed technology with Universal, BMG, and Bertelsmann. They learned that the shopping-cart model works better than the wallet model in the US; in Europe the wallet model works better.

Conclusion: economics is on the side of digital distribution and the technology for adequate DRM already exists.

Scott Moskowitz, Blue Spike

Scott starts by saying he has very different views. He advocates steganographic ciphers along with traditional encryption, and believes that artists should sign their work, and then trace and bust the thieves.

There are two different risk models. The current concept of DRM is pay before viewing. In contrast, watermarking models a "look, and then pay or we trace you" approach.

Market note: 10 movies account for 90% of store revenue and 83 recordings provide 25% of sales. This implies that under the current blockbuster model DRM could focus on very few titles. Scott believes the highly skewed blockbuster model is a result of free consumer choice, not market structure, and that this distribution will continue on the net.

Argues that to consumers, DRM "reduces value"; fair use and first sale are culturally ingrained and reasonable. He believes in risk management but calls it balancing copyright and privacy. A security system must first add value and then do no harm.

Audience: I am disturbed by this discussion. DRM is not a technological problem. DRM takes away rights like Fair Use

Audience: How to balance rights and detecting infraction vs. anonymity and privacy.

Joan Feigenbaum: There is an unstable bubbling legal cauldron. The whole challenge is not technology or law but rather business.

The panel agrees. (DRM is narrow and economically deterministic.)

Audience: Napster allows you to find music on the hard drive of someone who has similar interests. Napster allows consumers to be DJs, etc. Napster allows customers to listen first. Napster has radically increased CD sales. Scott: Napster has radically reduced singles sales.

Matt Blaze suggests that those who would remove fair use and first sale through technology lose the social bargain of copyright. David suggests putting the ball in the owner's court.

Thomas: It is not the lack of DRM -- it is the content owners. The content owners are making plenty of money and are not interested in cannibalization of their own business model. DRM is just an excuse or attempt to keep business models unchanged.

TUESDAY FEB 20

The first session is on Groups and Anonymity. Stuart acknowledges that this was kind of an umbrella session, of good papers that need not necessarily be grouped together.

Group authentication protocol: prove one belongs to a group without revealing one's identity. It requires the setup and registration steps listed below.

Biblio: FC 99: Schechter, Parnell, Hartemink; FC 00: Handley.

I, however, think that the work of Brands provides the ability to prove group membership. But that is a general, expensive solution, and this may be a cheaper, specific one.
Setup: prime p, generator g, public constant u
z, w: private keys of the authority
v = u^w mod (p-1)
User data:
User secret key x
User public key y = g^x mod p
Keys are used for other applications, to prevent users handing off anonymous membership
Registration:
Authority chooses a in the space of g
Authority computes
a1 = (g·y^z)^a mod p
a2 = a·w mod (p-1)

The user now has a certificate (a1, a2) and can prove she knows the discrete log; Schnorr or another zero-knowledge protocol can be used for this.
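
For reference, a minimal Schnorr proof of knowledge of a discrete log, the kind of sub-protocol that last step relies on (toy prime and generator, chosen only so the arithmetic runs; not a parameter recommendation):

import random

p = 2**127 - 1                   # a Mersenne prime; illustration only
g = 3
x = random.randrange(2, p - 1)   # prover's secret
y = pow(g, x, p)                 # public value y = g^x mod p

r = random.randrange(2, p - 1)   # commit
t = pow(g, r, p)
c = random.randrange(2, p - 1)   # challenge from the verifier (or a hash, non-interactively)
s = (r + c * x) % (p - 1)        # response

assert pow(g, s, p) == (t * pow(y, c, p)) % p   # verifier's check: g^s = t * y^c mod p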

Criticisms of Homage in the paper.

Suppose a user knows n, m such that m = n^z mod p. Then such a user can construct a valid, but bogus, proof of membership. The user can obtain the z-th root of some number m by constructing her challenge in the verification of soundness. A solution is proposed (sending a hash of the verification, rather than the verification itself, which is the z-th root of m).

The authority can break anonymity by using different secret keys z to compute certificates and then distinguishing users during authentication. To avoid this it is necessary to change the verification step; the author proposes a new mechanism using zero-knowledge proofs for verification.

An audience member argues that the protocol is not completely broken; in fact, he argues that all the flaws found and fixed in the protocol were found and fixed last year. Is the protocol broken, or going through normal iterative improvement? (The paper under discussion, the first of the session, was On the Security of the Homage Group Authentication Protocol. Homage itself was presented, and accepted, at a previous FC when its author was a high school student. Both respect and concern for a young colleague resulted in somewhat harsh questioning.)

The next paper was Anonymity without Cryptography by Dahlia Malkhi and Elan Pavlov, and Dahlia presented it. This was an excellent presentation: crypto-free communication, assuming secure channels, using anonymous multi-party computation in the spirit of Chaum's mixnets.

Really it is anonymity with less crypto (as Dahlia noted first thing in the talk). This would allow use with pervasive devices with low processing power. No infrastructure is required.

There are fairly secure channels in use without crypto, e.g. regular mail. The initial secure-channel assumption is only needed to bootstrap the protocol. Also, SSL connections already exist, so this requires no additional infrastructure after the initial communication between users.

Describes mixes, which are effectively onion routing with the removal of timing attacks. Of course onion routing/mixing requires multiple public-key encryptions.

As opposed to encryption, the messages are broken into shares and the initial shares are sent as elements. At each level message values are split and re-merged, so any function used must be homomorphic: F(a ∘ b) = F(a) ∘ F(b), where ∘ is one of {+, *, xor, ...}. Thus it is restrictive but practical.

Like this:
Initiator and recipient agree on a permutation and a function
Initiator: m is split into shares (x1), (x2)
Sends the shares as distinct messages
Second layer performs the permutation, producing (y1, y2)
Recipient receives (x1, y1) and (x2, y2) and can calculate m
The goal here was the creation of a new primitive. Its main practical feature is that it requires no heavy computation by the client. This is a building block for various security applications, perhaps ideal for wireless environments.
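
My reading of the splitting step, as a toy XOR example (XOR being one admissible choice of the homomorphic operation; all names here are mine):

import os

def xor(a, b):
    return bytes(i ^ j for i, j in zip(a, b))

m = b"meet at noon...."                    # the message
pad = os.urandom(len(m))
share1, share2 = pad, xor(m, pad)          # m = share1 XOR share2; each share alone looks random

def rerandomize(s1, s2):
    # an intermediate layer can re-split without ever seeing m,
    # since XOR-ing both shares with the same fresh pad preserves s1 XOR s2
    fresh = os.urandom(len(s1))
    return xor(s1, fresh), xor(s2, fresh)

share1, share2 = rerandomize(share1, share2)   # shares travel over separate paths
assert xor(share1, share2) == m                # only the recipient, holding both, recovers m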

Then Fair Tracing Without Trustees by Dennis Kugler and Holger Vogt was presented.

A protocol which provides coin tracing without owner tracing. Goals of protocol are:


Legal tracing: tracing is legal if approved by judge
Illegal tracing: illegal if not approved by judge
Fair tracing: if legal tracing is possible but illegal tracing is not

To enable this, tracing is made detectable after the fact. Therefore if illegal tracing has occurred it is detectable; tracing is auditable; and illegal tracing is provable.

The decision to trace coins must be made at coin generation. After the coin is spent, validation of the coin both implements and identifies coin tracing. Gross generalization: the coin is generated with a second key, x = a·PK_B (mod p), as opposed to PK_B (mod p). There are no additional trusted parties that hold sensitive data. The goal is to build in later auditing rather than long-term trust. (Generally what is done in terms of governance today.)

The audience has mixed feelings. Is traceable anonymity really anonymity?

Why the War on Money Laundering should be Aborted

This was the big event talk by Richard Rahn.

A right-wing rant about money laundering which focuses primarily on the drug war, since the drug war is an exercise that could be called ludicrous if it were not so relentlessly cruel. Rahn holds an honorary doctorate from Pepperdine. He is opposed to money-laundering controls. He notes that the political spectrum agrees with him, defining it as running all the way from the Cato Institute to the Enterprise Institute. He invites all to join him in his plan-free goal of ending controls on money laundering. He has a book out, The End of Money and the Struggle for Financial Privacy; Discovery Institute; ISBN 0963865420. (I do too; mine is Trust and Risk in Internet Commerce, MIT Press.)

Afterwards I asked him what his plan was. I proposed that the Tobin tax be adopted in exchange for ending controls on tax-evasion money laundering, while the drug war is a distinct issue which could be opposed on its own terms. I still don't know what he thinks about the Tobin tax. I know we are both against the drug war. But money laundering is also used, for example, in the trade of women and children, and the slave trade in general; see for example the press releases at http://www.ksg.harvard.edu/ksgpress/ksg_news/publications/trafficking.html or http://www.friends-partners.org/partners/stop-traffic/1999/0888.html. The drug war issue is tangential to the money laundering issue. On money laundering the issue is: are we willing to give away the rights of the many on the premise that a few medium-sized fish will be caught? I agree on the outcome but not on the arguments.

Provably Secure Implicit Certificates by D.R.L. Brown, R. Gallant, S.A. Vanstone

Implicit certificates contain a user’s identity, data D, and CA’s public key which together can be used to reconstruct the public key of the user.

In earlier implementations the CA knows the private key of the user. This paper produces a self-certified implicit certificate scheme, meaning that the user generates her own private and public keys.

Uses an El Gamal signature key. The CA and the user both contribute to the private key. The verifier can compute the user's public key using public information (the public data includes an elliptic curve point, user identifier, CA identity, expiration date, and serial number).

This system has considerable computational savings because the computation of the digital certificate and the verification of the user (e.g. the user proving knowledge of the private key) are integrated. This produces at least a factor-of-two improvement in processing overhead.
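
A hedged sketch of how an implicit certificate reconstructs the public key, written ECQV-style over a multiplicative group mod p rather than an elliptic curve; this is an analogy to the paper's scheme, not the scheme itself, and all names and parameters are mine.

import hashlib, random

p = 2**127 - 1                 # Mersenne prime; toy group, illustration only
g = 3
n = p - 1                      # exponents live mod p-1 here

def H(*parts):
    return int.from_bytes(hashlib.sha256(repr(parts).encode()).digest(), "big") % n

d_ca = random.randrange(2, n); Q_ca = pow(g, d_ca, p)   # CA key pair
d_u = random.randrange(2, n);  R_u = pow(g, d_u, p)     # user's commitment, sent to the CA

# CA builds the implicit certificate
k = random.randrange(2, n)
P_u = (R_u * pow(g, k, p)) % p              # public key reconstruction data
cert = ("alice", "expires-2002", P_u)       # identity + data D + reconstruction value
e = H(cert)
r = (e * k + d_ca) % n                      # CA's private contribution, returned to the user

d = (e * d_u + r) % n                       # user's private key; only she can compute it

# any verifier reconstructs her public key from the certificate and the CA's public key alone
Q = (pow(P_u, e, p) * Q_ca) % p
assert Q == pow(g, d, p)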

Next Joan Feigenbaum presented Nonmonotonicity, User Interface, and Risk Assessment in Certificate Revocation, by her graduate student Ninghui Li and herself.

Revocation is a method of managing risk rather than providing security. The paper makes several recommendations.

  1. The UI of a PKI should be clear and simple
  2. The difficulty of revocation is caused by temporal non-monotonicity
  3. A certificate shows that the issuer believed a statement at issue time t
  4. A revocation of a certificate should cancel the certificate and nothing else
  5. A certificate that serves diverse applications should have flexible revocation schemes
  6. The UI of a PKI should support auditing
  7. We recommend not allowing certificates to be put on hold, in order to simplify auditing

CRLs serve not to prove that a binding is valid but rather to update the freshness time of all valid certificates.

Stuart Stubblebine (sp?) suggests that he has previously published these axioms at a previous FC.

Again I think Ian's work would be helpful.

Next was a paper on Mutual Authentication for Low-Power Mobile Devices by Markus Jakobsson and David Pointcheval.

Mobile apps have strict power and computational limits. Also, mobile users interact with a range of switches, not all of which are trustworthy; thus mutual authentication is required. Using Schnorr, the computation can take place in pre-processing, and it enables mutual authentication.

Bob decrypts an El Gamal ciphertext to authenticate himself. Alice, who has low power, uses Schnorr to authenticate herself; for a designated server Alice can precompute everything. However, it seems to me that the server usually knows which mobile connection is coming, while the client is less aware of handoff management.

Here are a couple of other papers on wireless commerce: www-csl.cs.colorado.edu/csci7000/mobile_papers/peirce99.pdf and the O project, which I am geographically required to mention [PDF] info.lcs.mit.edu/data.ws/Research%20Highlights%20rev.%205.pdf. Notes on a wireless e-commerce conference which complement this paper are available at http://www.cs.dal.ca/~akerman/wireless.html.

_______________________

Rump Session Notes from Tuesday

The European Bridge-CA Connecting Existing PKIs

Bernhard.esslinger@db.com

Apps today assume PKI, but how do you connect multiple PKIs between businesses and consumers? In fact the problem is too many PKIs, and there are problems with all of them.

Efficient & Secure Protocol for Stock Market

Shin’ichiro Matsuo NTT

Proposes a broker-free market that uses PKI to identify the right of users to buy and sell.

Proposes hashing (stock, price) pairs and looking for matches; the search mechanism allows users to expose upper and lower bounds of a price range, and the search matches the highest acceptable seller price with the lowest acceptable buyer price. Provides fairness, verifiability, undeniability, and anonymity.

Claim: it will always find a Pareto-optimal price.

How to Leak a Secret is presented by Shamir

Starts with the relationship between finance and crypto, using the time = money email joke.

Seriously, the goal: an anonymous leak that verifies group membership (e.g. in the cabinet) without showing identity.

The Deep Throat protocol

Assume a PKI exists; all members have well-known keys

No trusted authorities

Efficient for large groups

The sender remains information-theoretically anonymous within the group

Sender can later prove he or she sent it

A group is defined by a set of keys <m1, ..., mn>

A member signature is a tuple (x1, ..., xn) satisfying

x1^2 (mod m1) XOR x2^2 (mod m2) XOR ... XOR xn^2 (mod mn) = h(m)

Proves group membership, sender can recreate later

NanoMint

Donald Beaver, CertCo Inc

Argues MicroMint would work with DNA computing, because volume would be critical and Moore's Law won't apply. How serious was he? I cannot say.

Exact Payments in Electronic Cash by Yiannis Tsiounis

CTO, co-founder of InternetCash Corp

Great outline and comparison of various Internet commerce mechanisms; I want to use the slides for my class. The set-up section is as good an overview of all the mechanisms as I have seen.

Selective Disclosure Envelopes presented by Rene Peralta from Yale

His research focus is to develop a primitive that allows selective elements of a message to be exposed by the owner

His goal is not unlike Incogno’s

He uses the complexity of constructing VLSI logic assuming only NAND gates: minimize as if for layout, and then count the order of uncertainty by delay (e.g. count 1 for every gate delay in series).

Self-Protecting Pirates presented by Aggelos Kiayias

A story about broadcast piracy

Suppose pirates are aware of tracing, and that they cooperate and compare outputs of the data streams.

Put keys outside box so that only pirates can cooperate in the pirate network

They cooperatively determine the data flow; distinct data results from attempts at tracing, so the pirates detect all attempts at tracing.

_________________________

WEDNESDAY FEBRUARY 21

The first session was concerned with examining single-use credit card numbers, which are a popular way of addressing credit card fraud over the net.

Off-line generation of limited-use credit card numbers

by Aviel Rubin and Rebecca Wright was presented by Rebecca. They argue that small changes in the infrastructure would allow much more security; given the 16-digit limit on credit card numbers, an attack on credit card numbers can fairly easily succeed.
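
A hedged sketch of the general idea of deriving limited-use numbers offline from a secret shared with the issuer; this is my own illustrative construction (field layout, prefix, and derivation are made up), not the Rubin/Wright scheme.

import hmac, hashlib

SECRET = b"shared-with-issuer"
ISSUER_PREFIX = "412345"          # hypothetical 6-digit BIN

def luhn_check_digit(payload):
    total = 0
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:            # digits adjacent to the check digit get doubled
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

def limited_use_number(counter):
    # issuer and cardholder derive the same 16-digit number from the shared secret
    mac = hmac.new(SECRET, counter.to_bytes(8, "big"), hashlib.sha256).digest()
    body = str(int.from_bytes(mac, "big") % 10**9).zfill(9)    # 9 derived digits
    payload = ISSUER_PREFIX + body
    return payload + str(luhn_check_digit(payload))

print(limited_use_number(1))      # one purchase, one number
print(limited_use_number(2))      # the next purchase uses a fresh number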

Audience: the change from 13- to 16-digit card numbers was a huge expense, and ANY change in infrastructure is massively expensive.

Matt: the benefit to the consumer is not only an increase in security but also privacy.

Shamir: the cost of infrastructure change is excessive? (I was in the back and could not clearly hear his comments)

A Security Framework for Card-based systems presented by Yiannis Tsiounis

Offers a formal model of credit cards and proposes a cardsec product.

Discusses Internet cash

With card-present transactions, one cannot expand the space and create card numbers, and currently the signature authenticates.

In contrast, MOTO transactions (MOTO stands for mail order/telephone order; it means card-not-present) have problems. They are forgeable; although the credit card space is unexpandable, the amount can be changed; and the credit card number can be used multiple times.

The formal ideal is that one cannot expand the space and create card numbers, and that a PIN authenticates.

This would leave MOTO transactions still forgeable (at least the space is unexpandable), yet the amount can still be changed. As long as reuse requires the PIN, the merchant can replay a transaction if the merchant sees the PIN.

Credit cards would comply with the security definition IF the combination of credit card number and PIN were long enough and always kept secret from the merchant.

Thus the proposal for InternetCash cards offers a card ID of 9 digits base 32, a card security code of 11 digits base 32, and requires the use of a user PIN of 4-8 characters. (Authentication and signature function: HMAC-SHA1.) Total security is 75 bits.
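
A quick sanity check on where the 75 bits presumably comes from (my reading: 5 bits per base-32 character, with the minimum 4-character PIN drawn from the same 32-symbol alphabet):

from math import log2

security_code_bits = 11 * log2(32)     # 55 bits from the 11-character security code
pin_bits = 4 * log2(32)                # ~20 bits, assuming a 4-char PIN over a 32-char alphabet
print(security_code_bits + pin_bits)   # 75.0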

Paper includes cost analysis.

SecureClick: A Web Payment System with Disposable Credit Cards Numbers

Adi Shamir addressed the problem of developing a more secure payment system for the web.

Proposes a disposable credit card system. Soon it will be implemented and used for 1/2 of all cards in Israel. He is not a founder of the company but a security consultant. He discusses the manner in which it addresses security while still remaining feasible in the market. Practical stuff.

Internet Cash Payment Protocol

It offers the following:

Promises an update on the trial in Israel next year.

Panel on Internet Voting

Participants
Chair: Moti Yung
Avi Rubin AT&T
Ed Gerck Safevote
Ron Rivest MIT

Ron Rivest

Edison patent 90846 for mechanical voting for Congress was never adopted because it was "too fast" and did not fit the cultural realities. (For example, the voting period sometimes allows horse trading: constituent service, capital improvements, etc. are traded in order to build coalitions. While we may classify all of this as "pork", removing it is a political problem and not something to be done via technical fiat.)

MIT and Caltech are working on alternative voting technologies. The secure-platform problem is very serious. Buying and selling votes must be policed. Anonymous political contributions are currently limited by size; electronic systems should not enable political smurfing.

How can we bring financial crypto to bear on this problem? An absentee ballot is analogous to a one-time credit card number; casting a vote is like spending an electronic coin.

Integrity is MOST IMPORTANT. Auditing is incredibly important. The lack of uniform technology makes systemic fraud more difficult; heterogeneity is critical. Disabled voters are critical. Absentee ballots are critical.

The security community is not ready at this time to enable on-line voting.

Ron's favorite: fill-in bubbles with scanning at the polling site.

Ed Gerck, PhD

Ed is the CEO of SAFEVOTE. He spends time on cpsr-activists discussing voting so any questions can easily find a good discussion forum there. Ed thinks the technology is past ready.

Public votes must be: Anonymous, Secure, Reliable, Trusted

Acknowledges valid concerns that need to be addressed. Discusses the situation in Brazil, with zero fraud and 100% logging. Ed is very ready to talk about his system; check his site or engage him on the CPSR-activists email group.

VoteHere: Andy Neff

At the least, electronic voting should not be worse. It's going to happen no matter what we do.

There are classes of electronic voting not to be confused: e-voting includes digital voting, DRE machines, and optical scanning machines.

Louisiana voting fraud in 1999, where someone had altered the DRE machines' software.

i-voting, in contrast, includes voting over the network; using generic platforms

There have been military votes for president using a system provided by Booz Allen: 84 votes at a cost of about $8,000,000. It used a full PKI. After authentication, personnel selected their choices, which were transmitted over SSL. It created a paper record.

Arguments Against I-voting: Digital Divide; Security; Loss of community

The events in the election of 2000 put perhaps too much light onto the problem, so that changes may be made for the sake of quick change. There is a risk that the better technology will be lost.

Successful trials in WA, FL, OH, IA, VA, AK, AZ.

Refused to do the AZ Democratic vote or the LA Republican party vote because of lack of security. Proposes that the problems in those systems were the result of the consultants hired and their lack of strict technical integrity.

Fundamental distinction in SafeVote. SafeVote is built on a subscription model, in that the SafeVote people remain to operate the machines and provide oversight. Safe Vote tries to address some of the human problems with the voting system.

Avi Rubin, AT&T

Approach it as an engineering problem with the realities of the installed base.

Threat variables: election type, expertise skills, resources needed to disrupt, motivation of potential attackers, disruption necessary to sway the election, Internet voting, technical awareness.

Issues:

Technical issues:

Registration

Wintel is installed; it's real, it's flawed, and it exists. Think Back Orifice. It is totally inadequate as a voting machine. The Internet is totally inadequate as a network. The ratio of the cost of disruption to the result of the disruption is far too low.

Bob Hettinga (sp?) wants to talk about finance and voting, not public voting, because this is political cryptography, not financial cryptography.

Matt Blaze says fraud and error are quite possible, but ordinarily the difficulty of accomplishing fraud is proportional to the amount of fraud you can perform, whereas networked systems have catastrophic failures. We do not need to accept the idea that Internet voting is what we are going to have. Our role is not to make it happen but rather to prevent it from happening.

I made a very strong statement, noting that some of the factual statements were wrong. For example, Andy Neff said that there was a divide in plumbing and that was fine. In fact the right was passionately opposed to publicly funded water systems; cholera epidemics finally motivated widespread investment in public water supplies. I accused him of political autism. My point was meant to be that the distribution of risk in a voting system is deeply social and political. While those debates about risk distribution should be informed by technical understanding, making technical decisions about risk distribution is fundamentally political and social. (My comments were scathing and I meant them to be far less so. I hate it when I do that. Andy Neff is a committed, intelligent, ethical person. But still I think the problem of specifications is an (admittedly irrational) political one, not a strictly scientific one. Having technical expertise in a democracy can be frustrating because people have the right to be wrong.)

The discussion was so passionate that it was determined that the other panel will meet after lunch.

Q: Do we understand the systems that do exist? Is it the nature of paper itself that gives us a false sense of confidence? Can we inspect a system? Do we understand all modes of failure? Does having four people look at the number make it right?

A speaker said that technical people struggling to do the right thing should not be scathed.

Stefan Brands has put forward a proposal for shared-key systems. He proposes this would provide anonymity yet certification for voting.

Andy Neff offers threshold privacy, where the VoteHere system offers threshold voting.

I missed the post-lunch session.

THURSDAY THE LAST DAY

The first paper of the last day began as I was getting my tea, so I do not know the name of the presenter. Monotone Signatures is by David Naccache, David Pointcheval, and Christophe Tymen.

Predicates are monotone if, for any input x, Pn(x) => Pn-1(x) => ... => P1(x).

Monotone signatures require a key generation algorithm, a signing algorithm, and a list of n monotone verifying algorithms. This offers the following properties: completeness; soundness (no existential forgery); and indistinguishability (missing public keys should not change the distribution of the t+1 certificate).

In most systems the idea is that keys are kept and that revocation is an unusual event. In this system the possibility/reality of fraud is addressed in the design by depending on a predicate and key generating function as well as creating messages such that the verifier can determine on which dates they may be valid but the attacker cannot.

Properties of Schnorr: preventing n corruptions requires n random values, and clearly this is incredibly expensive. Okamoto-Schnorr improves on this cost. MSS offers improvements by hiding the relationship between the key generation and signing algorithms.

Rivest suggests that this is similar to the revealed predicate in MicroMint.

Shamir says signature size has an upper bound of k.

A: only the verifier and signer need to know k; it is hidden.

The Power of RSA Inversion oracles and the security of Chaum’s RSA-Based Blind Signature Scheme was presented by C. Namprempre (who goes by Miao)

Critical topics in coin creation: unforgeability and anonymity.

RSA is secure, but other than noting that it is a one-way function, the basis of RSA security in a transactional system is not well understood, and pinning it down is important in order to evaluate digital signature schemes. The work builds on the notion of one-more forgery, where the forger can obtain q signatures and is successful if the forger can create q+1 coins; i.e., the forger is assumed to have access to the signing oracle.

RSA is homomorphic, E(a)*E(b) = E(ab), thus coins must be constructed to remove this homomorphism. Chaum's blind signatures were quickly reviewed in her talk, as is necessary for most audiences.
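
For reference, a minimal Chaum-style RSA blind signature built on exactly that homomorphism (toy Mersenne primes so it runs instantly; obviously not a secure modulus):

import hashlib, random
from math import gcd

p, q = 2**61 - 1, 2**31 - 1          # both prime; illustration only
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))    # the bank's signing exponent

m = int.from_bytes(hashlib.sha256(b"coin#1").digest(), "big") % n

r = random.randrange(2, n)           # blinding factor
while gcd(r, n) != 1:
    r = random.randrange(2, n)
blinded = (m * pow(r, e, n)) % n     # user blinds the coin before sending it to the bank

blind_sig = pow(blinded, d, n)       # bank signs blindly; it never sees m

sig = (blind_sig * pow(r, -1, n)) % n   # user unblinds; result is an ordinary RSA signature on m
assert pow(sig, e, n) == m              # verifiable by anyone, unlinkable by the bank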

The problem the paper addresses is that the Chosen Target Inversion (CTI) problem might be easy to solve, since there is no underlying proof reducing it to a standard hardness assumption about RSA. If Chaum's signature protocol is forgeable, then the CTI problem can be solved in polynomial time; so, assuming the CTI problem is hard, they have proven the unforgeability of Chaum's scheme. RSA inversion problems such as the CTI problem capture new aspects of RSA and are of independent interest.

No questions.

Optimistic Fair-Exchange with Transparent Signature Recovery

Presented by a colleague, neither author could make the conference.

Transparent means that whatever occurs the participants at the end will have correct signatures. An example of fairness: airline ticket is provided bit-by-bit as is payment.

The Trusted Third Party should be used only during set-up and dispute resolution. The TTP can either give an affidavit or be transparent. (Recommends the work of Asokan.)

A non-English exchange with Shamir; for my purposes, an encrypted exchange.

The transaction is as follows:

P -> C: encrypted item and provider's commitment
C -> P: client's committed signature on the item's description
P -> C: item and provider's final signature on the item's description
C -> P: client's final signature on the item's description

The TTP will produce final signatures from the committed signatures and will transmit the item to the client or the payment to the provider.

Maintains that this provides a high level of atomicity.

Very similar in goals to the work on NetBill by Sirbu, Tygar, etc. The difference between them may be in efficiency: NetBill requires one DES and two public-key encryption operations, while this scheme requires no TTP operations except a read in the case of default. This seems to have traded efficiency for grace in design. However, it is more efficient than a zero-knowledge protocol. It substitutes computation for communication, and it assumes the identity of the consumer for purposes of dispute resolution.

I believe that anonymity is sacrificed for dispute resolution. However, the authors were not there, so they could not address my concerns. My big thing in e-commerce was anonymous rollback and a system which would not allow framing of consumers even if all security parameters failed. It's at http://www.ljean.net/acd/acd.html in an old form; a newer one should soon be at http://www.ljean.net/tse.html.

Quick auction review, common to the two following papers:

English: buyers bid up

Dutch: sellers bid down

Sealed bid auctions provide privacy of bidders

Vickrey Auction:

Sealed-bid auctions include first-price and second-price auctions. In the notation below, first price is M=0 and second price is M=1.

M items, n > M bidders. Sort the bids in decreasing order; the M highest bidders win, and every winner pays the (M+1)st-highest bid (a plain version is sketched below).
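
For orientation, the plain (non-private) version of the (M+1)st-price rule that the two following papers then try to compute without revealing the losing bids:

def mplus1_price_auction(bids, M):
    # M items: the M highest bidders win and every winner pays the (M+1)st-highest bid
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winners = order[:M]
    clearing_price = bids[order[M]]
    return winners, clearing_price

print(mplus1_price_auction([40, 75, 60, 55, 90], M=2))   # bidders 4 and 1 win, both pay 60
# with one item (M=1) this is exactly the second-price (Vickrey) auction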

(M+1)st Price Auction

Presented by Hiroaki Kikuchi, who offered his email as Kikn@p.u-tokai.ac.jp.

Identities and specific bids are usually exposed even in an M>0 auction. This information is not necessary and can be hidden for all auctions. Hiroaki offers an anonymous sealed-bid mechanism.

Threat models: dishonest bidders, who can conspire to control winning prices (non-repudiation is required to prevent bidder conspiracy); an auctioneer who leaks information; and blackmail, i.e. forced bidder conspiracy.

Literature review: work has been done on Dutch auctions, on distributed auctioneers/trust in auctioneers, and on oblivious evaluators.

Here focus on distributed approaches.

Idea: use the homomorphism of polynomial addition. The degree of a polynomial represents a bid; calculate the bid of largest degree. Bidders calculate shares of their polynomials (as in secret sharing), and each auctioneer holds some integer evaluation point x.

Using verifiable secret sharing, bidders can detect cheating by auctioneers, and auctioneers can prove correctness of the results. An additional polynomial identifies the winners. The prices offered by the individual bidders are never known, but the price paid by the winners is known ex post. Since this works for the Vickrey auction, it works for all such auctions.
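
A hedged sketch of that degree-encoding idea as I understood it (this omits the verifiable secret sharing and the winner-identification polynomial; the field, the points, and the names are mine):

import random

q = 2**31 - 1                                  # prime field for the shares

def rand_poly(degree):
    # random polynomial of exactly this degree (non-zero leading coefficient)
    return [random.randrange(q) for _ in range(degree)] + [random.randrange(1, q)]

def eval_poly(coeffs, x):
    return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q

def lagrange_eval(points, x):
    # value at x of the unique polynomial through `points`, over GF(q)
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % q
                den = den * (xi - xj) % q
        total = (total + yi * num * pow(den, q - 2, q)) % q
    return total

def observed_degree(points):
    # smallest d such that the first d+1 points already determine all the others
    for d in range(len(points) - 1):
        base, rest = points[:d + 1], points[d + 1:]
        if all(lagrange_eval(base, x) == y for x, y in rest):
            return d
    return len(points) - 1

bids = [3, 7, 5]                               # bids, encoded as polynomial degrees
xs = list(range(1, 12))                        # one evaluation point per auctioneer
polys = [rand_poly(b) for b in bids]
# each auctioneer sums the shares it received from every bidder
summed = [(x, sum(eval_poly(p, x) for p in polys) % q) for x in xs]
print(observed_degree(summed))                 # 7: the highest bid, with no individual bid revealed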

Sorry, couldn’t hear the question. Still sitting in back next to my outlet.

Non-interactive Private Auctions presented by Olivier Baudron

Very good overview of previously published approaches to auctions. Bids are encrypted via predicates, and the protocol maximizes over each predicate via bit-comparison of the predicates; the winner receives zero from the predicate comparisons and proves submission of the highest bid.

For Vickrey auctions the protocol is run repeatedly until the jth winner is reached. This is more expensive than the previous protocol and exposes more information. For Dutch and English auctions it is an innovative design.

I am definitely coming to this conference next year.