Regulatory lags for New Technology [2013 notes]

I found some old notes from June 2013 on time delays in how fast one can expect Western political systems & legislators to respond to new technical developments.

In general, response is slow, on the order of political cycles; one implication I take away is that an AI takeoff could happen over half a decade or more without any meaningful political control, and so would effectively be a ‘fast takeoff’, especially if the AI avoids any obvious mistakes.

1 Regulatory lag

“Regulatory delay” is the gap between the feasibility of a new technology or method and the specific action by regulators or legislatures required to permit it; “regulatory lag” is the converse: the gap between feasibility and reactive regulation of the new technology. Computer software (and artificial intelligence in particular) is mostly unregulated, so it is subject to lag rather than delay.

Unfortunately, almost all research seems to focus on modeling lags in the context of heavily regulated industries (especially natural monopolies such as utilities, or insurance), and little focuses on compiling data on how long a lag can be expected between a new innovation or technology and its regulation. As one would expect, the few results point to lags on the order of years; for example, Ippolito 1979 (“The Effects of Price Regulation in the Automobile Insurance Industry”) finds that the period of price changes goes from 11 months in unregulated US states to 21 months in regulated states, suggesting the regulatory framework itself adds a lag of almost a year.

Below, I cover some specific examples, attempting to estimate the lags myself:

(Nuclear weapons would be an interesting example, but it’s hard to say what ‘lag’ would be inasmuch as they were born in government control and are subject to no meaningful global control; however, if the early proposals for a world government or unified nuclear weapon organization had gone through, they would also have represented a lag of at least 5 years.)

1.1 Hacking

Computer hacking existed for quite a while before relevant laws were passed and serious law enforcement began; the latter is typically dated to Operation Sundevil:

Prior to 1990, people who manipulated telecommunication systems, known as phreakers, were generally not prosecuted within the United States. The majority of phreakers used software to obtain calling card numbers and built simple tone devices in order to make free telephone calls. A small elite, and highly technical, segment of phreakers were more interested in information about the inner workings of the telecommunication system than in making free phone calls. Phone companies complained of financial losses from phreaking activities. The switch from analog to digital equipment began to expose more of the inner workings of telephone companies as hackers began to explore the inner workings, switches and trunks. Due to a lack of laws and expertise on the part of American law enforcement, few cases against hackers were prosecuted until Operation Sundevil.

However, starting in 1989, the US Secret Service (USSS), which had been given authority from Congress to deal with access device fraud as an extension of wire fraud investigations under Title 18 (§ 1029), began to investigate. Over the course of the 18-month-long investigation, the USSS gathered alleged evidence of rampant credit card and calling card fraud over state lines.

This gives a time-delay of decades from the first phreaks (eg. Steve Jobs & Wozniak selling blue boxes in 1971, after blue boxing was discovered in the mid-60s) to the mid-1980s, with the passage of the Computer Fraud and Abuse Act & Computer Security Act of 1987; prosecution was sporadic and light even after that: for example, Julian Assange (as ‘Mendax’) was raided in 1991 and ultimately released with a fine in 1995. (Since then, at least 49 states have passed laws dealing with hacking, with an international convention spreading post-2001.)

1.2 High frequency trading

HFT, while apparently only becoming possible in 1998, was marginal up until 2005, when it grew dramatically and became controversial with the 2010 flash crash and the 2012 Knight Capital fiasco. Early SEC rule-changes did little to address the issue; no US legislation has been passed, or appears viable given Wall Street lobbying. European Parliament legislation is pending, but highly controversial, with heavy lobbying from London. Otherwise, legislation has been passed only in places that are irrelevant (eg. Germany). Given resistance in NYC & London, and the slow movement of the SEC, there will not be significant HFT regulation (for better or worse) for years to come, and it is likely irrelevant as the area matures and excess profits disappear—“How the Robots Lost: High-Frequency Trading’s Rise and Fall”.

“Insight: Chicago Fed warned on high-frequency trading”, Reuters:

More than two years ago, the Federal Reserve Bank of Chicago was pushing the Securities and Exchange Commission to get serious about the dangers of super-fast computer-driven trading. Only now is the SEC getting around to taking a closer look at some of those issues…Even as the SEC gears up for a meeting on Tuesday to discuss software glitches and how to tame rapid-fire trading, the eighth public forum it has had in two years on market structure issues, regulators in Canada, Australia and Germany are moving ahead with plans to introduce speed limits to safeguard markets from the machines…To be sure, it is not as if the SEC has simply stood idly by and allowed the machines to run amok. The agency did put in place some new safeguards such as circuit breakers on stocks, after the May 2010 flash crash. The circuit breakers are intended to prevent a market-wide crash by briefly halting trading in particular stocks displaying sharp price moves within a 5-minute window, giving the algorithms a chance to let go of trading patterns that may have turned into vicious cycles…And recently, the SEC fined the New York Stock Exchange’s operator, NYSE Euronext, $5 million for allegedly giving some customers “an improper head start” on proprietary trading information.

“High speed trading begets high speed regulation: SEC response to flash crash, rash”, Serritella 2010:

The SEC has been quick to react to the Flash Crash, determined to avoid such market disruptions in the future. On June 10, 2010, the SEC published new rules (Rules), which require trading centers to halt trading in certain individual securities and derivatives if pricing thresholds are reached. Until December 10, 2010, the Rules are in a pilot period so that they may be adjusted and expanded.

…The SEC published its new Rules slightly over a month after the Flash Crash and did so in an expedited manner in order “to prevent a recurrence” of the May 6, 2010 market disruptions. RULES, supra note 5, at 4. “The Commission believes that accelerating approval of these proposals is appropriate as it will enable the Exchanges nearly immediately to begin coordinating trading pauses across markets in the event of sudden changes in the value of the S&P 500 Index stocks.” Id. at 12. The Commission was “concerned that events such as those that occurred on May 6 can seriously undermine the integrity of the U.S. securities markets. Accordingly, it is working on a variety of fronts to assess the causes and contributing factors of the May 6 market disruption and to fashion policy responses that will help prevent a recurrence.” Id. at 4.

…The new Rules tighten the thresholds and, for the first time, centralize the control of circuit breakers. Circuit breakers simply refer to the ability of exchanges to temporarily halt trading in a security or derivative to avert sell-offs during periods of extreme downward pressure, or to close the markets before the end of the normal trading day. While, previously, the exchanges each controlled their own circuit breakers, they all generally adhered to the thresholds and formulas set forth in NYSE Rule 80B. Rule 80B has three different thresholds (10%, 20% and 30%), each of which is tied to the DJIA and, if met, would result in a “time out” to market activity altogether on any exchange to execute a circuit breaker mechanism. Despite the extreme price movements of May 6, 2010, the circuit breakers’ lowest threshold was not met, as, at its worst point in the Flash Crash, the DJIA was down 9.16%, lower than the 10% drop required to trigger the circuit breakers under Rule 80B. The SEC, recognizing that using the DJIA as a benchmark for circuit breakers may obscure extreme price movements in individual securities or derivatives, has extended the utility of circuit breakers to target individual securities and derivatives. Under the new Rules, the exchanges are required to issue five minute trading halts in a security if the price of that security moves at least 10% in either direction from its price in the preceding five minute period. To avoid interfering with the openings and closings of markets, these requirements are only in force from 9:45 a.m. to 3:35 p.m. The Rules do not displace Rule 80B’s mandates; rather, they supplement their preexisting coverage with the ability to target individual securities whose volatility may not have enough of an effect on the DJIA to otherwise trigger a circuit breaker.

…While still in their infancy, the new Rules suffer from crucial limitations which threaten their efficacy, not the least of which is that they only apply to stocks in the S&P 500 and Russell 1000 indexes as well as select derivatives…For example, since the SEC’s new circuit breaker requirements do not apply to all securities and derivatives, it is possible that trading could be halted in a given security while sales in one of its derivatives continue unabated, thus frustrating the exchanges’ congressional mandate to promote market integrity and protect investors, as well as fostering a disconnect between the prices of the derivative (an ETF, for example) and its underlying trading-halted security.
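The single-stock halt rule described in the excerpt is mechanically simple; a minimal sketch, assuming a single price sample from five minutes earlier (an illustrative toy, not the exchanges’ actual implementation):

```python
from datetime import time

def should_halt(price_now: float, price_5min_ago: float, clock: time) -> bool:
    """Toy version of the SEC's 2010 single-stock circuit breaker:
    halt for 5 minutes if the price moved >=10% (in either direction)
    from its level 5 minutes earlier, but only between 9:45 and 15:35."""
    if not (time(9, 45) <= clock <= time(15, 35)):
        return False  # rule not in force near the open/close
    move = abs(price_now - price_5min_ago) / price_5min_ago
    return move >= 0.10

# The Flash Crash's worst market-wide DJIA drop (9.16%) was below the
# old 10% Rule 80B threshold; per-stock, an 11% move would now halt:
print(should_halt(90.84, 100.0, time(14, 45)))  # False (9.16% < 10%)
print(should_halt(89.00, 100.0, time(14, 45)))  # True (11% >= 10%)
```

Note the time-of-day guard reflects the Rules’ exemption of the first 15 and last 25 minutes of the trading day, as described above.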

“U.S. Leads in High-Frequency Trading, Trails in Rules”, Bloomberg editors op-ed:

In contrast to this go-slow approach, Germany, Canada, Australia and the European Union are taking up some of the tools the U.S. should consider to keep computerized trading from running amok. To cite a few examples:

  • In Germany, legislation is pending to require high-frequency traders to register so regulators can better track their market moves.

  • Canada charges fees to firms that attempt to clog markets with buy and sell orders, as well as cancellations, a practice known as quote stuffing. High-frequency trading firms sometimes do this to overload the less-sophisticated trading systems of rivals and exploit minuscule and fleeting price discrepancies.

  • Australia will ask trading firms to conduct stress tests to gauge how they deal with market shocks.

  • The EU is reviewing a number of measures including one that would require a trading firm to honor a bid for half a second, a lifetime in a market where trades can be executed in microseconds.

It wouldn’t hurt if U.S. regulators took a look at these options and considered a few others, as well.

“SEC Leads From Behind as High-Frequency Trading Shows Data Gap”, Businessweek:

The U.S. Securities and Exchange Commission, stung by criticism that it lacks the knowledge to analyze the computerized trading that has come to dominate American stock markets, is planning to catch up. Initiatives to increase the breadth of data received from exchanges and to record orders from origination to execution are at the center of the effort. Gregg Berman, who holds a doctorate in physics from Princeton University, will head the commission’s planned office of analytics and research…“It’s amazing it’s taken 30 years,” Weild, a former vice chairman of NASDAQ Stock Market, said in a phone interview. “Meanwhile, there’s been an arms race on Wall Street and the SEC is outclassed in its ability to reconstruct events and look for vulnerabilities.”…The audit trail won’t be in place for several years and the industry hasn’t figured out how much it will cost and who will pay for it. Midas [overnight analysis] will be fully rolled out by the end of 2012, Berman said. It won’t include information about the one-third of trading that occurs away from exchanges.

“As SEC Listens to HFT and Exchanges, Europe Drives Discussion”:

In the City of Lon­don and Euro­pean trad­ing cen­ters, the wait is on for the pub­li­ca­tion this week of the reg­u­la­tory pro­pos­als pre­viewed last week by the Euro­pean Par­li­a­ment’s Eco­nomic and Mone­tary Af­fairs Com­mit­tee, which sent shock­waves through the wor­ld­wide HFT com­mu­nity. The Com­mit­tee pro­posed a man­dated half-sec­ond freeze for all trad­ing or­ders, not only in equities, but ev­ery mar­ket, in­clud­ing fixed in­come and other as­set classes. While the re­quire­ment to keep all trades al­ive for at least 500 mil­lisec­onds went far be­yond what was ex­pected in terms of or­ders, mar­kets an­a­lyst Re­becca Healey told MNI there is be­lieved to be an even more game-chang­ing pro­posal in the doc­u­ment it­self, one that could threaten the use of so-called dark pools by in­vestors.

“City of Lon­don op­poses tighter reg­u­la­tion of high-fre­quency trad­ing”, Fi­nan­cial News:

MEPs [Members of European Parliament] this week unanimously voted through rules that will severely limit the controversial share-trading practice of high frequency trading, as part of a financial sector reform bill…Members of the economic and monetary affairs committee voted through proposals under the revised Markets in Financial Instruments Directive legislation—known as Mifid 2…Mifid 2 also includes measures designed to limit speculation on commodity markets, which has been blamed for distorting food prices and harming the world’s poorest populations. And there are rules aimed at protecting investors from being sold inappropriate products. Last week the Bureau reported that MEPs were planning tough new curbs on HFT, despite stiff opposition from the City of London and the wider financial sector. Stock exchanges, which receive a significant portion of their income from trading fees and high-tech services for HFT, were said to be ‘vocal’ about their desire to avoid regulation. UK-based stock exchanges have argued that they already have circuit breakers so there is no need to officially mandate them…But it would be premature to announce HFT’s demise in Europe: the bill still has a way to go before it becomes law. The three-tiered structure of European lawmaking means that the legislation was previously passed by the European Commission and draft legislation will now go before the European Union’s finance ministers. The three versions will then be reconciled through a ‘trialogue’ process.

The move to put the brakes on HFT across Europe has met stiff opposition from the City of London and the UK government. ‘We must be careful not to introduce measures based on the assumption that high frequency trading is, per se, harmful to markets,’ warned the Financial Services Authority (FSA), responding to the draft legislation on behalf of the government. It rejected several of the proposed measures. The British Bankers’ Association described the requirement for some traders to become market makers as ‘particularly onerous’. The Treasury supports HFT, arguing it brings liquidity to markets and reduces costs. Exchanges and the HFT industry have campaigned against the measures in public and behind closed doors. Exchanges have been ‘very vocal’ about HFT, and a key priority has been to ‘preserve all HFT’, Kay Swinburne told the Bureau…Since January 2010, the LSE and its lobbyists, City law firm Freshfields, have met MEPs from the three main British parties at least 15 times to discuss Mifid and similar legislation, lobbying registers show. Freshfields’ public affairs director Christiaan Smits met with the Conservatives’ lead on Mifid, Dr Kay Swinburne, eight times in two years on behalf of the LSE. Other intense discussions have been going on behind the scenes. In total, exchanges including Nasdaq, Deutsche Borse, NYSE Euronext, Bats Europe and Chi-X, and public affairs firms hired by them, have met with British MEPs at least 49 times over the period to discuss Mifid 2. Fleishman Hillard earned up to €50,000 (£40,000) each last year from representing exchange Chi-X and trading platform Equiduct, as well as up to €150,000 representing investment company Citadel, which has a substantial HFT arm.
Specialist HFT companies have banded together to form a campaign group, the FIA European Principal Traders’ Association (Epta), which has issued position papers and lobbied politicians in the UK and EU…Epta paid Brussels-based lobbyist Hume Brophy at least €100,000 last year to make its case in Brussels, EU lobbying registers show.

“Yet again, the UK government has sided with the robotraders on a Robin Hood Tax”, New Statesman:

Yet as the Bureau for Investigative Journalism revealed last week, of a 31-member panel tasked by the UK Government to assess Mifid II, 22 members were from the financial services, 16 linked to the HFT industry. A study by the Bureau last year revealed that over half the funding for the Conservative Party came from the financial sector, 27 per cent coming from hedge funds, financiers and private equity firms. This perhaps helps explain how the interests of a select group of traders get confused with the interests of the economy as a whole…Yet the UK Government has again chosen to stand apart in blocking a Europe-wide FTT, turning down billions in desperately needed revenue that could help save jobs, protect the poorest and avoid the worst in cuts to public services. Instead, the Government heeded the advice of previous Party Treasurers Michael Spencer and Peter Cruddas, who infamously lobbied against the FTT. Both incidentally own multi-million pound financial firms which would be hit by such a tax.

“French Fin Min: Need regulation of high frequency trading”, FrenchTribune.com:

The Finance Minister of France, Christine Lagarde, said on Thursday that there is a requirement for more regulation of high frequency trading, with the majority of the effects seeming to be negative and resulting in artificial moves in the markets…However, she is having conflicts with UK’s Financial Services Authority regarding the regulation of high frequency trading firms, which make use of automated software and super-fast telecommunications networks which are capable of trading in milliseconds.

“German government to propose tighter regulation of high-frequency trading”, Washington Post:

Germany’s Finance Ministry said Tuesday a draft law will be considered by Chancellor Angela Merkel’s Cabinet on Wednesday. The bill would require traders to get special permission before they can deploy computers to carry out millions of trades a second to exploit split-penny price differences. Such trades would also have to be specially labeled and stock exchanges would need to ensure trading can quickly be suspended when an error occurs.

“Super funds want computer trading checks”:

The $46 billion AustralianSuper said HFT could make a “positive contribution” to financial markets but some strategies were designed to exploit other participants and harmed market integrity. “HFT strategies that are manipulative in nature . . . are problematic and ultimately raise the cost of investing and unfairly redistribute profits,” said Innes McKeand, AustralianSuper head of equities…Associations such as Industry Super Network called for a crackdown on HFT before it became dominant in Australia. The Australian Council of Trade Unions also asked for a ban on HFT until regulators “completed a detailed assessment” of the role of such trades. Russell Investments head of implementation in Australia Adam van Ness said HFT volumes were increasing locally. However, there was no straight answer as to whether HFT helped or hindered investors and traders. “It’s an open debate of whether it’s good or bad,” said Ness. The Australian Securities and Investments Commission “is taking a closer look at this—the outcome of which will make [HFT] a bit more restrictive here”…“In particular, a kill-switch requirement might have limited the extent of the Knight Capital losses as it could have facilitated a speedier termination of faulty orders.”

“High frequency trading and its impact on market quality”, Brogaard 2010:

Congress and regulators have begun to take notice and vocalize concern with HFT. The Securities and Exchange Commission (SEC) issued a Concept Release regarding the topic on January 14, 2010, requesting feedback on how HFTs operate and what benefits and costs they bring with them (SEC, January 14, 2010). The Dodd-Frank Wall Street Reform and Consumer Protection Act calls for an in-depth study on HFT (Section 967(2)(D)). The Commodity Futures Trading Commission (CFTC) has created a technology advisory committee to address the development of high frequency trading. Talk of regulation on HFT has already begun. Given the lack of empirical foundation for such regulation, the framework for regulation is best summarized by Senator Ted Kaufman, “Whenever you have a lot of money, a lot of change, and no regulation, bad things happen” (Kardos and Patterson, January 18, 2010). There has been a proposal (House Resolution 1068) to impose a per-trade tax of 0.25%.
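For scale, a 0.25% per-trade tax would be enormous relative to typical HFT margins; a back-of-the-envelope comparison (the $0.001/share gross edge is my own hypothetical figure for illustration, not from the sources):

```python
# Toy arithmetic: compare a 0.25% per-trade tax against a hypothetical
# per-share edge of the kind latency-arbitrage strategies chase.
tax_rate = 0.0025          # proposed per-trade tax (H.R. 1068)
share_price = 100.0        # example $100 stock
tax_per_share = share_price * tax_rate   # tax owed per share traded

typical_edge = 0.001       # hypothetical $0.001/share gross edge
print(tax_per_share)                 # ~$0.25 per share, per trade
print(tax_per_share / typical_edge)  # tax is ~250x the edge
```

Under these (assumed) numbers, a strategy profitable by a tenth of a cent per share would pay roughly 250 times its gross edge in tax, which is why even small transaction taxes are expected to eliminate most HFT outright.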

“The rise of computerized high frequency trading: use and controversy”, McGowan 2010:

The dramatic increase in HFT is most likely due to its profitability. Despite the economic recession, high-frequency trading has been considered by many to be the biggest “cash cow” on Wall Street and it is estimated that it generates approximately $15–25 billion in revenue. [See Tyler Durden, “Goldman’s $4 Billion High Frequency Trading Wildcard”, ZEROHEDGE (Jul. 17, 2009, 2:16AM) (discussing estimates by the FIX Protocol, an organization that maintains a messaging standard for the real-time electronic exchange of securities transactions).]

Office space in such areas sometimes costs an astronomical amount, but firms are willing to pay for it. For instance, in Chicago, 6 square feet of space in the data center where the exchanges also house their computers can go for $2,000 or more a month. [See Moyer & Lambert, supra note 6 (stating that some trading firms even spend 100 times that much to house their servers).] Despite these high prices, the number of firms that co-locate at exchanges such as the NASDAQ has doubled over the last year. [Sal L. Arnuk & Joseph Saluzzi, “Toxic Equity Trading Order Flow on Wall Street: The Real Force Behind the Explosion in Volume and Volatility”, THEMIS TRADING LLC WHITE PAPER.]

Today, latency arbitragers use algorithms to create models of great complexity that can involve hundreds of securities in many different markets. This practice is highly lucrative. For instance, the financial markets research and advisory firm TABB Group has estimated that annual aggregate profits of low-latency arbitrage strategies exceed $21 billion, an amount which is spread out among the few hundred firms that deploy them. [See Iati, supra note 13 (quoting TABB Group’s estimate).]…Because of high frequency trading’s prominence, the next few years of changing regulations will be extremely interesting. The SEC has a very difficult job ahead of it in attempting to regulate these innovative practices while at the same time upholding the agency’s primary concerns: protecting the average investor and ensuring markets remain relatively efficient.

…Additionally, the lack of regulation on naked access allows a reckless high frequency trader to conceivably pump out hundreds of thousands of faulty orders in the two-minute period it typically takes to rectify a trading system glitch. [See Moyer & Lambert, supra note 6.] Sang Lee, a market analyst from Aite Group, believes that “[i]n the worst case scenario, electronic fat fingering or intentional trading fraud could take down not only the sponsored participants, but also the sponsoring broker and its counterparties, leading to an uncontrollable domino effect that would threaten overall systematic market stability.” Because of these doomsday scenarios and others advanced by some Democratic lawmakers, the SEC will most likely propose rules to limit this practice in the upcoming months.

“High-Speed Trading No Longer Hurtling Forward”, 14 October 2012:

Profits from high-speed trading in American stocks are on track to be, at most, $1.25 billion this year, down 35% from last year and 74% lower than the peak of about $4.9 billion in 2009, according to estimates from the brokerage firm Rosenblatt Securities. By comparison, Wells Fargo and JPMorgan Chase each earned more in the last quarter than the high-speed trading industry will earn this year. While no official data is kept on employment at the high-speed firms, interviews with more than a dozen industry participants suggest that firms large and small have been cutting staff, and in some cases have shut down. The firms also are accounting for a declining percentage of a shrinking pool of stock trading, from 61% three years ago to 51% now, according to the Tabb Group, a data firm…The challenges facing speed-focused firms are many, the biggest being the drop in trading volume on stock markets around the world in each of the last four years. This has made it harder to make profits for traders who quickly buy and sell shares offered by slower investors. In addition, traditional investors like mutual funds have adopted the high-speed industry’s automated strategies and moved some of their business away from the exchanges that are popular with high-speed traders. Meanwhile, the technological costs of shaving further milliseconds off trade times have become a bigger drain on many companies…At the same time that the firms are making trims, regulators around the world have increased their scrutiny of high-speed traders, and the structure of the financial markets has continued to shift.
Executives at the trading firms worry that new regulations could curtail business even more, but so far regulators in the United States have taken few steps to rein in trading practices…The contraction is also pushing the firms to move into trading of other financial assets, like international stocks and currencies. High-speed firms accounted for about 12% of all currency trading in 2010; this year, it is set to be up to 28%, according to the consulting firm Celent. But executives at several high-speed firms said that trading in currencies and other assets was not making up for the big declines in their traditional areas of United States stocks, futures and options. Sun Trading in Chicago bought a firm that allowed it to begin the automated trading of bonds earlier this year. That did not make up for the 40 employees the company cut in 2011.

1.3 Self-driving cars

LW post

The 2005 DARPA Grand Challenge, in which multiple vehicles completed the course, can be considered the first success inaugurating the modern era. The first legislation of any kind addressing autonomous cars was Nevada’s 2011 approval; since then, 5 states have passed legislation dealing with autonomous cars.

However, these laws are highly preliminary, and all the analyses I can find agree that they punt on the real legal issues of liability; they permit relatively little.

1.3.1 Lobbying, Liability, and Insurance

(Warning: legal analysis quoted at length in some excerpts.)

“Toward Robotic Cars”, Thrun 2010 (pre-Google):

Junior’s behavior is governed by a finite state machine, which provides for the possibility that common traffic rules may leave a robot without a legal option as to how to proceed. When that happens, the robot will eventually invoke its general-purpose path planner to find a solution, regardless of traffic rules. [Raising serious issues of liability related to potentially making people worse off]
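The architecture Thrun describes, a rule-bound controller with a rule-free planner as the escape hatch, can be sketched in a few lines; this is my own toy illustration of the control flow, not Junior’s actual code, and every name in it is hypothetical:

```python
# Toy sketch of a driving controller that obeys traffic rules until no
# legal maneuver exists, then falls back to a rule-free planner.
def plan_with_rules(situation):
    # hypothetical rule-bound planner: return the first legal maneuver, or None
    legal_moves = [m for m in situation["options"] if m["legal"]]
    return legal_moves[0]["name"] if legal_moves else None

def general_purpose_planner(situation):
    # hypothetical fallback: ignores legality entirely
    return situation["options"][0]["name"]

def decide(situation):
    move = plan_with_rules(situation)
    if move is None:
        # no legal option remains: this is where the liability issue arises
        move = general_purpose_planner(situation)
    return move

# A robot boxed in behind a double-yellow line has no legal option:
blocked = {"options": [{"name": "cross_double_yellow", "legal": False}]}
print(decide(blocked))  # falls through to the rule-free planner
```

The liability concern noted in brackets above lives entirely in the `if move is None` branch: the system is designed to break a traffic rule when its state machine exhausts legal options.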

“Google Cars Drive Themselves, in Traffic” (PDF), NYT 2010:

But the advent of autonomous vehicles poses thorny legal issues, the Google researchers acknowledged. Under current law, a human must be in control of a car at all times, but what does that mean if the human is not really paying attention as the car crosses through, say, a school zone, figuring that the robot is driving more safely than he would? And in the event of an accident, who would be liable—the person behind the wheel or the maker of the software?

“The technology is ahead of the law in many areas,” said Bernard Lu, senior staff counsel for the California Department of Motor Vehicles. “If you look at the vehicle code, there are dozens of laws pertaining to the driver of a vehicle, and they all presume to have a human being operating the vehicle.” The Google researchers said they had carefully examined California’s motor vehicle regulations and determined that because a human driver can override any error, the experimental cars are legal. Mr. Lu agreed.

“Calif. Greenlights Self-Driving Cars, But Legal Kinks Linger”:

For instance, if a self-driving car runs a red light and gets caught, who gets the ticket? “I don’t know—whoever owns the car, I would think. But we will work that out,” Gov. Brown said at the signing event for California’s bill to legalize and regulate the robotic cars. “That will be the easiest thing to work out.” Google co-founder Sergey Brin, who was also at the ceremony, jokingly said “self-driving cars don’t run red lights.” That may be true, but Bryant Walker Smith, who teaches a class at Stanford Law School this fall on the law supporting self-driving cars, says eventually one of these vehicles will get into an accident. When it does, he says, it’s not clear who will pay.

…Or is it the com­pany that wrote the soft­ware? Or the au­tomaker that built the car? When it came to as­sign­ing re­spon­si­bil­ity, Cal­ifor­nia de­cided that a self-driv­ing car would always have a hu­man op­er­a­tor. Even if that op­er­a­tor wasn’t ac­tu­ally in the car, that per­son would be legally re­spon­si­ble. It sounds straight­for­ward, but it’s not. Let’s say the op­er­a­tor of a self-driv­ing car is ine­bri­ated; he or she is still legally the op­er­a­tor, but the car is driv­ing it­self. “That was a de­ci­sion that de­part­ment made—that the op­er­a­tor would be sub­ject to the laws, in­clud­ing laws against driv­ing while in­tox­i­cated, even if the op­er­a­tor wasn’t there,” Walker Smith says…Still, is­sues sur­round­ing li­a­bil­ity and who is ul­ti­mately re­spon­si­ble when robots take the wheel are likely to re­main con­tentious. Already trial lawyers, in­sur­ers, au­tomak­ers and soft­ware en­g­ineers are queu­ing up to lobby rule-mak­ers in Cal­ifor­nia’s cap­i­tal.

“Google’s Driver­less Car Draws Poli­ti­cal Power: In­ter­net Gi­ant Hones Its Lob­by­ing Skills in State Capi­tols; Giv­ing Test Drives to Law­mak­ers”, WSJ, 12 Oc­to­ber 2012:

Over­all, Google spent nearly $9 mil­lion in the first half of 2012 lob­by­ing in Wash­ing­ton for a wide va­ri­ety of is­sues, in­clud­ing speak­ing to U.S. Depart­ment of Trans­porta­tion offi­cials and law­mak­ers about au­tonomous ve­hi­cle tech­nol­ogy, ac­cord­ing to fed­eral records, near­ing the $9.68 mil­lion it spent on lob­by­ing in all of 2011. It is un­clear how much Google has spent in to­tal on lob­by­ing state offi­cials; the com­pany doesn’t dis­close such data.

…In most states, autonomous vehicles are neither prohibited nor permitted, a key reason why Google's fleet of autonomous cars secretly drove more than 100,000 miles on the road before the company announced the initiative in fall 2010. Last month, Mr. Brin said he expects self-driving cars to be publicly available within five years.

In Jan­uary 2011, Mr. Gold­wa­ter ap­proached Ms. Don­dero Loop and the Ne­vada as­sem­bly trans­porta­tion com­mit­tee about propos­ing a bill to di­rect the state’s de­part­ment of mo­tor ve­hi­cles to draft reg­u­la­tions around the self-driv­ing ve­hi­cles. “We’re not say­ing, ‘Put this on the road,’” he said he told the law­mak­ers. “We’re say­ing, ‘This is le­gi­t­i­mate tech­nol­ogy,’ and we’re let­ting the DMV test it and cer­tify it.” Fol­low­ing the Ne­vada bill’s pas­sage, leg­is­la­tors from other states be­gan show­ing in­ter­est in similar leg­is­la­tion. So Google re­peated its origi­nal recipe and added an ex­tra in­gre­di­ent: giv­ing law­mak­ers the chance to ride in one of its about a dozen self-driv­ing cars…In Cal­ifor­nia, an au­tonomous-ve­hi­cle bill be­came law last month de­spite op­po­si­tion from the Alli­ance of Au­to­mo­bile Man­u­fac­tur­ers, which in­cludes 12 top auto mak­ers such as GM, BMW and Toy­ota. The group had ap­proved of the Florida bill. Dan Gage, a spokesman for the group, said the Cal­ifor­nia leg­is­la­tion would al­low com­pa­nies and in­di­vi­d­u­als to mod­ify ex­ist­ing ve­hi­cles with self-driv­ing tech­nol­ogy that could be faulty, and that auto mak­ers wouldn’t be legally pro­tected from re­sult­ing law­suits. “They’re not all Google, and they could con­vert our ve­hi­cles in a man­ner not in­tended,” Mr. Gage said. But Google helped push the bill through af­ter spend­ing about $140,000 over the past year to lobby leg­is­la­tors and Cal­ifor­nia agen­cies, ac­cord­ing to pub­lic records

As with Cal­ifor­nia’s re­cently en­acted law, Cheh’s [Wash­ing­ton D.C.] bill re­quires that a li­censed driver be pre­sent in the driver’s seat of these ve­hi­cles. While seem­ingly in­con­se­quen­tial, this effec­tively out­laws one of the more promis­ing func­tions of au­tonomous ve­hi­cle tech­nol­ogy: al­low­ing dis­abled peo­ple to en­joy the per­sonal mo­bil­ity that most peo­ple take for granted. Google high­lighted this benefit when one of its driver­less cars drove a legally blind man to a Taco Bell. Bizarrely, Cheh’s bill also re­quires that au­tonomous ve­hi­cles op­er­ate only on al­ter­na­tive fuels. While the Google Self-Driv­ing Car may man­i­fest it­self as an eco-con­scious Prius, self-driv­ing ve­hi­cle tech­nol­ogy has noth­ing to do with hy­brids, plug-in electrics or ve­hi­cles fueled with nat­u­ral gas. The tech­nol­ogy does not de­pend on ve­hi­cle make or model, but Cheh is seek­ing to man­date as much. That could de­lay the tech­nol­ogy’s wide­spread adop­tion for no good rea­son…Another flaw in Cheh’s bill is that it would im­pose a spe­cial tax on drivers of au­tonomous ve­hi­cles. In­stead of pay­ing fuel taxes, “Own­ers of au­tonomous ve­hi­cles shall pay a ve­hi­cle-miles trav­el­led (VMT) fee of 1.875 cents per mile.” Ad­minis­tra­tive de­tails aside, a VMT tax would re­quire drivers to in­stall a record­ing de­vice to be pe­ri­od­i­cally au­dited by the gov­ern­ment. There may be good rea­sons to re­place fuel taxes with VMT fees, but greatly re­strict­ing the use of a po­ten­tially rev­olu­tion­ary new tech­nol­ogy by singling it out for a new tax sys­tem would be a mis­take.

“Driver­less cars are on the way. Here’s how not to reg­u­late them.”
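The scale of the proposed VMT fee is easy to check against the fuel taxes it would replace (the gas-tax rate below is an assumed figure for illustration, not from Cheh's bill):

```python
VMT_FEE = 0.01875  # dollars per mile: the 1.875 cents/mile from the bill

def vmt_fee(miles):
    """Vehicle-miles-travelled fee: flat per-mile, independent of fuel use."""
    return miles * VMT_FEE

def fuel_tax(miles, mpg, tax_per_gallon=0.235):  # assumed illustrative rate
    """Conventional fuel tax: scales with gallons burned, so efficient
    cars pay less per mile driven."""
    return (miles / mpg) * tax_per_gallon

# For a 12,000-mile year, the VMT fee is $225 regardless of fuel economy,
# while a 50-mpg hybrid would owe only ~$56 under the assumed fuel tax.
```

This is why singling out autonomous vehicles for a VMT fee amounts to a tax increase on exactly the efficient vehicles the bill otherwise mandates.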

“How au­tonomous ve­hi­cle policy in Cal­ifor­nia and Ne­vada ad­dresses tech­nolog­i­cal and non-tech­nolog­i­cal li­a­bil­ities”, Pinto 2012:

The State of Nevada has adopted one policy approach to dealing with these technical and policy issues. At the urging of Google, a new Nevada law directs the Nevada Department of Motor Vehicles (NDMV) to issue regulations for the testing and possible licensing of autonomous vehicles and for licensing the owners/drivers of these vehicles. There is also a similar law being proposed in California with details not covered by Nevada AB 511. This paper evaluates the strengths and weaknesses of the Nevada and California approaches.

Another problem posed by the non-computer world is that human drivers frequently bend the rules, rolling through stop signs and driving above speed limits. How does a polite and law-abiding robot vehicle act in these situations? To solve this problem, the Google Car can be programmed with different driving personalities to match current conditions. At one end, it would be cautious, more likely to yield to another car and strictly following the laws of the road. At the other end of the spectrum, the robocar would be aggressive, more likely to go first at a stop sign. When going through a four-way intersection, for example, it yields to other vehicles based on road rules; but if other cars don't reciprocate, it advances a bit to show the other drivers its intention.
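One way to picture this cautious-to-aggressive spectrum is as a single tunable parameter. The following is a hypothetical sketch (the thresholds are invented), not Google's actual logic:

```python
def should_advance(arrival_rank, seconds_waiting, aggressiveness):
    """Decide whether to creep into a four-way-stop intersection.

    arrival_rank:    0 if we arrived first, 1 if one car has priority, etc.
    seconds_waiting: how long we have been stopped
    aggressiveness:  0.0 (cautious) to 1.0 (aggressive)
    """
    if arrival_rank == 0:
        return True  # the rules give us the right of way
    # If cars with priority fail to move, an aggressive personality
    # creeps forward sooner; a cautious one waits longer per queued car.
    patience_per_car = 4.0 * (1.0 - aggressiveness) + 1.0  # 1 s to 5 s
    return seconds_waiting > patience_per_car * arrival_rank
```

Advancing when others fail to reciprocate is the "show its intention" behavior the quote describes; the personality parameter only changes how soon it happens.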

However, there is a time period between a problem being diagnosed and the car being fixed. In theory, one would disable the vehicle remotely and only start it back up when the problem is fixed. However, in reality this would be extremely disruptive to a person's life, as they would have to tow their vehicle to the nearest mechanic (or autonomous-vehicle equivalent) to solve the issue. Google has not developed technology to address this problem, instead relying on the human driver to take control of the vehicle if there is ever a problem in its test vehicles.

[previous Lu quote about human-centric laws] …this can create particularly tricky situations, such as deciding whether the police should have the right to pull over autonomous vehicles, a question yet to be answered. Even the chief counsel of the National Highway Traffic Safety Administration admits that the federal government does not have enough information to determine how to regulate driverless technologies. This will become a particularly thorny issue at the first accident between an autonomous vehicle and a human-driven one, when liability must be assigned.

This question of liability arose during an [unpublished 11 Feb 2012] interview on the future of autonomous vehicles with Roger Noll. Although Professor Noll hasn't read the current literature on this issue, he voiced concern over what the verdict of the first trial over an accident between an autonomous vehicle and a normal car will be. He believes that the jury will almost certainly side with the human driver regardless of the details of the case; as he eloquently put it, in his husky Utah accent and with subsequent laughter, "how are we going to defend the autonomous vehicle; can we ask it to testify for itself?" To answer Roger Noll's question, Brad Templeton's blog elaborates on why he believes liability is a largely unimportant question, for two reasons. First, with new technology, there is no question that any lawsuit over any incident involving the cars will include the vendor as a defendant, so potential vendors must plan for liability. Second, Brad Templeton makes an economic argument that the cost of accidents is borne by car buyers through higher insurance premiums: if accidents are deemed the fault of the vehicle maker, this cost goes into the price of the car, and is paid for by the vehicle maker's insurance or self-insurance. Instead, Brad Templeton believes that the big question is whether the liability assigned in any lawsuit will be substantially greater than in ordinary collisions, because of punitive damages. In theory, robocars should drive costs down because of the reduction in collisions, meaning savings for the car buyer and for society and thus cheaper auto insurance. However, if the cost per collision is much higher even though the number of collisions drops, there is uncertainty over whether autonomous vehicles will save money for both parties.

Cal­ifor­nia’s Propo­si­tion 103 dic­tates that any in­surance policy’s price must be based on weighted fac­tors, and the top 3 weighted fac­tors must be, 1. driv­ing record, 2. num­ber of miles driven and 3. num­ber of years of ex­pe­rience. Other fac­tors like the type of car some­one has (i.e. au­tonomous ve­hi­cle) will be weighed lower. Sub­se­quently, this law makes it very hard to get cheap in­surance for a robo­car.

Nevada Policy: AB 511 Section 8. This short piece of legislation accomplishes the goal of setting good standards for the DMV to follow. By setting general standards (part a), insurance requirements (part b), and safety standards (part c), it sets a precedent for these areas without being too limited by details, leaving them to be decided by the DMV instead of the politicians. …part b only discusses insurance briefly, saying the state must "Set forth requirements for the insurance that is required to test or operate an autonomous vehicle on a highway within this State." The definitions set in the second part of Section 8 are not specific enough. Following the open-ended standards set in the earlier part of Section 8 is good for continuity, but does not technically address the problem. According to Ryan Calo, Director of Privacy and Robotics for Stanford Law School's Center for Internet and Society (CIS), the bill's definition of "autonomous vehicles" is unclear and circular. In the context of this legislation, autonomous driving is treated as binary, but in reality it falls along a spectrum.

Overall, AB 511 did not address the technological liabilities and barely mentioned the non-technological liabilities that must be overcome for the future success of autonomous vehicles. Since it was the first legislation ever to approach the issue of autonomous vehicles, it is understandable that the policymakers did not want to go into specifics and instead relied on future regulation to determine the details.

California Policy: SB 1298…would require the adoption of safety standards and performance requirements to ensure the safe operation and testing of "autonomous vehicles" on California public roads. The bill would allow autonomous vehicles to be operated or tested on the public roads on the condition that they meet the bill's safety standards and performance requirements. SB 1298's 66 lines of text are also considerably longer than AB 511's 12 lines of relevant text (the entirety of AB 511 is much longer but consists of information irrelevant for the purposes of autonomous cars).

SB 1298 clearly intends to have company-developed vehicles, saying in Section 2, Part B that "autonomous vehicles have been operated safely on public roads in the state in recent years by companies developing and testing this technology", and that these companies have set the standard for what safety standards will be necessary for future testing by others. This part of the legislation implicitly supports Google's autonomous vehicle, because Google has the most extensively tested fleet of vehicles of all the companies, and nearly all this testing has been done in California. This bill improves on AB 511 by putting more control in the hands of Google to focus on developing the technology, a signal by the policymakers of intent to create a climate favorable to Google's innovation within the constraints of keeping society safe.

To avoid setting a dangerous precedent for liability in accidents, policymakers can consider protecting the car companies from frivolous and malicious lawsuits. Without such legislation, future plaintiffs will be able to sue Google and place full liability on it. There are also potential free-riding effects from the economic moral hazard of putting the blame on the company that makes the technology rather than the company that manufactures the vehicle. Since we are assuming that autonomous vehicle technology will all come from a single source, Google, any accident that occurs will pin the blame primarily on Google, the common denominator, rather than on the car manufacturer…Policy that ensures the cost per accident remains close to today's cost will save money for both the insurer and the customer. This could mean capping awards to plaintiffs, or punishments of the company, to limit shocks to the industry. Overall, a policymaker could phase in limits on the amount of liability placed on the vendor as certain technology or scaling milestones are met without accidents.

SB 1298 man­ages to cover some of the short­com­ings of AB 511, such as how to im­prove upon the defi­ni­tion of an au­tonomous ve­hi­cle, as well as look­ing more to­wards the fu­ture by giv­ing Google more re­spon­si­bil­ity and alle­vi­at­ing some of the non-tech­ni­cal li­a­bil­ity by con­sid­er­ing their product “un­der de­vel­op­ment”. How­ever, both pieces of leg­is­la­tion fail to ad­dress the spe­cific tech­ni­cal li­a­bil­ities such as bugs in the code base or com­puter at­tacks, and non-tech­ni­cal li­a­bil­ities such as in­surance or ac­ci­dent li­a­bil­ity.

“Can I See Your Li­cense, Regis­tra­tion and C.P.U.?”, Tyler Cowen; see also his “What do the laws against driver­less cars look like?”:

The driver­less car is ille­gal in all 50 states. Google, which has been at the fore­front of this par­tic­u­lar tech­nol­ogy, is ask­ing the Ne­vada leg­is­la­ture to re­lax re­stric­tions on the cars so it can test some of them on roads there. Un­for­tu­nately, the very ne­ces­sity for this lob­by­ing is a sign of our am­bivalence to­ward change. Ideally, poli­ti­ci­ans should be call­ing for ac­cel­er­ated safety tri­als and promis­ing to pass li­a­bil­ity caps if the cars meet ac­cept­able stan­dards, whether that be sooner or later. Yet no ma­jor pub­lic figure has taken up this cause.

En­abling the de­vel­op­ment of driver­less cars will re­quire squadrons of lawyers be­cause a va­ri­ety of state, lo­cal and fed­eral laws pre­sume that a hu­man be­ing is op­er­at­ing the au­to­mo­biles on our roads. No state has any­thing close to a func­tion­ing sys­tem to in­spect whether the com­put­ers in driver­less cars are in good work­ing or­der, much as we rou­tinely test emis­sions and brake lights. Or­di­nary laws change only if leg­is­la­tors make those re­vi­sions a pri­or­ity. Yet the mun­dane poli­ti­cal is­sues of the day of­ten ap­pear quite press­ing, not to men­tion poli­ti­cally safer than en­abling a new product that is likely to en­gen­der con­tro­versy.

Poli­tics, of course, is of­ten geared to­ward pre­serv­ing the sta­tus quo, which is highly visi­ble, fa­mil­iar in its risks, and lu­cra­tive for com­pa­nies already mak­ing a profit from it. Some parts of gov­ern­ment do foster in­no­va­tion, such as DARPA, the Defense Ad­vanced Re­search Pro­jects Agency, which is part of the Defense Depart­ment. DARPA helped cre­ate the In­ter­net and is sup­port­ing the de­vel­op­ment of the driver­less car. It op­er­ates largely out­side the pub­lic eye; the real prob­lems come when its in­no­va­tions start to en­ter ev­ery­day life and meet poli­ti­cal re­sis­tance and dis­turb­ing press re­ports.

…In the meantime, transportation is one area where progress has been slow for decades. We're still flying 747s, a plane designed in the 1960s. Many rail and bus networks have contracted. And traffic congestion is worse than ever. As I argued in a previous column, this is probably part of a broader slowdown of technological advances.

But it’s clear that in the early part of the 20th cen­tury, the origi­nal ad­vent of the mo­tor car was not im­peded by any­thing like the cur­rent mélange of reg­u­la­tions, laws and law­suits. Po­ten­tially ma­jor in­no­va­tions need a path for­ward, through the cur­rent thicket of re­stric­tions. That de­bate on this is­sue is so quiet shows the ur­gency of do­ing some­thing now.

Ryan Calo of the CIS ar­gues es­sen­tially that no spe­cific law bans au­tonomous cars and the threat of the hu­man-cen­tric laws & reg­u­la­tions is overblown. (See the later Rus­sian in­ci­dent.)

“SCU con­fer­ence on le­gal is­sues of robo­cars”, Brad Tem­ple­ton:

Li­a­bil­ity: After a tech­nol­ogy in­tro­duc­tion where Sven Bieker of Stan­ford out­lined the challenges he saw which put fully au­tonomous robo­cars 2 decades away, the first ses­sion was on civil li­a­bil­ity. The short mes­sage was that based on a num­ber of re­lated cases from the past, it will be hard for man­u­fac­tur­ers to avoid li­a­bil­ity for any safety prob­lems with their robo­cars, even when the sys­tems were built to provide the high­est statis­ti­cal safety re­sult if it traded off one type of safety for an­other. In gen­eral when robo­cars come up as a sub­ject of dis­cus­sion in web threads, I fre­quently see “Who will be li­able in a crash” as the first ques­tion. I think it’s a largely unim­por­tant ques­tion for two rea­sons. First of all, when the tech­nol­ogy is new, there is no ques­tion that any law­suit over any in­ci­dent in­volv­ing the cars will in­clude the ven­dor as the defen­dant, in many cases with jus­tifi­able rea­sons, but even if there is no eas­ily seen rea­son why. So po­ten­tial ven­dors can’t ex­pect to not plan for li­a­bil­ity. But most of all, the re­al­ity is that in the end, the cost of ac­ci­dents is borne by car buy­ers. Nor­mally, they do it by buy­ing in­surance. But if the ac­ci­dents are deemed the fault of the ve­hi­cle maker, this cost goes into the price of the car, and is paid for by the ve­hi­cle maker’s in­surance or self-in­surance. It’s just a ques­tion of figur­ing out how the ve­hi­cle buyer will pay, and the mar­ket should be ca­pa­ble of that (though see be­low.) No, the big ques­tion in my mind is whether the li­a­bil­ity as­signed in any law­suit will be sig­nifi­cantly greater than it is in or­di­nary col­li­sions where hu­man er­ror is at fault, be­cause of puni­tive dam­ages…Un­for­tu­nately, some li­a­bil­ity his­tory points to the lat­ter sce­nario, though it is pos­si­ble for statutes to mod­ify this.

Insurance: …Because Prop 103 [specifying insurance by weighted factors, see previous] is a ballot proposition, it can't easily be superseded by the legislature. It takes a 2⁄3 vote and a court agreeing that the change matches the intent of the original ballot proposition. One would hope the courts would agree that cheaper insurance to encourage safer cars matches the voter intent, but this is a challenge.

Local and criminal laws: The session on criminal laws centered more on the traffic code (which isn't really criminal law) and the fact that it varies a lot from state to state. Indeed, any robocar that wants to operate in multiple states will have to deal with this, though fortunately there is a federal standard on traffic controls (signs and lights) to rely on. Some global standards are a concern—the Geneva convention on traffic laws requires that every car have a driver who is in control of the vehicle. However, I think that governments will be able to quickly see—if they want to—that these are laws in need of updating. Some precedent in drunk driving can create problems—people have been convicted of DUI for being in their car, drunk, with the keys in their pocket, because they had clear intent to drive drunk. However, one would hope the possession of a robocar (of the sort that does not need human manual driving) would express an entirely different intent to the law.

“Defi­ni­tion of nec­es­sary ve­hi­cle and in­fras­truc­ture sys­tems for Au­to­mated Driv­ing”, Euro­pean Com­mis­sion re­port 29 June 2011:

Yet another paramount aspect tightly related to automated driving at present and in the near future, and certainly related to autonomous driving in the long run, is the interpretation of the Vienna Convention. It will be shown in the report how this European legislation is commonly interpreted, how it creates the framework necessary to deploy automated and cooperative driving systems on a large scale, and what legal limitations are foreseen in making the new step toward autonomous driving. The report analyses in the same context other conventions and legislative acts, searches for gaps in the current legislation, and makes an interesting link with the aviation industry, from which several lessons can be learnt.

It seems appropriate to end this summary with a few remarks not directly related to the subject of this report, but worthwhile in the process of thinking about automated driving, cooperative driving, and autonomous driving. Progress in human history has systematically taken the path of least resistance and has often bypassed governmental rules, business models, and the obvious thinking. At the end of the 1990s nobody was anticipating the prominent role the smartphone would have in 10 years, but scientists were busy planning journeys to Mars within the same timeframe. The latter has not happened and will probably not happen soon… One lesson humanity has learnt during its existence is that historical changes that followed the path of minimum resistance triggered fundamental changes in society at a later stage. "A car is a car", as David Strickland, administrator of the National Highway Traffic Safety Administration (NHTSA) in the U.S., said in his speech at the Telematics Update conference in Detroit in June 2011, but it may soon drive its progress along a historical path of minimum resistance.

An automated driving system needs to meet the Vienna Convention (see Section 3, aspect 2). The private sector, especially those who are in the end responsible for the performance of the vehicle, should be involved in the discussion.

The Vienna Convention on Road Traffic is an international treaty designed to facilitate international road traffic and to increase road safety by standardizing uniform traffic rules among the contracting parties. The convention was agreed upon at the United Nations Economic and Social Council's Conference on Road Traffic (October 7, 1968 - November 8, 1968) and came into force on May 21, 1977. Not all EU countries have ratified the treaty, see Figure 13 (e.g. Ireland, Spain and the UK did not). It should be noted that in 1968, animals were still used for traction of vehicles and the concept of autonomous driving was considered science fiction. This matters when interpreting the text of the treaty: strictly, to the letter of the text, or according to what was meant at the time.

The common opinion of the expert panel is that the Vienna Convention will have only a limited effect on the successful deployment of automated driving systems, for several reasons:

  • OEMs already deal with the situ­a­tion that some of the Ad­vanced Driver As­sis­tance Sys­tems touch the Vienna Con­ven­tion to­day. For ex­am­ple, they provide an on/​off switch for ADAS or al­low an over­rid­ing of the func­tions by the driver. They de­velop their ADAS in line with the RESPONSE Code of Prac­tice (2009) [41] fol­low­ing the prin­ci­ple that the driver is in con­trol and re­mains re­spon­si­ble. In ad­di­tion, the OEMs have a care­ful mar­ket­ing strat­egy and they do not ex­ag­ger­ate and do not claim that an ADAS is work­ing in all driv­ing situ­a­tions or that there is a solu­tion to “all” safety prob­lems.

  • Automation is not black and white, automated or not automated, but much more complex, involving many design dimensions. A helpful model of automation is to consider different levels of assistance and automation, which can e.g. be organized on a 1-d scale [42]. Several levels could be within the Vienna Convention, while extreme levels are outside of today's version of the Vienna Convention. For example, one partitioning could be to have the automation levels Manual, Assisted, Semi-Automated, Highly Automated, and Fully Automated driving, see Figure 14. In highly automated driving, the automation has the technical capabilities to drive almost autonomously, but the driver is still in the loop and able to take over control when necessary. Fully automated driving like PRT, where the driver is not required to monitor the automation and does not have the ability to take over control, seems not to be covered by the Vienna Convention.
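The report's level-based framing can be reduced to a simple classification. The sketch below follows the level names of Figure 14, but the coverage rule is my paraphrase of the panel's reading, not legal advice:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    MANUAL = 0
    ASSISTED = 1
    SEMI_AUTOMATED = 2
    HIGHLY_AUTOMATED = 3  # driver still in the loop, able to take over
    FULLY_AUTOMATED = 4   # driver need not monitor and cannot take over

def arguably_within_vienna_convention(level):
    """Panel's reading: levels where the driver remains in the loop and
    can take over control stay within today's Vienna Convention; fully
    automated (PRT-style) operation seems not to be covered."""
    return level <= AutomationLevel.HIGHLY_AUTOMATED
```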

Cri­te­ria for de­cid­ing if the au­toma­tion is still in line with the Vienna Con­ven­tion could be:

  • the in­volve­ment of the driver in the driv­ing task (ve­hi­cle con­trol),

  • the in­volve­ment of the driver in mon­i­tor­ing the au­toma­tion and the traf­fic en­vi­ron­ment,

  • the ability to take over control or to override the automation.

  • The Vienna Con­ven­tion already con­tains open­ings, or is vari­able, or can be changed.

It con­tains a cer­tain vari­abil­ity re­gard­ing the au­ton­omy in the means of trans­porta­tion, e.g. “to con­trol the ve­hi­cle or guide the an­i­mals”. It is ob­vi­ous that some of the cur­rent tech­nolog­i­cal de­vel­op­ments were not fore­seen by the au­thors of the Vienna Con­ven­tion. Is­sues like pla­toon­ing are not ad­dressed. The Vienna Con­ven­tion already con­tains in An­nex 5 (chap­ter 4, ex­emp­tions) an open­ing to be in­ves­ti­gated with ap­pro­pri­ate le­gal ex­per­tise:

“For do­mes­tic pur­poses, Con­tract­ing Par­ties may grant ex­emp­tions from the pro­vi­sions of this An­nex in re­spect of: (c) Ve­hi­cles used for ex­per­i­ments whose pur­pose is to keep up with tech­ni­cal progress and im­prove road safety; (d) Ve­hi­cles of a spe­cial form or type, or which are used for par­tic­u­lar pur­poses un­der spe­cial con­di­tions”. - In ad­di­tion, the Vienna Con­ven­tion can be changed. The last change was made in 2006. A new para­graph (para­graph 6) was added to Ar­ti­cle 8 stat­ing that the driver should min­i­mize any ac­tivity other than driv­ing.

…differ­ent un­der­stand­ings of the term “to con­trol” with no clear con­sen­sus [44]: 1. Con­trol in a sense of in­fluenc­ing e.g. the driver con­trols the ve­hi­cle move­ments, the driver can over­ride the au­toma­tion and/​or the driver can switch the au­toma­tion off. 2. Con­trol in a sense of mon­i­tor­ing e.g. the driver mon­i­tors the ac­tions of the au­toma­tion. Both in­ter­pre­ta­tions al­low the use of some form of au­toma­tion in a ve­hi­cle as it can be seen in to­day’s cars where e.g. ACC or emer­gency brake as­sis­tance sys­tems etc. are available.

The first interpretation allows automation that can be overridden by the driver, or that reacts in emergency situations only when the driver can no longer cope with the situation. Forms of automation that cannot be overridden seem not to be in line with the first interpretation [45, p. 818]. The second interpretation is more flexible and would also allow forms of automation that cannot be overridden to fall within the Vienna Convention, as long as the driver monitors the automation [44]. …In the literature, some other assistance and automation functions have been appraised by legal experts. For example, [46] postulates that automatic emergency braking systems are in line with the Vienna Convention as long as they react only when a crash is unavoidable (collision mitigation). Otherwise a conflict between the driver's intention (here, steering) and the reaction of the automation (here, braking) cannot be excluded. Albrecht [47] concludes that an Intelligent Speed Adaptation (ISA) which cannot be overridden by the driver is not in line with the Vienna Convention, because it is not consistent with Article 8 and Article 13 of the Vienna Convention.

…As soon as data from the ve­hi­cle is used for V2X-com­mu­ni­ca­tion or is stored in the ve­hi­cle it­self, data pro­tec­tion and pri­vacy is­sues be­come rele­vant. Direc­tives and doc­u­ments that need to be checked in­clude:

  • Direc­tive 95/​46/​EC on the pro­tec­tion of in­di­vi­d­u­als with re­gard to the pro­cess­ing of per­sonal data and on the free move­ment of such data;

  • Direc­tive 2010/​40/​EU on the frame­work for the de­ploy­ment of In­tel­li­gent Trans­port Sys­tems in the field of road trans­port and for in­ter­faces with other modes of trans­port;

  • WP 29 Work­ing doc­u­ment on data pro­tec­tion and pri­vacy im­pli­ca­tions in the eCall ini­ti­a­tive and the Euro­pean Data Pro­tec­tion Su­per­vi­sor (EDPS) opinion on ITS Ac­tion Plan and Direc­tive.

The bottleneck is that at the current stage of development the risk-related costs and benefits of viable deployment paths are unknown, combined with the fact that the deployment paths themselves are wide open, because the possible deployment scenarios have not been assessed and debated in a political environment. There is currently no consensus amongst stakeholders on which of the proposed deployment scenarios will eventually prevail…Changes in EU legislation might change the role of players and increase the risk for them. Any change in EU legislation will change the position of the players, and uncertainty about which direction this change (gap) would go adds to the risk. This prevents players from having an outspoken opinion on the issue. If an update of existing legislation is considered, this should be European legislation, not national legislation. It would be better still to aim for worldwide harmonized legislation, if it is decided to take that path.

A useful case study for understanding the issues associated with automated driving can be found in SAFESPOT [4], which can be viewed as a parallel to automated driving functions (for more details, see Appendix I. Related to aspect 3). SAFESPOT provided an in-depth analysis of the legal aspects of the service named ‘Speed Warning’, in two configurations, V2I and V2V. The analysis was performed against two fundamentally different law schemes, namely Dutch and English law. It concluded that the concept of co-operative systems raises questions and might complicate legal disputes, for several reasons:

  • There are more par­ties in­volved, all with their own re­spon­si­bil­ities for the proper func­tion­ing of el­e­ments of a co­op­er­a­tive sys­tem.

  • Grow­ing tech­ni­cal in­ter­de­pen­den­cies be­tween ve­hi­cles, and be­tween ve­hi­cles and the in­fras­truc­ture, may also lead to sys­tem failure, in­clud­ing sce­nar­ios that may be char­ac­ter­ised as an un­lucky com­bi­na­tion of events (“a freak ac­ci­dent”) or as a failure for which the ex­act cause sim­ply can­not be traced back (be­cause of the tech­ni­cal com­plex­ity).

  • Risks that can­not be in­fluenced by the peo­ple who suffer the con­se­quences tend to be judged less ac­cept­able by so­ciety and, like­wise, from a le­gal point of view.

  • The in-depth anal­y­sis of SAFESPOT con­cluded that (po­ten­tial) par­ti­ci­pants such as sys­tem pro­duc­ers and road man­agers may well be ex­posed to li­a­bil­ity risks. Even if the driver of the probe ve­hi­cle could not suc­cess­fully claim a defense (to­wards other road users), based on a failure of a sys­tem, sys­tem providers and road man­agers may still re­main (par­tially) re­spon­si­ble through the mechanism of sub­ro­ga­tion and right of re­course.

  • Current law states that the driver must be in control of his vehicle at all times. In general, EU drivers are prohibited from exhibiting dangerous behaviour while driving. The police have prosecuted drivers in the UK for drinking and/​or eating, i.e. having only one hand on the steering wheel. The use of a mobile phone while driving is prohibited in many European countries; only phones equipped for hands-free operation are permitted. Liability still rests firmly with the driver for the safe operation of vehicles.

New legislation may be required for automated driving. It is highly unlikely that any OEM or supplier will risk introducing an automatic driving vehicle (where responsibility for safe driving is removed from the driver) without a framework of new legislation which clearly sets out where their responsibility and liability begins and ends. In some ways it could be seen as similar to warranty liability: the OEM warrants certain quality and performance levels, backed by reciprocal agreements within the supply chain. Civil (and possibly criminal) liability in the case of accidents involving automated driving vehicles is a major issue that can truly delay the introduction of these technologies…Since there are no statistical records of the effects of automated driving systems, the entrepreneurship of insurers should compensate for the issue of unknown risks…The following factors are regarded as hindering an optimal role for the insurance industry in promoting new safety systems through their insurance policies:

  • Premium-set­ting is based on statis­ti­cal prin­ci­ples, re­sult­ing in a time-lag prob­lem;

  • Com­pe­ti­tion/​sen­si­tive re­la­tion­ships with clients;

  • In­vest­ment costs (e.g. af­ter­mar­ket in­stal­la­tions);

  • Ad­minis­tra­tive costs;

  • Market regulation.

No precedent-setting liability lawsuits involving automated systems have occurred to date. Toyota’s brake-by-wire malfunctions in 2010 did not end in a lawsuit. A system like parking assist is technically not redundant: what would happen if the driver claimed he/​she could not override the brakes? For (premium) insurance a critical mass is required, so initially all stakeholders, including governments, should potentially play a role.

“Au­to­mo­tive Au­ton­omy: Self-driv­ing cars are inch­ing closer to the as­sem­bly line, thanks to promis­ing new pro­jects from Google and the Euro­pean Union”, Wright 2011:

The Google pro­ject has made im­por­tant ad­vances over its pre­de­ces­sor, con­soli­dat­ing down to one laser rangefin­der from five and in­cor­po­rat­ing data from a broader range of sources to help the car make more in­formed de­ci­sions about how to re­spond to its ex­ter­nal en­vi­ron­ment. “The thresh­old for er­ror is minus­cule,” says Thrun, who points out that reg­u­la­tors will likely set a much higher bar for safety with a self-driv­ing car than for one driven by no­to­ri­ously er­ror-prone hu­mans.

“The fu­ture of driv­ing, Part III: hack my ride”, Lee 2008:

Of course, one rea­son that pri­vate in­vestors might not want to in­vest in au­to­mo­tive tech­nolo­gies is the risk of ex­ces­sive li­a­bil­ity in the case of crashes. The tort sys­tem serves a valuable func­tion by giv­ing man­u­fac­tur­ers a strong in­cen­tive to make safe, re­li­able prod­ucts. But too much tort li­a­bil­ity can have the per­verse con­se­quence of dis­cour­ag­ing the in­tro­duc­tion of even rel­a­tively safe prod­ucts into the mar­ket­place. Tem­ple­ton tells Ars that the avi­a­tion in­dus­try once faced that prob­lem. At one point, “all of the gen­eral avi­a­tion man­u­fac­tur­ers stopped mak­ing planes be­cause they couldn’t han­dle the li­a­bil­ity. They were be­ing found slightly li­able in ev­ery plane crash, and it started to cost them more than the cost of man­u­fac­tur­ing the plane.” Air­plane man­u­fac­tur­ers even­tu­ally con­vinced Congress to place limits on their li­a­bil­ity. At the mo­ment, crashes tend to lead to law­suits against hu­man drivers, who rarely have deep pock­ets. Un­less there is ev­i­dence that a me­chan­i­cal defect caused the crash, car man­u­fac­tur­ers tend not to be the tar­get of most ac­ci­dent-re­lated law­suits. That would change if cars were driven by soft­ware. And be­cause car man­u­fac­tur­ers have much deeper pock­ets than in­di­vi­d­ual drivers do, plain­tiffs are likely to seek much larger dam­ages than they would against hu­man drivers. That could lead to the per­verse re­sult that even safer self-driv­ing cars would be more ex­pen­sive to in­sure than hu­man drivers. Since car man­u­fac­tur­ers, rather than drivers, would be the first ones sued in the event of an ac­ci­dent, car com­pa­nies are likely to pro­tect them­selves by buy­ing their own in­surance. And if in­surance pre­miums get too high, they may take the route the avi­a­tion in­dus­try did and seek limits on li­a­bil­ity. An added benefit for con­sumers is that most would never have to worry about auto in­surance. 
Cars would come preinsured for the life of the vehicle (or at least the life of the warranty)…Self-driving vehicles will sit at the intersection of two industries that are currently subject to very different regulatory regimes. The automobile industry is heavily regulated, while the software industry is largely not regulated at all. The most fundamental decision regulators will need to make is whether one of these existing regulatory regimes will be suitable for self-driving technologies, or whether an entirely new regulatory framework will be needed to accommodate them.

http://​​www.917wy.com/​​top­icpie/​​2008/​​11/​​fu­ture-of-driv­ing-part-3/​​2

It’s in­evitable that at some point, a self-driv­ing ve­hi­cle will be in­volved in a fatal crash which gen­er­ates wor­ld­wide pub­lic­ity. Un­for­tu­nately, even if self-driv­ing ve­hi­cles have amassed an over­all safety record that’s su­pe­rior to that of hu­man drivers, the first crash is likely to prompt calls for dras­tic re­stric­tions on the use of self-driv­ing tech­nolo­gies. It will there­fore be im­por­tant for busi­ness lead­ers and elected offi­cials to lay the ground­work by both ed­u­cat­ing the pub­lic about the benefits of self-driv­ing tech­nolo­gies and man­ag­ing ex­pec­ta­tions so that the pub­lic isn’t too sur­prised when crashes hap­pen. Of course, if the first self-driv­ing cars turn out to be sig­nifi­cantly less safe than the av­er­age hu­man driver, then they should be pul­led off the streets and re-tooled. But this seems un­likely to hap­pen. A com­pany that in­tro­duced self-driv­ing tech­nol­ogy into the mar­ket­place be­fore it was ready would not only have trou­ble con­vinc­ing reg­u­la­tors that its cars are safe, but it would be risk­ing ru­inous law­suits, as well. The far greater dan­ger is that the com­bi­na­tion of li­a­bil­ity fears and red tape will cause the United States to lose the ini­ti­a­tive in self-driv­ing tech­nolo­gies. Coun­tries such as China, In­dia, and Sin­ga­pore that have more au­to­cratic regimes or less-de­vel­oped economies may seize the ini­ti­a­tive and in­tro­duce self-driv­ing cars while Amer­i­can poli­cy­mak­ers are still de­bat­ing how to reg­u­late them. Even­tu­ally, the specter of other coun­tries us­ing tech­nolo­gies that aren’t available in the United States will spur Amer­i­can poli­ti­ci­ans into ac­tion, but only af­ter sev­eral thou­sand Amer­i­cans lose their lives un­nec­es­sar­ily at the hands of hu­man drivers.

…One likely area of dis­pute is whether peo­ple will be al­lowed to mod­ify the soft­ware on their own cars. The United States has a long tra­di­tion of peo­ple tin­ker­ing with both their cars and their com­put­ers. No doubt, there will be many peo­ple who are in­ter­ested in mod­ify­ing the soft­ware on their self-driv­ing cars. But there is likely to be sig­nifi­cant pres­sure for leg­is­la­tion crim­i­nal­iz­ing unau­tho­rized tin­ker­ing with self-driv­ing car soft­ware. Both car man­u­fac­tur­ers and (as we’ll see shortly) the law en­force­ment com­mu­nity are likely to be in fa­vor of crim­i­nal­iz­ing the mod­ifi­ca­tion of car soft­ware. And they’ll have a plau­si­ble safety ar­gu­ment: buggy car soft­ware would be dan­ger­ous not only to the car owner but to oth­ers on the road. The ob­vi­ous anal­ogy is to the DMCA, which crim­i­nal­ized unau­tho­rized tin­ker­ing with copy pro­tec­tion schemes. But there are also im­por­tant differ­ences. One is that car man­u­fac­tur­ers will be much more mo­ti­vated to pre­vent tin­ker­ing than Ap­ple or Microsoft are. If man­u­fac­tur­ers are li­able for the dam­age done by their ve­hi­cles, then tin­ker­ing not only en­dan­gers lives, but their bot­tom lines as well. It’s un­likely that Ap­ple would ever sue peo­ple caught jailbreak­ing their iPhones. But car man­u­fac­tur­ers prob­a­bly will con­trac­tu­ally pro­hibit tin­ker­ing and then sue those caught do­ing it for breach of con­tract.

http://​​www.917wy.com/​​top­icpie/​​2008/​​11/​​fu­ture-of-driv­ing-part-3/​​3

The more stalwart ad­vo­cate of locked-down cars is likely to be the gov­ern­ment, be­cause self-driv­ing car soft­ware promises to be a fan­tas­tic tool for so­cial con­trol. Con­sider, for ex­am­ple, how use­ful locked-down cars could be to law en­force­ment. Rather than phys­i­cally driv­ing to a sus­pect’s house, knock­ing on his door (or not), and forcibly re­strain­ing, hand­cuffing, and es­cort­ing a sus­pect to the sta­tion, po­lice will be able to sim­ply seize a sus­pect’s self-driv­ing car re­motely and or­der it to drive to the near­est po­lice sta­tion. And that’s just the be­gin­ning. Locked-down car soft­ware could be used to en­force traf­fic laws, to track and log peo­ples’ move­ments for later re­view by law en­force­ment, to en­force cur­fews, to clear the way for emer­gency ve­hi­cles, and dozens of other pur­poses. Some of these func­tions are in­nocu­ous. Others will be very con­tro­ver­sial. But all of them de­pend on re­strict­ing user con­trol over their own ve­hi­cles. If users were free to swap in cus­tom soft­ware, they might dis­able the gov­ern­ment’s “back door” and re-pro­gram it to ig­nore gov­ern­ment re­quire­ments. So the gov­ern­ment is likely to push hard for laws man­dat­ing that only gov­ern­ment-ap­proved soft­ware run self-driv­ing cars.

…It’s too early to say exactly what the car-related civil liberties fights will be about, or how they will be resolved. But one thing we can say for certain is that the technical decisions made by today’s computer scientists will be important for setting the stage for those battles. Advocates for online free speech and anonymity have been helped tremendously by the fact that the Internet was designed with an open, decentralized architecture. The self-driving cars of the future are likely to be built on top of software tools that are being developed in today’s academic labs. By thinking carefully about the ways these systems are designed, today’s computer scientists can give tomorrow’s civil-liberties advocates their best shot at preserving automotive freedom.

http://​​www.917wy.com/​​top­icpie/​​2008/​​11/​​fu­ture-of-driv­ing-part-3/​​4

In our interview with him, Congressman Adam Schiff described the public’s perception of autonomous driving technologies as a reflection of his own reaction to the idea: a mixture of both fascination and skepticism. Schiff explained that the public’s fascination comes from amazement at how advanced this technology has already become, and Google’s sponsorship and endorsement makes it even more alluring.

Skep­ti­cism of au­tonomous ve­hi­cle tech­nolo­gies comes from a miss­ing el­e­ment of trust. Ac­cord­ing to Clifford Nass, a pro­fes­sor of com­mu­ni­ca­tions and so­ciol­ogy at Stan­ford Univer­sity, this trust is an as­pect of pub­lic opinion that must be earned through demon­stra­tion more so than through use. When peo­ple see a tech­nol­ogy in ac­tion, they will be­gin to trust it. Pro­fes­sor Nass spe­cial­izes in study­ing the way in which hu­man be­ings re­late to tech­nol­ogy, and he has pub­lished sev­eral books on the topic in­clud­ing The Man Who Lied to His Lap­top: What Machines Teach Us About Hu­man Re­la­tion­ships. In our in­ter­view with him, Pro­fes­sor Nass ex­plained that so­cietal com­fort with tech­nol­ogy is gained through ex­pe­rience, and ac­cep­tance oc­curs when peo­ple have seen a tech­nol­ogy work enough times col­lec­tively. He also pointed out that it took a long time for peo­ple to de­velop trust in air trans­porta­tion, some­thing that we al­most take for granted now. It is cer­tainly not the case that au­tonomous cars need to be equiv­a­lent in safety to plane flight be­fore the pub­lic would adopt them. How­ever, as Noel du Toit pointed out, we have a higher ex­pec­ta­tion for au­tonomous cars than we do for our­selves. Sim­ply put, if we are will­ing to re­lin­quish the “con­trol” over our ve­hi­cles to an au­tonomous power, it will likely have to be un­der the con­di­tion that the tech­nol­ogy drives more adeptly than we ever pos­si­bly could. Other­wise, there will sim­ply be no trust­ing it. In­ter­est­ingly, du Toit brought up a re­cent botched safety demon­stra­tion by Volvo in May of 2010. In the demon­stra­tion, Volvo show­cased to the press how its emer­gency brak­ing sys­tem works as part of an “adap­tive cruise con­trol” sys­tem. Th­ese sys­tems al­low a driver to set both a top speed and a fol­low­ing dis­tance, which the ve­hi­cle then au­to­mat­i­cally main­tains. 
As a consequence, if the preceding vehicle stops short, the system acts as the foundation for an emergency-braking maneuver. However, in Volvo’s demonstration the car smashed directly into a trailer. Even though the system worked fine in several cases during the day’s worth of demonstrations, video of that one mishap went viral and did little to help the public gain trust in the technology.

Calo pointed out that future issues related to autonomous vehicles would be approached from a standpoint of “negative liabilities”, meaning that we can assume something is legal unless there exist explicit laws against it. This discussion also led to the concept of what a driverless car would look like to bystanders, and the kind of panic that might garner. A real-life example of this occurred in Moscow during the VisLab van trek to Shanghai. In this case, an autonomous electric van was stopped by Russian authorities due to its apparent lack of a driver behind the wheel. Thankfully, engineers present were able to convince the Russian officer who stopped the vehicle not to issue a ticket. The above [Nevadan] legislation fits in well with the information that we collected from Congressman Schiff about potential federal involvement in autonomous vehicle technology. Basically, Schiff relayed the idea that the strong governmental role expected for this technology would come in the form of regulating safety. Furthermore, he called attention to the hefty governmental requirements for crash testing that every new vehicle must meet before it is allowed on the road.

In autonomous driving, liability concerns can be inferred from a couple of examples. In one example, Noel du Toit described DARPA’s use of hired stunt drivers to share the testing grounds with driverless vehicle entries in the 2007 Urban Challenge. This behavior clearly illustrates the level of precaution that the DARPA officials felt it was necessary to take. In another example, Dmitri Dolgov expounded on how Google’s cars are never driving by themselves; whenever they are operated on public roads, there are at least two well-trained operators in the car. Dolgov went on to say that these operators “are in control at all times”, which helps illustrate Google’s position: they are not taking any chances when it comes to liabilities. Kent Kresa, former CEO of Northrop Grumman and interim chairman of GM in 2009, was also concerned about the liability issues presented by autonomous vehicles. Kresa felt that a future with driverless cars piloting the streets was somewhat unimaginable at present, especially when one considers the possibility of a pedestrian getting hit. In the case of such a collision it is still very unclear who would be at fault; whether the company that made the vehicle would be responsible is at present unknown.

A conversation we had with Bruce Gillman, the public information officer for the Los Angeles Department of Transportation (DOT), revealed that the department is very busy putting out many other fires. Gillman noted that DOT is focused on getting people out of their cars and onto bikes or into buses; thus, autonomous vehicles are not on their radar. Moreover, Gillman was adamant that DOT would wait until autonomous vehicles were being manufactured commercially before addressing any issues concerning them. His viewpoint certainly reinforces the idea that supportive infrastructure updates coming from the city-government level would be unlikely. No matter what adoption pathway is used, federal government financial support could come in the form of incentives and subsidies like those seen during the initial rollout of hybrid vehicles. However, Brian Thomas explained that this would only be possible if the federal government were willing to do a cost-benefit valuation for the mainstream introduction of autonomous vehicles.

http://​​www.pickar.caltech.edu/​​e103/​​Fi­nal%20Ex­ams/​​Au­tonomous%20Ve­hi­cles%20for%20Per­sonal%20Trans­port.pdf [shades of Amara’s law: we always over­es­ti­mate in the short run & un­der­es­ti­mate in the long run]

Car manufacturers might be held liable for a larger share of the accidents, a responsibility they are certain to resist. (A legal analysis by Nidhi Kalra and her colleagues at the RAND Corporation suggests this problem is not insuperable.) –“Leave the Driving to It”, Brian Hayes, American Scientist 2011

The RAND re­port: “Li­a­bil­ity and Reg­u­la­tion of Au­tonomous Ve­hi­cle Tech­nolo­gies”, Kalra et al 2009:

In this work, we first eval­u­ate how the ex­ist­ing li­a­bil­ity regime would likely as­sign re­spon­si­bil­ity in crashes in­volv­ing au­tonomous ve­hi­cle tech­nolo­gies. We iden­tify the con­trol­ling le­gal prin­ci­ples for crashes in­volv­ing these tech­nolo­gies and ex­am­ine the im­pli­ca­tions for their fur­ther de­vel­op­ment and adop­tion. We an­ti­ci­pate that con­sumer ed­u­ca­tion will play an im­por­tant role in re­duc­ing con­sumer over­re­li­ance on nascent au­tonomous ve­hi­cle tech­nolo­gies and min­i­miz­ing li­a­bil­ity risk. We also dis­cuss the pos­si­bil­ity that the ex­ist­ing li­a­bil­ity regime will slow the adop­tion of these so­cially de­sir­able tech­nolo­gies be­cause they are likely to in­crease li­a­bil­ity for man­u­fac­tur­ers while re­duc­ing li­a­bil­ity for drivers. Fi­nally, we dis­cuss the pos­si­bil­ity of fed­eral pre­emp­tion of state tort suits if the U.S. Depart­ment of Trans­porta­tion (US DOT) pro­mul­gates reg­u­la­tions and some of the im­pli­ca­tions of elimi­nat­ing state tort li­a­bil­ity. Se­cond, we re­view the ex­ist­ing liter­a­ture on the reg­u­la­tory en­vi­ron­ment for au­tonomous ve­hi­cle tech­nolo­gies. To date, there are no gov­ern­ment reg­u­la­tions for these tech­nolo­gies, but work is be­ing done to de­velop ini­tial in­dus­try stan­dards.

…Additionally, for some systems, the driver is expected to intervene when the system cannot control the vehicle completely. For example, if a very rapid stop is required, ACC may depend on the driver to provide braking beyond its own capabilities. ACC also does not respond to driving hazards, such as debris on the road or potholes; the driver is expected to intervene. Simultaneously, research suggests that drivers using these conveniences often become complacent and slow to intervene when necessary; this behavioral adaptation means drivers are less responsive and responsible than if they were fully in control (Rudin-Brown and Parker, 2004). Does such evidence suggest that manufacturers may be responsible for monitoring driver behavior as well as vehicle behavior? Some manufacturers have already taken a step toward ensuring that the driver assumes responsibility and is attentive, by requiring the driver to periodically depress a button or by monitoring the driver by sensing eye movements and grip on the steering wheel. As discussed later, litigation may occur around the issue of driver monitoring and the danger of the driver relying on the technology for something that it is not designed to accomplish.

…Ay­ers (1994) sur­veyed a range of emerg­ing au­tonomous ve­hi­cle tech­nolo­gies and au­to­mated high­ways, eval­u­ated the like­li­hood of a shift in li­a­bil­ity oc­cur­ring, dis­cussed the ap­pro­pri­ate­ness of gov­ern­ment in­ter­ven­tion, and high­lighted the most-promis­ing in­ter­ven­tions for differ­ent tech­nolo­gies. Ay­ers found that col­li­sion warn­ing and col­li­sion-avoidance sys­tems “are likely to gen­er­ate a host of neg­li­gence suits against auto man­u­fac­tur­ers” and that li­a­bil­ity dis­claimers and fed­eral reg­u­la­tions may be the most effec­tive meth­ods of deal­ing with the li­a­bil­ity con­cerns (p. 21). The re­port was writ­ten be­fore many of these tech­nolo­gies ap­peared on the mar­ket, and Ay­ers fur­ther spec­u­lated that “the li­a­bil­ity for al­most all ac­ci­dents in cars equipped with col­li­sion-avoidance sys­tems would con­ceiv­ably fall on the man­u­fac­turer” (p. 22), which could “de­lay or even pre­vent the de­ploy­ment of col­li­sion warn­ing sys­tems that are cost-effec­tive in terms of ac­ci­dent re­duc­tion” (p. 25). Syverud (1992) ex­am­ines the le­gal cases stem­ming from the in­tro­duc­tion of air bags, an­tilock brakes, cruise con­trol, and cel­lu­lar tele­phones to provide some gen­eral les­sons for the li­a­bil­ity con­cerns for au­tonomous ve­hi­cle tech­nolo­gies. In an­other re­port, Syverud (1993) ex­am­ines the le­gal bar­ri­ers to a wide range of IVHSs and finds that li­a­bil­ity poses a sig­nifi­cant bar­rier par­tic­u­larly to au­tonomous ve­hi­cle tech­nolo­gies that take con­trol of the ve­hi­cle. In this work, Syverud’s in­ter­views with man­u­fac­tur­ers re­veal that li­a­bil­ity con­cerns had already ad­versely af­fected re­search and de­vel­op­ment in these tech­nolo­gies in sev­eral com­pa­nies. 
One interviewee is quoted as saying that “IVHS will essentially remain ‘information technology and a few pie-in-the-sky pork-barrel control technology demonstrations, at least in this country, until you lawyers do something about products liability law’” (1993, p. 25).

…While the victims in these circumstances could presumably sue the vehicle manufacturer, products-liability lawsuits are more expensive to bring and take more time to resolve than run-of-the-mill automobile-crash litigation. This shift in responsibility from the driver to the manufacturer may make no-fault automobile-insurance regimes more attractive. They are designed to provide compensation to victims relatively quickly, and they do not depend upon the identification of an “at-fault” party.

…Sup­pose that au­tonomous ve­hi­cle tech­nolo­gies are re­mark­ably effec­tive at vir­tu­ally elimi­nat­ing minor crashes caused by hu­man er­ror. But it may be that the com­par­a­tively few crashes that do oc­cur usu­ally re­sult in very se­ri­ous in­juries or fatal­ities (e.g., be­cause au­tonomous ve­hi­cles are op­er­at­ing at much higher speeds or den­si­ties). This change in the dis­tri­bu­tion of crashes may af­fect the eco­nomics of in­sur­ing against them. Ac­tu­ar­i­ally, it is much eas­ier for an in­surance com­pany to calcu­late the ex­pected costs of some­what com­mon small crashes than of rarer, much larger events. This may limit the down­ward trend in au­to­mo­bile-in­surance costs that we would oth­er­wise ex­pect.
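The actuarial point above can be made concrete with a small simulation. This is purely illustrative; the crash probabilities, costs, and portfolio size are assumptions, not figures from the report. Two portfolios with identical expected payouts, one built from common minor claims and one from rare severe claims, differ sharply in how predictable the annual total is:

```python
import random

def annual_claims_cost(p_crash, cost_per_crash, n_policies, rng):
    """Total annual payout for a portfolio of identical, independent policies."""
    crashes = sum(1 for _ in range(n_policies) if rng.random() < p_crash)
    return crashes * cost_per_crash

rng = random.Random(0)
n = 10_000  # policies in the portfolio

# Same expected cost per policy ($100/year), very different risk profiles:
# many small claims vs. rare catastrophic ones (assumed numbers).
common_minor = [annual_claims_cost(0.10, 1_000, n, rng) for _ in range(200)]
rare_severe  = [annual_claims_cost(0.001, 100_000, n, rng) for _ in range(200)]

def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# Relative year-to-year spread of total payouts: the rare-severe book is
# roughly an order of magnitude noisier despite the identical expected cost,
# which is why pricing it (and its premiums) is much harder.
print(round(std(common_minor) / mean(common_minor), 3))
print(round(std(rare_severe) / mean(rare_severe), 3))
```

The binomial arithmetic behind this: with the same expected number of dollars paid out, concentrating the risk into fewer, larger claims inflates the coefficient of variation of the annual total, so the insurer needs a larger risk loading or more years of data before premiums stabilize.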

…Suppose that most cars brake automatically when they sense a pedestrian in their path. As more cars with this feature come to be on the road, pedestrians may expect that cars will stop, in the same way that people stick their limbs in elevator doors confident that the door will automatically reopen. The general level of pedestrian care may decline as people become accustomed to this common safety feature. But if there were a few models of cars that did not stop in the same way, a new category of crashes could emerge. In this case, should pedestrians who wrongly assume that a car would automatically stop and are then injured be able to recover? To allow recovery in this instance would seem to undermine incentives for pedestrians to take efficient care. On the other hand, allowing the injured pedestrian to recover may encourage the universal adoption of this safety feature. Since negligence is defined by unreasonableness, the evolving set of shared assumptions about the operation of the roadways (what counts as “reasonable”) will determine liability. Fourth, we think it is not likely that operators of partially or fully autonomous vehicles will be found strictly liable on the theory that driving such vehicles is an ultrahazardous activity. As explained earlier, these technologies will be introduced incrementally and will initially serve merely to aid the driver rather than take full control of the vehicle. This will give the public and courts time to become familiar with the capabilities and limits of the technology. As a result, it seems unlikely that courts will consider its gradual introduction and use to be ultrahazardous. On the other hand, this would not be true if a person attempted to operate a car fully autonomously before the technology adequately matured.
Sup­pose, for ex­am­ple, that a home hob­by­ist put to­gether his own au­tonomous ve­hi­cle and at­tempted to op­er­ate it on pub­lic roads. Vic­tims of any crashes that re­sulted may well be suc­cess­ful in con­vinc­ing a court to find the op­er­a­tor strictly li­able on the grounds that such ac­tivity was ul­tra­haz­ardous.

…Product-liability law can be divided into theories of liability and kinds of defect. Theories of liability include negligence, misrepresentation, warranty, and strict liability. Types of defect include manufacturing defects, design defects, and warning defects. A product-liability lawsuit will involve one or more theories of manufacturer liability attached to a specific allegation of a type of defect. In practice, the legal tests for the theories of liability often overlap and, depending on the jurisdiction, may be identical. … While it is difficult to generalize, automobile (and subsystem) manufacturers may fare well under a negligence standard that uses a cost-benefit analysis that includes crashes avoided from the use of autonomous vehicle technologies. Automakers can argue that the overall benefits from the use of a particular technology outweigh the risks. The number of crashes avoided by the use of these technologies is probably large. …Unfortunately, the socially optimal liability rule is unclear. Permitting the defendant to include the long-run benefits in the cost-benefit analysis may encourage the adoption of technology that can indeed save many lives. On the other hand, it may shield the manufacturer from liability for shorter-run decisions that were inefficiently dangerous. Suppose, for example, that a crash-prevention system operates successfully 70% of the time but that, with additional time and work, it could have been designed to operate successfully 90% of the time. Then suppose that a victim is injured in a crash that would have been prevented had the system worked 90% of the time. Assume that the adoption of the 70% technology is socially desirable but the adoption of the 90% technology would be even more socially desirable.
How should the cost-benefit analysis be conducted? Is the manufacturer permitted to cite the 70% of crashes that were prevented in arguing for the benefits of the technology? Or should the cost-benefit analysis focus on the manufacturer’s failure to design the product to function at 90% effectiveness? If the latter, the manufacturer might not employ the technology, thereby leading to many preventable crashes. In calculating the marginal cost of the 90% technology, should the manufacturer be able to count the lives lost in the delay in implementation as compared to possible release of the 70% technology? …Tortious misrepresentation may play a role in litigation involving crashes that result from autonomous vehicle technologies. If advertising overpromises the benefits of these technologies, consumers may misuse them. Consider the following hypothetical scenario. Suppose that an automaker touts the “autopilot-like” features of its ACC and lane-keeping function. In fact, the technologies are intended to be used by an alert driver supervising their operation. After activating the ACC and lane-keeping function, a consumer assumes that the car is in control and falls asleep. Due to road resurfacing, the lane-keeping function fails, and the automobile leaves the roadway and crashes into a tree. The consumer then sues the automaker for tortious misrepresentation based on the advertising that suggested that the car was able to control itself.
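RAND’s 70%-vs-90% hypothetical reduces to simple arithmetic, which makes the framing dispute vivid. The 1,000-crash annual baseline below is an assumed number for illustration, not a figure from the report:

```python
# Illustrative arithmetic for the 70% vs. 90% negligence hypothetical.
# The baseline crash count is an assumption for the sake of the example.
baseline_crashes = 1_000                   # crashes/year with no system at all
prevented_70 = 0.70 * baseline_crashes     # crashes the shipped system prevents: 700
prevented_90 = 0.90 * baseline_crashes     # crashes the feasible better design would prevent: 900
residual_70 = baseline_crashes - prevented_70   # crashes that still occur: 300
marginal_gap = prevented_90 - prevented_70      # crashes at stake in litigation: 200

# The same system supports opposite narratives: the defendant counts the
# 700 crashes prevented; the plaintiff counts the 200-crash design gap.
print(prevented_70, residual_70, marginal_gap)
```

The liability rule decides which of these two numbers the cost-benefit analysis weighs, and hence whether shipping the 70% system looks like a large safety benefit or an inefficiently dangerous design choice.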

…Finally, it is also possible that auto manufacturers will be sued for failing to incorporate autonomous vehicle technologies in their vehicles. While absence of available safety technology is a common basis for design-defect lawsuits (e.g., Camacho v. Honda Motor Co., 741 P.2d 1240, 1987, overturning summary dismissal of a suit alleging that Honda could easily have added crash bars to its motorcycles, which would have prevented the plaintiff's leg injuries), this theory has met with little success in the automotive field because manufacturers have successfully argued that state tort remedies were preempted by federal regulation (Geier v. American Honda Motor Co., 529 U.S. 861, 2000, finding that the plaintiff's claim that the manufacturer was negligent for failing to include air bags was implicitly preempted by the National Traffic and Motor Vehicle Safety Act). We discuss preemption and the relationship between regulation and tort in Section 4.3.

…Preemp­tion has arisen in the au­to­mo­tive con­text in liti­ga­tion over a man­u­fac­turer’s failure to in­stall air bags. In Geier v. Amer­i­can Honda Mo­tor Co. (2000), the U.S. Supreme Court found that state tort liti­ga­tion over a man­u­fac­turer’s failure to in­stall air bags was pre­empted by the Na­tional Traf­fic and Mo­tor Ve­hi­cle Safety Act (Pub. L. No. 89-563). More speci­fi­cally, the Court found that the Fed­eral Mo­tor Ve­hi­cle Safety Stan­dard (FMVSS) 208, pro­mul­gated by the US DOT, re­quired man­u­fac­tur­ers to equip some but not all of their 1987 ve­hi­cle-year ve­hi­cles with pas­sive re­straints. Be­cause the plain­tiffs’ the­ory that the defen­dants were neg­li­gent un­der state tort law for failing to in­clude air bags was in­con­sis­tent with the ob­jec­tives of this reg­u­la­tion (FMVSS 208), the Court held that the state law­suits were pre­empted. Presently, there has been very lit­tle reg­u­la­tion pro­mul­gated by the US DOT with re­spect to au­tonomous ve­hi­cle tech­nolo­gies. Should the US DOT pro­mul­gate such reg­u­la­tion, it is likely that state tort law claims that were found to be in­con­sis­tent with the ob­jec­tive of the reg­u­la­tion would be held to be pre­empted un­der the anal­y­sis used in Geier. Sub­stan­tial liti­ga­tion might be ex­pected as to whether par­tic­u­lar state-law claims are, in fact, in­con­sis­tent with the ob­jec­tives of the reg­u­la­tion. Re­s­olu­tion of those claims will de­pend on the spe­cific state tort law claims, the spe­cific reg­u­la­tion, and the court’s anal­y­sis of whether they are “in­con­sis­tent.” …Our anal­y­sis nec­es­sar­ily raises a more gen­eral ques­tion: Why should we be con­cerned about li­a­bil­ity is­sues raised by a new tech­nol­ogy? The an­swer is the same as for why we care about tort law at all: that a tort regime must bal­ance eco­nomic in­cen­tives, vic­tim com­pen­sa­tion, and cor­rec­tive jus­tice. 
Any new tech­nol­ogy has the po­ten­tial to change the sets of risks, benefits, and ex­pec­ta­tions that tort law must rec­on­cile. …Congress could con­sider cre­at­ing a com­pre­hen­sive reg­u­la­tory regime to gov­ern the use of these tech­nolo­gies. If it does so, it should also con­sider pre­empt­ing in­con­sis­tent state-court tort reme­dies. This may min­i­mize the num­ber of in­con­sis­tent le­gal regimes that man­u­fac­tur­ers face and sim­plify and speed the in­tro­duc­tion of this tech­nol­ogy. While fed­eral pre­emp­tion has im­por­tant dis­ad­van­tages, it might speed the de­vel­op­ment and uti­liza­tion of this tech­nol­ogy and should be con­sid­ered, if ac­com­panied by a com­pre­hen­sive fed­eral reg­u­la­tory regime.

…This ten­sion pro­duced “a stand­off be­tween airbag pro­po­nents and the au­tomak­ers that re­sulted in con­tentious de­bates, sev­eral court cases, and very few airbags” (Wet­more, 2004, p. 391). In 1984, the US DOT passed a rul­ing re­quiring ve­hi­cles man­u­fac­tured af­ter 1990 to be equipped with some type of pas­sive re­straint sys­tem (e.g., air bags or au­to­matic seat belts) (Wet­more, 2004); in 1991, this reg­u­la­tion was amended to re­quire air bags in par­tic­u­lar in all au­to­mo­biles by 1999 (Pub. L. No. 102-240). The manda­tory perfor­mance stan­dards in the FMVSS fur­ther re­quired air bags to pro­tect an un­belted adult male pas­sen­ger in a head-on, 30 mph crash. Ad­di­tion­ally, by 1990, the situ­a­tion had changed dra­mat­i­cally, and air bags were be­ing in­stalled in mil­lions of cars. Wet­more at­tributes this de­vel­op­ment to three fac­tors: First, tech­nol­ogy had ad­vanced to en­able air-bag de­ploy­ment with high re­li­a­bil­ity; sec­ond, pub­lic at­ti­tude shifted, and safety fea­tures be­came im­por­tant fac­tors for con­sumers; and, third, air bags were no longer be­ing pro­moted as re­place­ments but as sup­ple­ments to seat belts, which re­sulted in a shar­ing of re­spon­si­bil­ity be­tween man­u­fac­tur­ers and pas­sen­gers and less­ened man­u­fac­tur­ers’ po­ten­tial li­a­bil­ity (Wet­more, 2004). While air bags have cer­tainly saved many lives, they have not lived up to origi­nal ex­pec­ta­tions: In 1977, NHTSA es­ti­mated that air bags would save on the or­der of 9,000 lives per year and based its reg­u­la­tions on these ex­pec­ta­tions (Thomp­son, Segui-Gomez, and Gra­ham, 2002). To­day, by con­trast, NHTSA calcu­lates that air bags saved 8,369 lives in the 14 years be­tween 1987 and 2001 (Glass­bren­ner, un­dated). Si­mul­ta­neously, how­ever, it has be­come ev­i­dent that air bags pose a risk to many pas­sen­gers, par­tic­u­larly smaller pas­sen­gers, such as women of small stature, the el­derly, and chil­dren. 
NHTSA (2008a) de­ter­mined that 291 deaths were caused by air bags be­tween 1990 and July 2008, pri­mar­ily due to the ex­treme force that is nec­es­sary to meet the perfor­mance stan­dard of pro­tect­ing the un­belted adult male pas­sen­ger. Hous­ton and Richard­son (2000) de­scribe the strong re­ac­tion to these losses and a back­lash against air bags, de­spite their benefits. The un­in­tended con­se­quences of air bags have led to tech­nol­ogy de­vel­op­ments and changes to stan­dards and reg­u­la­tions. Between 1997 and 2000, NHTSA de­vel­oped a num­ber of in­terim solu­tions de­signed to re­duce the risks of air bags, in­clud­ing on-off switches and de­ploy­ment with less force (Ho, 2006). Si­mul­ta­neously, safer air bags, called ad­vanced air bags, were de­vel­oped that de­ploy with a force tai­lored to the oc­cu­pant by tak­ing into ac­count the seat po­si­tion, belt us­age, oc­cu­pant weight, and other fac­tors. In 2000, NHTSA man­dated that the in­tro­duc­tion of these ad­vanced air bags be­gin in 2003 and that, by 2006, ev­ery new pas­sen­ger ve­hi­cle would in­clude these safety mea­sures (NHTSA, 2000). What les­sons does this ex­pe­rience offer for reg­u­la­tion of au­tonomous ve­hi­cle tech­nolo­gies? We sug­gest that mod­esty and flex­i­bil­ity are nec­es­sary. The early air-bag reg­u­la­tors en­vi­sioned air bags as be­ing a sub­sti­tute for seat belts be­cause the rates of seat-belt us­age were so low and ap­peared in­tractable. Few an­ti­ci­pated that seat-belt us­age would rise as much over time as it has and that air bags would even­tu­ally be used pri­mar­ily as a sup­ple­ment rather than a sub­sti­tute for seat belts. Similarly un­ex­pected de­vel­op­ments are likely to arise in the con­text of au­tonomous ve­hi­cle tech­nolo­gies. 
In 2006, for ex­am­ple, Honda in­tro­duced its Ac­cord model in the UK with a com­bined lane-keep­ing and ACC sys­tem that al­lows the ve­hi­cle to drive it­self un­der the driver’s watch; this com­bi­na­tion of fea­tures has yet to be in­tro­duced in the United States (Miller, 2006). Ho (2006, p. 27) ob­serves a gen­eral trend that “the U.S. mar­ket trails Europe, and the Euro­pean mar­ket trails Ja­pan by 2 to 3 years.” What is the ex­tent of these differ­ences? What as­pects of the li­a­bil­ity and reg­u­la­tory rules in those coun­tries have en­abled ac­cel­er­ated de­ploy­ment? What other fac­tors are at play (e.g., differ­ences in con­sumers’ sen­si­tivity to price)?
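The forecast-vs-outcome gap in those air-bag figures is worth annualizing; a quick arithmetic check using only the numbers quoted above:

```python
# Checking the forecast-vs-outcome figures quoted above: NHTSA's 1977
# forecast of ~9,000 lives saved per year versus the 8,369 lives it
# credits air bags with saving over 1987-2001.
predicted_per_year = 9_000
actual_total = 8_369
years = 2001 - 1987          # the "14 years" in the excerpt
actual_per_year = actual_total / years

print(round(actual_per_year))                          # ≈ 598 lives/year
print(round(predicted_per_year / actual_per_year, 1))  # forecast ~15x the realized rate
```

So the realized annual savings ran roughly 600 lives per year against a 9,000-per-year forecast, an overestimate of about 15x, which underlines the "modesty and flexibility" lesson the excerpt draws for regulators.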

“New Tech­nol­ogy—Old Law: Au­tonomous Ve­hi­cles and Cal­ifor­nia’s In­surance Frame­work”, Peter­son 2012:

This Ar­ti­cle will ad­dress this is­sue and pro­pose ways in which auto in­surance might change to ac­com­mo­date the use of AVs. Part I briefly re­views the back­ground of in­surance reg­u­la­tion na­tion­ally and in Cal­ifor­nia. Part II dis­cusses gen­eral in­surance and li­a­bil­ity is­sues re­lated to AVs. Part III dis­cusses some challenges that in­sur­ers and reg­u­la­tors may face when set­ting rates for AVs, both gen­er­ally and un­der Cal­ifor­nia’s more idiosyn­cratic reg­u­la­tory struc­ture. Part IV dis­cusses challenges faced by Cal­ifor­nia in­sur­ers who may want to re­duce rates in a timely way when tech­nolog­i­cal im­prove­ments rapidly re­duce risk.

…When working within the context of a file-and-use or use-and-file environment, AVs will present only modest challenges to an insurer that wants to write these policies. The main challenge will arise from the fact that the policy must be rated for a new technology that may have an inadequate base of experience for an actuary to estimate future losses.21 "Prior approval" states, like California, require that automobile rates be approved prior to their use in the marketplace.22 These states rely more on regulation than on competition to modulate insurance rates.23 In California, automobile insurance rates are approved in a two-step process. The first step is the creation of a "rate plan."24 The rate plan considers the insurer's entire book of business in the relevant line of insurance and asks the question: How much total premium must the insurer collect in order to cover the projected risks, overhead and permitted profit for that line?25 The insurer then creates a "class plan." The class plan asks the question: How should different policyholders' premiums be adjusted up or down based on the risks presented by different groups or classes of policyholders?26 Among other factors, the Department of Insurance requires that the rating factors comply with California law and be justified by the loss experience for the group.27 Rating a new technology with an unproven track record may include a considerable amount of guesswork. …California is the largest insurance market in the United States, and it is the sixth largest among the countries of the world.28 Cars are culture in this most populous state. There are far more insured automobiles in California than in any other state.29
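The two-step rate-plan/class-plan process can be sketched as a toy calculation. All dollar amounts, class names, and risk factors below are invented for illustration; nothing is taken from the Department of Insurance:

```python
# Toy sketch of California's two-step "prior approval" rate-setting described
# above: a rate plan fixes the total premium for the line, then a class plan
# spreads it across policyholder classes. Every figure here is invented.

# Step 1: rate plan -- total premium = projected risks + overhead + permitted profit.
projected_losses = 80_000_000
overhead = 15_000_000
permitted_profit = 5_000_000
total_premium = projected_losses + overhead + permitted_profit

# Step 2: class plan -- (policy count, relative risk factor) per hypothetical class.
classes = {
    "low_risk":  (60_000, 0.8),
    "average":   (30_000, 1.0),
    "high_risk": (10_000, 2.2),
}

# Normalize so the per-class premiums, summed over all policies,
# recover the total premium approved in the rate plan.
weight = sum(n * f for n, f in classes.values())
per_policy = {name: total_premium * f / weight for name, (_, f) in classes.items()}

for name, premium in per_policy.items():
    print(name, round(premium))
```

The point of the sketch is the structure: the rate plan caps aggregate premium and the class plan only redistributes it, which is why rating an unproven technology reduces to guessing its relative risk factor against classes with real loss experience.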

…Although adopted by the barest ma­jor­ity, [Cal­ifor­nia’s] Propo­si­tion 103 [see pre­vi­ous dis­cus­sion of its 3-part re­quire­ment for rat­ing in­surance pre­miums] may be amended by the leg­is­la­ture only by a two-thirds vote, and then only if the leg­is­la­tion “fur­ther[s] [the] pur­poses” of Propo­si­tion 103.68 Thus, Propo­si­tion 103 and the reg­u­la­tions adopted by the Depart­ment of In­surance are the ma­trix in which most (but not all) in­surance is sold and reg­u­lated in Cal­ifor­nia.69 …The most sen­si­ble ap­proach to this dilemma, at least with re­spect to AVs, would be to abol­ish or sub­stan­tially re-or­der the three manda­tory rat­ing fac­tors. How­ever, this is more eas­ily said than done. As noted above, amend­ing Propo­si­tion 103 re­quires a two-thirds vote of the leg­is­la­ture.160 More­over, sec­tion 8(b) of the Propo­si­tion pro­vides: “The pro­vi­sions of this act shall not be amended by the Leg­is­la­ture ex­cept to fur­ther its pur­poses.”161 Both of these re­quire­ments can be formidable hur­dles. Per­sis­tency dis­counts serve as an ex­am­ple. Most are aware that their in­surer dis­counts their rates if they have been with the in­surer for a pe­riod of time.162 This is called the “per­sis­tency dis­count.” The dis­count is usu­ally jus­tified on the ba­sis that per­sis­tency saves the in­surer the pro­duc­ing ex­penses as­so­ci­ated with find­ing a new in­sured. If one wants to change in­sur­ers, Propo­si­tion 103 does not per­mit the sub­se­quent in­surer to match the per­sis­tency dis­count offered by the in­sured’s cur­rent in­surer.163 Thus, the sec­ond in­surer could not com­pete by offer­ing the same dis­count. Chang­ing in­sur­ers, then, was some­what like a tax­able event. The “tax” is the loss of the per­sis­tency dis­count when pur­chas­ing the new policy. 
The Cal­ifor­nia leg­is­la­ture con­cluded that this both un­der­mined com­pe­ti­tion and drove up the cost of in­surance by dis­cour­ag­ing the abil­ity to shop for lower rates. …De­spite these leg­is­la­tive find­ings, the Court of Ap­peal held the amend­ment in­valid be­cause, in the Court’s view, it did not fur­ther the pur­poses of Propo­si­tion 103.165 The Court also held that Propo­si­tion 103 vests only the In­surance Com­mis­sioner with the power to set op­tional rat­ing fac­tors.166 Thus, the leg­is­la­ture, even by a su­per ma­jor­ity, may not be au­tho­rized to adopt rat­ing fac­tors for auto in­surance. Fol­low­ing this defeat in the courts, pro­mot­ers of “portable per­sis­tency” qual­ified a bal­lot ini­ti­a­tive to amend this as­pect of Propo­si­tion 103. With a vote of 51.9% to 48.1%, the ini­ti­a­tive failed in the June 8, 2010 elec­tion.167

…The State of Nevada recently adopted regulations for licensing the testing of AVs in the state. The regulations would require insurance in the minimum amounts required for other cars "for the payment of tort liabilities arising from the maintenance or use of the motor vehicle."73 The regulation, however, does not suggest how the tort liability may arise. If there is no fault on the part of the operator or owner, then liability may arise, if at all, only for the manufacturer or supplier. Manufacturers and suppliers are not "insureds" under the standard automobile policy, at least so far. Thus, for the reasons stated above, owners, manufacturers and suppliers may fall outside the coverage of the policy.

…One possible approach would be to invoke the various doctrines of products liability law. This would attach the major liability to sellers and manufacturers of the vehicle. However, it is doubtful that this is an acceptable approach for several reasons. For example, while some accidents are catastrophic, fortunately most accidents cause only modest damages. By contrast, products liability lawsuits tend to be complex and expensive. Indeed, they may require the translation of hundreds or thousands of engineering documents, perhaps written in Japanese, Chinese or Korean…See In re Puerto Rico Electric Power Authority, 687 F.2d 501, 505 (1st Cir. 1982) (stating each party to bear translation costs of documents requested by it but cost possibly taxable to prevailing party). Translation costs of the Japanese documents were in the range of $250,000, and translation costs of additional Spanish documents may have exceeded that amount.

…Commercial insurers of manufacturers and suppliers are not encumbered with Proposition 103's unique automobile provisions,197 and therefore need not offer a GDD, nor need they conform to the ranking of the mandatory rating factors. To the extent that the risks of AVs are transferred to them, the insurance burden passed to consumers in the price of the car can reflect the actual, and presumably lower, risk presented by AVs. As noted above, however, for practical reasons some rating factors, such as annual miles driven and territory, cannot properly be reflected in the automobile price. Moving from the awkward and arbitrary results mandated by Proposition 103's rating factors to a commercial insurance setting that cannot properly reflect some other rating factors is also an awkward trade-off. At best, it may be a choice of the least worst. Another viable solution might be to amend California Insurance Code section 660(a) to exclude from the definition of "policy" those policies covering liability for AVs (at least when operated in autonomous mode). Since Proposition 103 incorporates section 660(a), this would likely require a two-thirds vote of the legislature and the amendment would have to "further the purposes" of Proposition 103. Assuming a two-thirds vote could be mustered, the issue would then be whether the amendment furthers the purposes of the Proposition. To the extent that liability moves from fault-based driving to defect-based products liability, the purposes underlying the mandatory rating factors and the GDD simply cannot be accomplished. Manufacturers will pass these costs through to automobile buyers free of the Proposition's restraints.
Since the pur­poses of the Propo­si­tion, at least with re­spect to li­a­bil­ity cov­er­age,199 sim­ply can­not be ac­com­plished when deal­ing with self-driv­ing cars, amend­ing sec­tion 660(a) would not frus­trate the pur­poses of Propo­si­tion 103.

…Filing a "complete rate application with the commissioner" is a substantial impediment to reducing rates. A complete rate application is an expensive, ponderous and time-consuming process. A typical filing may take three to five months before approval. Some applications have even been delayed for a year.205 In 2009, when insurers filed many new rate plans in order to comply with the new territorial rating regulations, delays among the top twenty private passenger auto insurers ranged from a low of 54 days (Viking) to a high of 558 days (USAA and USAA Casualty). Many took over 300 days (e.g., State Farm Mutual, Farmers Insurance Exchange, Progressive Choice).206 …In addition, once an application to lower rates is filed, the Commissioner, consumer groups, and others can intervene and ask that the rates be lowered even further.207 Thus, an application to lower a rate by 6% may invite pressure to lower it even further.208 If they "substantially contributed, as a whole" to the decision, a consumer group can also bill the insurance company for its legal, advocacy, and witness fees.209

…Unless ways can be found to conform Proposition 103 to this new reality, insurance for AVs is likely to migrate to a statutory and regulatory environment untrammeled by Proposition 103: commercial policies carried by manufacturers and suppliers. This migration presents its own set of problems. While the safety of AVs could be more fairly rated, other important rating factors, such as annual miles driven and territory, must be compromised. Whether this migration occurs will also depend on how liability rules do or do not adjust to a world in which people will nevertheless suffer injuries from AVs, but in which it is unlikely our present fault rules will adequately address compensation. If concepts of non-delegable duty, agency, or strict liability attach initial liability to owners of faulty cars with faultless drivers, the insurance burden will first be filtered through automobile insurance governed by Proposition 103. These insurers will then pass the losses up the distribution line to the insurers of suppliers and manufacturers that are not governed by Proposition 103. Manufacturers and suppliers will then pass the insurance cost back to AV owners in the cost of the vehicle. The insurance load reflected in the price of the car will pass through to automobile owners free of any of the restrictions imposed by Proposition 103. There will be no GDD, such as it is, no mandatory rating factors, and, depending on where the suppliers' or manufacturers' insurers are located, more flexible rating. One may ask: What is gained by this merry-go-round?

“‘Look Ma, No Hands!’: Wrin­kles and Wrecks in the Age of Au­tonomous Ve­hi­cles”, Garza 2012

The benefits of these systems cannot be overestimated given that one-third of drivers admit to having fallen asleep at the wheel within the previous thirty days.31 …If the driver fails to react in time, it applies 40% of the full braking power to reduce the severity of the collision.39 In the most advanced version, the CMBS performs all of the functions described above, and it will also stop the car automatically to avoid a collision when traveling under ten miles-per-hour.40 Car companies are hesitant to push the automatic braking threshold too far out of fear that "fully 'automatic' braking systems will shift the responsibility of avoiding an accident from the vehicle's driver to the vehicle's manufacturer."41…See Larry Carley, Active Safety Technology: Adaptive Cruise Control, Lane Departure Warning & Collision Mitigation Braking, IMPORT CAR (June 16, 2009), http://www.import-car.com/Article/58867/active_safety_technology_adaptive_cruise_control_lane_departure_warning__collision_mitigation_braking.aspx

…Automobile products liability cases are typically divided into two categories: "(1) accidents caused by automotive defects, and (2) aggravated injuries caused by a vehicle's failure to be sufficiently 'crashworthy' to protect its occupants in an accident."79 …For example, a car suffers from a design defect when a malfunction in the steering wheel causes a crash.81 Additionally, plaintiffs have alleged and prevailed on manufacturing-defect claims in cases where "unintended, sudden and uncontrollable acceleration" causes an accident.82 In such cases, plaintiffs have been able to recover under a "malfunction theory."83 Under a malfunction theory, plaintiffs use a "res ipsa loquitur like inference to infer defectiveness in strict liability where there was no independent proof of a defect in the product."84 Plaintiffs have also prevailed where design defects cause injury.85 For example, there was a proliferation of litigation in the 1970s and 1980s as a result of vehicles that were designed with a high center of gravity, which increased their propensity to roll over.86 Additionally, many design-defect cases arose in response to faulty transmissions that could inadvertently slip into gear, causing crashes and occupants to be run over in some cases.87
The two primary tests that courts use to assess the defectiveness of a product's design are the consumer-expectations test and the risk-utility test.88 The consumer-expectations test focuses on whether "the danger posed by the design is greater than an ordinary consumer would expect when using the product in an intended or reasonably foreseeable manner."89 …Thus, while an ordinary consumer can have expectations that a car will not explode at a stoplight or catch fire in a two-mile-per-hour collision, they may not be able to have expectations about how a truck should handle after striking a five- or six-inch rock at thirty-five miles-per-hour.92 Perhaps because the consumer-expectations test is difficult to apply to complex products, and we live in a world where technological growth increases complexity, the risk-utility test has become the dominant test in design-defect cases.93 …Litigation can also arise where a plaintiff alleges that a vehicle is not sufficiently "crashworthy."104 Crashworthiness claims are a type of design-defect claim.105

…Since their advent and incorporation, seat belts have resulted in litigation, much of which has involved crashworthiness claims.136 In Jackson v. General Motors Corp., for example, the plaintiff alleged that as a result of a defectively designed seat belt, his injuries were enhanced.137 The defendant manufacturer argued that the complexity of seat belts foreclosed any consumer expectation,138 but the Tennessee Supreme Court noted that seat belts are "familiar products for which consumers' expectations of safety have had an opportunity to develop," and permitted the plaintiff to recover under the consumer-expectations test.139 Although manufacturers have been sued where seat belts render a car insufficiently crashworthy, as in cases where they fail to perform as intended or enhance injury, the incorporation of seat belts has reduced liability as well.140 This reduction comes in the form of the "seat belt defense."141 The "seat belt defense" allows a defendant to present evidence about an occupant's nonuse of a seat belt to mitigate damages or to defend against an enhanced-injury claim.142 Because seat belts are capable of reducing the number of lives lost and the overall severity of injuries sustained in crashes, it is argued that nonuse should protect a manufacturer from some claims.143 Although the majority rule is to prevent the admission of such evidence in enhanced-injury litigation, there is a growing trend toward admission.144

…Since their incorporation, consumers have sued manufacturers for defective cruise control systems that lead to injury.171 Because of the complexity of cruise control technology, courts may not allow a plaintiff to use the consumer-expectations test.172 Despite the complexity of the technology, other courts allow plaintiffs to establish a defect using either the risk-utility test or the consumer-expectations test.173

…Under the consumer-expectations test, manufacturers will likely argue, as they historically have, that OAV technology is too complicated for the average consumer to have appropriate expectations about its capabilities.182 Commentators have stated that "consumers may have unrealistic expectations about the capabilities of these technologies . . . . Technologies that are engineered to assist the driver may be overly relied on to replace the need for independent vigilance on the part of the vehicle operator."183 Plaintiffs will argue that, while the workings of the technology are concededly complex, the overall concept of autonomous driving is not.184 Like the car exploding at a stoplight or the car that catches fire in a two-mile-per-hour collision, the average consumer would expect autonomous vehicles to drive themselves without incident.185 This means that components that are meant to keep the car within a lane will do just that, and others will stop the vehicle at traffic lights.186 Where incidents occur, OAVs will not have performed as the average consumer would expect.187 …Plaintiffs who purchase OAVs at the cusp of availability, and attempt to prove defect under the consumer-expectations test, are likely to face an uphill battle.194 But the unavailability of the consumer-expectations test will not be a significant detriment, as plaintiffs can fall back on the risk-utility test.195 And as OAVs are increasingly incorporated, and users become more familiar with their capabilities, the consumer-expectations test will become more accessible to plaintiffs.196 Given the modern trend, plaintiffs are likely to face the risk-utility test.197

…Additionally, the extent to which injuries are "enhanced" by OAVs will be debated.228 Because the majority of drivers fail to fully apply their brakes prior to a collision,229 where an OAV only partially applies brakes, or fails to apply brakes at all, manufacturers and plaintiffs will disagree about the extent of enhancement.230 Manufacturers will argue that, absent the OAV, the result would have been the same or worse; thus, the extent to which the injuries of the plaintiff are "enhanced" is minimal.231 Plaintiffs will argue that, just like the presentation of crash statistics in a risk-utility analysis, this is a false choice.232 Like no-fire air bag claims, plaintiffs will contend that but for the malfunction of the OAV, their injuries would have been greatly reduced or nonexistent.233 As a result, any injuries sustained above that threshold should serve as a basis for recovery.234

…In products liability cases the "use of expert witnesses has grown in both importance and expense."301 Because of the extraordinary cost of experts in products liability litigation, many plaintiffs are turned away because, even if they were to recover, the prospective award would not cover the expense of litigating the claim.302

…Although complex, OAVs function much like the cruise control that exists in modern cars. As we have seen with seat belts, air bags, and cruise control, manufacturers have always been hesitant to adopt safety technologies. Despite concerns, products liability law is capable of handling OAVs just as it has these past technologies. While the novelty and complexity of OAVs are likely to preclude plaintiffs from proving defect under the consumer-expectations test, as implementation increases this likelihood may decrease. Under a risk-utility analysis, manufacturers will stress the extraordinary safety benefits of OAVs, while consumers will allege that designs can be improved. In the end, OAV adoption will benefit manufacturers. Although liability will fall on manufacturers when vehicles fail, decreased incidence and severity of crashes will result in a net decrease in liability. Further, the combination of LDWS cameras and EDRs will drastically reduce the cost of litigation. By reducing reliance on experts for complex causation determinations, both manufacturers and plaintiffs will benefit. In the end, obstacles to OAV implementation are more likely to be psychological than legal, and the sooner that courts, manufacturers, and the motoring public prepare to confront these issues, the sooner lives can be saved.

“Self-driv­ing cars can nav­i­gate the road, but can they nav­i­gate the law? Google’s lob­by­ing hard for its self-driv­ing tech­nol­ogy, but some fea­tures may never be le­gal”, The Verge 14 De­cem­ber 2012

Google says that on a given day, they have a dozen au­tonomous cars on the road. This Au­gust, they passed 300,000 driver-hours. In Spain this sum­mer, Volvo drove a con­voy of three cars through 200 kilo­me­ters of desert high­way with just one driver and a po­lice es­cort.

…Bryant Walker Smith teaches a class on au­tonomous ve­hi­cles at Stan­ford Law School. At a work­shop this sum­mer, he put for­ward this thought ex­per­i­ment: the year is 2020, and a num­ber of com­pa­nies offer “ad­vanced driver as­sis­tance sys­tems” with their high-end model. Over 100,000 units have been sold. The owner’s man­ual states that the driver must re­main alert at all times, but one night a driver—we’ll call him “Paul”—falls asleep while driv­ing over a foggy bridge. The car tries to rouse him with alarms and vibra­tions but he’s a deep sleeper, so the car turns on the haz­ard lights and pulls over to the side of the road where an­other driver (let’s say Julie) rear-ends him. He’s in­jured, an­gry, and prone to liti­ga­tion. So is Julie. That would be tricky enough by it­self, but then Smith starts lay­er­ing on com­pli­ca­tions. Another model of auto-driver would have driven to the end of the bridge be­fore pul­ling over. If Paul had up­dated his soft­ware, it would have braced his seat­belt for the crash, miti­gat­ing his in­juries, but he didn’t. The com­pany could have pushed the up­date au­to­mat­i­cally, but man­age­ment chose not to. Now, Smith asks the work­shop, who gets sued? Or for a shorter list, who doesn’t?

…The financial stakes are high. According to the Insurance Research Council, auto liability claims paid out roughly $215 for each insured car, between bodily injury and property damage claims. With 250 million cars on the road, that’s $54 billion a year in liability. If even a tiny portion of those lawsuits is directed towards technologists, the business would become unprofitable fast.
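As a quick sanity check on the article’s arithmetic (a sketch using only the two figures quoted above—$215 per insured car and 250 million cars):

```python
# Check that the quoted per-car liability figure implies the quoted total.
per_car_usd = 215            # average liability claims paid per insured car per year
cars_on_road = 250_000_000   # the article's estimate of cars on US roads

total_usd = per_car_usd * cars_on_road
print(f"${total_usd / 1e9:.2f} billion/year")  # $53.75 billion/year, which the article rounds to $54 billion
```

So the headline number is internally consistent, give or take rounding.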

…Changing the laws in Europe would take a replay of the internationally ratified Vienna Convention (passed in 1968) as well as pushing through a hodgepodge of national and regional laws. As Google proved, it’s not impossible, but it leaves SARTRE facing an unusually tricky adoption problem. Lawmakers won’t care about the project unless they think consumers really want it, but it’s hard to get consumers excited about a product that doesn’t exist yet. Projects like this usually rely on a core of early adopters to demonstrate their usefulness—a hard enough task, as most startups can tell you—but in this case, SARTRE has to bring auto regulators along for the ride. Optimistically, Volvo told us they expect the technology to be ready “towards the end of this decade,” but that may depend entirely on how quickly the law moves. The less optimistic prediction is that it never arrives at all. Steve Shladover is the program manager of mobility at California’s PATH program, where they’ve been trying to make convoy technology happen for 25 years, lured by the prospect of fitting three times as many cars on the freeway. They were showing off a working version as early as 1997 (powered by a single Pentium processor), before falling into the same gap between prototype and final product. “It’s a solvable problem once people can see the benefits,” he told The Verge, “but I think a lot of the current activity is wildly optimistic in terms of what can be achieved.” When I asked him when we’d see a self-driving car, Shladover told me what he says at the many auto conferences he’s been to: “I don’t expect to see the fully-automated, autonomous vehicle out on the road in the lifetime of anyone in this room.”

…Many of Google’s planned features may simply never be legal. One difficult feature is the “come pick me up” button that Larry Page has pushed as a solution to parking congestion. Instead of wasting energy and space on urban parking lots, why not have cars drop us off and then drive themselves to park somewhere more remote, like an automated valet? It’s a genuinely good idea, and one Google seems passionate about, but it’s extremely difficult to square with most vehicle codes. The Geneva Convention on Road Traffic (1949) requires that drivers “shall at all times be able to control their vehicles,” and provisions against reckless driving usually require “the conscious and intentional operation of a motor vehicle.” Some of that is simple semantics, but other concerns are harder to dismiss. After a crash, drivers are legally obligated to stop and help the injured—a difficult task if there’s no one in the car. As a result, most experts predict drivers will be legally required to have a person in the car at all times, ready to take over if the automatic system fails. If they’re right, the self-parking car may never be legal.

“Automated Vehicles are Probably Legal in the United States”, Bryant Walker Smith 2012

The short answer is that the computer direction of a motor vehicle’s steering, braking, and accelerating without real-time human input is probably legal….The paper’s largely descriptive analysis, which begins with the principle that everything is permitted unless prohibited, covers three key legal regimes: the 1949 Geneva Convention on Road Traffic, regulations enacted by the National Highway Traffic Safety Administration (NHTSA), and the vehicle codes of all fifty US states.

The Geneva Convention, to which the United States is a party, probably does not prohibit automated driving. The treaty promotes road safety by establishing uniform rules, one of which requires every vehicle or combination thereof to have a driver who is “at all times … able to control” it. However, this requirement is likely satisfied if a human is able to intervene in the automated vehicle’s operation.

NHTSA’s regulations, which include the Federal Motor Vehicle Safety Standards to which new vehicles must be certified, do not generally prohibit or uniquely burden automated vehicles, with the possible exception of one rule regarding emergency flashers. State vehicle codes probably do not prohibit—but may complicate—automated driving. These codes assume the presence of licensed human drivers who are able to exercise human judgment, and particular rules may functionally require that presence. New York somewhat uniquely directs a driver to keep one hand on the wheel at all times. In addition, far more common rules mandating reasonable, prudent, practicable, and safe driving have uncertain application to automated vehicles and their users. Following distance requirements may also restrict the lawful operation of tightly spaced vehicle platoons. Many of these issues arise even in the three states that expressly regulate automated vehicles.

…This paper does not consider how the rules of tort could or should apply to automated vehicles—that is, the extent to which tort liability might shift upstream to companies responsible for the design, manufacture, sale, operation, or provision of data or other services to an automated vehicle.

…Because of the broad way in which the term and others like it are defined, an automated vehicle probably has a human “driver.” Obligations imposed on that person may limit the independence with which the vehicle may lawfully operate. In addition, the automated vehicle itself must meet numerous requirements, some of which may also complicate its operation. Although three states have expressly established the legality of automated vehicles under certain conditions, their respective laws do not resolve many of the questions raised in this section.

…A brief but important aside: To varying degrees, states impose criminal or quasi-criminal liability on owners who permit others to drive their vehicles. In Washington, “[b]oth a person operating a vehicle with the express or implied permission of the owner and the owner of the vehicle are responsible for any act or omission that is declared unlawful in this chapter. The primary responsibility is the owner’s.” Some states permit an inference that the owner of a vehicle was its operator for certain offenses; Wisconsin provides what is by far the most detailed statutory set of rebuttable presumptions. Many others punish owners who knowingly permit their vehicles to be driven unlawfully. Although these owners are not drivers, they are assumed to exercise some judgment or control with respect to those drivers—an instance of vicarious liability that suggests an owner of an automated vehicle might be liable for merely permitting its automated operation.

…On the human side, physical presence would likely continue to provide a proxy for or presumption of driving. In other words, an individual who is physically positioned to provide real-time input to a motor vehicle may well be treated as its driver. This is particularly likely at levels of automation that involve human input for certain portions of a trip. In addition, an individual who starts or dispatches an automated vehicle, who initiates the automated operation of that vehicle, or who specifies certain parameters of operation probably qualifies as a driver under existing law. That individual may use some device—anything from a physical key to the click of a mouse to the sound of her voice—to activate the vehicle by herself. She may likewise deliberately request that the vehicle assume the active driving task. And she may set the vehicle’s maximum speed or level of assertiveness. This working definition is unclear in the same ways that existing law is likely to be unclear. Relevant acts might occur at any level of the primary driving task, from a decision to take a particular trip to a decision to exceed any speed limit by ten miles per hour. A tactical decision like speeding is closely connected with the consequences—whether a moving violation or an injury—that may result. But treating an individual who dispatches her fully automated vehicle as the driver for the entirety of the trip could attenuate the relationship between legal responsibility and legal fault. Nonetheless, strict liability of this sort is accepted within tort law and present, however controversially, in US criminal law.

On the corporate side, a firm that designs or supplies a vehicle’s automated functionality or that provides data or other digital services might qualify as a driver under existing law. The key element, as provided in the working definition, may be the lack of a human intermediary: A human who provides some input may still seem a better fit for a human-centered vehicle code than a company with other relevant legal exposure. However, as noted above, public outrage is another element that may motivate new uses of existing laws.

…The mechanism by which someone other than a human would obtain a driving license is unclear. For example, some companies may possess great vision, but “a test of the applicant’s eyesight” may nonetheless be difficult. And while General Motors may (or may not) meet a state’s minimum age requirement, Google would not. [See Google, Google’s mission is to organize the world’s information and make it universally accessible and useful, www.google.com/intl/en/about/company/. In some states, Google might be allowed to drive itself to school. See, e.g., Nev. Rev. Stat. § 483.270; Nev. Admin. Code § 483.200.]

And people say lawyers have no sense of humor.