blufive: (Default)
[personal profile] blufive
At Eastercon, one of the few items I managed to get to, between wrangling offspring, was the "Ethics of AI" panel.

It was an interesting item, if a little "bitty" – I get the impression that there are so many unresolved issues that a single hour’s discussion couldn’t devote any significant period to any of them, so it mostly just bounced from one issue to the next. However, I was struck by how many of the issues are applicable, right now, to "dumb" software, never mind anything approaching an AI.

One of the topics (briefly) discussed was the issue of legal liability for the actions of a piece of software. I mentioned the very common software licence clause denying all liability for anything a program does. [livejournal.com profile] majorclanger quickly pointed out that such clauses are unlikely to survive any significant contact with the UK legal system (I don't recall the details he gave of the act in question – something about unenforceable/unreasonable contracts?). There are presumably similar laws in other jurisdictions.

In some ways (i.e. as a user of software) I think that's a good thing. If a company releases software that does damage somewhere, then there should be consequences.

On the other hand, as a professional programmer, I'm a little more uneasy. IIRC, one of Alan Turing's great contributions to computer science was a proof (the halting problem) that there can be no general method for determining exactly what an arbitrary lump of software is going to do before you run it. For trivial programs, you can make deductions via human inspection, but that fails utterly for even relatively small lumps of code.
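
(The diagonal argument behind that proof can be sketched in a few lines of hypothetical Python. The `halts` oracle below is, by construction, an assumption that cannot actually be implemented; that impossibility is the whole point.)

```python
# A sketch (not from the original post) of the halting-problem
# diagonal argument, written as hypothetical Python.

def halts(program, arg):
    """Hypothetical oracle: returns True iff program(arg) terminates.
    Turing's proof shows no such total function can exist."""
    raise NotImplementedError("cannot exist for arbitrary programs")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on itself.
    if halts(program, program):
        while True:        # oracle said "halts", so loop forever
            pass
    return "halted"        # oracle said "loops", so halt immediately

# paradox(paradox) contradicts any answer halts() could give,
# so halts() cannot be implemented in general.
```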

For any real-world-useful software, it's basically impossible to prove that it is bug free. With care, you can probably assert that it probably has no major bugs. For huge software projects (say, an operating system[1]) even getting that far can require carefully-co-ordinated person-centuries or person-millennia of effort, backed up by even larger quantities of automated computational grunt work.

(Things get murkier still if the software in question has eleventy-billion little config switches that the user can fiddle with, some of which are labelled "if you get this wrong, very bad things will happen")

Surely there has to be some sort of cut-off, where a software company can say "look, we did everything reasonably possible to ensure that the software was good, we can’t be held liable for a one-in-a-trillion bug that only kicks in when you make a typo at 12:42pm on a Tuesday in March when the wind is blowing from the south-east"? There are industry standards and quality standards and acceptance testing and so on. Presumably some of those things are actually recognised in law as a defence for the software producer?

So, how many liability issues have actually made it to court? Certainly in my professional experience, screw-ups with major real-world consequences have mostly been resolved via negotiated financial settlements. Has anyone ever tried to seriously lean on a "no liability" licence clause, and if so, what happened?

[1] Scientific American once printed an article (probably about a decade or so back) which argued, totally seriously and very persuasively (yeah, I'm biased), that Windows 2000 was one of the most complex artifacts ever built. Yes, they included things like airliners and moon rockets. Big software is complicated.

Date: 2012-04-14 21:09 (UTC)
From: [identity profile] eggwhite.livejournal.com
Heh... I avoided that panel because a) I was still too unbrained at that time of the morning and b) it probably would have just annoyed me. Then again, having spent four years working in AI R&D, I have a very different idea of what AI is from most SFnal representations.

Most of what gets presented as AI in SF really ought to just drop the "A" bit, as it's blatantly just intelligence.

Edit: Oh, and I thoroughly second the "big software is complicated" thing.
Edited Date: 2012-04-14 21:11 (UTC)

Date: 2012-04-15 15:02 (UTC)
From: [identity profile] blufive.livejournal.com
The panel of three was made up of (IIRC) an actual AI researcher, a tech-literate barrister, and an SF writer. So, some of the more blatant insanities got torn to bits PDQ.

Date: 2012-04-15 21:00 (UTC)
From: [identity profile] eggwhite.livejournal.com
Grand... always good to hear. My "I was too unbrained at that time of the morning" still holds up, though!

Date: 2012-04-14 21:34 (UTC)
From: [identity profile] nelc.livejournal.com
To my mind, if — or rather, when — one of these cases makes it to court, the defendant will have to walk the jury through how complicated programming is, especially Turing's proof. Then show what they believe reasonable precautions to be, then that they took those reasonable precautions.

Assuming that all that is competently done, the plaintiff's best approach should be testing the definition of reasonable precautions, and whether the company took those precautions. Who wins the case will help define where the reasonable line is. Sucks to be the guy who has to defend against that; the best way to avoid being that guy is to make sure your software safety auditing is better than everyone else's.

Date: 2012-04-15 05:21 (UTC)
From: [identity profile] alex-holden.livejournal.com
What counts as a reasonable precaution depends on the application and the likely consequences of its failure, e.g. I would expect the software that controls a car's anti-lock brake system to be tested and verified to a greater degree of confidence than a web browser.

Date: 2012-04-15 15:34 (UTC)
From: [identity profile] blufive.livejournal.com
the best way to avoid being that guy is to make sure your software safety auditing is better than everyone else's.

I suspect that a more popular answer is "make sure your software is used in an environment where the worst-case failure mode isn't that bad."

Which sorta loops back to the licensing issue again - I'm pretty sure that a lot of software has licence clauses like "don't use this software for any medical purpose" too...

Edited to Add: I mean "more popular" in the sense that "more people do this". Which is bleedin' obvious, now that I think about it, because most software probably exists in a reasonably "safe" environment.
Edited Date: 2012-04-15 19:03 (UTC)

Date: 2012-04-14 23:09 (UTC)
ext_63737: Posing at Zeusaphone concert, 2008 (That's It boater)
From: [identity profile] beamjockey.livejournal.com
(Things get murkier still if the software in question has eleventy-billion little config switches that the user can fiddle with, some of which are labelled "if you get this wrong, very bad things will happen")

Or worse, when they are NOT so labeled.

Surely there has to be some sort of cut-off, where a software company can say "look, we did everything reasonably possible to ensure that the software was good, we can’t be held liable for a one-in-a-trillion bug that only kicks in when you make a typo at 12:42pm on a Tuesday in March when the wind is blowing from the south-east"?

But the Therac-25 bug was exactly this sort of bug, and it killed people. So what is the appropriate cut-off?

Date: 2012-04-15 10:47 (UTC)
From: [identity profile] nelc.livejournal.com
Somewhere north of what AECL managed during design of the Therac-25. Software is complicated, yo, but if you don't have it independently assessed, that's a failure of design. If you don't include the operator in the safety loop by listing error codes and the appropriate recovery processes in the manual, then that's a failure of design. If you don't put in hardware interlocks to lock out unsafe modes of operation, then that's a failure of design. And if you don't train the operators to recognise a fairly obvious failure mode (i.e. an electron beam flux so high the patient feels it as an electric shock), that's a failure of design (and maybe hospital culture).
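
(One of the documented Therac-25 defects is worth sketching, because it's exactly the "one-in-a-trillion" shape the original post worries about: a one-byte status flag was incremented on every pass and only checked when non-zero, so every 256th pass it wrapped to zero and the safety check was silently skipped. The code below is my own illustrative reconstruction, not AECL's actual code; the function and variable names are invented.)

```python
# Illustrative reconstruction of the Therac-25 "Class3" overflow bug.
# The flag is an 8-bit counter; a fault is only caught while the
# flag is non-zero, so the check silently lapses when it wraps to 0.

def safety_check_passes(hardware_fault: bool, class3: int) -> tuple[bool, int]:
    class3 = (class3 + 1) % 256           # one-byte counter wraps to zero
    if class3 != 0 and hardware_fault:    # fault only caught when flag non-zero
        return False, class3
    return True, class3                   # class3 == 0: fault goes unnoticed

# Even with a *persistent* hardware fault, the check still "passes"
# once every 256 passes -- precisely when the counter wraps.
flag, misses = 0, 0
for _ in range(512):
    ok, flag = safety_check_passes(True, flag)
    if ok:
        misses += 1
# misses == 2: the fault slipped through twice in 512 passes
```

A hardware interlock, by contrast, sits outside this loop entirely, which is why nelc's point about not relying on software alone matters.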

Date: 2012-04-15 15:23 (UTC)
From: [identity profile] blufive.livejournal.com
So what is the appropriate cut-off?

Indeed, that's kinda what I was asking. As [livejournal.com profile] alex_holden pointed out above, the answer probably depends on what the software is for. I know from [livejournal.com profile] calatrice's professional experience that anything used in pharma can potentially get audited up the wazoo by the FDA. Something like the Therac-25, controlling potentially-lethal medical equipment, would (I hope) have to jump through a huge number of hoops to get certified these days. Several of those hoops might even involve the specific question "can this thing do a Therac-25 on us?".

For instance, in my day job, I have to worry about PCI-DSS and UK insurance regulation (and, of course, customers so clueless that they give us requirements that breach those regs...). So, if one of our systems gets hacked and loses $LoadsaMoney, does "well, we passed our PCI-DSS audits, we were just unlucky enough to get ownz0red by a zero-day exploit in $PopularHTTPServer" count as any sort of defence?

Edited Date: 2012-04-15 15:24 (UTC)

Date: 2012-04-15 14:44 (UTC)
From: [identity profile] alexmc.livejournal.com
I'm wondering about the self driving cars starting to appear :-)

Date: 2012-04-15 15:28 (UTC)
From: [identity profile] blufive.livejournal.com
In the UK at least, getting them certified as road-legal is likely to be a huge deal (and is exactly the kind of thing I had in mind with my "how many of the issues are applicable, right now, to "dumb" software" comment).

I suspect that, at this point (given that such cars now pretty much exist, albeit probably not in an economically viable form), it's going to take longer to work out exactly what the legal/liability issues are than to actually turn them into a mass-market product.

Date: 2012-04-16 19:21 (UTC)
From: [identity profile] major-clanger.livejournal.com
Oops, saw this but didn't get around to replying.

What I was citing was the Unfair Contract Terms Act 1977, which says that under English law you cannot contractually exclude liability for causing personal injury or death. You may be able to exclude liability for damage or monetary losses, but this will depend on the circumstances. There are entire chapters of books on IT law covering this, so it would be hard for me to provide a brief summary. However, some of the important factors will include:

- whether the party the exclusion clause operates against is a consumer rather than acting in the course of business;
- whether the exclusion clause was negotiated or was imposed as a standard condition;
- whether the clause excludes liability or just limits it;
- how closely connected to any fault the losses are that are excluded.

Date: 2012-04-16 19:52 (UTC)
From: [identity profile] blufive.livejournal.com
Thanks. Based on that description, the usual imposed-on-the-consumer excluding-all-liability clause does look to be on rather shaky ground. I suppose this probably means the lack of court cases is down to cases being settled way before they get that far...

(Sorry we didn't get to discuss this at the time - between me chasing offspring and you helping to run a con bid, I think we only came with 10 feet of each other for about 2 minutes in the whole con...)
Edited Date: 2012-04-16 19:54 (UTC)
