blufive: (Default)
At Eastercon, one of the few items I managed to get to, between wrangling offspring, was the "Ethics of AI" panel.

It was an interesting item, if a little "bitty" – I get the impression that there are so many unresolved issues that a single hour's discussion couldn't devote significant time to any one of them, so it mostly just bounced from one issue to the next. However, I was struck by how many of the issues apply, right now, to "dumb" software, never mind anything approaching an AI.

One of the topics (briefly) discussed was the issue of legal liability for the actions of a piece of software. I mentioned the very common software licence clause denying all liability for anything a program does. majorclanger quickly pointed out that such clauses are unlikely to survive any significant contact with the UK legal system (I don't recall the details he gave of the act in question – something about unenforceable/unreasonable contracts?). There are presumably similar laws in other jurisdictions.

In some ways (i.e. as a user of software) I think that's a good thing. If a company releases software that does damage somewhere, then there should be consequences.

On the other hand, as a professional programmer, I'm a little more uneasy. IIRC, one of Alan Turing's great contributions to computer science was the halting problem: a proof that there can be no general method for working out exactly what an arbitrary lump of software is going to actually do before you run it. For trivial programs, you can make deductions via human inspection, but that fails utterly for even relatively small lumps of code.
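(For the curious: Turing's argument is short enough to sketch in a few lines of code. This is just an illustrative toy, not anything from the panel – the function and program names here are made up for the example. Suppose someone handed you a perfect `halts()` oracle; you could then build a program that contradicts it:)

```python
def halts(program, data):
    """Hypothetical oracle: returns True if program(data) eventually
    terminates, False if it runs forever. Turing's proof shows no
    such function can exist for ALL (program, data) pairs."""
    raise NotImplementedError("provably impossible in general")

def troublemaker(program):
    """Does the opposite of whatever halts() predicts about a
    program fed its own source as input."""
    if halts(program, program):
        while True:      # oracle said "halts" -> loop forever
            pass
    else:
        return           # oracle said "loops" -> halt immediately

# Now ask: does troublemaker(troublemaker) halt? If the oracle says
# yes, it loops forever; if the oracle says no, it halts. Either way
# the oracle is wrong, so no perfect oracle can exist.
```

Which is, roughly, why "we checked and the software definitely has no bugs" is not a claim anyone can make in full generality.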

For any real-world-useful software, it's basically impossible to prove that it is bug free. With care, you can probably assert that it probably has no major bugs. For huge software projects (say, an operating system[1]) even getting that far can require carefully-co-ordinated person-centuries or person-millennia of effort, backed up by even larger quantities of automated computational grunt work.

(Things get murkier still if the software in question has eleventy-billion little config switches that the user can fiddle with, some of which are labelled "if you get this wrong, very bad things will happen")

Surely there has to be some sort of cut-off, where a software company can say "look, we did everything reasonably possible to ensure that the software was good, we can’t be held liable for a one-in-a-trillion bug that only kicks in when you make a typo at 12:42pm on a Tuesday in March when the wind is blowing from the south-east"? There are industry standards and quality standards and acceptance testing and so on. Presumably some of those things are actually recognised in law as a defence for the software producer?

So, how many liability issues have actually made it to court? Certainly in my professional experience, screw-ups with major real-world consequences have mostly been resolved via negotiated financial settlements. Has anyone ever tried to seriously lean on a "no liability" licence clause, and if so, what happened?

[1] Scientific American once printed an article (probably a decade or so back) which argued, totally seriously and very persuasively (yeah, I'm biased), that Windows 2000 was one of the most complex artifacts ever built. Yes, they included things like airliners and moon rockets. Big software is complicated.

