Friday, March 16, 2007

Artificial Intelligence

Do you trust the machine? An intriguing story at Wired tells of a web service providing bankruptcy paperwork.

A web-based "expert system" that helped users prepare bankruptcy filings for a fee made too many decisions to be considered a clerical tool, an appeals court said last week, ruling that the software was effectively practicing law without a license.

At first I laughed, but this has huge implications for the technology of the not-too-distant future.

Reynoso entered his personal information, debts, income, assets and other data into a series of dialog boxes, and the program generated a complete set of bankruptcy forms, including an affidavit for Reynoso to sign claiming he'd done all the legal research on his own.

Fair enough! He did do the research - in a manner of speaking. If he had done his research in books, he would still have been taking the word of the authors. If he had printed pages from the web, he could be said to be "doing his own research". The affidavit was an attempt (albeit a failure in this case) to make the user take responsibility for the results.

The problem here arose because of an error in the paperwork, and the affidavit was apparently inadequate. But these are very early days; software will improve. Some legal advice is fairly simple and a reasonable short-term target for AI software. The same can be said for financial advice.

If I need to decide whether a certain level of mortgage is manageable, or whether to pursue a libel case, there are undoubtedly some rules of thumb. Answering a few questions ought to give me some guidance.
Proposed mortgage in described circumstances constitutes: Extreme Risk!
Consider 20% reduction in mortgage level for Moderate Risk.
Libel case success probability: 30%
Libel case failure probability: 70%
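That sort of rule-of-thumb advice is simple to mechanise. A minimal sketch of the mortgage check, using an invented payment-to-income ratio and invented thresholds purely for illustration (real lending rules are more involved):

```python
def mortgage_risk(monthly_payment, monthly_income):
    """Classify mortgage affordability by a simple payment-to-income
    ratio - hypothetical thresholds, chosen only for this example."""
    ratio = monthly_payment / monthly_income
    if ratio > 0.40:
        return "Extreme Risk"
    elif ratio > 0.28:
        return "Moderate Risk"
    return "Low Risk"

print(mortgage_risk(1200, 2500))        # ratio 0.48  -> Extreme Risk
print(mortgage_risk(1200 * 0.8, 2500))  # ratio 0.384 -> Moderate Risk
```

A 20% reduction in the payment moves the verdict from Extreme to Moderate Risk, just as the hypothetical output above suggests. The point is not the thresholds but that encoding the rules is trivial once they are stated.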

Now if I'm choosing between paying £150 for five minutes with a lawyer looking down his nose at my small-beer proposal, or paying £5 for consultancy from software that can trawl a database of a million similar cases, the software sounds like a good start. Sure, the software might miss things, but so might the expensive lawyer.

A little further down the road - how long before NHS Direct uses some Artificial Intelligence triage? Of course there will be an outcry when it's first suggested, but it will come.

And I think people will want it. How many of us have already walked into the GP's surgery with a fistful of printed web pages filled with possible diagnoses and courses of treatment? We may have used a search engine to find those pages. Soon we may try a medical search engine - perhaps a search on a symptom database. Perhaps we'll select a category, narrow down the search, answer a couple of questions and view a list of probable conditions. Then who did the research?
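The narrowing step is just scoring and ranking. A toy sketch, with a three-entry condition database invented for the example (a real system would rank thousands of entries against far richer data):

```python
# Hypothetical mini-database: condition -> its typical symptoms.
CONDITIONS = {
    "common cold": {"cough", "sore throat", "runny nose"},
    "influenza":   {"fever", "cough", "aches", "fatigue"},
    "hay fever":   {"runny nose", "sneezing", "itchy eyes"},
}

def narrow(symptoms):
    """Rank conditions by the fraction of their symptoms the user reports."""
    scores = {name: len(symptoms & s) / len(s)
              for name, s in CONDITIONS.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for name, score in narrow({"cough", "fever", "fatigue"}):
    print(f"{name}: {score:.0%}")
```

Reporting cough, fever and fatigue puts influenza at the top of the list - exactly the "answer a couple of questions, view probable conditions" loop described above, minus the medical judgement.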

Who will take professional responsibility for recommendations made by this software? In the bankruptcy case, the web site maintainer was held responsible. He was ordered to withdraw the service and pay back the fees. This seems straightforward at first. But web technology isn't bound like that. The site (or something similar) will pop up again, perhaps hosted in a less regulated country. Ultimately, the user will be responsible for the advice he follows.

And what happens when the software is sophisticated enough to amend itself, or to update its own research database? Then the software will write new software - a generation removed from human authors!

Science fiction authors have been thinking about this for decades. We'd better all start thinking about it. It's here.

1 comment:

Anonymous said...

There was an article last year about a university study in which Google searches were compared to doctors' recommendations. The result: Google was 80% more accurate than your average GP...

To be fair to the GPs, they only have their experience, training and memory to go on. Google has access to almost everything ever printed about each disease.