Looking for intelligence in the lab
The following is my original Laboratorytalk editor’s column from 13 October 2010.
Automation, whether in the laboratory or on the factory floor, has been sold on the principle that it takes the drudgery away from the human workers and frees them to do the actual thinking. Machines are very good at doing what they are told, but are not so good at the creative side of things. That will always need humans.
Not any longer, if we are to believe in the potential of the ‘artificial experimenter’ concocted by a couple of PhD students at the University of Southampton, UK. Chris Lovell and Gareth Jones from the university’s school of electronics and computer science say that their artificial intelligence (AI) mimics the techniques used by successful human scientists. It looks at the data, builds hypotheses, and chooses experiments without any human involvement.
Before we dismiss this as simply the latest in a long line of AI breakthroughs that never translate into everyday reality, it is worth noting that this one has just been awarded top prize at the 13th International Conference on Discovery Science, held in Canberra, Australia, last week. Clearly, then, other computer scientists hold it in high regard.
The pair’s supervisor, Klaus-Peter Zauner, says that “experimentation is expensive” and that “biological experimentation can be error prone”. The system therefore attempts to minimise the number of experiments performed and applies not only artificial intelligence, but also artificial common sense, to the results: it tries to detect and therefore ignore erroneous data. Coupling this AI software to lab-on-a-chip hardware promises to make the process of discovery at once faster, cheaper, and more reliable.
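The column only gestures at how such a system might work, but the idea translates roughly into an active-experimentation loop: fit competing hypotheses to the data gathered so far, discount readings that look erroneous, and spend the next (expensive) experiment where the hypotheses disagree most. The Python sketch below is not the Southampton system; the toy "lab" function, the polynomial hypotheses, and every name in it are assumptions made purely for illustration.

```python
# Hypothetical sketch of an artificial-experimenter loop:
# fit rival hypotheses, ignore suspect readings, and choose
# the next experiment where the hypotheses disagree most.
import numpy as np

rng = np.random.default_rng(0)

def run_experiment(x):
    """Stand-in for the real lab: a hidden response, noise, and rare gross errors."""
    y = np.sin(3 * x) + 0.05 * rng.normal()
    if rng.random() < 0.1:              # occasional faulty reading
        y += rng.normal(scale=2.0)
    return y

def fit_hypotheses(xs, ys, degrees=(1, 2, 3, 4)):
    """Fit one polynomial hypothesis per degree, refitting without outlying points."""
    models = []
    for d in degrees:
        coeffs = np.polyfit(xs, ys, d)
        resid = np.abs(np.polyval(coeffs, xs) - ys)
        keep = resid < 3 * np.median(resid) + 1e-9   # crude "artificial common sense"
        if keep.sum() > d:                           # enough trusted points to refit
            coeffs = np.polyfit(xs[keep], ys[keep], d)
        models.append(coeffs)
    return models

def next_experiment(models, candidates):
    """Pick the candidate input where the rival hypotheses disagree most."""
    preds = np.array([np.polyval(m, candidates) for m in models])
    return candidates[np.argmax(preds.std(axis=0))]

# Active loop: a few seed measurements, then let disagreement decide
# where each further experiment is spent.
xs = np.linspace(0.0, 2.0, 5)
ys = np.array([run_experiment(x) for x in xs])
candidates = np.linspace(0.0, 2.0, 41)

for _ in range(8):
    models = fit_hypotheses(xs, ys)
    x_new = next_experiment(models, candidates)
    xs = np.append(xs, x_new)
    ys = np.append(ys, run_experiment(x_new))

print(f"Performed {len(xs)} experiments in total")
```

Note that when the chosen point coincides with one already measured, the loop is in effect repeating a suspect measurement, which is roughly what a careful human experimenter would do.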
Those fearing for their jobs, or worried about the machines finally taking over (“…Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14am…”) may find curious consolation in this: the research work was made possible through a Microsoft fellowship.
This means that Artificial Experimenter 1.0 will be expensive, buggy, and prone to freezing without warning. Later versions will promise much more but will, in practice, do little other than demand more memory and greater processing power to achieve no more than v1.0 did. Call me a cynic, but I think our jobs are safe for a few years yet.