Are we on the brink of creating a computer with a human brain?
By Michael Hanlon
11th August 2009
Professor Markram claims he plans to build an electronic human brain 'within the next ten years'
There are only a handful of scientific revolutions that would really change the world. An immortality pill would be one. A time machine would be another.
Faster-than-light travel, allowing the stars to be explored in a human lifetime, would be on the shortlist, too.
To my mind, however, the creation of an artificial mind would probably trump all of these - a development that would throw up an array of bewildering and complex moral and philosophical quandaries. Amazingly, it might also be within reach.
For while time machines, eternal life potions and Star Trek-style warp drives are as far away as ever, a team of scientists in Switzerland is claiming that a fully-functioning replica of a human brain could be built by 2020.
This isn't just pie-in-the-sky. The Blue Brain project, led by the neuroscientist Henry Markram - who is also director of the Centre for Neuroscience & Technology and the Brain Mind Institute - has for the past five years been reverse-engineering the mammalian brain, the most complex object known in the Universe, using some of the most powerful supercomputers in the world.
And last month, Professor Markram claimed, at a conference in Oxford, that he plans to build an electronic human brain 'within ten years'.
If he is right, nothing will be the same again. But can such an extraordinary claim be credible? When we think of artificial minds, we inevitably think of the sort of machines that have starred in dozens of sci-fi movies.
Indeed, most scientists - and science fiction writers - have tended to concentrate on the nuts and bolts of robotics: how you make artificial muscles; how you make a machine see and hear; how you give it realistic skin and enough tendons and ligaments underneath that skin to allow it to smile convincingly.
But what tends to be glossed over is by far the most complex problem of all: how you make a machine think.
This problem is one of the central questions of modern philosophy and goes to the very heart of what we know, or rather do not know, about the human mind.
Most of us imagine that the brain is rather like a computer. And in many ways, it is. It processes data and can store quite prodigious amounts of information.
But in other ways, a brain is quite unlike a computer. For while our computers are brilliant at calculating the weather forecast and modelling the effects of nuclear explosions - tasks most often assigned to the most powerful machines - they still cannot 'think'.
We cannot be absolutely sure of this. But no one seriously believes that the laptop on your desk, or even the powerful mainframes used by the Met Office, can in any meaningful sense be said to have a mind.
So what is it, in that three pounds of grey jelly, that gives rise to the feeling of conscious self-awareness, the thoughts and emotions, the agonies and ecstasies that comprise being a human being?
This is a question that has troubled scientists and philosophers for centuries. The traditional answer was to assume that some sort of 'soul' pervades the brain, a mysterious 'ghost in the machine' which gives rise to the feeling of self and consciousness.
If this is the case, then computers, being machines not flesh and blood, will never think. We will never be able to build a robot that will feel pain or get angry, and the Blue Brain project will fail.
But very few scientists still subscribe to this traditional 'dualist' view - 'dualist' because it assumes 'mind' and 'matter' are two separate things.
Instead, most neuroscientists believe that our feelings of self-awareness, pain, love and so on are simply the result of the countless billions of electrical and chemical impulses that flit between the brain's equally countless billions of neurons.
So if you build something that works exactly like a brain, consciousness, at least in theory, will follow.
In fact, several teams are working to prove this is the case by attempting to build an electronic brain. They are not attempting to build flesh and blood brains like modern-day Dr Frankensteins.
They are using powerful mainframe computers to 'model' a brain. But, they say, the result will be just the same.
Two years ago, a team from IBM's Almaden research lab and the University of Nevada used a BlueGene/L supercomputer to model half a mouse brain.
Half a mouse brain consists of about eight million neurons, each of which can form around 8,000 links with neighbouring cells.
Creating a virtual version of this pushes a computer to its limits - even a machine which, like the BlueGene, can perform 20 trillion calculations a second.
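A quick back-of-the-envelope sum (mine, not the researchers') shows what those figures imply. The sketch below is only rough arithmetic, not a description of how the simulation actually spends its computing power:

```python
# Rough arithmetic implied by the figures quoted above; illustrative only -
# the real simulation's bookkeeping is far more involved than one
# calculation per connection.
neurons = 8_000_000                      # half a mouse brain, per the article
links_per_neuron = 8_000
connections = neurons * links_per_neuron
print(f"{connections:,} connections")    # 64,000,000,000

ops_per_second = 20_000_000_000_000      # 20 trillion calculations a second
print(f"~{ops_per_second // connections} calculations per connection per second")  # ~312
```

Sixty-four billion connections, and only a few hundred calculations a second to spare for each one: small wonder the machine is pushed to its limits.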
The 'mouse' simulation was run for about ten seconds at one-tenth the speed of an actual rodent brain. Nevertheless, the scientists said they detected tell-tale patterns believed to correspond to the 'thoughts' that scanners pick up in real-life mouse brains.
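To give a flavour of what 'modelling' neurons on a computer involves, here is a deliberately tiny sketch - a thousand simulated cells rather than millions, with made-up wiring and constants, and no claim to resemble the Blue Brain or IBM code - in which each artificial neuron adds up the signals arriving from the cells connected to it and 'fires' when a threshold is crossed:

```python
# A toy network of leaky integrate-and-fire neurons. Everything here - the
# size, the wiring, the constants - is an illustrative assumption, not the
# researchers' model.
import numpy as np

rng = np.random.default_rng(0)

N = 1_000                                   # toy network; the mouse model used ~8 million cells
p_connect = 0.01                            # each cell links to a random 1% of the others
weights = (rng.random((N, N)) < p_connect) * rng.normal(0.5, 0.1, (N, N))

v = np.zeros(N)                             # 'membrane potential' of every cell
threshold, reset, leak = 1.0, 0.0, 0.95

for step in range(100):                     # 100 time steps
    spikes = v >= threshold                 # which cells fire this step
    v[spikes] = reset                       # fired cells start again from zero
    background = rng.random(N) * 0.05       # a little random outside input
    # each cell leaks, then sums the weighted input from its firing neighbours
    v = leak * v + weights @ spikes.astype(float) + background
    if step % 10 == 0:
        print(f"step {step:3d}: {int(spikes.sum()):4d} cells fired")
```

The real models are vastly more detailed than this - which is one reason the work needs a supercomputer rather than a laptop.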
It is just possible that a fleeting, mousey 'consciousness' emerged in the mind of this machine. But building a thinking, remembering human mind is more difficult. Many neuroscientists claim the human brain is too complicated to copy.
Markram's team is undaunted. They are using one of the most powerful computers in the world to replicate the actions of the 100 billion neurons in the human brain. It is this approach - essentially copying how a brain works without necessarily understanding all of its actions - that the team hopes will lead to success. And if so, what then?
Well, a mind, however fleeting and however shorn of the inevitable complexities and nuances that come from being embedded in a body, is still a mind, a 'person'. We would effectively have created a 'brain in a vat'. Conscious, aware, capable of feeling pain and desire. And probably terrified.
And if it were modelled on a human brain, we would then have real ethical dilemmas. If our 'brain' - effectively just a piece of extremely impressive computer software - could be said to know it exists, then do we assign it rights?
Would turning it off constitute murder? Would performing experiments upon it constitute torture?
And there are other questions, too, questions at the centre of the nurture versus nature debate. Would this human mind, for example, automatically feel guilt or would it need to be 'taught' a sense of morality first? And how would it respond to religion? Indeed, are these questions that a human mind asks of its own accord, or must it be taught to ask them first?
Thankfully, we are probably a long way from having to confront these issues. It is important to stress that not one scientist has provided anything like a convincing explanation for how the brain works, let alone shown for sure that it would be possible to replicate this in a machine.
Not one computer or robot has come close to passing the famous 'Turing Test', devised by the brilliant Cambridge mathematician Alan Turing in 1950 as a way of testing whether a machine can think.
It is a simple test in which someone is asked to communicate, using a screen and keyboard, with a computer trying to mimic a human, and another, real human. If the judge cannot tell the machine from the other person, the computer has 'passed' the test. So far, every computer we have built has failed.
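The set-up is simple enough to be sketched in a few lines of code. What follows is only an illustration of the procedure Turing described: the function names are my own, and both 'respondents' are hypothetical stand-ins rather than a real program or a real person.

```python
# A minimal sketch of the imitation game described above. Both responders are
# hypothetical stand-ins: a real trial would put a hidden person behind one
# channel and a candidate machine behind the other.
import random

def human_respond(prompt: str) -> str:
    # Stand-in: in a real trial this would relay the question to a hidden person.
    return input(f"(hidden human) {prompt}\n> ")

def machine_respond(prompt: str) -> str:
    # Stand-in: a real entrant would generate a reply here; this stub just
    # echoes something bland so the sketch runs.
    return "That is an interesting question. What do you think?"

def run_trial(rounds: int = 5) -> None:
    responders = [human_respond, machine_respond]
    random.shuffle(responders)                  # hide which label is the machine
    players = dict(zip(("A", "B"), responders))

    for _ in range(rounds):
        question = input("Judge, ask a question: ")
        for label in ("A", "B"):
            print(f"{label}: {players[label](question)}")

    verdict = input("Judge, which respondent is the machine (A/B)? ").strip().upper()
    if players.get(verdict) is machine_respond:
        print("The judge spotted the machine: it fails this trial.")
    else:
        print("The judge could not tell: the machine 'passes' this trial.")

if __name__ == "__main__":
    run_trial()
```

In practice the test is run over many rounds with many judges; fooling one judge once, as the sketch allows, proves very little.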
Yet, if the Blue Brain project succeeds, in a few decades - perhaps sooner - we will be looking at the creation of a new intelligent lifeform on Earth. And the ethical dilemmas we face when it comes to experimenting on animals in the name of science will pale into insignificance when faced with the potential torments of our new machine mind.