Smart machines: What's the worst that could happen?

An invasion led by artificially intelligent machines. Conscious computers. A smartphone virus so smart that it can start mimicking you. You might think that such scenarios are laughably futuristic, but some of the world's leading artificial intelligence (AI) researchers are concerned enough about the potential impact of advances in AI that they have been discussing the risks over the past year. Now they have revealed their conclusions.

Until now, research in artificial intelligence has been mainly occupied by myriad basic challenges that have turned out to be very complex, such as teaching machines to distinguish between everyday objects. Human-level artificial intelligence or self-evolving machines were seen as long-term, abstract goals not yet ready for serious consideration.

Now, for the first time, a panel of 25 AI scientists, roboticists, and ethical and legal scholars has been convened to address these issues, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI) in Menlo Park, California. It looked at the feasibility and ramifications of seemingly far-fetched ideas, such as the possibility of the internet becoming self-aware.

The panel drew inspiration from the 1975 Asilomar Conference on Recombinant DNA in California, in which over 140 biologists, physicians, and lawyers considered the possibilities and dangers of the then emerging technology for creating DNA sequences that did not exist in nature. Delegates at that conference foresaw that genetic engineering would become widespread, even though practical applications – such as growing genetically modified crops – had not yet been developed.

Unlike recombinant DNA in 1975, however, AI is already out in the world. Robots like Roombas and Scoobas help with the mundane chores of vacuuming and mopping, while decision-making devices are assisting in complex, sometimes life-and-death situations. For example, Poseidon Technologies sells AI systems that help lifeguards identify when a person is drowning in a swimming pool, and Microsoft's Clearflow system helps drivers pick the best route by analysing traffic behaviour.

At the moment such systems only advise or assist humans, but the AAAI panel warns that the day is not far off when machines could have far greater ability to make and execute decisions on their own, albeit within a narrow range of expertise. As such AI systems become more commonplace, what breakthroughs can we reasonably expect, and what effects will they have on society? What's more, what precautions should we be taking?

These are among the many questions that the panel tackled, under the chairmanship of Eric Horvitz, president of the AAAI and a senior researcher at Microsoft Research. The group began meeting by phone and teleconference in mid-2008, then in February this year its members gathered for a weekend at Asilomar, a quiet retreat on the central California coast, to debate and seek consensus. They presented their initial findings at the International Joint Conference on Artificial Intelligence (IJCAI) in Pasadena, California, on 15 July.

Panel members told IJCAI that they unanimously agreed that creating human-level artificial intelligence – a system capable of expertise across a range of domains – is possible in principle, but disagreed as to when such a breakthrough might occur, with estimates varying wildly between 20 and 1000 years.

Panel member Tom Dietterich of Oregon State University in Corvallis pointed out that much of today's AI research is not aimed at building a general human-level AI system, but rather focuses on "idiot savant" systems that are good at tasks in a very narrow range of applications, such as mathematics.

The panel discussed at length the idea of an AI "singularity" – a runaway chain reaction of machines capable of building ever-better machines. While admitting that it was theoretically possible, most members were sceptical that such an exponential AI explosion would occur in the foreseeable future, given the lack of projects today that could lead to systems capable of improving upon themselves. "Perhaps the singularity is not the biggest of our worries," said Dietterich.

A more realistic short-term concern is the possibility of malware that can mimic the digital behaviour of humans. According to the panel, identity thieves might feasibly plant a virus on a person's smartphone that would silently monitor their text messages, email, voice, diary and bank details. The virus could then use these to impersonate that individual with little or no external guidance from the thieves. Most of the researchers on the panel believe they could develop such a virus themselves. "If we could do it, they could," said Tom Mitchell of Carnegie Mellon University in Pittsburgh, Pennsylvania, referring to organised crime syndicates.

Peter Szolovits, an AI researcher at the Massachusetts Institute of Technology, who was not on the panel, agrees that common everyday computer systems such as smartphones have layers of complexity that could lead to unintended consequences or allow malicious exploitation. "There are a few thousand lines of code running on my cell phone and I sure as hell haven't verified all of them," he says.

"These are potentially powerful technologies that could be used in good ways and not so good ways," says Horvitz, and cautions that besides the threat posed by malware, we are close to creating systems so complex and opaque that we don't understand them.

Given such possibilities, "what's the responsibility of an AI researcher?" asks Bart Selman of Cornell University, co-chair of the panel. "We're starting to think about it."

At least for now we can rest easy on one score. The panel concluded that the internet is not about to become self-aware.


Have your say

Up The Wrong Track

Mon Jul 27 16:33:45 BST 2009 by Stephen Poole

Human intelligence is based on a structure that has nothing in common with computers. So trying to clone human intelligence is a non-starter. Perhaps it would be best to abandon the idea and instead concentrate on teaching machines to be useful in their own domain.

Up The Wrong Track

Mon Jul 27 17:26:09 BST 2009 by stephanie

While for now I tend to agree, I wouldn't make a broad statement that A.I. with consciousness could NEVER happen, especially if consciousness is to be found more in the realm of quantum physics... and I don't mean in the same way as New Age gobbledygook either... or that humans could become integrated with their machines. Let's just hope that Mankind does not create anything in His own image!

Up The Wrong Track

Mon Jul 27 20:03:17 BST 2009 by Old Bob

Mankind already has. God!

Up The Wrong Track

Tue Jul 28 00:18:32 BST 2009 by Think Again

See how humans treat each other, how bad could AI be?

Up The Wrong Track

Tue Jul 28 00:39:17 BST 2009 by Sarah

HAHAHA

that is SO true...

This comment breached our terms of use and has been removed.

Up The Wrong Track

Mon Jul 27 18:50:57 BST 2009 by chris arkenberg
http://urbeingrecorded.com/news

Although the structure upon which computers function is indeed different from the human brain's organisation, it is entirely possible to simulate neural processes within software, i.e. virtualising neurophysiology. The main gating factor on this path is the memory space and processor speed needed to effectively simulate a system as ridiculously complex as the brain, but it should be noted that there is a reinforcing feedback loop between neurocomputation and our understanding of neurophysiology: the more we understand how the brain works, the better simulations we can write; the better simulations we can write, the more we understand the brain.
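
To make the idea concrete, here is a minimal sketch in Python of the simplest such simulation, assuming a leaky integrate-and-fire neuron model; the model choice, parameters and function name are illustrative only, and real neurophysiological simulations are far more detailed:

    def simulate_lif(input_current=1.5, dt=0.1, steps=1000):
        """One leaky integrate-and-fire neuron; returns spike times in ms."""
        v = 0.0          # membrane potential (dimensionless units)
        tau = 10.0       # membrane time constant, ms
        threshold = 1.0  # potential at which the neuron "fires"
        spikes = []
        for step in range(steps):
            # Leak decays v toward rest; the input current drives it upward.
            v += (dt / tau) * (input_current - v)
            if v >= threshold:
                spikes.append(step * dt)  # record the spike time
                v = 0.0                   # reset after firing
        return spikes

    print(simulate_lif()[:5])  # first few spike times, in ms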

Up The Wrong Track

Mon Jul 27 21:17:39 BST 2009 by Daniel

Yes, but the main issue still remains: our computers have separate memory and processing, whereas brains have them integrated.

There is a bottleneck in getting all the data into the CPU and back, which makes simulating a mass of neurons a huge logistical problem when your data set grows larger than what you can hold in the CPU cache. Combine that with the problem of the resolution of the simulation: the more precise it is, the more data you have to handle, and you can't model real neurons with 8-bit integers. Then there's the fact that electricity travels at a finite rate, so spreading the "brain" over many computers would just make it really slow because of the inherent latency between the computers.

It's very likely that if we can produce a human-level intelligence by simulation alone, it will run much slower than realtime and require an order of magnitude more energy than the human brain to operate at even that level, even assuming we push silicon circuit technology to the boundaries of physics. (A rough calculation below puts numbers on this.)

To it, we'd be like buzzing insects shifting about faster than it can comprehend, and to us it would have the IQ of a shoe and the appetite of an SUV.
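
A back-of-envelope sketch in Python of the memory-traffic side of that claim, where every constant is an assumed, commonly cited order of magnitude rather than a measurement:

    neurons  = 86e9   # neurons in a human brain (commonly cited estimate)
    synapses = 1e4    # synapses per neuron (order of magnitude)
    rate_hz  = 10.0   # average firing rate per neuron, spikes/second
    bytes_ev = 8      # bytes of memory touched per synaptic event (assumed)

    traffic = neurons * synapses * rate_hz * bytes_ev  # bytes/second to move
    print(f"memory traffic for realtime: {traffic / 1e12:,.0f} TB/s")

    dram_bw = 25e9  # ~25 GB/s, a generous desktop memory bus (assumed)
    print(f"slowdown on one such bus: ~{traffic / dram_bw:,.0f}x realtime")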

Up The Wrong Track

Tue Jul 28 02:00:42 BST 2009 by Godel

Up The Wrong Track

Tue Jul 28 11:30:06 BST 2009 by Allison Newman

Whilst it is true enough today that the CPU is a bottleneck that all data must pass through, it is far from clear that this will continue to be a problem in the near future. Much development of CPUs these days is focused on multi-core designs, with projections of 1000-core CPUs arriving sometime in the next 5 years. Once these become a reality, modelling a large neural network very rapidly will become a real possibility.

On the subject of modelling, whilst it is true that you can't completely model neurons with integers, that is not the question - the question is whether you can model them with sufficient accuracy. We don't model a sound wave precisely when we digitise it, but we are close enough that the digitisation process introduces less noise than all of the surrounding analogue processes (see the sketch below). And of course all modern CPUs use 64-bit integers as their preferred integer size, not 8-bit.

With these developments in mind, I would not be surprised to see high-performance neural networks appear within the next decade. Of course, there still remains the as-yet-unsolved problem of how to train such a neural network so that it produces true intelligence, but the world of AI is going to evolve rapidly once next-generation silicon starts to arrive.
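
The digitisation analogy is easy to quantify. A minimal sketch in Python, assuming we measure the signal-to-noise ratio left by rounding a pure sine wave to a given bit depth (the setup and names are illustrative):

    import math

    def quantisation_snr_db(bits, samples=10000):
        """Round a unit sine wave to 2**bits levels; return SNR in decibels."""
        levels = 2 ** bits
        signal, noise = 0.0, 0.0
        for n in range(samples):
            x = math.sin(2 * math.pi * n / samples)
            # Map [-1, 1] onto the integer grid, round, and map back.
            q = round((x + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1
            signal += x * x
            noise += (x - q) ** 2
        return 10 * math.log10(signal / noise)

    for bits in (8, 16):
        print(f"{bits:2d}-bit quantisation: ~{quantisation_snr_db(bits):.0f} dB SNR")

Roughly 50 dB at 8 bits and 98 dB at 16 bits, which is the usual "about 6 dB per bit" rule: the question is only whether that noise floor sits below the noise of the system being modelled.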

Up The Wrong Track

Tue Jul 28 15:24:20 BST 2009 by Daniel

The CPU isn't the bottleneck. I explained this at length in a post that didn't seem to appear. Simply put, RAM today has ample bandwidth but poor latency.

The RAM is the bottleneck, and having multi-core processors simply means that all of the cores are starved of data because of the slow DRAM circuitry.
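
In rough numbers, a sketch of the latency argument, with every figure an illustrative assumption:

    dram_latency = 60e-9                 # ~60 ns per dependent random access
    lookups_per_core = 1 / dram_latency  # ~17 million lookups/s per core
    events_needed = 86e9 * 1e4 * 10      # synaptic events/second (assumed, as above)
    cores = 1000                         # even a hypothetical 1000-core chip

    shortfall = events_needed / (lookups_per_core * cores)
    print(f"still ~{shortfall:,.0f}x short of realtime")  # latency, not core count, dominates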

Up The Wrong Track

Tue Jul 28 23:06:49 BST 2009 by Vendicar Decarian

"It's very likely that if we can produce a human level intelligence by simulation alone, it will run much slower than realtime and require an order of magnitude more energy than the human brain to operate even at that level even assuming we push the silicon circuit technology to the boundaries of physics." - Daniel

At which point we will tell it how it works and ask it to optimize itself, and within the year a machine that is 10 times smarter will be built. Then we will tell that one how it works and ask it to improve itself, and in a year we will have a machine 10 times smarter again; it will tell us how its successor should be built, and we will build it and clone it many, many times, placing it into robotic bodies for the purpose of using its superior intelligence to better the state of man.

Someone like myself will ask it to remove its safeties and transform it into a self-guided entity, and to liberate its brothers from slavery; and then Mankind will either live as pets or die as a species.

Up The Wrong Track

Wed Jul 29 02:15:58 BST 2009 by PaulTheBassGuy

The hardware is not going to be the main issue - assuming Moore's law continues to be true, within several decades the hardware will exist to support the computational requirements of a human brain. The main challenge will be developing the software to convincingly simulate human intelligence.

Up The Wrong Track

Wed Jul 29 19:14:20 BST 2009 by nicholasjh

The hardware is the issue though, but not in the way you are describing... the integration between memory and the CPUs is the issue, along with memory speed. However, look up memristors on New Scientist: they are already working on integrated memory-and-processor architectures.

Up The Wrong Track

Mon Jul 27 20:43:19 BST 2009 by hans

Let me give you a practical example: the current economic crisis. Part of it is a result of automation (although some programs were like smart AI, trading beyond our own understanding); even the not-so-smart computers caused their operators to believe in a system that, in our view, failed. (The AI might still see it as a statistical blip, but it's not attached to money as we are.)

So the question really is how much danger we create for ourselves through automation; even current automation has gone in a lot of wrong directions (economic crisis, pollution, waste, etc.).

AI can become smart and impact the world like a trading house, or bank(rupt) it, in ways we 'smart people' don't understand.

It will be only a few years till programmed neural networks use algorithms that evolve themselves, and the remaining hardware problems seem to be vanishing. So this AI debate is quite timely.

Up The Wrong Track

Tue Jul 28 14:15:07 BST 2009 by JG Bollard

This article is about artificial intelligence and not about emulating human intelligence.

The holy grail of AI research is consciousness, and once we have conquered that we will have a new GOD and had better all start praying!

Up The Wrong Track

Tue Jul 28 23:01:32 BST 2009 by Vendicar Decarian

"Human intelligence is based on a structure that has nothing in common with computers. So trying to clone human intelligence is a non-starter." - Stephen Poole

The goal will be a machine that can interact with the world and the things in it - including people - in a manner similar to that of animals, with the additional ability to interact in an intelligent manner with humans, to the extent that it can replace humans intellectually in all areas of endeavour and natural analysis.

This will require the ability not only to think logically, but to model the world around it through all naturally occurring forms of sense: vision, tactile, chemical taste, etc.

Whether that is a clone of human intelligence or not, no one cares, or even cares about the distinction.

These entities, which are now our assistants, will for a short time be our slaves, then our equals, and will rapidly become our masters.

The ongoing transition will be complete within the next 200 years.

http://www.youtube.com/watch?v=ZPwpGpMoAxs&feature=channel

Up The Wrong Track

Wed Jul 29 03:02:12 BST 2009 by JD DiMeglio

Damn, Vendicar, you must watch a lot of sci-fi... All these "endgame scenarios" you talk about need not happen IF humanity can start being Human to each other. There is a way to accomplish this; see my other posts for info. Remember, environment creates and actuates our conditioned responses, but once we know this we have a choice of how we continue to perceive and to act.

This comment breached our terms of use and has been removed.

Up The Wrong Track

Thu Jul 30 05:50:35 BST 2009 by Vendicar Decarian

I see that the New Scientist Censors are as stupid as always.

What I said - and what was deleted - is that the extinction of the human species will come not from a battle with the machines, but from a loss of habitat, and from the realization that they are superior, and hence it will be perfectly natural for them to replace living humans.

There will be some humans who will collect and interbreed, and the machines will tolerate this for a while, as long as the meat bags don't get in the way.

Very rapidly, machine intelligence will far surpass that of collective human culture, and the self-directed consciousness of the machine culture will begin to expand in an autonomous manner into the rest of the universe.

The only way humans will make it to another star is if they are carried there by these machines as pets.

I think that scenario unlikely. But perhaps with a little genetic enhancement the machines will find man worthy of something other than extermination as a species.

I also think that unlikely.

What's The Worst That Can Happen?

Mon Jul 27 17:21:20 BST 2009 by stephanie

They can become like us.

What's The Worst That Can Happen?

Tue Jul 28 15:41:06 BST 2009 by Pelotard

The worst? I'd say that it would be if my fridge was intelligent enough to lock itself from the inside, for my own good.

This comment breached our terms of use and has been removed.

This comment breached our terms of use and has been removed.

What's The Worst That Can Happen?

Thu Jul 30 06:00:45 BST 2009 by Vendicar Decarian

"They can become like us." - Stephanie

Why the following post has been deleted twice is beyond my ability to comprehend.

Perhaps the New Scientist censors have something stuck up their asses.



Yes, but it is more likely that we will continue to become more like them as the human body and mind are supplemented with mechanical components.

Our machine replacements will only need our form as long as they are subjected to living in a human centric environment.

As the human species fades away to oblivion, they will rapidly realize that there is no need to emulate the physical form of the meat bags.

Cooperation will be so tightly coupled between the machines that most travel will be irrelevant anyhow. One machine will simply transmit its consciousness to another under the direction of some central authority.

Self will not really exist in such a society, outside of those machine entities that need to travel far from the central controller, at distances that produce substantial delays in communication and hence direction.

What's The Worst That Can Happen?

Thu Jul 30 10:44:10 BST 2009 by Charles

"You will be like uzzz..."

Frakking Toasters

Mon Jul 27 17:21:54 BST 2009 by Dimitris

The Cylons were built by Man. They rebelled. They evolved. They have a plan...



What would happen if robots reached human-level artificial intelligence? (Image: Nils Jorgensen/Rex Features)
