This article, "Artificial intelligence: how clever do we want our machines to be?", was written by Alex Hern for the Observer and published on 29 November 2014 at 19.00 UTC.

From 2001: A Space Odyssey to Blade Runner, and from RoboCop to The Matrix, the question of how humans create artificial intelligence has proved fertile dystopian territory for film-makers. More recently, Spike Jonze's Her and Alex Garland's forthcoming Ex Machina explore what it might be like to have AI creations living among us and, as foregrounded by Alan Turing's famous test, how tricky it could be to tell flesh and blood from chips and code.

Those concerns are troubling some of Silicon Valley's biggest names: last month Tesla's Elon Musk described AI as humanity's "biggest existential threat… we need to be very careful". What many of us don't realise is that AI isn't some distant technology that exists only in film-makers' imaginations and computer scientists' labs. Many of our smartphones employ rudimentary AI techniques to translate languages or answer our queries, while video games use AI to generate complex, ever-changing gaming scenarios. And as long as Silicon Valley companies such as Google and Facebook keep acquiring AI firms and hiring AI experts, AI's IQ will continue to rise…

Isn't there a Steven Spielberg film about AI?
There is, but the term "artificial intelligence" has a history that runs far deeper than the 2001 Spielberg/Kubrick film. The concept of artificial intelligence goes back to the birth of computing: in 1950, just 14 years after defining the concept of a general-purpose computer, Alan Turing asked "Can machines think?"

Jude Law as Gigolo Joe (and pals) in Spielberg and Kubrick's 2001 movie AI. Photograph: Allstar/Warner Bros/Sportsphoto Ltd

It's a question that still preoccupies us 64 years later, and it has become the core of Alex Garland's new film, Ex Machina, in which a young man is asked to assess the humanity of a beautiful android. The concept isn't a million miles from the one set out in Turing's 1950 paper, Computing Machinery and Intelligence, in which he proposed "the imitation game" – what we now know as the Turing test. Hook a computer up to a text terminal and let it hold conversations with a human interrogator, while a real person does the same. The heart of the test is whether, when you ask the interrogator to guess which one is the human, "the interrogator [will] decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman".

Turing argued that asking whether machines could pass the imitation game is more useful than the vague and philosophically unclear question of whether or not they "think". "The original question… I believe to be too meaningless to deserve discussion," he wrote. Nevertheless, he thought that by the year 2000, "the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted".

In terms of natural language, he wasn't far off. Today it's not uncommon to hear people talking about their computers being "confused", or taking a long time to do something because they're "thinking about it". And even if we are stricter about what counts as a machine thinking, it's closer to reality than many people realise.

Blade Runner's 'Voight-Kampff' test was designed to distinguish replicants from humans based on their emotional responses to questions.

So does AI already exist?
It depends. We are still nowhere near passing Turing's imitation game, despite reports to the contrary. In June, a chatbot called Eugene Goostman successfully fooled a third of the judges at a mock Turing test held in London into thinking it was human. But rather than being able to think, Eugene relied on a clever gimmick and a host of tricks. By pretending to be a 13-year-old boy who spoke English as a second language, the machine explained away its many incoherencies, and with a smattering of crude humour and offensive remarks, managed to redirect the conversation when unable to give a straight answer.

The most immediate use of AI tech is natural language processing: working out what we mean when we say or write a command in colloquial language. For something that babies begin to do before they can even walk, it’s an astonishingly hard task. Consider the phrase beloved of AI researchers – “time flies like an arrow, fruit flies like a banana”. Breaking the sentence down into its constituent parts confuses even native English speakers, let alone an algorithm.
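For a rough sense of why that sentence is so awkward, here is a minimal sketch using the NLTK library and a deliberately toy grammar – the grammar is invented for this piece, not how any production system parses English – yet even these few rules yield two competing parse trees.

```python
import nltk

# Toy grammar in which "flies" can be a noun or a verb and "like" a verb
# or a preposition, so the sentence admits more than one reading.
grammar = nltk.CFG.fromstring("""
S   -> NP VP
NP  -> N | N N | Det N
VP  -> V PP | V NP
PP  -> P NP
N   -> 'fruit' | 'flies' | 'banana'
V   -> 'flies' | 'like'
P   -> 'like'
Det -> 'a'
""")

parser = nltk.ChartParser(grammar)
sentence = "fruit flies like a banana".split()

# One tree describes insects that enjoy bananas; the other describes fruit
# that travels through the air the way a banana does.
for tree in parser.parse(sentence):
    print(tree)
```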

Is all AI concerned with conversations?
Not at all. In fact, one of the most common uses of the phrase has little to do with speech at all. Some readers will know the initials AI not from science fiction or Alan Turing, but from video games, where it is used to refer to computer-controlled opponents.

In a first-person shooter, for example, the AI controls the movements of the enemies, making them dodge, aim and shoot at you in challenging ways. In a racing game, the AI might control the rival cars. As a showcase for the capabilities of AI, video games leave a lot to be desired. But there are diamonds in the rough, where the simplistic rules of the systems combine to make something that appears complex.

Take GTA V, where the creation of a city of individuals living their own lives means that it's possible to turn a corner and find a fire crew in south central LA having a fist-fight with a driver who got in the way of their hose; or Dwarf Fortress, where caves full of dwarves live whole lives, richly textured and algorithmically detailed. These emergent gameplay systems show a radically different way that AI can develop, aimed not at fully mimicking a human, but at developing a "good enough" heuristic that turns into something altogether different when scaled up enough.
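To give a flavour of what a "good enough" heuristic looks like in practice, here is a minimal, invented sketch in Python – not code from any shipping game – of a guard that switches between a handful of simple states; the apparent complexity in games comes from many such agents interacting in a shared world.

```python
from dataclasses import dataclass

@dataclass
class Guard:
    x: float
    state: str = "patrol"   # "patrol", "chase" or "attack"

    def update(self, player_x: float) -> str:
        distance = abs(self.x - player_x)
        # Hand-tuned thresholds stand in for any real notion of "thinking".
        if distance < 2:
            self.state = "attack"
        elif distance < 10:
            self.state = "chase"
            self.x += 1 if player_x > self.x else -1  # step towards the player
        else:
            self.state = "patrol"
            self.x += 1                               # wander along the patrol route
        return self.state

guard = Guard(x=0)
for player_x in (30, 8, 1):
    print(player_x, guard.update(player_x))   # patrol, then chase, then attack
```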

So is everyone ploughing money into AI research to make better games?
No. A lot of AI funding comes from firms such as Apple and Google, which are trying to make their "virtual personal assistants", such as Siri and Google Now, live up to the name.

It sounds a step removed from the sci-fi visions of Turing, but the voice-controlled services are actually having to do almost all the same heavy lifting that a real person does. They need to listen to and understand the spoken word, determine how what they have heard applies to the data they hold, and then return a result, also in conversational speech. They may not be trying to fool us into thinking they’re people, but they aren’t far off. Because all the calculations are done in the cloud, the more they hear, the better they are at understanding.
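As a rough illustration of that pipeline – with made-up function names rather than anything resembling Siri's or Google Now's real internals – each stage below is a toy stand-in for what would in practice be a large cloud-hosted model.

```python
def speech_to_text(audio: bytes) -> str:
    # Stand-in for the speech recogniser: listen to the spoken word.
    return "what is on my calendar tomorrow"

def parse_intent(text: str) -> dict:
    # Stand-in for language understanding: map words to a structured query.
    if "calendar" in text:
        return {"action": "calendar.lookup", "when": "tomorrow"}
    return {"action": "unknown"}

def answer(intent: dict, user_data: dict) -> str:
    # Apply the query to the data the service holds, then reply conversationally.
    if intent["action"] == "calendar.lookup":
        events = user_data.get(intent["when"], [])
        return "You have " + (", ".join(events) if events else "nothing") + " tomorrow."
    return "Sorry, I didn't catch that."

user_data = {"tomorrow": ["a 9am stand-up", "lunch with Sam"]}
print(answer(parse_intent(speech_to_text(b"...")), user_data))
# -> You have a 9am stand-up, lunch with Sam tomorrow.
```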

In the 2013 movie Her, lonely Theodore Twombly (Joaquin Phoenix) falls in love with an operating system.

However, the leading AI research isn't just aimed at replicating human understanding of the world, but at exceeding it. IBM's Watson is best known as the computer that won US gameshow Jeopardy! in 2011, harnessing its understanding of natural language to parse the show's obtuse questions phrased as answers. But as well as natural language understanding, Watson also has the ability to read and comprehend huge bodies of unstructured data rapidly. In the course of the Jeopardy! taping, that included more than 200 million pages of content, including the full text of Wikipedia. But the real goal for Watson is to expand that to full access to the entire internet, as well as specialist data about the medical fields it will eventually be put to work in. And then there are the researchers who are just trying to save humanity.

Oh God, we’re all going to die?
Maybe. The fear is that, once a sufficiently general-purpose AI such as Watson has been created, its capacity will simply scale with the processing power available to it. Moore's law predicts that processing power doubles every 24 months, so it's only a matter of time before an AI becomes smarter than its creators – able to build an even faster AI, leading to a runaway growth in cognitive capacity.
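The arithmetic behind that worry is straightforward. Assuming the 24-month doubling holds, relative processing power after n years is 2^(n/2), as this back-of-the-envelope sketch shows:

```python
# Relative processing power under a 24-month doubling time.
for years in (2, 10, 20, 40):
    print(f"after {years:2d} years: {2 ** (years / 2):,.0f}x today's power")
# after 2 years: 2x ... after 40 years: 1,048,576x
```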

But what does a superintelligent AI actually do with all that capacity? That depends on its programming. The problem is that it’s hard to program a supremely intelligent computer in a way that will ensure it won’t just accidentally wipe out humanity.

Suppose you’ve set your AI the task of making paperclips and of making itself as good at making paperclips as possible. Pretty soon, it’s exhausted the improvements to paperclip production it can make by improving its production line. What does it do next?

“One thing it would do is make sure that humans didn’t switch it off, because then there would be fewer paperclips,” explains Nick Bostrom in Salon magazine. Bostrom’s book, Superintelligence, has won praise from fans such as SpaceX CEO Elon Musk for clearly stating the hypothetical dangers of AI.

The paperclip AI, Bostrom says, "might get rid of humans right away, because they could pose a threat. Also, you would want as many resources as possible, because they could be used to make paperclips. Like, for example, the atoms in human bodies."

How do you fight such an AI?
The only way that would work, according to some AI theorists such as Ray Kurzweil, a director of engineering at Google, is to beat it to the punch. Not only do humans have to try to build a smart AI before they make one accidentally, but they have to think about ethics first – and then program that into it.

After all, coding anything simpler is asking for trouble. A machine with instructions to "make people happy", for example, might just decide to do the job with electrodes in brains; so only by addressing one of the greatest problems in philosophy can we be sure we'll have a machine that understands what it means to be "good".

So all we have to do is program in ethics and we'll be fine?
Well, not quite. Even if we manage to not get wiped out by malicious AI, there's still the issue of how society adapts to the increasing capability of artificial intelligence.

The Industrial Revolution was characterised by the automation of a number of jobs that previously relied on manual labour. There is little doubt that it represented one of the greatest increases in human welfare ever seen. But the upheaval at the time was momentous and something we could be about to see again.

Elon Musk on the dangers of AI.

What steam power did for physical labour, AI could do for mental labour. Already, the first casualties are starting to become clear: the minicab dispatch office has little place in a world of Hailo and Uber; the job of a stockbroker has changed beyond all recognition thanks to the introduction of high-frequency trading; and ever since the construction of the Docklands Light Railway in the 1980s, the writing has been on the wall for train drivers.

And the real changes are only just beginning. In November, Goldman Sachs led a $15m funding round for Kensho, a financial data service that uses AI techniques to pump out financial analysis at a rate no human analyst could match. And it can do it while taking stock of the entirety of the huge amount of financial data available, something humans simply can't cope with.

Kensho’s analytical notes could then be passed on to a high-frequency trading firm such as Athena, which will use the insights to gain an edge of milliseconds on the market – that’s enough to make money, if you’re trading with billions of dollars. Once the trading has affected the market, it might be written up for Forbes by Narrative Science, which uses algorithms to replace financial journalists. After all, most business stories follow a common template, and the data is already available in a structured format, so why waste time getting people involved at all?
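As a crude illustration of the template idea – invented for this piece, and nothing like Narrative Science's actual system – structured market figures can be dropped straight into boilerplate prose:

```python
# Toy earnings data; the company and figures are made up for illustration.
report = {"company": "Acme Corp", "quarter": "Q3",
          "revenue_m": 120.4, "prior_m": 101.7}

change = (report["revenue_m"] - report["prior_m"]) / report["prior_m"] * 100
direction = "rose" if change >= 0 else "fell"

story = (f"{report['company']} reported {report['quarter']} revenue of "
         f"${report['revenue_m']:.1f}m, which {direction} {abs(change):.1f}% "
         f"from the previous quarter.")
print(story)
# Acme Corp reported Q3 revenue of $120.4m, which rose 18.4% from the previous quarter.
```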

At an aggregate level, these changes are a good thing. If the work of millions of people is covered by algorithms, then output goes up, hours worked go down, and we move one step closer to a Jetsons-style utopia.

Ultimately, it will be OK?
Assuming we avoid the superintelligent AIs wiping us out as an afterthought, manage to automate a large proportion of our jobs without creating mass unemployment and societal unrest, and navigate the tricky boundaries of what personhood entails in a world where we can code passable simulacra of humans, then yes, it should be fine.

guardian.co.uk © Guardian News & Media Limited 2010

