Everything Else

Fear of Artificial Intelligence - Page 1

I am back from the new Terminator movie -- Terminator Genisys -- in which, once again, John Connor sends Kyle Reese back in time to protect Sarah. Ohhh, what an amazing story. Judging from the preview, I was really confident that it -- a candidate for 37th-best movie -- would not be crap; so, back from this boring experience, I wanted to read a little about the science and the psychology of it, more specifically what people and scientists fear about A.I. Googling around, I found these articles

have fun
I'm not worried about artificial intelligence (which I've yet to see any reason to suspect that we'll ever manage to create) nearly as much as I'm worried about artificial "intelligence" (which there's already way too much of) and the natural idiots who want to put it in charge of every goddamn thing these days.
Computers: Amiga 1200, DEC VAXStation 4000/60, DEC MicroPDP-11/73
Synthesizers: Roland JX-10/Jupiter-6/D-50/MT-32/SC-55k, Ensoniq SQ-80/Mirage, Yamaha DX7/V-50/FB-01, Korg DW-8000/03-RW/MS-20 Mini, E-mu Proteus MPS/Proteus/2, Rhodes Chroma Polaris

"'Legacy code' often differs from its suggested alternative by actually working and scaling." - Bjarne Stroustrup
How many angels can dance on the head of a pin ?
How many angels can dance on the head of a pin? What a crazy question; it might be good as a song title (I was thinking of something like "Duran Duran - Like An Angel"), but oh well. And how many subatomic particles can dance there? And do we know why the distance between the proton and the electron in a hydrogen atom is what it is?

It might be an interesting set of questions, but more interesting is the reason why Stephen Hawking has warned us in that way. He is a scientist, with almost the IQ of Albert Einstein, so he should know that artificial intelligence is not like the one described by William Gibson.

I mean, in the Sprawl trilogy { Neuromancer, Count Zero, Mona Lisa Overdrive } William Gibson wrote several times that an Artificial Intelligence must have a gun pointed at its head. But oh well... we have failed at A.I. for the last 50 years, we are far from the "singularity event", and his friend Roger Penrose has written as many as 4 books just to say that we can't do A.I. without first improving math and physics. We are also still struggling with why Einstein's relativity and Quantum Mechanics can't get along, and Stephen Hawking has contributed more to describing black holes. So... I think that before meeting a true A.I. bot, humans will understand the graviton and gravity, and use that to build flying robots without wasting so much energy in the ESC technology (brushless motors + propellers, controlled by dedicated MCUs with the purpose of being fast and accurate in their response) that we currently use to make our quad- and hexarotors fly. These bots are cool, everyone wants one, but look at their endurance: usually they can fly for 30-40 minutes, and then they need to land because their batteries are almost flat.

Image Image
Electric Barbarella by Duran Duran

BTW, if A.I. ever wears a robot body, then I want to see it dressed like Electric Barbarella :lol:
(just to reinvent the definition of "machine-porn"; currently that definition applies only to a vintage SGI under the desk)
have fun
I expected you to mention Cherry 2000!
:PI: :O2: :Indigo2IMP: :Indigo2IMP:
robespierre wrote: I expected you to mention Cherry 2000!

Stepford Wives or The Silver Metal Lover ?

What's it gonna be, boy ? YES or NO ? I gotta know right now !
hamei wrote: I expected you to mention Cherry 2000!


So I expected you to mention a 2001 American science fiction film directed by Steven Spielberg and based on a project begun by Stanley Kubrick: "A.I. Artificial Intelligence" :D

That movie sounds to me like a futuristic tale of Pinocchio -- The Adventures of a new Pinocchio-bot -- a story about an animated puppet, a robot with a new kind of intelligence, completely different from other puppets because it can really feel emotions, along with other fairy-tale devices (including a new definition of prostitutes; will robots be made for that?). The story is set in a far-away future, and then it moves even further into the future, to a world in which humans are completely extinct. Marvelous movie!
have fun
ivelegacy wrote: So I expected you to mention a 2001 American science fiction film directed by Steven Spielberg and based on a project begun by Stanley Kubrick: "A.I. Artificial Intelligence"

The Velveteen Rabbit with hardware instead of a soul ...
hamei wrote: The Velveteen Rabbit with hardware instead of a soul …


Science will feel like Alice in Wonderland: she has just discovered the first quantum computer, so she is following the first white rabbit she has seen, and, by the principle of causality, sooner or later she will be tumbling down the rabbit hole.

Oh well, quantum mechanics gives an incomplete description of the real state of affairs, but science knows what Penrose is arguing: he says that to be conscious, a robot's hardware would have to be powered by quantum mechanics. So we still do not know in detail how to achieve strong A.I., but we know something about its hardware (quantum computing machines), while there is still no scientific definition of the soul. What is it? Turing said nothing about that. Is it a special kind of quantum mechanics? Does it include gravitons?
have fun
I wasn't impressed by Penrose's reasoning, and his books don't seem to me to have much in the way of structure or exposition.
All physical phenomena below a certain size are influenced by quantum effects, but there's no reason to think that (e.g.) brains are any more quantum-mechanical than semiconductors are. In fact, all real work in neurobiology assumes a classical model.
Roger Penrose's theory of the mind gets a lot of attention among computer types, but specialists in the topics he addresses have dismissed his work.

He completely misunderstands the range of application of Goedel's incompleteness theorems[1]. He posits quantum effects on processes that operate at a scale where quantum properties have no significant effect[2].

I seem to recall there were also criticisms of his understanding of biology, but I don't have any references for that.

[1] Torkel Franzen. 2005. Goedel's Theorem: An Incomplete Guide to its Use and Abuse.
[2] Alwyn Scott. 1995. Stairway to the Mind: The Controversial New Science of Consciousness.
Logans Run
-----------------------------------------------------------------------
Hey Ho! Pip & Dandy!
MyDungeon() << :Fuel: :Octane2: :Octane2: :Octane2: :Octane: :Indy: MyLoft() << :540: :Octane: MyWork() << :Indy: :Indy: :O2: :O2: :O2: :Indigo: :Indigo:
uunix wrote: Logans Run

Nice costumes :D

I saw Little Annie Fanny and her sister last night on the way home. Was in such shock I neglected to ask how much ....
Image
Who is next? Shirley Manson!

OMG (meaning Oh My Garbage): so the Garbage vocalist Shirley Manson loves nerds

Image

but she has now posted a message denouncing West: "I (L) nerds" ... "but" ... "I am a Terminator, T-1001 model"

and The Resistance has notified us, with sadness, that future Savannah is actually a class TOK cyborg


Image
Alicia Witt

Image


Image

Oh my goodness, so now I am a bit confused about the robo-anthropologists (anthrobopologists?)
have fun
Image

Yet another funny, interesting article
have fun
Fears about AI fall into two big buckets:

1) AI will kill us
2) AI will take our jobs

There is a third, somewhat subconscious fear: if machines outthink us, we will no longer be "special".

Re the first one: there is enough autonomy coming to weapons that a very small number of people might be able to deploy and direct a large force of powerful autonomous weapons systems, creating havoc at an unprecedented scale. Whether the initial intent is supplied by a goal-setting AI or by a human becomes, at some level, immaterial. Robots as warfighters are going to be a reality. CIWS cannons, autonomous drones, fire-and-forget missiles and unmanned ground vehicles are just a few examples of weapon systems that make decisions on their own. And the claim that they never fire without human approval is a bit of a myth... they are instructed to take on a range of targets, but AI-powered systems do individual target selection and firing on their own today.

Re the second: yes, AI-based systems are taking our jobs and will continue to, whether it's autonomous vehicles decimating the number one source of employment for US men (truck drivers), or reasonably advanced "agents" that may not entirely obliterate, but will certainly greatly reduce, employment in the number one job category for US women (assistants). Beyond these, mass providers of employment such as manufacturing and agriculture are clearly going to be robo-sourced. No, not every job will be done by robots/AI in the near future, but this doesn't matter. Enough will be, so that some pretty fundamental socio-economic assumptions will have to be re-examined.

Re the third, everyone is special in their own way :-)

AI is here and its footprint will continue to grow. Whether robots are "sentient" or have "consciousness" are almost peripheral questions when considering the impact of AI with regards to all three of the fears cited above.
--
:Octane2: :O2: :O2: :Indigo: :Indigo: :Indigo: :Fuel: :Indy: :Indy: :Indy: :Indigo2: :Indigo2IMP:
sgifanatic wrote: AI is here

Well, not really. That's not intelligence. That's "following instructions". You can set up a row of dominoes, then push the first one so they all fall down in a row. Is that "intelligence" ?

The theory of relativity and the Sistine Chapel were not 'following instructions'. That's intelligence.

and its footprint will continue to grow.

Not for too much longer. When there's no food in another fifty years, artificial intelligence will be the least of the race's worries.

Did you know that fighting fires was about 15% of the Forest Service budget in the '70s, but now it's almost 50%? This is just one example of an obvious hazard. There are so many things that will happen that we aren't even considering... humans may survive what they hath wrought (though looking at the candidates for political office, there's much doubt in my mind), but even if they do, it ain't gonna be anything like what we have now.

And getting there, wherever there is, will be extremely ugly.

Whether robots are "sentient" or have "consciousness" are almost peripheral questions when considering the impact of AI with regards to all three of the fears cited above.

Dancing on the head of a pin again. The question itself is irrelevant.
hamei wrote: Well, not really. That's not intelligence. That's "following instructions". You can set up a row of dominoes, then push the first one so they all fall down in a row. Is that "intelligence" ?

But, but, but hamei, it's The Future now! Or if it isn't, we can make it be The Future by just insisting hard enough! What are you, some kind of filthy Luddite , that you don't believe in The Future!?
hamei wrote:
sgifanatic wrote: AI is here

Well, not really. That's not intelligence. That's "following instructions". You can set up a row of dominoes, then push the first one so they all fall down in a row. Is that "intelligence" ?


The question you are asking above appears to be rhetorical. I don't know if you are familiar with layer-at-a-time learning, for example, but when exposed to images such a system begins to learn and abstract the same kinds of basic features humans would -- without being told to do so, and without being given any notion of what a line or an edge is. There is no first-order "if-then-else" involved. AI systems today perform their own feature selection, feature evolution and so on. The abstractions they learn -- and the rules they evolve -- are not provided by a human. In most cases, humans are unaware of them.
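The "features without instructions" point can be shown with a deliberately tiny example. Below is a sketch of Hebbian learning with Oja's rule, a classic unsupervised rule (not any of the deep-learning systems discussed here; the data and constants are made up for illustration): a single unit is shown raw 2D points and told nothing about their structure, yet its weights end up aligned with the dominant direction of variation in the data -- it "discovers" the main feature on its own.

```python
import math

# Toy dataset: 2D points mostly spread along the direction (2, 1),
# with a little variation along the perpendicular direction (-1, 2).
data = []
for i in range(-10, 11):
    for j in range(-2, 3):
        t, s = i / 10.0, j / 10.0
        data.append((2 * t - 0.1 * s, 1 * t + 0.2 * s))

# Oja's rule: w += lr * y * (x - y * w), with y = w . x
# No labels, no hand-written feature rules anywhere below.
w = [1.0, 0.0]   # arbitrary starting weights
lr = 0.05
for epoch in range(200):
    for x in data:
        y = w[0] * x[0] + w[1] * x[1]
        w[0] += lr * y * (x[0] - y * w[0])
        w[1] += lr * y * (x[1] - y * w[1])

# The learned weight vector should point (up to sign) along (2, 1)/sqrt(5),
# the first principal component, and Oja's rule also normalizes its length.
norm = math.hypot(w[0], w[1])
alignment = abs((w[0] * 2 + w[1] * 1) / (norm * math.sqrt(5)))
print(alignment)  # close to 1.0
```

The same idea, stacked layer by layer on image patches, is roughly why early layers of learned vision systems end up with edge-like detectors nobody programmed in.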

The theory of relativity and the Sistine Chapel were not 'following instructions'. That's intelligence.


Chess was the ultimate exemplar of human intelligence until Deep Blue.

Computers have invented important things, like super-efficient 3D junctions, without being explicitly told how. These results were achieved by programs that could "re-write" their own rules and discover new rules not considered by their human creators... "advancing knowledge". The 3D junction I alluded to is just one example of an unanticipated discovery made by AI software that did not result merely from following explicit instructions provided by a human. There are many other such examples.

Very early on, AI software was already capable of chancing upon important theorems and mathematical discoveries; for example, an AI program re-discovered Goldbach's conjecture. And within the domain of mathematics, automated theorem provers have been producing increasingly impressive results. Their discoveries will become better known in time. A likely outcome within the next century is that much of the new mathematics being discovered will in fact be machine-discovered. This doesn't require machine sentience, by the way -- just faster computers and the same kinds of GA/GP techniques that have proven themselves over and over in this category.
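The GA flavor of "search without explicit instructions" can be illustrated with a minimal sketch: a (1+1) evolutionary algorithm solving the textbook OneMax problem (maximize the number of 1-bits). This is a toy standing in for the real discovery systems mentioned above, with made-up parameters; the point is that nobody tells the program which bits to flip -- it just mutates and keeps whatever scores better.

```python
import random

def one_max(bits):
    """Fitness: count of 1-bits; the only 'knowledge' the search is given."""
    return sum(bits)

def evolve(n=20, iterations=5000, seed=42):
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(iterations):
        # Mutation: flip each bit independently with probability 1/n.
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        # Selection: keep the child only if it is at least as fit.
        if one_max(child) >= one_max(parent):
            parent = child
    return parent

best = evolve()
print(one_max(best))  # 20: the all-ones string, found by mutation + selection alone
```

Swap the bitstring for a circuit layout or a symbolic expression and the fitness function for a simulator, and you have the shape of the GA/GP systems that stumble onto designs their programmers never specified.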

It's interesting that you used the theory of relativity as an example of human intelligence. Particularly in physics, many human discoveries have been attributed to symbolic manipulation; i.e. not "seeing the truth" in some structural sense, but arriving at it by following a mathematical chain, constrained by the pre-defined rules of mathematical operations. When human mathematical symbolic manipulation leads to a new "truth" it is accepted universally as evidence of intelligence.

Whether robots are "sentient" or have "consciousness" are almost peripheral questions when considering the impact of AI with regards to all three of the fears cited above.

Dancing on the head of a pin again. The question itself is irrelevant.


I agree. As I said, both these questions are peripheral when considering the impact of AI. AI is having a massive impact already and the debate over whether software can be sentient is thus becoming less interesting.

Not for too much longer. When there's no food in another fifty years,


I'll start stocking up!
commodorejohn wrote: But, but, but hamei, it's The Future now! Or if it isn't, we can make it be The Future by just insisting hard enough! What are you, some kind of filthy Luddite , that you don't believe in The Future!?


I can only speak for myself. And while I am sure some folks may limit their interest in this space to rhetorical insistence, I do, in fact, do a little more than that :-)