New Mars Forums

Official discussion forum of The Mars Society and MarsNews.com



#1 2021-06-02 18:15:45

louis
Member
From: UK
Registered: 2008-03-24
Posts: 7,208

Musk Warns About AI

People sometimes assume Musk is an AI enthusiast. He's actually someone who has considered the subject in depth and knows there are great dangers ahead. I believe he wants humanity to devise ways to use AI safely.

https://www.youtube.com/watch?v=TY4eJTo9Kv4


Let's Go to Mars...Google on: Fast Track to Mars blogspot.com


#2 2021-06-03 12:16:43

kbd512
Administrator
Registered: 2015-01-02
Posts: 7,416

Re: Musk Warns About AI

Louis,

There is no way to use something like AI "safely".  Firstly, because there's no such thing as "safety".  Secondly, because no human has the capability to control a machine that has the same powers of reasoning as a human, but billions or even trillions of times the computational capacity of every human on the planet.  This is much like thinking that there's a "safe" way to control a car or aircraft.  You can take prudent precautions against known operating limitations to prevent disaster, but the complexity of the machine, its operator, and its operating environment will prevent you from ever achieving 100% reliability.  Even AI is subject to failure.  It's imperfect by nature specifically because it was created by imperfect people.
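One rough way to see why "100% reliability" never materializes in a complex machine: if the system only works when every one of its parts works, the per-part reliabilities multiply, and the product collapses as the part count grows. A minimal back-of-the-envelope sketch (my own illustration with made-up numbers, assuming independent failures, not anything from the post above):

Code:

# If a system only works when all of its parts work, per-part reliabilities
# multiply, so overall reliability falls off fast as parts are added.
def series_reliability(per_part: float, n_parts: int) -> float:
    """Probability that all n_parts independent parts work at once."""
    return per_part ** n_parts

print(series_reliability(0.9999, 1_000))      # ~0.905: 1,000 parts at 99.99% each
print(series_reliability(0.9999, 1_000_000))  # ~3.7e-44: a million parts, failure somewhere is near-certain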


#3 2021-06-03 16:52:04

Calliban
Member
From: Northern England, UK
Registered: 2019-08-18
Posts: 3,408

Re: Musk Warns About AI

Genuine artificial intelligence, that is to say, machines capable of learning and innovating in non-repeatable situations, would be a disaster for mankind.  It would allow mechanisation to replace human labour in every situation.  Up to now, machines have replaced human labour in situations where tasks are simple and repeatable, with few unexpected variables.  Up to now, mechanisation has magnified the productivity of human beings, because humans are able to function as the brain behind the machine.  Humans have retained a role within our economy precisely because machines are not yet capable of dealing with the sort of complexity that requires reasoning.  If that changes, it is difficult to foresee any tasks that could not be carried out by automation.  That would be great if you are lucky enough to own a stake in an AI dominated economy.  But if you don't own any intellectual property and are stuck in a situation where you are selling your labour, then your wages would be reduced to a pittance.  Machines do not get hungry and do not need to eat.  Aside from modest amounts of electricity and occasional spare parts, they do not consume any resources beyond their initial manufacture.  Trying to directly compete with an intelligent machine in any productive endeavour would be a miserable existence.

Last edited by Calliban (2021-06-03 16:56:11)


"Plan and prepare for every possibility, and you will never act. It is nobler to have courage as we stumble into half the things we fear than to analyse every possible obstacle and begin nothing. Great things are achieved by embracing great dangers."


#4 2021-06-03 17:53:09

tahanson43206
Moderator
Registered: 2018-04-27
Posts: 17,047

Re: Musk Warns About AI

For Calliban re #3

This post is NOT intended to question or disagree with the points you made above ...

Instead, I'd simply like to point out that the future has been explored by science fiction writers, and by futurists who are actually paid to think deep thoughts about such matters.

Star Trek (as I understand the design of the series) anticipated a time when NO human did any menial work of ANY kind, unless they actually wanted to do so.

A well designed future society would manage social expectations by raising children to blend into whatever patterns may seem appropriate at the time.

The pathetic scrabbling after income we humans persist in will eventually become a bad memory, along with child labor and other social ills of a less enlightened time.

However, returning to your opening line ... plenty of science fiction has attempted to deal with exactly that scenario ...

Somewhere in the recesses of my memory is a well thought out novel about a welfare state created by humans with the most sublime of motives, which resulted in robots managing the lives of humans so that absolutely no danger could befall them.

The setting was science fiction, so the plot included attempts to escape the nannies by travelling to other planets.

The tension built into the yarn by the author had to do with the cat-and-mouse game between the humans and the nannies who (of course) considered space travel too dangerous for humans.

Thanks again for supporting Louis' interesting new topic here.

(th)


#5 2021-06-03 21:34:00

kbd512
Administrator
Registered: 2015-01-02
Posts: 7,416

Re: Musk Warns About AI

tahanson43206,

The biggest problem with that kind of thinking is that science fiction is starkly different from science reality.  In science reality, you can't press a magic button to restart human civilization if the AI decides to detonate all of the nuclear weapons or destroy the internet or otherwise make the planet uninhabitable by humans.  Science reality also requires obeying simple mathematics, but nearly all of our "science fiction visionaries" have a terrible habit of hand-waving away basic math whenever it doesn't permit their glittering vision of the future to approach a technologically feasible reality using known physics.  If and when our understanding of basic physics changes, then a lot more of these intractable problems might work out the way we want them to.  Until then, all machines made by humans eventually fail, often in catastrophic ways.  The damage done depends a lot upon what the machine has control over.  Up to this point, nearly all machines except for the internet have been discrete devices with comparatively little potential to damage human civilization, with the exception of nuclear weapons.

Poverty is a human behavioral problem.  It has virtually nothing to do with technology or money or market economies.  If humanity had never invented the concept of a universal medium of exchange, aka "money", how well would a lazy and/or unintelligent person fare?

I think history has already provided the answer to that question, over and over and over again.

What you describe as "pathetic scrabbling after income" is, in point of fact, a very sophisticated, if simple to use and understand, mechanism to ensure high efficiency for both discrete and continuous processes, so long as governments or other assorted criminals don't exert influence to pervert the workings of that economic machine.  The fact that there are people who devote their entire lives to the management of money / capital / labor should be an indicator that good stewardship of finite resources is necessary for a technologically advanced human civilization to thrive.  To that point, successful people are typically, though obviously not universally, successful because they make good decisions regarding their time and money, live within their means, and prepare for avoidable disasters.

So...  Imagine for a moment that we lived in a world without money:

Let's say everyone decides to produce food because humans absolutely need food to survive, but there is no established mechanism to ensure that humanity produces food in the most efficient ways.  If you run out of resources to produce food, then you have mass starvation, suffering, and death.  The underlying issue is as simple as that and always has been.

Seriously though, what is up with this revulsion to objective reality that drives magical thinking?

After you have an idea, do an experiment to see how well your idea actually works, using a set of willing versus unwilling test subjects.

I really need someone to answer that question honestly, because we can't get to where these "visionaries" say they want to go from where we're at.  We need to start dealing with imperfect objective reality, where science and technology very seldom move at the speed that people with little to no real understanding of what they're asking for think they should.

Every aspect of modern technologically advanced human civilization stems from a market economy that uses money as a medium of exchange.  Instead of flippantly dismissing the entire concept of money because some have more and some have less or we don't have the exact amount that we want, maybe we should instead ask the question of why money exists to begin with.  What does money actually represent (aka, a measure of agreed upon value of exchange for labor and goods), and why do we use money versus something else as a tangible and fungible medium of exchange?  If we simply call "money" something else, like "oopa looplah", does something magical happen in the brains of our visionaries, or are we still stuck with the concept of finite resource management and "economizing" on the use of resources to produce things that people value?

Is it possible that a similar fundamental issue also applies to the argument as to why some jobs are not entirely entrusted to machines?

I think Elon Musk is very prudently concerned about the potential problems that AI could cause for humanity, because we don't have any solution at all for some of them, and there are many other problems that we probably haven't even considered.


#6 2021-06-04 05:10:37

louis
Member
From: UK
Registered: 2008-03-24
Posts: 7,208

Re: Musk Warns About AI

I'd say that thinking it is impossible to manage AI safely is a bit of a counsel of despair.  This is where we need to show our human intelligence and creativity. 

1. In the current era, and for say the next hundred years, AI is going to be fundamentally dependent on computer power. The idea of AI becoming some total omniscient unified presence seems to me unlikely, because that would require some new type of technological platform beyond what we recognise as computers. Also, I am sceptical about whether AI can leapfrog human intelligence. Yes, I can see AI will be able to discover some exotic chemical compounds, because it can work across the whole of chemistry fast and on many levels. But how likely is it that AI will come up with a convincing cosmological theory that trounces all those theories produced by our Nobel Prize winners? I don't think it's likely, but I guess we will see. I suspect it will just come up with rival theories, no more and no less persuasive than most human ones. When you read about how some of our top mathematicians "think", you realise that they cannot explain it in rational terms. There is a huge amount of intuition. Some genius mathematicians "see" numbers as a giant landscape in their mind which they rove over. For us non-mathematically minded mortals it's difficult to understand how they think. There is as yet no evidence that AI could replicate that kind of intuition.

2. We can build barriers to what I would call "Invasive" AI.  We already have some robot doctors (based on the HOLMES computer, I think) that can diagnose an individual's symptoms better than your average human doctor, and we have some robot surgery taking place already. We need rules in place that prevent Invasive AI. So, for instance, there might be a rule that all reports on an individual human's health have to be in the form of a non-AI file. A robot surgeon can only act on the basis of that file, not through direct contact with the AI general practitioner. How you would close off a whole transport system with self-driving vehicles and GPS and all the rest so it cannot be taken over by mad AI is obviously more problematic, but in principle it might involve a similar approach.

3. The barriers and safeguards need to be established across the whole field of human activity. That's the way I would see it - billions of "gates" installed to stop Invasive AI (a rough sketch of one such gate follows this list).

4. We need special conventions on human-computer interfaces.  These do need to be limited unless you don't care about humans being turned into robots.

5. You probably need the equivalent of an AI Control Police to identify threats and make interventions.
 
6. You of course need international agreement to make this work. It's not going to work if North Korea is using AI to build an invincible human-computer hybrid army, for instance. Musk might be pessimistic about the chances of getting AI controls to work on Earth, which is one reason why he sees settling Mars as an urgent project.
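To make point 2 a bit more concrete, here is a minimal sketch of one such "gate": a plain, non-AI validator that only passes a flat, human-approved record from the AI diagnostician to the robot surgeon, so the two systems never talk to each other directly. The field names, the JSON format, and the whole scenario are my own illustrative assumptions, not anything established in this thread:

Code:

# A hypothetical "gate" between an AI diagnostician and a robot surgeon:
# the surgeon only ever sees a flat, whitelisted, human-approved record.
import json

ALLOWED_FIELDS = {"patient_id", "diagnosis", "procedure", "approved_by_human"}

def validate_handoff(raw: str) -> dict:
    """Accept only a flat JSON record with whitelisted scalar fields; reject anything else."""
    record = json.loads(raw)
    if not isinstance(record, dict):
        raise ValueError("record must be a flat object")
    unexpected = set(record) - ALLOWED_FIELDS
    if unexpected:
        raise ValueError(f"unexpected fields: {unexpected}")
    if not record.get("approved_by_human"):
        raise ValueError("record lacks human sign-off")
    for value in record.values():
        if not isinstance(value, (str, int, float, bool)):
            raise ValueError("only plain scalar values are allowed")
    return record

# The surgical system would act only on the validated record:
safe = validate_handoff('{"patient_id": "X-001", "diagnosis": "appendicitis", '
                        '"procedure": "appendectomy", "approved_by_human": true}')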


Let's Go to Mars...Google on: Fast Track to Mars blogspot.com


#7 2021-06-04 08:15:28

Mars_B4_Moon
Member
Registered: 2006-03-23
Posts: 9,175

Re: Musk Warns About AI

They kill with no human oversight?

AI drone may have 'hunted down' and killed soldiers in Libya with no human input

https://www.cnet.com/news/autonomous-dr … n-its-own/


https://en.wikipedia.org/wiki/Libyan_Ci … 4–present)

The Arab Spring movement and NATO bombing campaign toppled Gaddafi in 2011.

African migrants are now openly sold in Libyan 'slave markets'.

Attempts to build a democratic state after Gaddafi fell disintegrated into a new civil war between rival governments in 2014.

Last edited by Mars_B4_Moon (2021-06-04 08:18:08)


#8 2021-06-04 09:39:25

kbd512
Administrator
Registered: 2015-01-02
Posts: 7,416

Re: Musk Warns About AI

Louis,

Given the fact that we have computer hackers who can bypass any security mechanisms created by other humans, what makes you think an AI-based computer program can't do the same thing, except much faster than any human reasonably could?

AI didn't require massive amounts of computing power to design car and aircraft parts markedly lighter than what teams of engineers, using very sophisticated software, were able to produce over months or even years.  The AI program wasn't running on a near-supercomputer the way the CAD software was; it was running on a single workstation.

I'm not saying AI technology doesn't have good uses.  I'm saying you don't get to choose between good and bad uses for the technology.

You seem to completely ignore what Elon Musk actually stated about AI.  He's not going to Mars to skirt around Earth-based laws.  He's building a colony on Mars as a backup plan to deal with emerging threats like AI and very old threats like giant space rocks crashing into the Earth.


#9 2021-06-04 20:19:32

SpaceNut
Administrator
From: New Hampshire
Registered: 2004-07-22
Posts: 28,832

Re: Musk Warns About AI

It's not a question of whether AI can or will be achieved; it's when it breaks that we have problems, as it tries to fill in its own blanks and rewrite whatever of its base code remains....


#10 2021-06-07 04:55:02

Mars_B4_Moon
Member
Registered: 2006-03-23
Posts: 9,175

Re: Musk Warns About AI

Musk vs Anonymous

Anonymous accuses Elon Musk of ‘destroying lives’ with cryptocurrency tweets

https://www.independent.co.uk/life-styl … 60458.html

The Anonymous hacker group is a 'grey hat' group? Dedicated to who knows what - making weird 4chan memes, or posting cryptic threat videos in V for Vendetta Guy Fawkes masks?
I guess you could call them hacktivists: they have leaked emails WikiLeaks-style, vandalized and graffitied websites, and sometimes posted conspiracy 'new world order' political material or written messages about human trafficking; they also carry out 'Ops' or raids on groups and websites.

Lift the pedosadist / trafficking networks and the entire global oligarchy will be in the net

  Anon  OpDeathEaters


The group Anonymous seems to attract many types, from journalists and scientists to hackers, political activists, artistic comedians, and political figures like Julian Assange. There have been organized movements such as support for Occupy Wall St, protests against New Zealand's Government Communications Security Bureau, an organized response to Aaron Swartz's suicide at Reddit, Operation Copyright Japan against Japanese censorship, DDoS attacks on Vatican websites, OpIsrael in response to Israel-Palestine events, and an ongoing 'Operation Charlie Hebdo' against the Islamist jihadis who killed cartoonists in Europe.
Anonymous seems to have a communal, collective hacker sense of its own moral values that is weird; it's like a 4chan cult outside of normal religion or the mainstream news narrative...
Anyone can be a target of Anonymous, from Scientology to the KKK to ISIS.
The FBI has broken up various Anonymous groups and tried to shut them down.


#11 2021-08-28 19:22:40

SpaceNut
Administrator
From: New Hampshire
Registered: 2004-07-22
Posts: 28,832

Re: Musk Warns About AI

Bump - see, we already had one...


#12 2021-09-08 08:15:02

Mars_B4_Moon
Member
Registered: 2006-03-23
Posts: 9,175

Re: Musk Warns About AI

Elon Musk's Tesla Bot raises serious concerns - but probably not the ones you think
https://www.chron.com/news/local/articl … 439925.php


#13 2021-11-26 12:28:56

SpaceNut
Administrator
From: New Hampshire
Registered: 2004-07-22
Posts: 28,832

Re: Musk Warns About AI

Seems we are using AI to see how our brains work.
Shock AI Discovery Suggests We've Not Even Discovered Half of What's Inside Our Cells

Inside every cell of the human body is a constellation of proteins, millions of them. They're all jostling about, being speedily assembled, folded, packaged, shipped, cut and recycled in a hive of activity that works at a feverish pace to keep us alive and ticking.

Microscopes, powerful as they are, allow scientists to peer inside single cells, down to the level of organelles such as mitochondria, the power packs of cells, and ribosomes, the protein factories. We can even add fluorescent dyes to easily tag and track proteins.


Classic view of a Eukaryote cross section. (Mariana Ruiz/LadyofHats/Wikimedia)

Fusing image data from a library called the Human Protein Atlas and existing maps of protein interactions, the machine learning algorithm was tasked with computing the distances between protein pairs.

The goal was to identify communities of proteins, called assemblies, that co-exist in cells at different scales, from the very small (less than 50 nm) to the very 'large' (more than 1 μm).

One shy of 70 protein communities were classified by the algorithm, which was trained using a reference library of proteins with known or estimated diameters, and validated with further experiments.

Around half of the protein components identified are seemingly unknown to science, never documented in the published literature, the researchers suggest.

No wonder we are behind the ball on Covid....
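For anyone curious how "communities of proteins" at different scales might fall out of pairwise distances, here is a minimal sketch of the general idea only (not the authors' actual pipeline; the protein names, distances, and cutoffs are made up for illustration): hierarchical clustering cut at a small threshold yields tight assemblies, and cut at a large threshold yields bigger complexes.

Code:

# Toy multi-scale clustering of proteins from an estimated pairwise distance matrix.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

proteins = ["A", "B", "C", "D", "E"]
# Symmetric matrix of made-up protein-protein distances in nanometres.
dist_nm = np.array([
    [  0,  10,  40, 900, 950],
    [ 10,   0,  35, 880, 940],
    [ 40,  35,   0, 860, 910],
    [900, 880, 860,   0,  20],
    [950, 940, 910,  20,   0],
], dtype=float)

tree = linkage(squareform(dist_nm), method="average")

for cutoff_nm in (50, 1000):  # "small" assemblies vs "large" complexes
    labels = fcluster(tree, t=cutoff_nm, criterion="distance")
    groups = {}
    for name, lab in zip(proteins, labels):
        groups.setdefault(lab, []).append(name)
    print(f"<= {cutoff_nm} nm:", list(groups.values()))
# <= 50 nm:   [['A', 'B', 'C'], ['D', 'E']]
# <= 1000 nm: [['A', 'B', 'C', 'D', 'E']]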


#14 2021-11-26 12:44:25

louis
Member
From: UK
Registered: 2008-03-24
Posts: 7,208

Re: Musk Warns About AI

Yep that picture you have in your head of a cell, from your biology class  when you were 15 - a simple, still thing - is all wrong. Better to think of a furiously boiling pan of minestrone soup.


SpaceNut wrote:

Seems we are using AI to see how our brains work.
Shock AI Discovery Suggests We've Not Even Discovered Half of What's Inside Our Cells

Inside every cell of the human body is a constellation of proteins, millions of them. They're all jostling about, being speedily assembled, folded, packaged, shipped, cut and recycled in a hive of activity that works at a feverish pace to keep us alive and ticking.

Microscopes, powerful as they are, allow scientists to peer inside single cells, down to the level of organelles such as mitochondria, the power packs of cells, and ribosomes, the protein factories. We can even add fluorescent dyes to easily tag and track proteins.

https://img-s-msn-com.akamaized.net/ten … &h=530&m=6

Classic view of a Eukaryote cross section. (Mariana Ruiz/LadyofHats/Wikimedia)

Fusing image data from a library called the Human Protein Atlas and existing maps of protein interactions, the machine learning algorithm was tasked with computing the distances between protein pairs.

The goal was to identify communities of proteins, called assemblies, that co-exist in cells at different scales, from the very small (less than 50 nm) to the very 'large' (more than 1 μm).

One shy of 70 protein communities were classified by the algorithm, which was trained using a reference library of proteins with known or estimated diameters, and validated with further experiments.

Around half of the protein components identified are seemingly unknown to science, never documented in the published literature, the researchers suggest.

No wonder we are behind the ball on Covid....


Let's Go to Mars...Google on: Fast Track to Mars blogspot.com


#15 2023-03-09 08:25:23

Mars_B4_Moon
Member
Registered: 2006-03-23
Posts: 9,175

Re: Musk Warns About AI

Elon Musk, who co-founded firm behind ChatGPT, warns A.I. is 'one of the biggest risks' to civilization

https://www.cnbc.com/2023/02/15/elon-mu … -risk.html

Tesla, SpaceX, and Twitter CEO Elon Musk warned of a 'dangerous technology' during a Tesla earnings call.

