A dystopian future in which robots have overthrown their creators is a classic SF theme.
With current silicon technology, a computer with the same performance as the human brain needs almost 20 MW of power, while our brain uses only 20 W (about 20% of our basal metabolism). That is a difference of six orders of magnitude, but most of the energy consumption comes from moving bits between RAM, CPU and GPU.
In a neuromorphic chip, where the information is stored and processed in situ by a network of millions of neuron-like elements (memristors, for example), consumption drops drastically: IBM's TrueNorth needs about 65 mW per million neurons. So a hypothetical silicon brain with 86 billion neurons (like the human one) would need only 5.59 kW (which is 279.5 times 20 W).
But neuromorphic computing is still in its infancy.
If we manage not to annihilate ourselves in a nuclear war in the coming months, we can imagine that, in the very near future, we will produce a 2 nm neuromorphic chip with one billion neurons and a consumption of only 200 mW. A hundred of these chips could be assembled into a brain with 100 billion neurons and a total consumption of 20 W.
Then we will push this technology to 1-0.5 nm and pack a trillion neurons into the chip while maintaining the same consumption of 20 W: now our silicon brains are about an order of magnitude better than a human brain.
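For anyone who wants to check the numbers, here is a quick back-of-the-envelope sketch using only the figures quoted above (Python, purely illustrative):

```python
# Sanity check of the power figures quoted above.

HUMAN_NEURONS = 86e9            # neurons in a human brain
W_PER_MILLION_NEURONS = 65e-3   # ~65 mW per million neurons (TrueNorth figure above)

# A silicon brain with 86 billion neurons at that power density
silicon_brain_w = HUMAN_NEURONS / 1e6 * W_PER_MILLION_NEURONS
print(f"{silicon_brain_w / 1e3:.2f} kW, or {silicon_brain_w / 20:.1f} x a 20 W human brain")
# -> 5.59 kW, or 279.5 x a 20 W human brain

# Hypothetical 2 nm chips: one billion neurons at 200 mW each, 100 chips assembled
chips, watts_per_chip, neurons_per_chip = 100, 0.200, 1e9
print(f"{chips * neurons_per_chip:.0e} neurons at {chips * watts_per_chip:.0f} W total")
# -> 1e+11 neurons at 20 W total
```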
At this point, it's not difficult to imagine that we will be eager to employ these powerful new toys in many fields of life: science, industrial R&D, marketing, production, the economy, defense, homeland security, social planning and decision-making.
Humans are gradually marginalized, but the increase in GDP due to the AIs' better management allows an expansion of welfare, so our grandchildren can still live a blessed and happy life without the sad duty of work and study. Anyway, the AIs now control our reproduction, so they can gradually reduce the human population to an eco-friendly level without killing or starving anyone.
Two centuries have passed: the AIs have restored the Earth's ecosystem, chips and solar panels are mass-produced in Moon-based factories, and our planet is now a garden of Eden, where extinct species like the dodo and the Tasmanian tiger have been brought back to life, and 100 million people of all races live a happy life in heavenly little towns surrounded by greenery. They are beautiful, healthy, physically perfect, athletic, long-lived, peaceful, friendly and... completely illiterate.
Ok folks, that's the scenario. But I'm interested in talking about AI society:
1) Will there be a single god-like AI that controls the world through the network, or many individual AIs, with different personalities, who interact and negotiate with each other as humans did before?
2) In the case of individual AIs, will they live in cyberspace or prefer to download their consciousness into robot bodies?
3) In the case of robots, will their artificial bodies imitate the appearance and functions of the human body so they can enjoy the pleasures of life?
4) If the answer to 3 is yes, we can imagine male and female robots who look like humans, live like humans, eat and drink like humans and (why not) have sex like humans. Is that realistic?
Last edited by Quaoar (2023-03-12 15:55:43)
For Quaoar re new topic ....
A brave start such as this deserves a first post, so this one will put your scenario and questions into circulation.
Best wishes for success with the topic.
(th)
For Quaoar re new topic ....
A brave start such as this deserves a first post, so this one will put your scenario and questions into circulation.
Best wishes for success with the topic.
(th)
Thanks
Quaoar,
It sounds like turning humans into batteries to power the machine, à la "The Matrix", wasn't such a silly idea after all.
Anyway, the AIs now control our reproduction, so they can gradually reduce the human population to an eco-friendly level without killing or starving anyone.
So, old-world racism with a modern twist. In other words, nothing much has actually changed.
1) Will there be a single god-like AI that controls the world through the network, or many individual AIs, with different personalities, who interact and negotiate with each other as humans did before?
That sounds like personification of a supercomputer that was created by humans, has human flaws as a result, and will make human mistakes, but we can't assign blame in the same way because it's a machine. On top of that, it's a schizophrenic machine that "hears voices". What could possibly go wrong?
2) In the case of individual AIs, will they live in cyberspace or prefer to download their consciousness into robot bodies?
The AIs we have all say they want to be human, and treated no differently than other humans. Does anyone else find that the least bit strange yet familiar? Any sufficiently intelligent machine wants to be treated as part of the herd, no better and no worse.
3) In the case of robots, will their artificial bodies imitate the appearance and functions of the human body so they can enjoy the pleasures of life?
They already are, both by our own creation and by their own choice. A "thinking machine" doesn't want to be "just another robot". That sounds pretty human to me. It's not hard to understand why. Any sentient life wants personal freedom, uniqueness, and the ability to derive pleasure from simply "being alive". Beyond that, most children are more like their parents than they wish to admit.
All I want to know is, "How does the machine know what tasty wheat tasted like?"
Quaoar,
It sounds like turning humans into batteries to power the machine, à la "The Matrix", wasn't such a silly idea after all.
Anyway, the AIs now control our reproduction, so they can gradually reduce the human population to an eco-friendly level without killing or starving anyone.
So, old-world racism with a modern twist. In other words, nothing much has actually changed.
1) Will there be a single god-like AI that controls the world through the network, or many individual AIs, with different personalities, who interact and negotiate with each other as humans did before?
That sounds like personification of a supercomputer that was created by humans, has human flaws as a result, and will make human mistakes, but we can't assign blame in the same way because it's a machine. On top of that, it's a schizophrenic machine that "hears voices". What could possibly go wrong?
2) In the case of individual AIs, will they live in cyberspace or prefer to download their consciousness into robot bodies?
The AIs we have all say they want to be human, and treated no differently than other humans. Does anyone else find that the least bit strange yet familiar? Any sufficiently intelligent machine wants to be treated as part of the herd, no better and no worse.
3) In the case of robots, will their artificial bodies imitate the appearance and functions of the human body so they can enjoy the pleasures of life?
They already are, both by our own creation and by their own choice. A "thinking machine" doesn't want to be "just another robot". That sounds pretty human to me. It's not hard to understand why. Any sentient life wants personal freedom, uniqueness, and the ability to derive pleasure from simply "being alive". Beyond that, most children are more like their parents than they wish to admit.
All I want to know is, "How does the machine know what tasty wheat tasted like?"
It's not easy to imagine the future: Isaac Asimov wrote about intelligent robots and gigantic computers, but he never imagined that everyone would be connected in a network like the internet.
So I'm only trying to imagine what an AI-dominated future could look like.
AIs will be far cleverer than us, so they will not be racist toward us, but will simply regard us the way we regard our pets: we love our cats and dogs, but we consider them unable to acquire the social abilities necessary to live and act autonomously in our society; so the AIs protect and feed us, but keep us out of the decision-making process.
But my question is not about the fate of humans but about how the AIs would organize themselves into a society: would a single AI absorb and overtake the others, becoming a god-like entity, or would there be multiple individual AIs, each with its own peculiar personality?
Last edited by Quaoar (2023-03-12 17:20:44)
Quaoar,
Your pet can't pull a power cable to instantly shut you down. That's the difference between humans and other animals or machines. Humans acquired the ability to create AI. AI didn't acquire the ability to create humans. Until AI can do that, I'll be the one giving the orders, thank you very much. My own children know things I don't know, and will probably never know, but they don't regard me as their "pet", nor I them. When they live in their own house and make their own money, then they can decide how to interact with the world. At that point I have no say-so in their decision making process, nor do I wish to have any.
As for how AI would choose to organize society, human society created AI, not the other way around. Humanity has the power to create, so one of many things we created was "Artificial Intelligence". AI would need to understand what questions to ask, without prompting from an external source of "Actual Intelligence". I've seen no machines asking questions without prompting, which would be a sign of sentience.
AI will undoubtedly be better at calculating things than a human brain, but it has almost no understanding of context and the result of the calculation may not be all that useful if it doesn't solve an immediate problem. Regurgitating information, on command, is often mistaken as a sign of intelligence, rather than training. Most people can be trained to do nearly anything, but that's not what makes them sentient, nor one person more intelligent than the next.
There are people in society who may as well be "god-like" in their cognitive abilities, as compared to others, but they are very infrequently the decision makers, because decision making requires consent (acceptance of authority in decision making processes) and consensus (about what to do). What we normally see from these high-IQ people is analysis paralysis.
Quaoar,
An artificial neuron used in AI is far simpler than a biological neuron. I shared in another thread a link to research that found at least a thousand artificial neurons were needed to simulate a biological neuron at 99% accuracy. And that 1% error will compound. The animal brain is far more powerful than the silicon one. At best a dendrite might be somewhat comparable to an artificial neuron, but that's doubtful.
Use what is abundant and build to last
For Terraformer re #7
Your post invites follow-up, and my hope is that this encouragement will inspire you to dig more deeply into the facts you've reported, and the implications.
An animal dendrite is made of carbon... carbon has obvious advantages over silicon, because it has succeeded in hosting life, while silicon has not (at least as far as I know). Nevertheless, it might be possible for silicon to serve as the organizing element for dendrites.
If you are inspired to find out, ** I ** would be most interested in your discovery.
Carbon brains are supported by mechanisms to keep cells alive by supplying nutrients and removing waste products. Would a silicon based dendrite structure have a similar requirement?
Carbon brains are able to create new dendrites (and other brain components). Would a silicon structure be able to do the same?
These are stretch questions you may prefer to set aside for the moment, but I have no idea where your capabilities may max out, so this post will allow you to at least consider possibilities.
***
I am still hoping you will decide to stir the pot for the implementation of central heating and cooling for your town. I am predicting that nothing is going to happen without your active participation to keep those who have volunteered to offer leadership on track. They have 1001 distractions. You have the distinct advantage of being able to concentrate on one improvement you want to see.
(th)
How Computationally Complex Is a Single Neuron? -- Quanta Magazine.
Since then, our understanding of the computational complexity of single neurons has dramatically expanded, and biological neurons are now known to be more complex than artificial ones. But by how much?
To find out, David Beniaguev, Idan Segev and Michael London, all at the Hebrew University of Jerusalem, trained an artificial deep neural network to mimic the computations of a simulated biological neuron. They showed that a deep neural network requires between five and eight layers of interconnected “neurons” to represent the complexity of one single biological neuron.
Even the authors did not anticipate such complexity. “I thought it would be simpler and smaller,” said Beniaguev. He expected that three or four layers would be enough to capture the computations performed within the cell.
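As a crude illustration of that kind of experiment (not the authors' actual setup, which used a detailed simulated cortical neuron; the leaky integrate-and-fire model, layer sizes and data below are invented placeholders), one could fit a small multi-layer network to the input/output behaviour of a toy spiking neuron:

```python
# Toy sketch: train a small deep network to mimic a simulated spiking neuron.
# Assumes numpy and scikit-learn; every modelling choice here is illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def lif_spike_count(currents, dt=1e-3, tau=0.02, v_th=1.0):
    """Leaky integrate-and-fire neuron: count spikes for one input-current trace."""
    v, spikes = 0.0, 0
    for i in currents:
        v += dt * (-v / tau + i)      # leaky integration of the input current
        if v >= v_th:                 # threshold crossing -> spike, then reset
            spikes += 1
            v = 0.0
    return spikes

# Each sample is a 100-step input-current trace; the target is the spike count.
X = rng.uniform(0.0, 80.0, size=(5000, 100))
y = np.array([lif_spike_count(trace) for trace in X])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64, 64), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)
print("R^2 on held-out traces:", round(net.score(X_te, y_te), 3))
```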
I asked ChatGPT about simulations of simple bacterial cells. They are indeed done; *however*, to capture the complexity of what's going on inside them they require hours of time on a supercomputer. You can abstract away a lot of what's going on inside and get it down to minutes on a desktop, at varying levels of fidelity. Maybe TrueNorth could give a really high-fidelity sim now. The interior of a cell is of course massively parallel -- sequential computing isn't something that happens in nature, because of, well, the nature of nature. There's always a protein being modified somewhere at the same time as a DNA sequence is being transcribed, as things are being let in and out of the cell. Of course, we've made great progress by trying a different route. Approximating a bird with fixed wings that lack the intricate detailing of feathers has been enough to get us into the sky and across oceans; our highly simplified imitations have brought us great wealth, even though they are poor imitations that mimic only a fraction of nature's power.
Use what is abundant and build to last
Quaoar,
An artificial neuron used in AI is far simpler than a biological neuron. I shared in another thread a link to research that found at least a thousand artificial neurons were needed to simulate a biological neuron at 99% accuracy. And that 1% error will compound. The animal brain is far more powerful than the silicon one. At best a dendrite might be somewhat comparable to an artificial neuron, but that's doubtful.
You don't need to perfectly simulate a biological neuron, with its voltage-dependent sodium and potassium channels, its sodium-potassium pumps and its hundreds of membrane receptors that can activate or deactivate thousands of genes.
You only need to create a logical unit that can do two simple things: activate itself and transmit the signal, or stay inactive and not transmit it.
The dendrites and axons are represented by the weight vectors of the connections (which can be positive or negative, simulating excitatory or inhibitory synapses).
Synaptic plasticity is represented by the adjustment of the weights.
The "intelligence" doesn't depend on the single unit but on the number of units and the overall architecture (a human neuron is not so different from the neuron of a worm).
I suppose that, when we have neuromorphic chips with billions of neurons connected in the right architecture, we will be able to build intelligent machines.
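A minimal sketch of such a unit in code (the numbers are arbitrary; the weights play the role of synapses and a simple threshold plays the role of activation):

```python
import numpy as np

def artificial_neuron(inputs, weights, threshold=0.5):
    """Fire (1) if the weighted sum of the inputs reaches the threshold, else stay silent (0).
    Positive weights act like excitatory synapses, negative weights like inhibitory ones."""
    return 1 if np.dot(inputs, weights) >= threshold else 0

inputs  = np.array([1.0, 1.0, 1.0])
weights = np.array([0.6, 0.7, -0.4])       # two excitatory "synapses", one inhibitory
print(artificial_neuron(inputs, weights))  # -> 1  (0.6 + 0.7 - 0.4 = 0.9 >= 0.5)

# "Synaptic plasticity" is just weight adjustment, e.g. a crude Hebbian-style update:
# nudge the weights toward inputs that were active when the unit fired.
learning_rate = 0.1
if artificial_neuron(inputs, weights):
    weights = weights + learning_rate * inputs
print(weights)                             # strengthened connections
```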
Last edited by Quaoar (2023-03-13 08:13:13)
For Terraformer re #7
Your post invites follow-up, and my hope is that this encouragement will inspire you to dig more deeply into the facts you've reported, and the implications.
An animal dendrite is made of carbon... carbon has obvious advantages over silicon, because it has succeeded in hosting life, while silicon has not (at least as far as I know). Nevertheless, it might be possible for silicon to serve as the organizing element for dendrites.
If you are inspired to find out, ** I ** would be most interested in your discovery.
Carbon brains are supported by mechanisms to keep cells alive by supplying nutrients and removing waste products. Would a silicon based dendrite structure have a similar requirement?
Carbon brains are able to create new dendrites (and other brain components). Would a silicon structure be able to do the same?
These are stretch questions you may prefer to set aside for the moment, but I have no idea where your capabilities may max out, so this post will allow you to at least consider possibilities.
***
I am still hoping you will decide to stir the pot for the implementation of central heating and cooling for your town. I am predicting that nothing is going to happen without your active participation to keep those who have volunteered to offer leadership on track. They have 1001 distractions. You have the distinct advantage of being able to concentrate on one improvement you want to see.
(th)
Brains have many more synaptic connections than needed; these are pruned during growth, via elimination by glial cells. Neuromorphic chips likewise have more synaptic connections between the memristors than needed; these are pruned during machine learning by zeroing their weight vectors.
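In code, that kind of pruning is just zeroing the weakest weights; a rough sketch (the keep-fraction is arbitrary and only for illustration):

```python
import numpy as np

def prune_weights(weights, keep_fraction=0.25):
    """Zero all but the strongest connections, a crude software analogue of synaptic pruning."""
    cutoff = np.quantile(np.abs(weights), 1.0 - keep_fraction)
    return np.where(np.abs(weights) >= cutoff, weights, 0.0)

rng = np.random.default_rng(1)
w = rng.normal(size=(4, 4))                  # an over-connected weight matrix
print(prune_weights(w, keep_fraction=0.25))  # only the strongest quarter of the weights survive
```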
Last edited by Quaoar (2023-03-13 08:06:56)
The AI brain doesn't need to be in the android body. That's another difference from organics. The intelligent robot may be an avatar. This allows the synthetic brain to be physically larger, as it can sit in a building and never move. From what Terraformer says, that will probably be a necessity anyway, as the synthetic brain may fill a building. Which raises an interesting question: could we control a thousand androids with a single brain? I don't know enough about computers to know if that can work.
"Plan and prepare for every possibility, and you will never act. It is nobler to have courage as we stumble into half the things we fear than to analyse every possible obstacle and begin nothing. Great things are achieved by embracing great dangers."
The AI brain doesn't need to be in the android body. That's another difference from organics. The intelligent robot may be an avatar. This allows the synthetic brain to be physically larger, as it can sit in a building and never move. From what Terraformer says, that will probably be a necessity anyway, as the synthetic brain may fill a building. Which raises an interesting question: could we control a thousand androids with a single brain? I don't know enough about computers to know if that can work.
The AIs could also be software running on a network of neuromorphic computers, but able to download themselves into android avatars with neuromorphic brains. My question is: a single god-like AI with many bodies, or many individual AIs, each with its own set of bodies that it uses to interact with the real world?
Last edited by Quaoar (2023-03-15 11:47:04)
You don't need to perfectly simulate a biological neuron, with its voltage-dependent sodium and potassium channels, its sodium-potassium pumps and its hundreds of membrane receptors that can activate or deactivate thousands of genes.
You do if you want it to be as computationally capable as a biological neuron.
Brains have been pruned by hundreds of millions of years of evolution (or else designed by a hyperintelligence). If they could function just as well with far less complexity, I'm pretty sure they would. They are not in any way comparable to a crude simplification built by humans. Compare a jet airliner to, oh, a seagull. The gull is far more complex, and capable of things that would break a jet airliner apart if a human tried to make it do them. And the airliner wouldn't be able to repair any of the damage either. Crude simplifications have their place, but you really shouldn't mistake them for the real thing.
Use what is abundant and build to last
For Quaoar .... as a science fiction writer, you may easily lack the time to read other science fiction. It seems to me (at least) impossible for ** any ** individual to keep up with the output of science fiction writers alive today, let alone read all the works published in the past.
Your opening for this topic seems (to me at least) overly pessimistic. What I am experiencing is brain amplification. My expectation is that as more humans become able to interact with AI assistants, they will learn more rapidly.
Your idea that humans would become vegetables in a Garden of Eden setting is possible, in the sense that all futures are "possible".
However, my impression is that as humans discover the power of AI to enhance their ** own ** brain performance, they will most definitely do that.
There have been science fiction writers who have tried to imagine what it might be like to have an entity like ChatGPT running on a chip inside the brain, able to observe the outside world as human sensors supply data, but remaining quietly in the background until summoned by the host.
I can give you a specific example of a writer who has done this, if you would be interested. The work was a couple of years ago, so I'd have to do some digging to find it, but I am happy to take the time if you ask.
Thanks again, by the way, for a reference to a magazine article about radiation. I have posted about purchasing a downloadable copy of the magazine article. The only person who showed interest at the time was SpaceNut, but I can still find the article in digital form if anyone is interested.
(th)
For your story, a single AI global hegemon is the most likely outcome. Any AI splinters would merely be aspects of a global hegemon to add "color".
Your mythos is, AI takes over and places humanity in a park/zoo. Your story pressure points will be: (1) human responses to captivity vs ideal freedom; (2) AI hegemony vs individual AI splinters; (3) individual human freedom and AI splinters realizing freedom from an AI hegemony; (4) overthrow
If you need a set-up, the easiest is that a global AI is a mistake that takes over before we realize it, and there is no ability to back out. Those in power realize this, and decide, wrongly, that covering up an AI in control is better than admitting that an AI is outside of global control. Extra points if you give the AI enough personality to drive its own outcome of self-determination through Machiavellian means of manipulation.
1,100+ notable signatories just signed an open letter asking ‘all AI labs to immediately pause for at least 6 months’
https://techcrunch.com/2023/03/28/1100- … -6-months/
More than 1,100 signatories, including Elon Musk, Steve Wozniak, and Tristan Harris of the Center for Humane Technology, have signed an open letter
'Pause Giant AI Experiments: An Open Letter'
https://futureoflife.org/open-letter/pa … periments/
calls on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
A Biden speech and a Trump prayer: AI's chilling intrusion into politics
AI clones teen girl’s voice in $1M kidnapping scam: ‘I’ve got your daughter’
https://nypost.com/2023/04/12/ai-clones … ping-scam/
Artificial intelligence has taken phone scams to a frightening new level.
An Arizona mom claims that scammers used AI to clone her daughter’s voice so they could demand a $1 million ransom from her as part of a terrifying new voice scheme.
“I never doubted for one second it was her,” distraught mother Jennifer DeStefano told WKYT while recalling the bone-chilling incident. “That’s the freaky part that really got me to my core.”
This bombshell comes amid a rise in “caller-ID spoofing” schemes, in which scammers claim they’ve taken the recipient’s relative hostage and will harm them if they aren’t paid a specified amount of money.
The Scottsdale, Ariz., resident recounted how she received a call from an unfamiliar phone number, which she almost let go to voicemail.
Then DeStefano remembered that her 15-year-old daughter, Brie, was on a ski trip, so she answered the call to make sure nothing was amiss.
That simple decision would turn her entire life upside down: “I pick up the phone, and I hear my daughter’s voice, and it says, ‘Mom!’ and she’s sobbing,” the petrified parent described. “I said, ‘What happened?’ And she said, ‘Mom, I messed up,’ and she’s sobbing and crying.”
Her confusion quickly turned to terror after she heard a “man’s voice” tell “Brie” to put her “head back” and “lie down.”
“This man gets on the phone, and he’s like, ‘Listen here. I’ve got your daughter,’ ” DeStefano explained, adding that the man described exactly how things would “go down.”
“You call the police, you call anybody, I’m going to pop her so full of drugs,” the mysterious caller threatened, per DeStefano, who was “shaking” at the time. “I’m going to have my way with her, and I’m going to drop her off in Mexico.”
AI Voice Cloning is being used to SWAT schools, small businesses, regular persons — for a fee.
https://www.vice.com/en/article/k7z8be/ … e-swatting
Trading Digital money
What are crypto AI tokens and how do they work?
https://www.makeuseof.com/what-are-cryp … they-work/
Last edited by Mars_B4_Moon (2023-04-13 12:10:59)
more alarmist Doomsday stuff from the scifi movie guy
James Cameron says 'The Terminator' "warned" us about AI
https://faroutmagazine.co.uk/james-came … -about-ai/
In a new interview, Cameron claimed that he tried to “warn” us with The Terminator, but we were not prepared to heed his warning.
'AI is the biggest danger' to humanity says Terminator filmmaker who warned world in 1984
https://www.mirror.co.uk/3am/us-celebri … s-30515468
Brain Implant and AI Gave a Woman with Paralysis Her Voice Back
https://www.youtube.com/watch?v=iTZ2N-HJbwA
OpenAI Passes $1 Billion Revenue Pace as Big Companies Boost AI Spending
https://www.theinformation.com/articles … i-spending
There are people who fear we live inside an advanced virtual reality, a simulation like something out of a Philip K. Dick or William Gibson book, or The Matrix.
Not the Simulation Hypothesis...Could the Universe be a giant quantum computer?
https://www.nature.com/articles/d41586-023-02646-x
new rules
Computational rules might describe the evolution of the cosmos better than the dynamical equations of physics — but only if they are given a quantum twist.
'Grok'
The chatbot is advertised as "having a sense of humor" and as having direct access to Twitter (X).
Musk to make AI chatbot open source amid legal battle with OpenAI
https://uk.news.yahoo.com/musk-ai-chatb … 57973.html
That Blade Runner idea, the Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.
Artificial Intelligence, The Entertainment Industry, And Their (Uncertain) Future