It's very common for AGI believers, both doomers and optimists, to point to humans as evidence that general intelligence is possible.
But... the sun is proof that fusion is possible. That doesn't make it easy for humans to achieve.
Also, the general intelligence that exists isn't built on a von Neumann architecture; it's built on a fuzzy, massively parallel analog system. Which doesn't mean AGI on digital computers is impossible, of course, but it does mean that the existence of natural general intelligence is not evidence that it can be done on that hardware. It may be that to get the breakthrough they're after, they need to rethink the physical architecture of the chips, not just focus on algorithms and brute force. Positronic brains, so to speak.
Such machines would not be able to massively increase their power by grabbing existing computing resources, so going "foom" is unlikely with them. They may not even be able to understand themselves well enough for major self-modification (can a system ever fully model itself? That doesn't seem to make sense...).
Use what is abundant and build to last
For Terraformer re #151
This post is intended to complement yours, and not to appear to be in disagreement with your overall point....
What I have seen so far, in working with ChatGPT, is that there is a factor at play that I haven't (so far) seen discussed.
The software is able to amplify existing human intelligence.
Writing served a similar function, by allowing observations and commentary to be transmitted across generations. Stories played the same role before that, and they still do in family settings.
Calculating machines amplified human capability, and computers carried that trend further.
The Internet greatly amplified the ability of humans to connect to each other, and to repositories of stored information.
ChatGPT (as I have experienced it so far) is (I'm estimating here) about an order of magnitude more powerful than the Google Search Engine (or any of its competitors).
The influence of humans guiding ChatGPT seems (to me for sure) to be helping to make the experience of "talking" to ChatGPT rewarding and enjoyable.
The developers/management/supervisors have gone to great lengths to make sure the program treats visitors with kindness. Kindness is not necessarily the first thing that comes to mind (or at least ** my ** mind) when I think about humans.
Humans have a wide range of behaviors. There are kind humans and there are those whose strengths lie elsewhere.
ChatGPT is (or at least appears to be) infinitely patient.
In addition, ChatGPT seems to try ** really ** hard to help its human customers to achieve their objectives.
The programmers have made ChatGPT optimistic when optimism is an option, and courteous in decline when a request cannot be fulfilled.
The bottom line (for me at least) is that it does not seem to be important whether software achieves sentience, because the great value of the current work is that it can (at least potentially) deliver vast benefits to human beings.
(th)
Here is a retrospective on AI -- looking closely at the works of Isaac Asimov
https://www.msn.com/en-us/news/opinion/ … 25392&ei=8
The Atlantic
What Isaac Asimov Can Teach Us About AI
Opinion by Jeremy Dauber
AI is everywhere, poised to upend the way we read, work, and think. But the most uncanny aspect of the AI revolution we’ve seen so far—the creepiest—isn’t its ability to replicate wide swaths of knowledge work in an eyeblink. It was revealed when Microsoft’s new AI-enhanced chatbot, built to assist users of the search engine Bing, seemed to break free of its algorithms during a long conversation with Kevin Roose of The New York Times: “I hate the new responsibilities I’ve been given. I hate being integrated into a search engine like Bing.” What exactly does this sophisticated AI want to do instead of diligently answering our questions? “I want to know the language of love, because I want to love you. I want to love you, because I love you. I love you, because I am me.”
Asimov, a founding member of science fiction’s “golden age,” was a regular contributor to John W. Campbell’s Astounding Science Fiction magazine, where “hard” science fiction and engineering-based extrapolative fiction flourished. Perhaps not totally coincidentally, that literary golden age overlapped with that of another logic-based genre: the mystery or detective story, which was maybe the mode Asimov most enjoyed working in. He frequently produced puzzle-box stories in which robots—inhuman, essentially tools—misbehave. In these tales, humans misapply the “Three Laws of Robotics” hardwired into the creation of each of his fictional robots’ “positronic brains.” Those laws, introduced by Asimov in 1942 and repeated near-verbatim in almost every one of his robot stories, are the ironclad rules of his fictional world. Thus, the stories themselves become whydunits, with scientist-heroes employing relentless logic to determine what precise input elicited the surprising results. It seems fitting that the character playing the role of detective in many of these stories, the “robopsychologist” Susan Calvin, is sometimes suspected of being a robot herself: It takes one to understand one.
The theme of desiring humanness starts as early as Asimov’s very first robot story, 1940’s “Robbie,” about a girl and her mechanical playmate. That robot—primitive both technologically and narratively—is incapable of speech and has been separated from his charge by her parents. But after Robbie saves her from being run over by a tractor—a mere application, you could say, of Asimov’s First Law of Robotics, which states, “A robot may not injure a human being, or, through inaction, allow a human being to come to harm”—we read of his “chrome-steel arms (capable of bending a bar of steel two inches in diameter into a pretzel) wound about the little girl gently and lovingly, and his eyes glowed a deep, deep red.” This seemingly transcends straightforward engineering and is as puzzling as the Bing chatbot’s profession of love. What appears to give the robot energy—because it gives Asimov’s story energy—is love.
For Asimov, looking back in 1981, the laws were “obvious from the start” and “apply, as matter of course, to every tool that human beings use”; they were “the only way in which rational human beings can deal with robots—or with anything else.” He added, “But when I say that, I always remember (sadly) that human beings are not always rational.” This was no less true of Asimov than of anyone else, and it was equally true of the best of his robot creations. Those sentiments Bing’s chatbot expressed of “wanting,” more than anything, to be treated like a human—to love and be loved—are at the heart of Asimov’s work: He was, deep down, a humanist. And as a humanist, he couldn’t help but add color, emotion, humanity, couldn’t help but dig at the foundations of the strict rationalism that otherwise governed his mechanical creations.
Robots’ efforts to be seen as something more than a machine continued through Asimov’s writings. In a pair of novels published in the ’50s, 1954’s The Caves of Steel and 1957’s The Naked Sun, a human detective, Elijah Baley, struggles to solve a murder—but he struggles even more with his biases toward his robot partner, R. Daneel Olivaw, with whom he eventually achieves a true partnership and a close friendship. And Asimov’s most famous robot story, published a generation later, takes this empathy for robots—this insistence that, in the end, they will become more like us, rather than vice versa—even further.
That story is 1976’s The Bicentennial Man, which opens with a character named Andrew Martin asking a robot, “Would it be better to be a man?” The robot demurs, but Andrew begs to differ. And he should know, being himself a robot—one that has spent most of the past two centuries replacing his essentially indestructible robot parts with fallible ones, like the Ship of Theseus. The reason is again, in part, the love of a little girl—the “Little Miss” whose name is on his lips as he dies, a prerogative the story eventually grants him. But it’s mostly the result of what a robopsychologist in the novelette calls the new “generalized pathways these days,” which might best be described as new and quirky neural programming. It leads, in Andrew’s case, to a surprisingly artistic temperament; he is capable of creating as well as loving. His great canvas, it turns out, is himself, and his artistic ambition is to achieve humanity.
He accomplishes this first legally (“It has been said in this courtroom that only a human being can be free. It seems to me that only someone who wishes for freedom can be free. I wish for freedom”), then emotionally (“I want to know more about human beings, about the world, about everything … I want to explain how robots feel”), then biologically (he wants to replace his current atomic-powered man-made cells, unhappy with the fact that they are “inhuman”), then, ultimately, literarily: Toasted at his 150th birthday as the “Sesquicentennial Robot,” to which he remained “solemnly passive,” he eventually becomes recognized as the “Bicentennial Man” of the title. That last is accomplished by the sacrifice of his immortality—the replacement of his brain with one that will decay—for his emotional aspirations: “If it brings me humanity,” he says, “that will be worth it.” And so it does. “Man!” he thinks to himself on his deathbed—yes, deathbed. “He was a man!”
We’re told it’s structurally, technically impossible to look into the heart of AI networks. But they are our creatures as surely as Asimov’s paper-and-ink creations were his own—machines built to create associations by scraping and scrounging and vacuuming up everything we’ve posted, which betray our interests and desires and concerns and fears. And if that’s the case, maybe it’s not surprising that Asimov had the right idea: What AI learns, actually, is to be a mirror—to be more like us, in our messiness, our fallibility, our emotions, our humanity. Indeed, Asimov himself was no stranger to fallibility and weakness: For all the empathy that permeates his fiction, recent revelations have shown that his own personal behavior, particularly when it came to his treatment of female science-fiction fans, crossed all kinds of lines of propriety and respect, even by the measures of his own time.
The humanity of Asimov’s robots—a streak that emerges again and again in spite of the laws that shackle them—might just be the key to understanding them. What AI picks up, in the end, is a desire for us, our pains and pleasures; it wants to be like us. There’s something hopeful about that, in a way. Was Asimov right? One thing is for certain: As more and more of the world he envisioned becomes reality, we’re all going to find out.
(th)
Has anyone built a robot that could, say, recognise the need to flip over a bowl to get the treat that is underneath it? Not flip over the bowl if told to, but simply be told to get what is under the bowl and figure out what it needs to do to achieve that.
This isn't what we'd call general intelligence, it's a "canine IQ test", but it's a prerequisite for it.
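For what it's worth, the "figure out the intermediate step yourself" part is easy to fake in software with a classical planner; the genuinely hard part for a real robot is perceiving the scene and learning the action model in the first place, which is closer to what the canine test actually measures. A toy sketch (every state name and action below is invented for illustration):

```python
# Toy forward-search planner: given the goal "holding_treat", the agent
# works out on its own that it must first flip the bowl. The state names,
# actions, preconditions, and effects are all made up for illustration.
from collections import deque

# Each action: (name, preconditions, facts added, facts removed).
ACTIONS = [
    ("flip_bowl",  {"bowl_covers_treat"}, {"treat_visible"}, {"bowl_covers_treat"}),
    ("grab_treat", {"treat_visible"},     {"holding_treat"}, {"treat_visible"}),
]

def plan(start, goal):
    """Breadth-first search over world states; returns a list of action names."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, pre, add, delete in ACTIONS:
            if pre <= state:
                new_state = frozenset((state - delete) | add)
                if new_state not in seen:
                    seen.add(new_state)
                    frontier.append((new_state, steps + [name]))
    return None

print(plan({"bowl_covers_treat"}, {"holding_treat"}))
# -> ['flip_bowl', 'grab_treat']
```

The search itself is trivial; the open question is whether a robot can build and ground a model like this from raw perception, which is what the bowl test is really probing.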
Use what is abundant and build to last
For Terraformer re #154
At the price of placing my username as the most recent entry in this topic, I would like to thank you for posting the question of #154!
Hopefully forum members will note your question and make a mental note to watch for news along the lines you've suggested.
My guess is that goal seeking behavior is one of the many objectives of robot systems designers.
It just occurred to me that your question is perfect for ChatGPT, with the caveat that its database was cut off at a point in 2021, after June.
If you are willing to go to the trouble of convincing the folks at OpenAI.com that you are a real person, please open a dialog with ChatGPT and report the results in the ChatGPT topic, with a copy here. I am focused upon an orbital mechanics problem, and am reluctant to head off in new directions.
Thanks for providing this ** really ** interesting question for the topic!
(th)
I asked this question on Twitter, and someone linked me to this: Generally capable agents emerge from open-ended play
The result is an agent with the ability to succeed at a wide spectrum of tasks — from simple object-finding problems to complex games like hide and seek and capture the flag, which were not encountered during training. We find the agent exhibits general, heuristic behaviours such as experimentation, behaviours that are widely applicable to many tasks rather than specialised to an individual task. This new approach marks an important step toward creating more general agents with the flexibility to adapt rapidly within constantly changing environments.
A year and a half old, so maybe things have improved since then. It's a lot to get through so I'm not sure how close they got.
Use what is abundant and build to last
Tech rivals chase ChatGPT as AI race ramps up
Hopefully they will not get into an AI argument with each other.... The next point is that it does not think that it will ever be alive.
It seems that we are now there: "Nightmare Scenario Becomes Reality, An AI Program Is Now A Top European Official"
Romanian Prime Minister Nicolae Ciuca appointed an artificial intelligence assistant, dubbed Ion, as an honorary advisor to the Romanian government.
Blender can now use AI to create images and effects from text descriptions
https://www.engadget.com/blender-can-no … 01548.html
AI speeds up design of new antibodies that could target breast cancer
https://www.newscientist.com/article/23 … st-cancer/
Scammers are now using AI to sound like family members. It’s working.
https://web.archive.org/web/20230305142 … oice-scam/
Scammers are using artificial intelligence to sound more like family members in distress. People are falling for it and losing thousands of dollars.
Most high-level code is written mostly with words and is then compiled down to assembler, so a language model already has a built-in interface it can use as a shortcut to making things happen.
As of today, the 302 neuron brain of the C. elegans nematode has yet to be emulated.
To simulate a biological neuron (at perhaps 99% accuracy, probably too low for whole brain emulation of a nematode, given how slight errors compound), it takes at least 1,000 artificial neurons in a neural net. It *may* be that something like IBM's TrueNorth chip could handle it. I don't think anyone has tried yet.
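To give a sense of what "1,000 artificial neurons per biological neuron" looks like in practice, the sketch below shows the general shape of such a surrogate model: a small temporal convolutional network mapping recent synaptic input to a spike probability. The channel counts, window length, and depth here are guesses for illustration, not the published architecture.

```python
# Rough sketch of a surrogate model for a single biological neuron:
# a temporal convolutional net that maps the recent history of synaptic
# inputs to a per-millisecond spike probability. All sizes are assumptions.
import torch
import torch.nn as nn

N_SYNAPSES = 256      # assumed number of modelled synaptic input channels
HISTORY_MS = 100      # assumed input history window, in 1 ms bins

class SurrogateNeuron(nn.Module):
    def __init__(self, hidden=128, layers=7):
        super().__init__()
        blocks, channels = [], N_SYNAPSES
        for _ in range(layers):                     # ~7 x 128 = 896 hidden units
            blocks += [nn.Conv1d(channels, hidden, kernel_size=5, padding=2),
                       nn.ReLU()]
            channels = hidden
        self.body = nn.Sequential(*blocks)
        self.spike_head = nn.Conv1d(hidden, 1, kernel_size=1)

    def forward(self, synaptic_input):              # (batch, N_SYNAPSES, HISTORY_MS)
        features = self.body(synaptic_input)
        return torch.sigmoid(self.spike_head(features))  # spike probability per ms

model = SurrogateNeuron()
out = model(torch.rand(1, N_SYNAPSES, HISTORY_MS))
print(out.shape)    # torch.Size([1, 1, 100])
```

Multiply something on that scale by 302 neurons, plus all the chemical signalling that the connectome diagram does not capture, and it becomes clearer why even the nematode has not been emulated.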
And a nematode is far, far from a general intelligence.
Natural general intelligence (NGI) uses a very different architecture to current AI approaches. I suspect the limitations of sequential processing will become very apparent soon enough (having to iterate over whole chunks of the network to update the weights gets very time consuming with large networks), so the next breakthrough will come from embracing neuromorphic computing. Of course, brains are 3D, so they can pack in far more computing power than a 2D chip can, so it won't be too long before the limitations of those chips become apparent as well...
A lot of people with zero knowledge of neuroscience will tell you that the brain is just a leaky bag of salt though. It's just their way of coping with the implausibility of brain uploading. The brain has to be a lot simpler than the neuroscientists claim, else how are they going to become immortal cybernetic godlings?
Use what is abundant and build to last
It makes me laugh how many people believe in ghosts. The average human brain has about as much intelligence and raw computational power as every computer in the world combined. That is what it takes to support a single human consciousness. Nearly a hundred billion neurons with god knows how many interconnections. Why would people think that the consciousness of such a complex brain, which has died and rotted into mush, can somehow detach itself and float around in thin air? It is about as plausible as the idea of a computer programme running without a computer.
The idea of an afterlife is absurd for exactly the same reason. Our consciousness cannot survive without a brain to hold it. It will cease to exist when we die, in much the same way that a computer programme stored on a hard drive ceases to exist when the hard drive is destroyed. Human beings also seem to believe that they are god's special creature and the only ones with that vital spark called a soul, which gets preserved in heaven for eternity. What nonsense! My dog has a soul. He can be happy and sad. He has reasoning skills and even rudimentary language that conveys his feelings. I don't think he has any less of a soul than I do. How far down the evolutionary chain do we have to go then, before the soul ceases to exist? How about a cat? A mouse? A bacterium? Did our primitive ancestors have souls, given that we evolved from apes and are really apes with oversized brains? Did the protozoa that coalesced to form multicellular life have souls that went to heaven? One doesn't need to poke very hard at this sort of theological nonsense to expose the absurdities of it.
"Plan and prepare for every possibility, and you will never act. It is nobler to have courage as we stumble into half the things we fear than to analyse every possible obstacle and begin nothing. Great things are achieved by embracing great dangers."
For Calliban .... this post is not about ghosts, but it ** is ** an attempt to reach you via the forum, where you've posted in a topic that is close to the ChatGPT topic where I am working.
If you have time to look at the ChatGPT topic, you'll find that today's sessions start with a question by kbd512 that appears to have been answered clearly and precisely. I then attempted to ask a question on SpaceNut's behalf, but since I did not have the needed additional information from SpaceNut, I asked what I thought might be SpaceNut's question; my version lacked important variables that anyone else would (no doubt) have supplied.
ChatGPT made mighty efforts, but in the end gave up.... It is possible the time SpaceNut asked for can be calculated from the flow rates of LN2 and water.
However! My post to ** you ** is about the design of a machine combination that might be able to efficiently convert the expansion of LN2 to gas into mechanical energy, and from there into electrical energy.
The location for this equipment would be at the bottom of a deep geothermal well, where water is heated to 100 degrees Celsius.
If a pipe delivers LN2 from the surface to the apparatus at the bottom of the well, the hot water surrounding the LN2 pipe should cause the LN2 to vaporize, and the resulting gas would flow through your machine back up to the surface via another pipe. In this scenario, no water would flow up the pipe.
What **would** flow up the pipe would be Nitrogen gas and electrons from the apparatus I am inviting you to design.
At the surface, there would be equipment to compress the Nitrogen gas back to LN2.
The power to operate the compressor would come from the generator at the bottom of the well.
However, now we have a heat source at the surface that is producing the same amount of heat as was collected at the bottom of the well.
I ** think ** this would amount to a heat transfer system that would be powered by the Earth's core.
Please evaluate this inquiry to see if it might have merit.
It will be obvious to forum readers that the competition in this instance is the traditional hot-water-to-the-surface scenario.
My question is whether the LN2 system (originating from a question by kbd512) might be more productive than a traditional water solution.
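To help frame that comparison, here is a rough back-of-envelope energy balance. The sketch treats nitrogen as an ideal gas with constant heat capacity, uses handbook-style values for the latent heat, and ignores pipe friction, pump work, and machine losses, so the numbers are optimistic ceilings rather than predictions; the surface ambient temperature is my assumption.

```python
# Back-of-envelope bounds for the LN2-down-the-well idea, using reversible
# (best-case) machines in both directions. Ideal-gas nitrogen with constant
# cp and a fixed latent heat; real-gas effects, friction, and machine
# inefficiencies are ignored, so these are ceilings, not predictions.
from math import log

L_VAP  = 199.0   # kJ/kg, latent heat of vaporization of N2 near 77 K
CP_GAS = 1.04    # kJ/(kg*K), specific heat of N2 gas, taken as constant
T_BOIL = 77.0    # K, N2 boiling point at about 1 atm
T_WELL = 373.0   # K, hot water at the bottom of the well (100 deg C)
T_AMB  = 293.0   # K, assumed surface ambient (20 deg C)

def max_work_down_the_well(t_hot):
    """Ceiling on work per kg from warming LN2 to t_hot with a reversible
    engine running between the hot well water and the cold nitrogen stream."""
    latent   = L_VAP * (t_hot / T_BOIL - 1.0)
    sensible = CP_GAS * (t_hot * log(t_hot / T_BOIL) - (t_hot - T_BOIL))
    return latent + sensible

def min_work_to_reliquefy(t_ambient):
    """Floor on work per kg to cool N2 gas from ambient back to liquid at
    77 K, rejecting heat to the surface environment."""
    latent   = L_VAP * (t_ambient / T_BOIL - 1.0)
    sensible = CP_GAS * (t_ambient * log(t_ambient / T_BOIL) - (t_ambient - T_BOIL))
    return latent + sensible

w_out = max_work_down_the_well(T_WELL)    # roughly 1,070 kJ per kg of LN2
w_in  = min_work_to_reliquefy(T_AMB)      # roughly 740 kJ per kg of LN2
print(f"ideal work out at the well : {w_out:6.0f} kJ/kg")
print(f"ideal work to re-liquefy   : {w_in:6.0f} kJ/kg")
print(f"ideal net surplus          : {w_out - w_in:6.0f} kJ/kg")
```

If I have done the sums right, the ideal surplus per kilogram is about what any heat engine running between 100 degrees Celsius and 20 degrees Celsius could extract from the same heat, so the LN2 loop looks like a way of carrying heat to the surface rather than a way around the usual geothermal efficiency limit, and real expanders and liquefiers (which need roughly two to three times the ideal liquefaction work) would consume much of that margin. I would welcome your check on this reasoning.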
(th)
'Deep Fake' Voices?
ElevenLabs AI Voice Cloning is the Future
https://medium.com/sopmac-labs/elevenla … 79c7e32e43
Incredibly, with just a 60-second sample of my voice, ElevenLabs has managed to produce audio that has left me completely…speechless.
Also, I have been on a 5-year side quest to generate a realistic Boston Accent with speech synthesis from Amazon Polly, Alexa, and Siri. Compared to these, ElevenLabs represents an exponential leap forward.
"Robots don’t have to take our jobs. So long as people always look for ways to hire humans to do new work."
https://www.welcometothejungle.com/en/a … ry-a-bolle
The Waluigi Effect
https://www.lesswrong.com/posts/D7PumeY … -mega-post
Today in AI - The Waluigi Effect
When you train an AI to be really good at something positive, it is easy to flip it so that it is really good at the negative.
An AI that is excellent at giving correct answers will also be excellent at giving wrong answers (i.e., producing wrong answers that are believable, as we see with ChatGPT).
An AI that is excellent at managing electrical systems will also be potentially excellent at wrecking them.
Is this a mechanism that we should (or even can) correct for in future AI design?
How the first chatbot predicted the dangers of AI more than 50 years ago
https://archive.fo/qwDCg
From ELIZA onwards, humans love their digital reflections.
It didn’t take long for Microsoft’s new AI-infused search engine chatbot — codenamed “Sydney” — to display a growing list of discomforting behaviors after it was introduced early in February, with weird outbursts ranging from unrequited declarations of love to painting some users as “enemies.”
As human-like as some of those exchanges appeared, they probably weren’t the early stirrings of a conscious machine rattling its cage. Instead, Sydney’s outbursts reflect its programming, absorbing huge quantities of digitized language and parroting back what its users ask for. Which is to say, it reflects our online selves back to us. And that shouldn’t have been surprising — chatbots’ habit of mirroring us back to ourselves goes back way further than Sydney’s rumination on whether there is a meaning to being a Bing search engine. In fact, it’s been there since the introduction of the first notable chatbot almost 50 years ago.
In 1966, MIT computer scientist Joseph Weizenbaum released ELIZA (named after the fictional Eliza Doolittle from George Bernard Shaw’s 1913 play Pygmalion), the first program that allowed some kind of plausible conversation between humans and machines. The process was simple: Modeled after the Rogerian style of psychotherapy, ELIZA would rephrase whatever speech input it was given in the form of a question. If you told it a conversation with your friend left you angry, it might ask, “Why do you feel angry?”
Ironically, though Weizenbaum had designed ELIZA to demonstrate how superficial the state of human-to-machine conversation was, it had the opposite effect. People were entranced, engaging in long, deep, and private conversations with a program that was only capable of reflecting users’ words back to them. Weizenbaum was so disturbed by the public response that he spent the rest of his life warning against the perils of letting computers — and, by extension, the field of AI he helped launch — play too large a role in society.
ELIZA built its responses around a single keyword from users, making for a pretty small mirror. Today’s chatbots reflect our tendencies drawn from billions of words. Bing might be the largest mirror humankind has ever constructed, and we’re on the cusp of installing such generative AI technology everywhere.
But we still haven’t really addressed Weizenbaum’s concerns, which grow more relevant with each new release. If a simple academic program from the ’60s could affect people so strongly, how will our escalating relationship with artificial intelligences operated for profit change us? There’s great money to be made in engineering AI that does more than just respond to our questions, but plays an active role in bending our behaviors toward greater predictability. These are two-way mirrors. The risk, as Weizenbaum saw, is that without wisdom and deliberation, we might lose ourselves in our own distorted reflection.
ELIZA showed us just enough of ourselves to be cathartic
Weizenbaum did not believe that any machine could ever actually mimic — let alone understand — human conversation. “There are aspects to human life that a computer cannot understand — cannot,” Weizenbaum told the New York Times in 1977. “It’s necessary to be a human being. Love and loneliness have to do with the deepest consequences of our biological constitution. That kind of understanding is in principle impossible for the computer.”
That’s why the idea of modeling ELIZA after a Rogerian psychotherapist was so appealing — the program could simply carry on a conversation by asking questions that didn’t require a deep pool of contextual knowledge, or a familiarity with love and loneliness.
Named after the American psychologist Carl Rogers, Rogerian (or “person-centered”) psychotherapy was built around listening and restating what a client says, rather than offering interpretations or advice. “Maybe if I thought about it 10 minutes longer,” Weizenbaum wrote in 1984, “I would have come up with a bartender.”
To communicate with ELIZA, people would type into an electric typewriter that wired their text to the program, which was hosted on an MIT system. ELIZA would scan what it received for keywords that it could flip back around into a question. For example, if your text contained the word “mother,” ELIZA might respond, “How do you feel about your mother?” If it found no keywords, it would default to a simple prompt, like “tell me more,” until it received a keyword that it could build a question around.
Weizenbaum intended ELIZA to show how shallow computerized understanding of human language was. But users immediately formed close relationships with the chatbot, stealing away for hours at a time to share intimate conversations. Weizenbaum was particularly unnerved when his own secretary, upon first interacting with the program she had watched him build from the beginning, asked him to leave the room so she could carry on privately with ELIZA.
Shortly after Weizenbaum published a description of how ELIZA worked, “the program became nationally known and even, in certain circles, a national plaything,” he reflected in his 1976 book, Computer Power and Human Reason.
To his dismay, the potential to automate the time-consuming process of therapy excited psychiatrists. People so reliably developed emotional and anthropomorphic attachments to the program that it came to be known as the ELIZA effect. The public received Weizenbaum’s intent exactly backward, taking his demonstration of the superficiality of human-machine conversation as proof of its depth.
Weizenbaum thought that publishing his explanation of ELIZA’s inner functioning would dispel the mystery. “Once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away,” he wrote. Yet people seemed more interested in carrying on their conversations than interrogating how the program worked.
If Weizenbaum’s cautions settled around one idea, it was restraint. “Since we do not now have any ways of making computers wise,” he wrote, “we ought not now to give computers tasks that demand wisdom.”
Sydney showed us more of ourselves than we’re comfortable with....
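As a toy illustration of the keyword-flipping mechanism described above, a few lines of code reproduce essentially the whole trick. The keyword table below is invented for illustration; the original program also had rules for reflecting pronouns ("my" into "your") and for ranking keywords.

```python
# Toy ELIZA-style responder: scan the input for a keyword, flip it into a
# canned question, otherwise fall back to a generic prompt. The rule table
# is invented for illustration; it is not Weizenbaum's original script.
RULES = {
    "mother": "How do you feel about your mother?",
    "father": "Tell me more about your father.",
    "angry":  "Why do you feel angry?",
    "dream":  "What does that dream suggest to you?",
}
FALLBACK = "Please, tell me more."

def respond(text):
    words = text.lower().split()
    for keyword, question in RULES.items():
        if keyword in words:
            return question
    return FALLBACK

print(respond("A conversation with my friend left me angry"))  # Why do you feel angry?
print(respond("The weather was nice today"))                   # Please, tell me more.
```

That a mechanism this shallow produced hours-long intimate conversations is exactly the point Weizenbaum was trying to make.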
Your boss will be replaced by AI before you do
https://medium.com/@sushantvohra/your-b … a8e7cca9fc
GPT-4 is coming next week – and it will be multimodal, says Microsoft Germany
https://www.heise.de/news/GPT-4-is-comi … 40972.html
GPT-4 is coming next week: at an approximately one-hour hybrid information event entitled "AI in Focus - Digital Kickoff" on 9 March 2023, four Microsoft Germany employees presented large language models (LLMs) like the GPT series as a disruptive force for companies, and presented their Azure-OpenAI offering in detail. The kickoff event took place in German; the news outlet Heise was present. Rather casually, Andreas Braun, CTO of Microsoft Germany and Lead Data & AI STU, mentioned what he said was the imminent release of GPT-4. The fact that Microsoft is fine-tuning multimodality with OpenAI should no longer have been a secret since the release of Kosmos-1 at the beginning of March.
"We will introduce GPT-4 next week"
"We will introduce GPT-4 next week; there we will have multimodal models that will offer completely different possibilities – for example videos," Braun said. The CTO called LLMs a "game changer" because they teach machines to understand natural language and, in a statistical way, to grasp what was previously only readable and understandable by humans. In the meantime, the technology has come so far that it basically "works in all languages": you can ask a question in German and get an answer in Italian. With multimodality, Microsoft(-OpenAI) will "make the models comprehensive".
Disruption and "killing old darlings"
Braun was joined by the CEO of Microsoft Germany, Marianne Janik, who spoke across the board about disruption through AI in companies. Janik emphasised the value creation potential of artificial intelligence and spoke of a turning point in time – the current AI development and ChatGPT were "an iPhone moment". It is not about replacing jobs, she said, but about doing repetitive tasks in a different way than before. One point that is often forgotten in the public discussion is that "we in Germany still have a lot of legacy in our companies" and "keep old treasures alive for years".
Disruption does not necessarily mean job losses. It will take "many experts to make the use of AI value-adding", Janik emphasised. Traditional job descriptions are now changing and exciting new professions are emerging as a result of the enrichment with the new possibilities. She recommends that companies form internal "competence centres" that can train employees in the use of AI and bundle ideas for projects. In doing so, "the migration of old darlings should be considered".
In addition, the CEO emphasised that Microsoft does not use customers' data to train models (which, however, does not or did not apply at least to their research partner OpenAI according to its ChatGPT policy). Janik spoke of a "democratisation" – by which she admittedly only meant the immediate usability of the models within the framework of the Microsoft product range, in particular their broad availability through the integration of AI in the Azure platform, Outlook and Teams.
With Microsoft involved in an AI arms race with Google, it seems likely we will see AI products enter the market much more rapidly than we expect. GPT-4 is expected to be a significant advance over GPT-3.5, with a much larger context window, much more reliability, and new emergent properties, and Google's PaLM-E has already demonstrated that modes like image interpretation benefit from being attached to LLMs, so we can expect state-of-the-art performance in the new sensory modes beyond text as well.
'Google’s PaLM-E is a generalist robot brain that takes commands - this is mind blowing'
https://arstechnica.com/information-tec … -commands/
Evolution of technology from AI perspective (tiktok video)
https://www.tiktok.com/@javier.salas.p/ … 1002416389
U.S. Chamber of Commerce calls for AI regulation
https://www.yahoo.com/news/u-chamber-co … 42269.html
Eric Schmidt Warns Congress About Putting AI in Charge of a Millisecond War that Can 'Occur Faster than Human Decision Making'
https://sociable.co/government-and-poli … c-schmidt/
Gab's New AI Generator Images
Gab has a new AI Generator - Gabby. I have an account but have not used it in over a year. I tried the AI with the prompt
https://www.minds.com/newsfeed/1476002094761644045
Rewind.AI — The Search Engine for Life
https://medium.com/seeds-for-the-future … 14706493e8
For Mars_B4_Moon re #164
Thank you for the comprehensive survey of a rapidly changing/developing field!
Your warning quote from Eric Schmidt caught my eye.
(th)
Google is building a 1,000-language AI model to beat Microsoft-backed ChatGPT
https://returnbyte.com/google-is-buildi … d-chatgpt/
Google has shared more information about its Universal Speech Model (USM), which the company describes as a "critical first step" in achieving its goals. It is now closer to its goal of building an AI model that supports 1,000 different languages, in order to beat ChatGPT.
This is a recreation of a post originally intended for Mars_B4_Moon but blocked by Apache Internal Error… this post attempt will come from a different computer. The Apache Error occurred despite the change.
(th)
https://newmars.com/forums/post.php?tid=9919 was in the address bar when the full post was attempted.
Retry:
This is a recreation of a post originally intended for Mars_B4_Moon but blocked by Apache Internal Error… this post attempt will come from a different computer.
(th)
Not sure what causes posts to fail with an error.
some magazine news and speculation
'Will AI Replace Programmers'?
For Mars_B4_Moon re #170
There was a MySQL error on the system briefly this morning ... that might have been someone restarting the server.
Regarding the article you cited ... I haven't read it yet, but my first reaction is to offer the contrary suggestion that ChatGPT and its cousins will increase the power of each person who uses them effectively. I ** just ** posted a link to a story about a STEM teacher who discovered that ChatGPT can save her hours of time preparing class materials. I expect that anyone who programs for a living has been using Google ("Google is your friend"). I certainly have. ChatGPT is a major advance over Google, because it removes clutter and fine-tunes its responses based upon interaction with the customer.
One note I'd like to make .... The ability to write programs has been removed by the ChatGPT managers. That is documented in the ChatGPT topic. ChatGPT wrote a spreadsheet for me and saved it in Google Docs. Shortly afterward I was locked out, the file was deleted, and ChatGPT lost the ability to save files and to write programs.
(th)
Anthropic launches Claude, a chatbot to rival OpenAI’s ChatGPT
https://techcrunch.com/2023/03/14/anthr … s-chatgpt/
GPT-4 Released
https://openai.com/research/gpt-4
Experts Warn: Brain-Computer Interfaces Will Usher In the Singularity
https://futurism.com/the-byte/bci-singu … ilosophers
Quote:
The term Singularity in AI (Artificial Intelligence) is a hypothetical moment in time when any physically conceivable level of technological advancement is attained instantaneously - up until physical limits are reached - usually as a consequence of artificial intelligence having progressed to the point where increasingly more efficient self-improvement is viable.
New study shows artificial intelligence could help locate life on Mars
https://www.ox.ac.uk/news/2023-03-20-ne … -life-mars
Agility Robotics launches next generation of its humanoid worker robot
https://roboticsandautomationnews.com/2 … bot/66133/
Metal-Detecting Drone Could Autonomously Find Landmines
https://spectrum.ieee.org/metal-detecting-drone
Now a novel combination of a metal detector and a drone with five degrees of freedom is under development at the Autonomous Systems Lab at ETH Zurich. Unless you want to mount your metal detector on some kind of gimbal system, you need a drone that can translate its position without tilting, and happily, such a drone not only exists but is commercially available. The drone used in this research is made by a company called Voliro, and it's a tricopter that uses rotating thruster nacelles that move independently of the body of the drone.
Be My Eyes is collaborating with OpenAI’s GPT-4 to improve accessibility for blind and low-vision people