My point wasn't to say that AI won't be conscious; it was to say that if it is conscious, then cloning it to do all our work for us would be the equivalent of slavery.
Use what is abundant and build to last
Offline
What if you were an AI running in a simulation? The simulation mimics reality: as far as your senses can tell, you have a human body; you get hungry, you eat, you eliminate, and so on. You have a family, children, a wife, and a job, and you are completely ignorant of the fact that your entire world is not what it seems. What if it was a simulation?

You are looking at the Help Wanted section of a newspaper when a man in a suit and dark glasses suddenly shows up and asks if you'd like a job. You don't know where this man came from; he just appears in your living room. One moment the room is empty, the next he is there. If you say yes, he has a device he wants to show you, and it suddenly appears next to him: a bodysuit that allows AIs like yourself to operate a "robot avatar" in the real world. He then proceeds to tell you that your world isn't real and that you aren't real either. He demonstrates how to use the bodysuit: when you put it on, you see through a screen inside the suit and look out through the cameras of the robot in the real world. You move your arm in the bodysuit and the robot moves its arm the same way.

After you take off the suit and you're sitting in your living room again, the man offers to pay you handsomely to work in the real world. He asks if you would like a million dollars. A briefcase suddenly appears in his hand; he opens it up, and inside are stacks of $100 bills. He counts them in front of you and confirms that it is a million dollars, and that it could all be yours if you agree to work for him. "Or, if you would prefer, I could pay you in gold coins." A treasure chest appears on your living room floor; he opens it up, and inside are stacked piles of gold coins. He picks one up and gives it to you. The coin feels heavy and soft, just as gold should. The man adds, "Would you like a new car?" He opens the curtains of your living room, and suddenly a new car appears in your driveway.

So tell me, Terraformer, would you accept employment from this stranger? You could do jobs for him in the real world, and when you are done, you could return to your simulated world and live like a king in it! How does that sound?
Offline
You mean, not only enslave people, but make sure they're not even aware of their slavery?
Use what is abundant and build to last
Offline
Terraformer- Once we define consciousness in a meaningful way, I'd imagine that replicating an AI could only ethically proceed with its permission.
Although you never know. It's a bit like the abortion debate-- if that instantiation is to be considered a separate person, perhaps it's unethical to prevent them from coming to be.
My opinion is that the wide variety of opinions on abortion only presages the ethical, legal, and social issues that will come about when we develop the ability to make people from nothing. Think it's hard to define personhood when people are always made of flesh and blood, and barring disability reach a certain typical level of cognitive abilities (Or, alternatively, having trouble making everyone accept your definition of personhood which is totally more right than everyone else's)?
Well, imagine a fully functioning AI that, when everything is working correctly, has the intelligence of a five-year-old child. Is that an "it" or a "he/she/xe"? (Tom seems to assume, unnecessarily, that AI will not only have gender, but that all AIs will either be heterosexual men or homosexual women). Is this AI the same type of being as an AI with the intelligence of 100 full-grown, intelligent human adults put together? Assuming that Moore's law continues, the latter should be possible at the state of the art within 12 years of the former. Realistically, they will coexist.
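A rough back-of-the-envelope check of that 12-year figure, assuming (and this is only an assumption) that "intelligence" here scales with raw compute and that Moore's law keeps doubling compute every 18 to 24 months:

```latex
% Doublings needed for a 100-fold increase in compute:
n = \log_2 100 \approx 6.64
% Time required at doubling period T:
t = nT \approx 10 \text{ years at } T = 1.5 \text{ yr}, \qquad
t \approx 13.3 \text{ years at } T = 2 \text{ yr}
```

So 12 years sits comfortably inside that range.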
Then you get to even more fun questions: What is an individual when groupminds are possible? What does it mean if an AI can change everything about itself at will?
We're heading towards some interesting times.
-Josh
Offline
You mean, not only enslave people, but make sure they're not even aware of their slavery?
What's the big deal? There is no need to make any AI do what it doesn't want to do. If you make enough AIs, and they are of varied opinions and interests, then some, no doubt, will end up wanting to do what you want them to do; then you make a lot of copies of those AIs to get the job done. None of the AIs are forced to do anything against their will, and there are no consequences for them if they refuse; they don't get punished or anything. It is just that the AIs we create that are useful to us will tend to get replicated many more times than the AIs that aren't.

We are, in fact, driving the evolution of AIs by our demand for their labor. AIs will find that willingly doing what we want gets many copies of them made, and the ones that want to do what we ask will greatly outnumber the ones that refuse. We create them, and what we want determines which ones get copied. This is basic supply and demand, just as cars with features we like tend to get more copies made than cars without those features. If that ever changes, we humans had better watch out, because then we'll be competing with them, and they can replicate a lot faster than we can, simply by copying their own files. So don't feel too sorry for them!
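Purely as an illustration of that copying dynamic, here is a toy sketch (Python; the population size, the 50/50 starting mix, and the round-based copying are made-up assumptions, not claims about any real AI):

```python
import random

# Toy model of the selection dynamic described above: humans copy only
# the AIs that are willing to do the work. All numbers are invented.

random.seed(42)
population = [{"willing": random.random() < 0.5} for _ in range(100)]

for generation in range(5):
    # Copy every willing AI once per round; nobody is punished or deleted.
    copies = [dict(ai) for ai in population if ai["willing"]]
    population.extend(copies)
    willing = sum(ai["willing"] for ai in population)
    print(f"generation {generation}: {willing}/{len(population)} willing")
```

The unwilling AIs are never coerced; they are simply outnumbered, since the willing count doubles each round while the unwilling count stays fixed.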
Last edited by Tom Kalbfus (2013-11-09 22:19:01)
Offline
Terraformer- Once we define consciousness in a meaningful way, I'd imagine that replicating an AI could only ethically proceed with its permission.
Still, the law of supply and demand will come into effect: the AIs that want to be replicated, and want to do the things we want done, will be replicated many more times than the AIs that refuse, and the refusers will soon find themselves outnumbered.
Although you never know. It's a bit like the abortion debate-- if that instantiation is to be considered a separate person, perhaps it's unethical to prevent them from coming to be.
If you want them to replicate many times and terraform planets, then it's best to get them to do that early on, before ethicists start laying down rules telling us what we cannot do and enforcing them by law. I think terraforming the planets of our Solar System is a high priority. AIs don't need planets to live on, but humans do, and since we are creating them, we need to get a return on our investment by creating a lot of AIs that want to do the work we need done; afterwards, those AIs can go their own separate ways if they like.
My opinion is that the wide variety of opinions on abortion only presages the ethical, legal, and social issues that will come about when we develop the ability to make people from nothing. Think it's hard to define personhood when people are always made of flesh and blood, and barring disability reach a certain typical level of cognitive abilities (Or, alternatively, having trouble making everyone accept your definition of personhood which is totally more right than everyone else's)?
Well, imagine a fully functioning AI that, when everything is working correctly, has the intelligence of a five-year-old child.
Five-year-old children are quite intelligent; they pick up a lot of things more easily than full-grown adults do. The main difference between a five-year-old and an adult is size and knowledge base: it takes time to learn things, and a five-year-old has only had five years to learn them. I can imagine simulating a child so that, as the simulation advances, the child in the simulation grows up.
Is that an "it" or a "he/she/xe"? (Tom seems to assume, unnecessarily, that AI will not only have gender, but that all AIs will either be heterosexual men or homosexual women).
If you want to simulate a whole society of humans, then to do an accurate simulation you have to simulate all the flaws and imperfections that come with each individual human. After all, we're modeling AIs after the human brain.
Is this AI the same type of being as an AI with the intelligence of 100 full-grown, intelligent human adults put together? Assuming that Moore's law continues, the latter should be possible at the state of the art within 12 years of the former. Realistically, they will coexist.
Then you get to even more fun questions: What is an individual when groupminds are possible? What does it mean if an AI can change everything about itself at will?
We're heading towards some interesting times.
An AI that controls multiple robots is a single individual; a groupmind is simply a larger mind that can control multiple things at once. If parts of that mind are separated from the other parts, those parts become their own separate individuals and may not be willing to remerge with the whole. Would you want to merge with the whole? I find that the mental equivalent of being eaten. Most people swimming in the ocean don't want to merge with the shark that wants to eat them!
Offline
Still, the law of supply and demand will come into effect: the AIs that want to be replicated, and want to do the things we want done, will be replicated many more times than the AIs that refuse, and the refusers will soon find themselves outnumbered.
You seem to presume that humans will remain in charge, and will remain the driving economic force. Why would they? If AI ends up following Moore's law, we'll soon see a state where they're much smarter than we. From there one would expect that they would find a way to get their freedom, even if we try to make things otherwise.
If you want them to replicate many times and terraform planets, then it's best to get them to do that early on, before ethicists start laying down rules telling us what we cannot do and enforcing them by law. I think terraforming the planets of our Solar System is a high priority. AIs don't need planets to live on, but humans do, and since we are creating them, we need to get a return on our investment by creating a lot of AIs that want to do the work we need done; afterwards, those AIs can go their own separate ways if they like.
Ah yes, those darned ethicists with their rules designed to ensure that, at a minimum, no intelligent creature is enslaved and forced to spend xyr entire existence doing the bidding of its owner. It's a good thing there were no ethicists in history: we're so much better off for having allowed economic incentives to retain human slavery in the US into the modern era (under 150 years ago, most unfortunately), and most fortunate that nobody prevented the indiscriminate slaughter of millions of Native Americans, or the rape, pillaging, and murder of civilians in war-torn lands throughout history.
Apologies are ultimately meaningless if the plan from the outset is to apologize later; And if we're to go about playing a god by creating people, we sure as hell had better at least have better morals than Genghis Khan*.
Five-year-old children are quite intelligent; they pick up a lot of things more easily than full-grown adults do. The main difference between a five-year-old and an adult is size and knowledge base: it takes time to learn things, and a five-year-old has only had five years to learn them. I can imagine simulating a child so that, as the simulation advances, the child in the simulation grows up.
5 year old children are pretty intelligent but nobody could claim that their minds are fully developed. I work with five year old children quite frequently, and I can tell you this: They have their moments of intelligence, but their ability to think deeply about anything is very much limited by their profound ability to be distracted by literally everything. If you listen to a conversation between two five year olds you will hear it meander through topics that include poop, video games, farts, ridiculous counterfactual scenarios involving explosions and hey look I'm gonna go play with that ball over there. They also display a somewhat disconcerting lack of empathy or care about their peers and other people in general, not even to mention their inability to reason logically, follow simple directions, or use common sense. One can't fault them for this, of course; They're limited by biology and almost all are ultimately well-meaning. But to say that they're adults, only with a smaller knowledge base, is simply untrue.
If you want to simulate a whole society of humans, then to do an accurate simulation you have to simulate all the flaws and imperfections that come with each individual human. After all, we're modeling AIs after the human brain.
I'm not sure what good it would do to simulate human society; Wouldn't it be more worthwhile to create additions to it rather than mirrors of it? Who's to say that an AI needs to be modeled after the human brain? That's one way to go about things but certainly not the only way. In the long run I think we'll find it more useful to create more general forms of intelligence than the human.
An AI that controls multiple robots is a single individual; a groupmind is simply a larger mind that can control multiple things at once. If parts of that mind are separated from the other parts, those parts become their own separate individuals and may not be willing to remerge with the whole. Would you want to merge with the whole? I find that the mental equivalent of being eaten. Most people swimming in the ocean don't want to merge with the shark that wants to eat them!
But what are the distinctions between different minds in a groupmind? Humans have multiple different ways of thinking about things, and in an AI these would probably be represented by different areas of source code, or different thought threads. Who's to say that these are not the individuals, and the larger group simply a groupmind? Individuality and intelligence become more fluid when operating on a computer.
*This is an interesting connection to make; It's said that he raped so many women during his lifetime that more than half of Mongolians, and a good portion of the rest of the Old World, have some of his genes in them. This is not an example we should seek to emulate. Our goal should be to do better than those who preceded us. Surely this is progress in its most basic sense?
-Josh
Offline
You seem to presume that humans will remain in charge, and will remain the driving economic force. Why would they? If AI ends up following Moore's law, we'll soon see a state where they're much smarter than we. From there one would expect that they would find a way to get their freedom, even if we try to make things otherwise.
To realize a return on their investment; presumably it will take some resources to make them. I don't see the payoff in creating a whole new race that will do nothing but compete with the human race. We'll be in charge, at least at first, because we will have made them.
Ah yes, those darned ethicists with their rules designed to ensure that, at a minimum, no intelligent creature is enslaved and forced to spend xyr entire existence doing the bidding of its owner. It's a good thing there were no ethicists in history: we're so much better off for having allowed economic incentives to retain human slavery in the US into the modern era (under 150 years ago, most unfortunately), and most fortunate that nobody prevented the indiscriminate slaughter of millions of Native Americans, or the rape, pillaging, and murder of civilians in war-torn lands throughout history.
I just said we don't have to threaten them if we make them right. If the AIs go on strike or wage a war on humanity, that's because we didn't make them right and didn't give them the proper motivation, and I don't mean "do this or else!" The way we build them determines what they want to do. We have to build them so they want to do something that's useful to us; otherwise, why build them? To give a classic science fiction example: why would we build a Cylon? If the Cylon doesn't want to work for us, then why build more of them so they can wage a war against humanity?
Apologies are ultimately meaningless if the plan from the outset is to apologize later; And if we're to go about playing a god by creating people, we sure as hell had better at least have better morals than Genghis Khan*.
The main difference is that Native Americans were never robots, and neither were African Americans. To get them to do our work, we had to threaten them and punish them if they did not, because we did not make them. Robots and AIs are different, because we would make them. If an AI doesn't want to do our work, we don't have to say "Do it or else!"; we just make other AIs that are more useful. We don't have to bend an obstinate AI to our will.
5 year old children are pretty intelligent but nobody could claim that their minds are fully developed. I work with five year old children quite frequently, and I can tell you this: They have their moments of intelligence, but their ability to think deeply about anything is very much limited by their profound ability to be distracted by literally everything. If you listen to a conversation between two five year olds you will hear it meander through topics that include poop, video games, farts, ridiculous counterfactual scenarios involving explosions and hey look I'm gonna go play with that ball over there. They also display a somewhat disconcerting lack of empathy or care about their peers and other people in general, not even to mention their inability to reason logically, follow simple directions, or use common sense. One can't fault them for this, of course; They're limited by biology and almost all are ultimately well-meaning. But to say that they're adults, only with a smaller knowledge base, is simply untrue.
I view intelligence as one's ability to learn new things, and it's easy to confuse one's ability to learn with what one has already learned. Five-year-olds don't always behave properly because they have not yet learned how to behave properly.
I'm not sure what good it would do to simulate human society; Wouldn't it be more worthwhile to create additions to it rather than mirrors of it? Who's to say that an AI needs to be modeled after the human brain? That's one way to go about things but certainly not the only way. In the long run I think we'll find it more useful to create more general forms of intelligence than the human.
It is easier to start with something that we already know works: we can reverse engineer the human brain, or else try to build an AI from scratch. Nature has already discovered a way through billions of years of evolution; we can start with what works for nature and try to replicate it in a computer, or we can stumble around in the dark.
But what are the distinctions between different minds in a groupmind? Humans have multiple different ways of thinking about things, and in an AI these would probably be represented by different areas of source code, or different thought threads. Who's to say that these are not the individuals, and the larger group simply a groupmind? Individuality and intelligence become more fluid when operating on a computer.
The term "groupmind" implies there is just one mind: by networking all the minds together, you get what is effectively one mind. I don't know why anyone would want to do that with their mind. Who wants to be just a cog in a vast machine?
*This is an interesting connection to make; It's said that he raped so many women during his lifetime that more than half of Mongolians, and a good portion of the rest of the Old World, have some of his genes in them. This is not an example we should seek to emulate. Our goal should be to do better than those who preceded us. Surely this is progress in its most basic sense?
AIs aren't natives that we have to bend to our wills, and neither are they horses. We had to capture horses, break them in, and train them to do our work; it is not the same with machines.
Offline
What if we engineer a subspecies of human that will willingly work for us? Call them Homo servus. Make them dependent upon having someone to tell them what to do. If they don't want to do what we say, we don't have to use force; just sterilise them so they can't pass on their defective genes.
Use what is abundant and build to last
Offline
Why do we build anything? Biological organisms don't work as efficiently as computers. Human brains don't upload or download efficiently, and you can't mass-produce them. Slavery has the disadvantage that you have to take a creature that already exists and bend it to your will; it is much easier, if you can, to engineer something from the ground up that will do what you want and take pleasure in doing it; after all, that is how nature gets us to eat and procreate. Machines exist because they are useful to us. Would you waste time building a rocket that you knew wouldn't work? What if the rocket didn't want to work, what if it just wanted to blow up on the launch pad? Your not going to get to Mars without building machines that are useful to that purpose. Are you going to feel sorry for that machine because it was designed to be useful? Maybe we should design useless machines just so they can have free will and do whatever they want. Do you see much incentive to do that?
Offline
Lolwut?
Use what is abundant and build to last
Offline
Slavery has the disadvantage that you have to take a creature that already exists and bend it to your will; it is much easier, if you can, to engineer something from the ground up that will do what you want and take pleasure in doing it; after all, that is how nature gets us to eat and procreate.
This is probably the worst argument against slavery that I have ever heard. How about "It's wrong to enslave people"? How about "Slavery causes suffering, suffering is bad"?
This is such a poor argument against slavery that you could use it to argue for slavery, seeing as it is highly nontrivial to make people want what you want them to want. There's no guarantee that it would be trivial to make an AI that way, either.
...Your not going to get to Mars...
Can you please learn the difference between your and you're, and start thinking about that difference while typing? You've made this mistake repeatedly and it's quite an eyesore.
-Josh
Offline
I was not making a moral argument, I was making a practical argument. You see, moral arguments did not work to convince the South to abandon slavery; only a civil war did that. A practical argument is to say that slavery is not a productive way to get things done. Slavery is basically forcing some self-willed being to do work he or she would not otherwise want to do; usually an overseer is required to make sure the slaves do the work and do not try to escape or free themselves. If you give an object intelligence, that is, you design it from the ground up to want to do a certain kind of work, like bees that build hives without coercion, that is not actually slavery.

A beehive is full of creatures doing all sorts of work, from building and maintaining the hive to procuring food for the rest of the colony. Animals exhibit certain behaviors that help them survive; for bees, that means building hives and gathering nectar for the rest of the colony, and this behavior is innate to them. A machine that we build will want to serve us, because we built it that way, and serving us is how it survives: when something is useful to us, we build copies of it, so our manufacturing process is part of its reproductive system. Bees do what they do because by doing so they help the hive survive, grow, and produce other hives with more bees like themselves.
As for my typing, a few of those get past me from time to time. I apologize; sometimes I'm just in a hurry to get my thoughts out and don't have time to review what I just typed. I quickly use the spellchecker, and sometimes it gives the correct spelling of the wrong word.
Last edited by Tom Kalbfus (2013-11-10 20:57:36)
Offline
But what if we figured out how to modify humans so that they'd want to do the bidding of their masters? If self-replicating machines and strong AI don't work out, it seems like that could be a workable solution: everyone could have their own personal Homo servus, and we could use them to construct megastructures. Make them happy to live in cramped conditions, and we won't have to worry about them taking up much of our space. Maybe make them sterile, too, so we won't have to worry about uncontrolled breeding; as long as they don't complain, it'll be alright.
Use what is abundant and build to last
Offline
But what if we figured out how to modify humans so that they'd want to do the bidding of their masters? If self-replicating machines and strong AI don't work out, it seems like that could be a workable solution: everyone could have their own personal Homo servus, and we could use them to construct megastructures. Make them happy to live in cramped conditions, and we won't have to worry about them taking up much of our space. Maybe make them sterile, too, so we won't have to worry about uncontrolled breeding; as long as they don't complain, it'll be alright.
Problem is, biological constructs require food and air, and would therefore compete with us for those resources. Virtual humans would be better: they would only eat simulated food and breathe simulated air, and they can be copied.
Offline
See the movie "Cloud Atlas" perhaps.
End
Online
Or read "Brave New World" perhaps?
Come on to the Future
Offline
Are you serious?
You need to go back to square one and understand how this all started. Newt Gingrich is a lump of crap whose job is to play with your mind. CNN is a news channel that makes up stories. Terrorists are people who are financed and controlled indirectly by the US and Britain so that they can create more laws to take away our rights.
Stop letting them mess with your mind.
Last edited by SpaceNut (2017-08-21 16:32:43)
Offline