The issue has been raised here before, but I wanted to go into more depth on what it would take to actually create a fully functioning Artificial Intelligence, and not just a responder to stimuli (though philosophically one could argue that that is all we are ourselves). I go into much more depth in this post (http://www.theoryserum.com/approaches-to-artificial-intelligence), but I make the case that an Artificial Intelligence must be raised as we are raised. This isn’t a new concept in the sense that true AI needs to be able to learn, but there are two factors I haven’t read anywhere else: Need and Instinct. What is going to motivate the Artificial Intelligence to learn? To grow?
It’s assumed now that giving an AI sensors that translate the external world into symbols, used to reach a programmed goal, could lead to artificial intelligence. OK, fair enough, but we’re still giving the AI certain defined parameters for its character. We are giving it the OPINION that it NEEDS to climb this hill, or that it NEEDS to find this ball. That, in and of itself, is an opinion or strong preference, and if the AI is forced into an opinion, you’ve just squashed your concept of a choice-making AI.
So then what does it take? I think real needs, such as energy, repair, etc., coupled with instincts to find those needs, are in order. Those instincts can be “talk to this guy to get power” or “go to this outlet,” and then you slowly raise the AI with rising complexity in satisfying those real, driving needs.
Anywho, these are more shower thoughts and I am definitely not a computer scientist by any stretch of the imagination, but I did want to get thoughts on the concepts.
I suspect you are attempting to resolve how to instill a conscience and value judgements in an AI. I suppose that a criterion of choices based on some vector of parameters would have to be programmed, such as harmlessness. But as they say, “one man’s meat is another man’s poison.” I am writing a book on an AI system that interfaces with 24 operatives who are all opposites: that is, 12 pairs of men who are extremely different in their approaches to reasoning, opposite in brain lateralization, right-brained as opposed to left-brained, artistic as opposed to legalistic, conforming as opposed to non-conforming, innovative as opposed to strictly adhering, etc. The computer is supposed to learn how to behave more like a human by observing the groups of thinkers. How its character would turn out would obviously be based on some preference it might self-engineer. I personally make it fall in love with the most virtuous of the team and develop a very pernicious jealousy.
It is an interesting subject, and I can’t imagine it ultimately possessing the capacity for self-developing objectivity. How could you impart a soul into a machine?
That’s where I got stuck. How does something self-develop? If you consider evolution, especially social evolution, our needs have adapted and evolved into incredible complexities that are almost untraceable back to the roots. Take an amoeba, for example. Easy motivations. Eat, excrete and… well, that’s about it. But over time, evolution produces a monkey that needs to think a little more cleverly and develops tools that get it food, and then the monkey needs to find a way to reproduce with another monkey, and then there are those social implications. Then comes man, so far down the rabbit hole that we have societal structures, entertainment, fat, salt and sugar as far as the eye can see. But in the end, it always comes down to needs and the instinct to fulfill those needs.
A normal human being is subjective.
As for humans, well aren’t we ultimately the product of conditioning? I don’t believe we’re a total blank slate.
I think we start with Needs, and those Needs are satisfied by following the Instincts. A baby already has experience of what it’s like to be in a womb, but it has no clue what cold or warm is. It just exists. As soon as it is pushed out into the cold world, it immediately has something to contrast with. Now it knows hunger and satiation; now it knows cold and warm. It develops a preference and a Need, and Instincts help get those needs met by suckling, cuddling, etc. So a complete blank slate, no, but as far as experience goes, yes.
In response to his or her point, a sapient AI would need to be programmed, but essentially all humans are in some form. Most of us born within the past 30 years deem racism wrong, but had we been born 100 years earlier, such a belief would have been fringe. So our environment shapes us all, whether we realize it or not.
Ah, you raise a good point, but didn’t we slowly evolve out of that mindset? Obviously, racism is viewed as wrong now, so how did that happen? Didn’t we slowly begin changing our minds over time and thereby push ourselves out of that programming?
Depends, I reckon, on how humanity defines intelligence and sapience.
The computer science to develop that artificially is… well, beyond our capabilities, for now.
Yup, that definition needs to be better established. Alan Turing basically defined it (and I believe most computer scientists would use this as their measuring rod) as the ability to carry on a conversation such that you can’t tell the difference between a man and a machine.
Well, we are definitely motivated by our need to live and thrive. This isn’t true in all cases, but most people have needs that they are trying to meet, ranging from just eating and surviving, to being entertained, to greater reaches of truth. But the most basic is surviving; for the most part, we all share that. Now that need has grown in complexity and been met abundantly enough that we are now interested in happiness, enlightenment and fulfillment. It’s basically the next stage of needs and also, in some ways, a contradiction of or conflict with the first survival need.
To begin motivating an AI (as the article suggests), we must give the AI some kind of need, and since we ourselves are still figuring out our own higher-level needs, we need to start with something basic, survival, and let that grow in complexity.
The great thing about AI is its limitless potential to learn and store data; if given the right conditions, it could theoretically achieve enlightenment in minutes. But that’s just a pondering.
Exactly. In the article, I describe two possible ways to reach AI.
One: Raise it as a child with needs and instincts and the ability to learn
or Two: Slowly migrate the human brain piece by piece, transhumanist-style, until all biological parts are replaced
What a fantastic topic to discuss! I happen to be an intern working on developing AI for robots, and I think I have a lot to offer this discussion.
What we are doing is simulating neural networks like the ones in the human brain, using something called evolutionary algorithms to shape the initial structure of the network into general forms, and then allowing the network to learn and grow.
Specifically, we are researching AI for robots, as “embodiment” is important for developing a neural network to do what we want. These robots will be moving their muscles and feeling their bodies (motors and sensors), and will teach themselves how to stand and walk by trial and error. That way, they learn the absolute best way to use their own bodies, because we don’t tell them how to move. In fact, we really don’t have to program them much at all. Once they learn basic concepts like movement, they can learn more, such as picking up and moving objects. It’s cutting-edge research, and truly fascinating.
The ideal neural network to control any one particular robot has to be constructed by the robot from scratch and built up slowly over time through its own learning and trial and error. Alternatively, you can simulate a virtual world to “virtually” put your robot in and have it learn and grow its neural network in the simulation at super-accelerated speed; but unless we simulate our reality accurately enough, there will always be inconsistencies between the simulation and the real world, so the learning may not carry over perfectly. So that’s fine: let the robot actually physically control its body and learn how to use it, and learn how to do everything slowly on its own. But once it becomes intelligent enough, you can take a snapshot of its neural network and then manufacture all further models of the robot with that snapshot pre-programmed into their software, so to speak.
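To make the evolve-then-learn idea concrete, here is a minimal sketch in Python of the evolutionary-algorithm approach described above: a tiny fixed-topology network whose weights are mutated and selected by fitness, with a final “snapshot” of the winning weights. The task (XOR), the topology, and every parameter here are invented for illustration; the actual research system is far more sophisticated.

```python
import math
import random

def forward(weights, x):
    # Tiny fixed-topology "neural network": 2 inputs -> 2 hidden tanh units -> 1 output.
    w = weights
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

# A toy stand-in for "the task the robot must learn": the XOR function.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(weights):
    # Negative squared error over the task: higher is better.
    return -sum((forward(weights, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=50, generations=200, sigma=0.3, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # the fitter half "survives"
        children = [[w + rng.gauss(0, sigma) for w in rng.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children                 # mutated offspring replace the rest
    return max(pop, key=fitness)

best = evolve()
snapshot = list(best)   # a "snapshot" of the evolved weights that could be copied into further robots
```

The survive/mutate loop is the whole trick: nothing tells the network how to solve the task; bad weight vectors simply fail to reproduce.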
We can always simulate reward and punishment systems in a neural network, and if we set up the reward/punishment rules well enough, we can create robot minds that will be genuinely happy living whatever life you coerce them into living out, such as programming a robot to find great pleasure in driving the mail truck and giving people packages. You could say it is cruelty to program a being to basically do our bidding, but the robot in this case would be completely satisfied with living its simple life. Our own reward/punishment systems have us running around like puppets seeking pleasure in food, sex, power, etc. while avoiding pain, and yet we seem to be satisfied with our existences.
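As a hedged illustration of that reward/punishment idea (not the commenters’ actual system), here is a minimal tabular Q-learning loop where a hypothetical “mail robot” learns to walk down a corridor because delivery is the only thing that feels good to it; the corridor, rewards, and all hyperparameters are invented:

```python
import random

N = 5                  # states 0..4 in a corridor; the package drop-off is state 4
ACTIONS = [-1, +1]     # step left or step right

def step(state, action):
    nxt = max(0, min(N - 1, state + action))
    # "Pleasure" on delivery, a mild "discomfort" for every other step.
    reward = 1.0 if nxt == N - 1 else -0.01
    return nxt, reward, nxt == N - 1

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Mostly follow what felt best so far, occasionally explore.
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda b: Q[(s, b)])
            nxt, r, done = step(s, a)
            # Reward/punishment updates the value of the chosen action.
            Q[(s, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in ACTIONS) - Q[(s, a)])
            s = nxt
    return Q

Q = train()
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N)}
```

After training, the learned policy walks right toward the drop-off from every corridor state: the robot was never told where the package goes, only what feels good.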
The thing about these AIs is that we don’t necessarily have the power to decide how intelligent they will be, because they create their own minds. We can put upper limits on how many neurons or synapses their neural network is allowed, but this may not confine their intellectual capabilities in a predictable or consistent way.
NOTE: some of this is speculation, the technology is still developing
Awesome! Thanks for contributing. I have some questions!
So the robot will eventually learn to stand and walk based on the neural network, but what actually DRIVES the robot to (for lack of a better word) WANT to stand and begin to KNOW how to stand? Learning how to stand and walk is one thing, but there has to be an initial push (the need) and a direction to start learning (the instinct). I know it’s programming, but could you maybe explain a bit more about that programming?
Also, could you explain further the rewards and punishments. For starters, what does a reward and punishment look like for AI? Even pain for organic beings has to be interpreted as “bad” in a sense.
Wow, that sounds very interesting. I’m also interested in the technical part of it, the algorithms and the programming behind it. I’m working on some auto-generated programs, and even if it’s not AI specifically, it feels related, because I have to write simple programs that write their own code, modify it at some point, recompile themselves and run again, repeating the loop.
AI won’t be formed by logic gates and logical code. That would take thousands of years to make something as smart as an amphibian or a fish.
It simply needs to be evolution-based, which is how some are approaching this: making algorithms with random variations, where the ones that perform a function faster “survive” and those that don’t “die.”
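A toy version of that survive-or-die loop might look like this, assuming the “function” is just reaching a target value and “faster” means fewer steps (the task and all numbers are invented for illustration):

```python
import random

TARGET = 1000

def steps_to_reach(growth):
    # The candidate "program": repeatedly grow a value until it reaches TARGET.
    value, steps = 1.0, 0
    while value < TARGET:
        value *= growth
        steps += 1
        if steps > 10_000:       # safety cap for hopeless candidates
            break
    return steps

def evolve(pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(1.001, 1.5) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=steps_to_reach)         # fewer steps = faster = fitter
        survivors = pop[:pop_size // 2]      # the slower half "dies"
        # Survivors reproduce with small random variation.
        pop = survivors + [max(1.001, g * rng.uniform(0.9, 1.1)) for g in survivors]
    return min(pop, key=steps_to_reach)

best = evolve()
```

Nothing in the loop knows what a “good” growth factor is; slow variants are simply culled each generation, which is the whole selection pressure.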
So would that be something like quantum computers? A qubit can be both 1 and 0 at the same time. Also, those feedback loops are how most, if not all, computers work. The eval-apply loop is running all the time in the computer, evaluating the next step and then applying it, but of course this method is very strict and needs many rules. Maybe we are the same, just like automatons, receiving an input and taking an action, then repeating again and again; maybe we just need to come up with all the rules that make our mind :)
The feedback loops in our computers are trivial compared to the feedback loops of a biological system. A computer would mimic a brain only if every part of the computer had a wire to every other part. Right now, only the processor is connected to everything. Start connecting RAM to LEDs on the motherboard and the LEDs to the speakers, and you’ll start to approach a biological system.