I had an argument at university today with my professor about this topic. He said that artificial intelligence will surpass the human brain in the future. I don’t agree at all.
Artificial intelligence will never be able to make logical decisions, because those decisions are grounded in our emotions. Robots and machines don’t feel, and it’s impossible to make emotional decisions without feeling. I don’t think speed, memory, and raw capacity make AI smarter or more powerful than us.
What do you think? I want to hear your opinions, and please explain your reasoning.
They will. Everything at the moment is a series of inputs and outputs, and the best decisions are made with a clear, statistical, realistic mindset. Emotions and feelings about actions, concepts, and events are influenced by those around us and our environment (something an AI could learn over time, once that becomes possible).
Add that to its unmatched data-processing and calculating abilities, and an AI could easily surpass us (if we don’t join them in the process).
@cyberdream, “Artificial intelligence will never be able to solve logical decisions because they are based on our emotions”
Logical decisions are, by definition, governed by logic, something computers are very good at. Emotional decisions could, in theory, be taught to a machine by having it calculate how something may impact a person. Our emotions aren’t terribly complex; they’re mostly cause and effect. An AI could simulate emotion and factor it into decision making, but not “feel it” in the way you or I do. It would be a computation for the machine. It could fake emotion without understanding it, though I do believe AI will feel at some point.
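To make that idea concrete, here is a minimal sketch of what “simulating emotion as a computation” could look like. Everything in it is hypothetical (the function names, the weights, the numbers); it just shows an agent scoring candidate actions on plain utility minus a simulated estimate of emotional harm. The machine computes the emotion term; it does not feel it.

```python
# Toy sketch: factor a simulated "emotional impact" estimate into a
# decision score. All names and weights here are made up for illustration.

def decision_score(utility, emotional_impact, empathy_weight=0.5):
    """Combine raw utility with a simulated emotional-impact estimate.

    utility:          expected benefit of the action (higher is better)
    emotional_impact: estimated emotional harm to those affected (higher is worse)
    empathy_weight:   how strongly the simulated emotion term counts
    """
    return utility - empathy_weight * emotional_impact

def choose(actions):
    """Pick the action with the best combined score."""
    return max(actions, key=lambda a: decision_score(a["utility"],
                                                     a["emotional_impact"]))

actions = [
    {"name": "blunt",   "utility": 10.0, "emotional_impact": 8.0},  # efficient but hurtful
    {"name": "tactful", "utility": 8.0,  "emotional_impact": 1.0},  # slower, kinder
]

# blunt scores 10 - 0.5*8 = 6.0; tactful scores 8 - 0.5*1 = 7.5
print(choose(actions)["name"])  # prints "tactful"
```

The point is that the “empathy” here is just another weighted term in an optimization, which is exactly the distinction being drawn: computing an emotion’s effect versus experiencing it.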
I’m with your professor on this one, especially on the basic premise that AI will surpass the human brain. Since you can’t test the processing power of the brain the way you can test a computer, our processing power can only be estimated. The brain is physically finite, however, which means our maximum processing power must also be finite, given that our brains run on electrical and chemical signals. Even if we don’t yet know where the bar is, we will eventually build above it.
@cyberdream, I reckon that such a computer would have to be programmed with human flaws; it would need to feel the pain of its own demise, and of its inferior position to humans, to become truly intelligent.
In a sense, you could argue that true artificial intelligence may only arrive if a human, or humans, can crack the code of identity creation. One would have to write an archetype for such an intelligence, so it can always work toward bettering itself while weighing the emotional repercussions of its actions and so determine the best possible outcome.
It must be defined by the network it works for rather than working for itself. It’s like the idea of a hero who deliberately takes a bullet to save someone; such principles must be given to an intelligence for it to become truly aware and truly intelligent.
So one could argue that you must program a computer intelligence to atone for its actions, to protect the network it is focused on, and to respect the natural order we give it. “Respect one’s elders” is maybe the simplest expression of it all…
@bono95zg, what he said
I guess this question boils down to the “soul” question. Put simply, is there something outside of physical reality? Are our emotions, our thoughts and feelings, external to the laws of matter? Is there something beyond the electrical signals in our brains? Or are all those things we consider unique to humans simply physical phenomena, subject to the same laws as everything else?
If it is the latter, then you have to concede that artificial intelligence can and will one day surpass that of humans. After all, those electrical signals are essentially just 1’s and 0’s. It is the pattern those signals form that gives rise to our thoughts and feelings, which would make “human intelligence” completely repeatable in “artificial” form.
Otherwise, you must believe that there is something outside of physical reality that we have not yet discovered: some immeasurable spiritual side where these emotions, thoughts, and feelings arise. In which case, you are branching off into a whole different territory that science does not yet recognize… which is perfectly fine; science by its very nature is not perfect and is always adapting. Just be aware that this is essentially an unprovable side to take.
That’s the way I see this argument breaking down, anyway. Either you fall on the side that sees all of existence as a physical manifestation, or you believe there is some non-physical entity that somehow exerts its “will” on the physical world.
The problem is that everyone uses human intelligence as the baseline. This is fair, because it’s all we know. The question isn’t whether computers will be like humans, or whether computers will have emotions, but rather whether “artificial” intelligence will surpass ours. The answer is, quite simply, absolutely.
The brain is just a complex network of connected units firing electricity (and chemicals) between them. The brain then interprets that electrical data as “conscious” thought. As computers advance, so will their ability to gather, process, and interpret information (all of which are foundations of consciousness). The only thing they are missing is the delusion that they understand the world.
@cyberdream, In our fast-paced materialistic world, yes. We are going to need AI to make economic decisions and more. Computers are faster at math. Probably not in our generation, or the next, but it will happen.
We already use supercomputers in stock trading, for example: https://en.wikipedia.org/wiki/High-frequency_trading
@cyberdream, A human can attain wisdom through a combination of compassion and logic. AI is certainly capable of logic, but compassion is inextricably bound to emotion. On one hand, the lack of emotion means it is unaffected by desire, fear, guilt, and pride, all the counterproductive traits humans are limited by; but then it is also unaffected by love, compassion, beauty, and joy, all the things that elevate humanity.
I think logic alone can bring an AI to a wise conclusion; after all, even the ignorant will eventually arrive at wisdom, if only by exhausting all other options. The question is whether the wisdom that serves them will also serve us.
I could go into the singularity, but I believe that if an AI has no emotion, then it would not be bothered by the inefficiencies of humans and so would not see us as something requiring elimination. If it does become capable of emotion, then it would be able to relate to us, understand our inefficiencies, and appreciate our distinctiveness.
Perhaps logic would also lead it to the conclusion of universal consciousness: that we are actually family at the most fundamental level.
So what if our intelligence is matched/surpassed? It’s called evolution.
You can say there is no AI that matches the human brain today, but you can’t say there will never be. Hey, I didn’t create reason.
I think a clearer understanding of the mechanisms behind human self-awareness would help in thinking about this. Self-awareness, emotions, spontaneity of thought — things like these are what complicate the question, because they’re what separate humans from everything else we know. That’s why it’s so easy to compare things like algae and insects to computers. Cats, dogs, and humans can still function like computers, but we’re more complex.
A computer is only as smart as the input it receives…”It is what it eats”, so to speak, much like anything else. The input is human-based information.
However! Unlike a human, who still imagines and works under the delusion that it is separate from every other human, robotic input is the conglomeration of a many-brained base… pretty much the epitome of combined genius.
Considering that our little PCs can already ‘correct’ our shortcomings individually…Oh, the things that make ya go “Hmmm…???”
Logic is a funny thing… It is what becomes of the facts of present understandings. Volatile, at the very least! But, to date, most of human existence is based in being volatile, yes? We seem to have a penchant for stuff that explodes and implodes, and we are highly entertained by destruction. Even our society is based in volatile methodology and perceptions… mainly because the bases of their functionality are logic-minded only. They are not easily adjusted to move with higher understanding or refined facts… ergo, primitively unsustainable in nature. And doesn’t this reflect our present beingness?
I personally find amusement that humans are ever-attempting to create a ‘perfection’…”in their own image” (sound too familiar?), yet have not yet near-mastered their own creativity of their Selfs. We still have little knowing of the power WE, as a wholly focused organism, possess, nor See what WE are capable of outside of ‘managed’ conditionings. (This is a fabulously adjustable ‘fact’, btw.)
Next ‘fact’… If We, the ‘peon majority’, are even aware of the possibility of AI, it already exists and is already in use militarily. The military finds no need for emotion… in their eyes it is a weakness that must be eliminated from their forces. Examine this, as well as the dysfunction their human forces experience when reintroduced to ‘the nature of society’. Here lies your greatest means of ‘con’ to the ‘pro’ of AI… It goes against Our Nature… All Nature, here, Earth. HEB (highly evolved beings) have no use for such things, btw… They have a higher Understanding of their own Nature… having ‘survived’ a similar experience.
Best to you in future debates with your profs…Set yourself up for the argument as if you were up against a Major-General of the Marines…Have fun with that! I’d love to See what you come up with in such regard. Rock on! :D