
What will happen if robots become conscious?

For humans, the zombies and aliens we often see in movies may not be a real threat, but there is another movie villain we cannot ignore: the conscious robot. Its arrival may only be a matter of time. But what will the world look like when truly conscious robots appear? Will there still be room for human beings to survive?

In recent years, the field of artificial intelligence has undergone a revolution. AI systems can now outperform humans at Go and have made remarkable progress in face recognition, autonomous driving and other areas. Yet most researchers believe that truly conscious robots (machines with emotions and self-awareness, not just fixed programs) are still decades away. Machines need reasoning ability and strong generalization in order to learn more; only with such capabilities can AI reach the complexity that consciousness requires.

But some believe conscious robots may appear much sooner than that.

Justin Hart, a computer scientist at the University of Texas, said: "Self-awareness is assumed to be the endgame of AI, but in fact no scientific pursuit starts from its ultimate goal." Hart and other researchers are already working on robots with rudimentary minds. They have designed robots that behave like newborn babies: the robots learn to understand their own body structure, react to unfamiliar things, and even cry out when humans interact with them, much as a newborn would. These robots have begun to explore their own world.

Robots have no inner emotional experience. They take no pride in a freshly cleaned floor, nor do they feel pleasure at the 120-volt current flowing through their bodies. However, robots can now learn something resembling human qualities, including empathy, adaptability and aggressiveness.

Rather than indulging in building cool robots, researchers have begun to study robots equipped with cybernetic systems (cybernetics is the science of regulation and control in systems of all kinds), trying to address a long-standing shortcoming of machine learning. Machine learning systems may be powerful, but they are opaque. They work by associating inputs with outputs, like drawing lines between entries in column A and column B; the AI system essentially memorizes these relationships, with no deeper logic behind the answers it gives. This has always been a problem in machine learning.

Humans are a difficult species to understand. We spend a great deal of time analyzing ourselves and others; arguably, that is our conscious thinking at work. If machines had minds of their own, they might be far less mysterious: to understand a machine, we could simply ask it.

Selmer Bringsjord, an artificial intelligence researcher at Rensselaer Polytechnic Institute in Troy, New York, says: "If we can understand how consciousness is structured in the human brain, we can give machines some interesting abilities." Although science fiction makes humans fear conscious robots, in fact even today's unconscious robots call for caution, and conscious robots may turn out to be our allies.

Today, self-driving cars contain some of the most advanced AI systems. They decide where the vehicle should go and when to brake, collecting data through continuous radar and laser scanning and feeding it into their algorithms. But autonomous-driving engineers want vehicles that can act for themselves and defend against sudden accidents, an ability related to consciousness.

Paul Verschure, a neuroscientist at Pompeu Fabra University in Barcelona, says: "A self-driving car needs to anticipate the next move of the vehicles around it."

To demonstrate this principle, Hod Lipson, a professor of engineering at Columbia University (and co-author of a book on self-driving cars), and Kyung-Joong Kim of Sejong University in Seoul, South Korea, built an experimental "crazy" robot driver. In the experiment, a small circular robot (about the size of a hockey puck) moved around a loop according to its own motion logic, while the "crazy" robot driver tried to intercept it the moment it set off. Because the circular robot did not follow a fixed path, the interceptor had to predict its trajectory.

By imitating Darwinian evolution, Lipson and Kim evolved an interception strategy. Lipson said: "The experimental robot essentially developed a model of the other actor's brain. It may not be perfect, but it is good enough to predict the other robot's behavior."
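
As a rough illustration of the idea (not the authors' actual setup), the sketch below evolves a small trajectory predictor by mutation and selection; the target's wobbly motion, the linear predictor over recent positions, and all parameters are invented here for demonstration.

```python
# Illustrative sketch only: a toy evolutionary search for a trajectory
# predictor, loosely in the spirit of the Lipson/Kim interception experiment
# described above. The target's motion, the predictor and all parameters
# are assumptions for demonstration, not the authors' actual setup.
import math
import random

def target_position(t):
    """The erratic target the interceptor must anticipate."""
    return (math.cos(t) + 0.3 * math.sin(3 * t),
            math.sin(t) + 0.3 * math.cos(5 * t))

def predict(weights, t):
    """Evolved predictor: a weighted sum of the target's recent positions."""
    history = [target_position(t - 0.1 * i) for i in range(4)]
    x = sum(w * p[0] for w, p in zip(weights[0::2], history))
    y = sum(w * p[1] for w, p in zip(weights[1::2], history))
    return x, y

def fitness(weights):
    """How close the prediction lands to where the target actually goes."""
    error = 0.0
    for step in range(50):
        t = 0.2 * step
        error += math.dist(predict(weights, t), target_position(t + 0.5))
    return -error

# Simple mutation-and-selection loop ("imitating Darwinian evolution").
population = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]
    children = [[w + random.gauss(0, 0.1) for w in random.choice(parents)]
                for _ in range(15)]
    population = parents + children

print("best mean prediction error:", -fitness(population[0]) / 50)
```

The point is only that a predictor of another agent's movement can emerge from variation and selection rather than being hand-coded.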

Lipson's team also designed another robot that can learn to understand the structure of its own body. It is a four-legged, spider-like robot about the size of a large tarantula. When the spider robot was first switched on, its internal program recorded no information about itself. "It does not know how its motors are arranged or how its body is supposed to move," Lipson said. But it has the ability to learn: it can observe every action it takes, for example how driving a motor bends a leg. "It is like a baby; babies are chaotic," Lipson said. "It moves its motors in a random way."

Four days later, the spider robot had realized that it had four "legs" (motors) and figured out how to coordinate them to slide across the floor. When Lipson removed one of the motors, the robot recognized that it now had only three legs and that its old behavior would no longer produce the desired effect.
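
A minimal sketch of that kind of "motor babbling", under invented assumptions: a two-joint toy limb stands in for the spider's body, and a nearest-neighbour lookup stands in for Lipson's actual self-modelling algorithm.

```python
# Illustrative sketch only: motor babbling in the spirit of the self-modelling
# spider robot described above. The 2-joint forward kinematics, the babbling
# loop and the nearest-neighbour self-model are assumptions for demonstration,
# not Lipson's actual algorithm.
import math
import random

def true_body(angles):
    """Ground-truth kinematics of a 2-joint limb (unknown to the robot)."""
    a1, a2 = angles
    return (math.cos(a1) + math.cos(a1 + a2),
            math.sin(a1) + math.sin(a1 + a2))

# Phase 1: babble. Issue random motor commands and record what the body does.
experience = []
for _ in range(500):
    command = (random.uniform(-math.pi, math.pi),
               random.uniform(-math.pi, math.pi))
    observed_tip = true_body(command)        # feedback from its own sensors
    experience.append((command, observed_tip))

# Phase 2: the learned self-model predicts the outcome of a new command
# by recalling the most similar command it has already tried.
def self_model(command):
    nearest = min(experience, key=lambda e: math.dist(e[0], command))
    return nearest[1]

test = (0.4, -0.9)
print("self-model predicts tip at", self_model(test))
print("body actually places tip at", true_body(test))
```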

Lipson said: "I think this robot has a very primitive sense of self." That is another human-like ability researchers want to build into artificial intelligence. AlphaGo stood out at Go only because human researchers set it the goal of winning the game; machines cannot yet define problems for themselves, and defining the problem is usually the hard part.

Ryota Kanai, a neuroscientist and the founder of a Tokyo start-up, is about to publish a paper in Trends in Cognitive Sciences discussing how to give machines internal motivation. In one demonstration, he and his colleagues simulated an agent driving a car in a virtual environment. The agent had to climb a hill too steep to scale without a running start. When ordered to climb the hill, the agent worked out a way to do it; before receiving that order, it simply sat idle.

Then Kanai's team gave these virtual agents a "curiosity" mechanism. This time an agent surveyed the terrain, treated the hill as a problem to be solved, and found a way up without any instructions.
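
One common way to build such a "curiosity" mechanism is to reward the agent for states it cannot yet predict; the toy sketch below uses that idea, with a one-dimensional world and a tabular forward model that are assumptions for demonstration rather than Kanai's actual system.

```python
# Illustrative sketch only: "curiosity" as a drive toward outcomes the agent
# cannot yet predict, in the spirit of the intrinsically motivated agent
# described above. The world, forward model and novelty bonus are invented.
from collections import defaultdict

ACTIONS = [-1, +1]                      # step left / step right along a slope
forward_model = {}                      # (state, action) -> predicted next state
visits = defaultdict(int)

def step(state, action):
    """Toy environment: a position on a line clipped to [0, 10]."""
    return max(0, min(10, state + action))

def curiosity(state, action):
    """Prediction error plus a small novelty bonus for rarely seen states."""
    nxt = step(state, action)
    predicted = forward_model.get((state, action))
    error = 1.0 if predicted is None else abs(predicted - nxt)
    return error + 1.0 / (1 + visits[nxt])

state = 0
for t in range(200):
    action = max(ACTIONS, key=lambda a: curiosity(state, a))  # no external goal
    next_state = step(state, action)
    forward_model[(state, action)] = next_state               # learn from surprise
    visits[next_state] += 1
    state = next_state

print("states the agent explored on its own:", sorted(visits))
```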

Kanai said: "We didn't set the agent any goal. The agent came to understand its situation simply by exploring the environment and predicting the consequences of its own actions." The key is to give robots just enough internal motivation to make them better at solving problems, without them giving up and wandering out of the lab; machines can be as stubborn as humans. Joscha Bach, an artificial intelligence researcher at Harvard, placed virtual agents in a Minecraft-like world full of tasty but poisonous mushrooms. Bach hoped the agents would learn to avoid them on their own; instead, knowing nothing about the delayed harm, they ate the mushrooms and were poisoned.

Bach said: "Like humans, the machines don't care what this moment's actions will mean for the future." They may simply find the mushrooms delicious, so a natural aversion has to be instilled in them. In a sense, machines must learn values, not just goals.
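
A hedged sketch of what "instilling an aversion" could look like: the delayed poison is folded into the learned value of the action, so the immediate tastiness no longer wins. The rewards, learning rate and exploration rule below are invented for illustration, not Bach's experiment.

```python
# Illustrative sketch only: learning an aversion from delayed consequences,
# in the spirit of the poisonous-mushroom example above. All numbers and the
# update rule are assumptions for demonstration, not Bach's actual setup.
import random

ACTIONS = ["eat mushroom", "walk past"]
value = {a: 0.0 for a in ACTIONS}       # the agent's learned "values"
alpha = 0.1                             # learning rate

def outcome(action):
    """Eating tastes good now (+1) but poisons the agent later (-10)."""
    if action == "eat mushroom":
        return 1.0 - 10.0               # immediate taste plus delayed poison
    return 0.0

for trial in range(500):
    if random.random() < 0.1:           # occasionally explore
        action = random.choice(ACTIONS)
    else:                               # otherwise follow learned values
        action = max(value, key=value.get)
    value[action] += alpha * (outcome(action) - value[action])

print(value)  # "eat mushroom" drifts toward -9: an instilled aversion
```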

Besides self-awareness and self-motivation, another key function of consciousness is attention. Selective attention has long been an important topic in artificial intelligence, and Google DeepMind, the creator of AlphaGo, has researched it in depth.

"Consciousness is a filter of attention," said Stanley Franklin, professor of computer science at the University of Memphis. In a paper published last year in the journal Biologically Inspired Cognitive Architectures, Franklin and his colleagues reviewed an artificial intelligence system they created called LIDA. The system chooses where to focus through a competition mechanism, adopting a method proposed in the 1980s by the neuroscientist Bernard Baars. Interesting stimuli (loud, bright or strange ones) compete with one another for dominance; the winning stimulus determines where the system's attention is focused and is then broadcast to the rest of its "brain", including the functions that control thinking and movement. This cycle of perception, attention and action repeats five to ten times per second.
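
The sketch below shows the general shape of such a competition-for-attention cycle (salient stimuli compete, the winner is broadcast to the rest of the system); the stimuli, salience scores and timing are assumptions for demonstration, not Franklin's LIDA code.

```python
# Illustrative sketch only: a winner-take-all "attention filter" in the spirit
# of the Baars-style competition LIDA is described as using above. The stimuli
# and their salience values are invented for demonstration.
import time

# Each cognitive cycle, candidate stimuli arrive with a salience score
# (how loud, bright or strange they are).
cycles = [
    {"hum of the fan": 0.2, "bright screen": 0.5, "sudden knock": 0.9},
    {"hum of the fan": 0.2, "bright screen": 0.5, "phone buzzing": 0.7},
    {"hum of the fan": 0.2, "bright screen": 0.5},
]

def broadcast(winner):
    """The winning stimulus is broadcast to the rest of the system
    (the modules that control thinking, memory and movement)."""
    print("attending to:", winner)

for stimuli in cycles:
    winner = max(stimuli, key=stimuli.get)   # competition: most salient wins
    broadcast(winner)
    time.sleep(0.15)                         # roughly 5-10 cycles per second
```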

The first version of LIDA was a job-matching server for the U.S. Navy. It read e-mails and focused on the relevant ones, covering job seekers' interests, the difficulty of the posts and the requirements of government bureaucracy.

Since then, Franklin's team has used the system to model animal minds, in particular the habit of attending to only one thing at a time. For example, LIDA, like humans, is prone to a curious psychological phenomenon known as the "attentional blink": when something captures your attention, you are blind to everything else for about half a second. This cognitive blind spot depends on many factors, and LIDA shows a human-like response to them.

Pentti Haikonen, a Finnish artificial intelligence researcher, has built a robot called XCR-1 on similar principles. Haikonen believes the XCR-1 he created is capable of genuine subjective experience and basic emotions.

The XCR-1 has associative abilities, much like the neurons in our brains. When we show the XCR-1 a green ball and say the word "green", its visual and auditory modules both respond, and the sight of the green ball becomes linked to the word "green". If Haikonen later says "green" again, the robot's auditory module responds and its visual module responds along with it, recalling the stored connection as if the robot had really heard the word and seen the colour.

Conversely, if the robot sees green, its auditory module responds even though no one has spoken the word. In short, the robot develops a kind of synesthesia.
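
A toy version of that cross-modal association, with a Hebbian-style co-occurrence table standing in for Haikonen's hardware: seeing the ball while hearing the word links the two, after which either one can re-activate the other. The data structures and names here are assumptions for illustration only.

```python
# Illustrative sketch only: a tiny cross-modal associative memory in the spirit
# of the XCR-1 behaviour described above, not Haikonen's actual architecture.
from collections import defaultdict

association = defaultdict(float)        # strength of (visual, auditory) links

def train(visual, auditory):
    """Hebbian-style: features that occur together get linked."""
    association[(visual, auditory)] += 1.0

def recall_from_sound(word):
    """Hearing a word re-activates the visual feature it was paired with."""
    matches = [(v, s) for (v, a), s in association.items() if a == word]
    return max(matches, key=lambda m: m[1])[0] if matches else None

def recall_from_sight(visual):
    """Seeing a colour re-activates the word, even though nothing was spoken."""
    matches = [(a, s) for (v, a), s in association.items() if v == visual]
    return max(matches, key=lambda m: m[1])[0] if matches else None

train("green ball", "green")            # show the ball while saying the word
print(recall_from_sound("green"))       # -> "green ball"
print(recall_from_sight("green ball"))  # -> "green"
```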

Haikonen said: "If we see a ball, we may say to ourselves, 'Oh, that's a ball!' At that moment we feel as if we really heard the word, when in fact we only saw it. The XCR-1 works the same way."

Things get interesting when the auditory and visual modules clash, for example when the visual module sees green while the auditory module hears "blue". If the auditory module prevails, the whole system turns its attention to the word it hears, "blue", and ignores the colour it sees, green. The robot has a simple stream of consciousness made up of whichever perception is momentarily dominant: "green", "ball", "blue" and so on. When Haikonen connects the auditory module to a speech engine, the robot quietly narrates to itself everything it sees and feels.

Haikonen also made vibration the robot's "pain" signal, which overrides the other senses and seizes the robot's attention. In one demonstration, Haikonen tapped the robot and it suddenly said: "I'm hurt."

Haikonen said: "For some reason, some people find this emotionally unsettling, while others have no interest in the work at all and dismiss it as just a bad robot."

Building on these early efforts, researchers will develop ever more lifelike conscious machines. We may see a continuum of conscious systems, just as exists in nature, from single-celled organisms through dogs and chimpanzees to humans and beyond. The gradual development of this technology is a good thing, because it gives us time to adapt to the idea that one day we will no longer be the only advanced conscious beings on Earth.

For a long time to come, the AI machines we create will be so fragile that they pose no more threat to humans than a new pet. How we treat them will depend on whether we recognize that they are conscious and that machines are capable of suffering.

Susan Schneider, a philosopher at the University of Connecticut who studies the implications of artificial intelligence, said: "We value non-human animals because we see consciousness in them, just as the value of human beings rests on their own consciousness." In fact, she believes we may deliberately refrain from creating conscious machines in order to avoid the moral dilemmas they would pose.

Schneider said: "If you create conscious robotic systems to work for us, that is akin to slavery." By the same token, if we withhold consciousness from advanced robots, they may pose an even greater threat to humanity: an unconscious robot cannot think for itself, and so it cannot conceive of any reason to take humanity's side, to recognize and to cherish us.

From what we have seen so far, conscious machines would inherit human weaknesses. If robots have to predict the behaviour of other robots, they will come to treat one another as creatures with minds. Like us, they may begin to see minds in inanimate objects: stuffed animals, statues, the wind.

Last year, the University of North Carolina social psychologist Kurt Gray and Daniel Wegner proposed in their book The Mind Club that this instinct is the origin of religion. Verschure said: "I would be curious to see what religion robots develop for themselves, as in the movies, because we have designed into them a preference for consciousness so that they can become part of society. But that preference for consciousness may kick in first."

These machines will far surpass us at solving problems, but not everything is a problem to be solved. They may become absorbed in their own conscious experience, and as the range of robot senses expands, they will perceive things humans would scarcely believe.

Lipson said: "I don't think a future robotic species will be as cold and ruthless as we imagine. They may have music and poetry that we will never understand."

