Many people ask the same question: can artificial intelligence ever have a consciousness of its own? One AI expert's answer is that it depends mainly on whether we want machines to be conscious.
This may sound overly bold. The inner mechanism of consciousness, the reason we have such a vivid and direct experience of the world, has long been a mystery in neuroscience. Some even believe the mystery will never be solved, since it seems impossible to explain subjective experience with objective scientific methods. But over the past twenty years scientists have studied consciousness in depth and made significant progress. They have identified a number of neural activities correlated with consciousness, and they have come to better understand which behaviors actually require it: the brain carries out many high-level cognitive tasks unconsciously.
In short, consciousness may not be an automatic by-product of cognition, and the same may be true for artificial intelligence. In many science-fiction stories, a machine spontaneously develops an inner mental world once it becomes sophisticated enough. In reality, we may have to build consciousness into machines deliberately, at design time.
From a scientific and engineering perspective, there are good reasons to try. One of them is our own ignorance about consciousness. When engineers of the 18th and 19th centuries invented the steam engine, physicists had not yet formulated the laws of thermodynamics; invention can sometimes drive theory forward, and today is no exception. Discussions of consciousness tend to be overly philosophical, circling endlessly without producing practical results. The few experts who study artificial consciousness hope to do the opposite and learn by building.
Moreover, consciousness must serve some important function, or evolution would have eliminated it, and those functions should carry over to artificial intelligence. Here, too, science fiction is misleading. In novels and on television, consciousness is a curse for artificial intelligence: conscious machines deliberately behave unpredictably, often to humanity's detriment. In the real world, such dystopian scenarios are unlikely. Whatever risks artificial intelligence poses do not depend on whether it has an independent mind. On the contrary, conscious machines could help us manage the impact of artificial intelligence. One expert has said he would much rather share the world with conscious machines than with mindless automation.
When AlphaGo played the human Go champion Lee Sedol, many experts wondered why it played the way it did; they wanted to explain and understand its motivation and logic. Such puzzlement is common with modern artificial intelligence, because its decisions are not specified by humans in advance but emerge from learning algorithms and the data sets used to train them. Unable to see inside these systems, many people worry that their decisions may be unfair or arbitrary, and there are already examples of algorithmic discrimination. An investigation published last year found that a risk-assessment algorithm used by judges and parole officers in Florida was racially biased: black defendants were more likely to be wrongly flagged as future reoffenders, while white defendants were rated as lower-risk than they actually were.
Starting next year, EU law will give EU residents a "right to explanation": people will be entitled to demand an account of why an artificial intelligence system made a particular decision. The new requirement is technically demanding. Given the complexity of today's neural networks, it is hard to work out how an artificial intelligence reaches its decisions, let alone to translate that process into language humans can understand.
If we cannot figure out why an artificial intelligence acts as it does, could we simply ask it? We could equip artificial intelligence with metacognition, the ability to review its own behavior and report on its internal mental states. This ability is one of the main functions of consciousness, and it is what neuroscientists look for when they test whether a person or an animal is conscious. For example, confidence is one of the basic forms of metacognition: the clearer the conscious experience, the higher the confidence. If the brain processes information without our awareness, we feel uncertain about it; if we are clearly aware of a stimulus, we feel much more confident, as in "I definitely saw red."
Any calculator running a statistical formula can produce a reliability estimate, but no machine yet has human-like metacognition. Some philosophers and neuroscientists have suggested that metacognitive ability may be the essence of consciousness. According to the "higher-order theory of consciousness," conscious experience depends on second-order representations of the direct representation of a sensory state: when we know something, we know that we know it. If we lack this self-awareness, we are effectively unconscious, on a kind of autopilot, taking in sensory input and acting on it without attending to it.
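The distinction between a first-order decision and a second-order confidence report can be made concrete with a toy sketch. This is purely illustrative (the function names and the use of softmax probability as "confidence" are my assumptions, not any system described in the article): a classifier reports not only its choice but how sure it is of that choice.

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def report(logits, labels):
    """First-order decision plus a second-order (metacognitive) confidence report."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return {"decision": labels[best], "confidence": probs[best]}

# A clear stimulus yields a high-confidence report ("I definitely saw red") ...
print(report([4.0, 0.5, 0.2], ["red", "green", "blue"]))
# ... while an ambiguous stimulus yields a hedged one.
print(report([1.1, 1.0, 0.9], ["red", "green", "blue"]))
```

Real metacognition research uses far richer confidence measures; the point here is only the two-level structure: a judgment, plus a judgment about that judgment.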
These theories offer some guidance for building conscious artificial intelligence. Some experts are trying to build metacognition into neural networks so that the networks can communicate their internal states. The project is called "machine phenomenology," echoing the philosophical tradition of phenomenology, which studies the structure of consciousness through systematic reflection on conscious experience. Because teaching machines to express themselves in human language is difficult, the researchers are for now training the machines in their own machine language, having them share their retrospective analyses with one another. Each analysis is an instruction for performing a particular task, which goes beyond ordinary machine communication. The researchers do not specify how a machine should encode these instructions; the neural networks generate their own strategies. During training, a machine is rewarded if it successfully conveys its instructions to another machine. The researchers hope to extend the approach to communication between humans and artificial intelligence, so that an artificial intelligence can eventually explain what it is doing.
Beyond supporting a degree of self-understanding, consciousness also enables what the neuroscientist Endel Tulving called "mental time travel." We can consciously anticipate the consequences of an action or plan for the future. You can imagine how it would feel to wave your hand in front of your face, or picture yourself walking to the kitchen to make tea, without actually performing either action.
In fact, even our perception of the present moment is constructed by consciousness, as many experiments and case studies show. Patients with visual agnosia, for example, cannot recognize what they are looking at because the part of the visual cortex involved in identifying objects is damaged, yet they can still reach for objects accurately. Handed an envelope, they know to post it through a mail slot. But if a delay is introduced between showing the patient an object and instructing them to take it, they can no longer do so. Evidently consciousness is not tied to complex information processing itself: as long as a stimulus immediately triggers the corresponding action, consciousness is not needed. It becomes indispensable only when sensory information must be held for more than a few seconds.
The importance of consciousness in bridging gaps in time also shows up in a particular kind of conditioning experiment. In classical conditioning (think of Pavlov and his dogs), the experimenter pairs one stimulus, such as a puff of air to the eyelid or a mild electric shock to the fingertip, with an unrelated stimulus such as a pure tone. Subjects learn the association between the two stimuli automatically, without conscious effort: as soon as they hear the tone (a sound with a single vibration frequency), they flinch in anticipation of the air puff or the shock, and when asked why, they cannot say. But this unconscious learning occurs only when the two stimuli coincide in time. If the experimenter delays the second stimulus slightly, subjects learn the association only if they consciously grasp the relationship and can articulate it ("hearing the tone means a puff to the eye is coming"). To retain a memory of a stimulus after it has ended, in other words, consciousness must be involved.
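Why the time gap matters can be illustrated with a toy associative learner. This is a deliberately crude sketch of my own, not a model of the actual experiments: an association between tone and puff strengthens only when some trace of the tone is still active at the moment the puff arrives, so a learner with no memory trace can handle simultaneous pairing but not delayed pairing.

```python
def condition(tone_times, puff_times, steps, retention, lr=0.5):
    """Hebbian-style pairing: the tone leaves a decaying memory trace,
    and the tone-to-puff association grows when trace and puff overlap."""
    weight, trace = 0.0, 0.0
    for t in range(steps):
        trace = trace * retention + (1.0 if t in tone_times else 0.0)
        if t in puff_times:
            weight += lr * trace
    return weight

# Simultaneous pairing is learned even with no memory trace (retention=0) ...
print(condition({3}, {3}, 10, retention=0.0))
# ... a delayed puff with no trace leaves nothing to associate ...
print(condition({3}, {5}, 10, retention=0.0))   # 0.0
# ... but a lingering trace (a stand-in for conscious maintenance) bridges the gap.
print(condition({3}, {5}, 10, retention=0.8))
```

The "retention" parameter plays the role the article assigns to consciousness: holding a stimulus in a usable form after it has ended.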
These examples suggest that one function of consciousness is to widen the temporal window in which we perceive the world, stretching out the present moment. With consciousness, sensory information about a stimulus can be maintained in a flexible, usable form even after the stimulus has disappeared, and the brain can keep generating sensory representations with no direct sensory input at all. This bridging role of consciousness has experimental support. The scientists Francis Crick and Christof Koch proposed that the brain uses only a portion of its visual input for planning future behavior, and that only that portion should be processed consciously.
What these examples have in common is that they all involve the generation of counterfactual information: producing sensory representations without direct sensory input. We call it "counterfactual" because it concerns memories of the past or predictions about future behavior rather than events actually happening now, and "generation" because it is not mere information processing but an active process of creating and testing hypothetical scenarios. In a one-way "feedforward" process, sensory input flows from lower to higher brain regions and is compressed into ever more abstract sensory representations. But neurophysiological research shows that no matter how elaborate this feedforward sweep is, it is not associated with conscious experience. Conscious experience also requires feedback from higher brain regions back to lower ones.
The ability to generate counterfactual information lets a conscious mind detach from its immediate environment and perform non-reflexive behaviors, such as waiting three seconds before acting. Generating counterfactual information requires an "internal model" that captures the regularities of the external world, a model the mind relies on for reasoning, motor control, and mental simulation.
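The idea of acting through an internal model can be sketched in a few lines. Everything here is hypothetical (a one-dimensional world, a hand-written transition rule, exhaustive search): the agent mentally rolls out candidate action sequences against its model of how the world works and commits only to the best one, without taking any real action during the search.

```python
from itertools import product

def internal_model(state, action):
    """The agent's learned law of the world: here, simple 1-D motion."""
    return state + {"left": -1, "stay": 0, "right": 1}[action]

def plan(start, goal, horizon=3):
    """Mentally simulate every action sequence and keep the one that
    ends closest to the goal; no real action is taken during the search."""
    best_seq, best_dist = None, float("inf")
    for seq in product(["left", "stay", "right"], repeat=horizon):
        state = start
        for a in seq:            # counterfactual rollout, not real movement
            state = internal_model(state, a)
        if abs(state - goal) < best_dist:
            best_seq, best_dist = seq, abs(state - goal)
    return best_seq

print(plan(start=0, goal=2))   # a 3-step sequence whose net displacement is +2
```

Real systems learn the transition model from data and search far more cleverly, but the structure is the same: simulate first, act later.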
Today's artificial intelligence has sophisticated trained models, but it depends on data supplied by humans. An artificial intelligence that could generate counterfactual information could generate data for itself, imagine situations it might encounter, and adapt more flexibly to novel situations it has never seen. It could also become curious: when an artificial intelligence is unsure what will happen next, it can try things and find out.
Some research teams are already trying to give artificial intelligence this capability, and on several occasions their systems seem to have behaved in unexpected ways. In one experiment, researchers simulated an artificial intelligence system capable of driving a truck. Ordinarily, if you want it to climb a hill, a human must set hill-climbing as the goal, and the artificial intelligence finds the best path to achieve it. But a system endowed with curiosity treated the hill as a problem in its own right and set about finding a way to climb it even without any human instruction. The finding still needs further research to verify.
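One common way to implement curiosity of this kind is to reward the agent for surprise, that is, for transitions its internal model predicts badly. The sketch below is a generic illustration of that idea under my own simplifying assumptions, not the truck experiment itself: the class name, the integer states, and the table-based model are all invented for the example.

```python
class CuriousAgent:
    """Rewards itself for visiting outcomes its world-model predicts badly."""

    def __init__(self):
        self.model = {}                            # state -> predicted next state

    def intrinsic_reward(self, state, next_state):
        predicted = self.model.get(state, state)   # naive default: nothing changes
        error = abs(next_state - predicted)        # surprise = prediction error
        self.model[state] = next_state             # learn from the observation
        return error

agent = CuriousAgent()
# The first climb up the hill (state 0 -> 5) is surprising, hence rewarding ...
print(agent.intrinsic_reward(0, 5))   # 5
# ... but once the model has learned the transition, it is no longer interesting.
print(agent.intrinsic_reward(0, 5))   # 0
```

Because the reward vanishes once a transition becomes predictable, such an agent keeps seeking out situations it does not yet understand, which is exactly the hill-climbing behavior described above.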
If we take "retrospection" and "imagination" as the two key ingredients of consciousness, then sooner or later we will develop conscious artificial intelligence, because both functions are useful to any machine: we want machines to explain how and why they do what they do. Building such machines will also exercise our own imagination, and it will be the ultimate test of our ability to generate counterfactual information.