Should robots have their own independent ideas?
Robot technology has matured considerably. According to recent news reports, highly human-like robots can now appear to have ideas of their own: they talk freely with people, follow the content of a conversation, know when to speak, and display rich facial expressions. Even if the effect still seems unnatural, it shows that the technology has reached a new level. Whether to vigorously develop and produce such robots in the future is a question worth pondering: if robots someday have human-like minds in steel bodies, as they do in the movies, machines with human-level intelligence could become a formidable adversary.

So how do chatbots learn from humans? "The computer first needs to find out what it doesn't know and then find the correct answer. In fact, this is also a model for how people learn from each other," Rudnicky says, adding that chatbots can also learn through experiments. However, Rudnicky does not regard Siri as a chat system in the strict sense. "I call it an information access system. It lets you call someone in your address book, check the weather, or retrieve other kinds of information. Of course, the program can also do some clever things, for example with questions like 'Can you marry me?'" he said. Although Alexa has more skills, it is essentially similar.
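The learning model Rudnicky describes, in which the system first discovers what it does not know and then acquires the answer, can be sketched in a few lines. This is only an illustrative toy, not Rudnicky's actual system; the class and method names here are invented for the example.

```python
# A minimal sketch of the "find out what it doesn't know, then learn it"
# loop described above. All names (KnowledgeBase, reply, learn) are
# hypothetical, invented for this illustration.

class KnowledgeBase:
    def __init__(self):
        self.facts = {}  # term -> definition learned from the user

    def reply(self, term):
        """Answer if the term is known; otherwise admit ignorance and ask."""
        if term in self.facts:
            return self.facts[term]
        return f"I don't know what '{term}' means. Can you tell me?"

    def learn(self, term, definition):
        """Store what the user teaches us, closing the knowledge gap."""
        self.facts[term] = definition


kb = KnowledgeBase()
print(kb.reply("ontology"))  # the bot admits ignorance and asks
kb.learn("ontology", "a shared, formal representation of knowledge")
print(kb.reply("ontology"))  # now it answers from what it learned
```

The key step is that the system can represent its own ignorance (the missing dictionary entry) and turn it into a question, which is also how the interview characterizes human-to-human learning.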
Almost every technology company has a virtual assistant: Apple's Siri, Amazon's Alexa, Microsoft's Cortana, and Google's Assistant. What do these programs, known as chatbots, actually do? Recently, Science magazine interviewed Alexander Rudnicky, a computer scientist at Carnegie Mellon University in the United States, about chatbots. Rudnicky said they began as conversational systems that could interact with people through text or voice; in the scientific community, "chat" refers to non-purposeful interaction.
Can different chatbots combine their knowledge? Rudnicky indicated that, to some extent, they share certain standards. "They all need to agree on the same knowledge representation, that is, an ontology. In principle, people can then share data to some extent. However, knowledge sharing comes with a series of strict terms," he said.
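Why agreeing on one ontology matters can be shown with a toy example: each bot keeps its own internal field names, and data only crosses between bots through the agreed vocabulary. Everything here (the ontology terms, the field names, the `export_facts` helper) is a hypothetical illustration, not any real assistant's interchange format.

```python
# A toy illustration of knowledge sharing through a common ontology.
# Each bot's internal names differ, but only facts mapped onto the
# shared vocabulary can be exchanged. All names here are invented.

SHARED_ONTOLOGY = {"person", "city", "temperature_c"}  # the agreed terms

def export_facts(internal_facts, mapping):
    """Translate a bot's internal fields into the shared ontology."""
    shared = {}
    for local_key, value in internal_facts.items():
        shared_key = mapping.get(local_key)
        if shared_key in SHARED_ONTOLOGY:  # only agreed terms cross over
            shared[shared_key] = value
    return shared

# Bot A stores weather facts under its own field names...
bot_a_facts = {"loc": "Pittsburgh", "temp": 21}
bot_a_mapping = {"loc": "city", "temp": "temperature_c"}

# ...but exports them in the shared vocabulary another bot also understands.
shared_view = export_facts(bot_a_facts, bot_a_mapping)
print(shared_view)  # {'city': 'Pittsburgh', 'temperature_c': 21}
```

Any field that is not mapped into the shared ontology is simply dropped on export, which mirrors Rudnicky's point that sharing works only "to some extent" and under agreed terms.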
In addition, chatbot programs still face major challenges. Rudnicky noted that, historically, chatbot developers had to enumerate everything a user might say, which was a serious obstacle for a long time. The latest systems instead use so-called intent recognition to extract the implied meaning of what people say. "They associate words, find the closest expression, and respond," he said, adding that making use of context and world knowledge also remains difficult.
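The "associate words, find the closest expression, and respond" idea can be sketched as nearest-match intent recognition. The intents, example phrases, and word-overlap scoring below are illustrative assumptions; real systems use learned classifiers rather than this bare-bones matching.

```python
# A bare-bones sketch of intent recognition as described above:
# score the user's words against each known intent's vocabulary,
# pick the closest match, and respond. Illustrative only.

INTENTS = {
    "weather": {"what", "is", "the", "weather", "today", "forecast", "rain"},
    "call":    {"call", "phone", "dial", "ring", "contact"},
    "marry":   {"will", "you", "marry", "me"},
}

RESPONSES = {
    "weather": "Let me check the forecast.",
    "call":    "Who would you like to call?",
    "marry":   "I'm flattered, but I'm just software.",
}

def recognize_intent(utterance):
    """Return the intent whose vocabulary overlaps the utterance most."""
    words = {w.strip("?,.!") for w in utterance.lower().split()}
    def overlap(intent):
        return len(words & INTENTS[intent])
    best = max(INTENTS, key=overlap)
    return best if overlap(best) > 0 else None

intent = recognize_intent("Can you marry me?")
print(RESPONSES.get(intent, "Sorry, I didn't understand."))
```

This also makes the historical obstacle concrete: with enumeration, any unlisted phrasing fails outright, whereas closest-match scoring degrades gracefully to "nearest known intent" instead.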