For all the noise around artificial intelligence, building a service genuinely driven by AI is hard. How hard? Some startups have discovered in practice that it is far easier and cheaper to make humans work like machines than to make machines work like humans.
"Using humans to do the work lets a company skip over a host of technical and business-development challenges. Obviously it doesn't scale, but it allows a company to bypass the hardest part early on and build an initial user base," said ReadMe CEO Gregory Koberger, who says he has repeatedly run into this kind of pseudo artificial intelligence (pseudo AI). In his view, it amounts to prototyping the AI with human beings.
Pseudo-AI feature 1: Humans doing the work instead of AI
The most prominent recent example of the human/AI blurring Koberger describes came earlier this week, when it emerged that Google had allowed developers to read users' Gmail messages.
According to the Wall Street Journal, Google let hundreds of external software developers scan the inboxes of millions of users in order to serve more precisely targeted ads. These developers could train computers on users' Gmail messages, and even have employees read them, while Google was lax in overseeing the practice.
Then there is Edison Software, based in San Jose, whose artificial-intelligence engineers improved the company's "smart reply" feature by reading through the personal emails of hundreds of users. The catch: the company's privacy policy made no mention of humans reviewing users' emails.
Looking further back, the practice of humans standing in for artificial intelligence showed signs as early as 2008. That year Spinvox, a company that converted voicemail into text messages, was accused of having the work done by humans in overseas call centers rather than by machines.
Then, in 2016, Bloomberg reported on the plight of humans who spent 12 hours a day pretending to be chatbots in order to serve users. Some said the work was so numbing that they hoped to be replaced by actual robots.
In 2017, Expensify, a smart expense-management app, admitted that some of the receipts it claimed to process with its "smart scan technology" were in fact transcribed by hand. The receipts were posted to Amazon's Mechanical Turk crowdsourcing platform, where low-paid workers read and transcribed them.
Even Facebook, which has invested heavily in artificial intelligence, relied on humans to power M, its virtual assistant for Messenger.
Pseudo-AI feature 2: Faking AI results to attract investment
In some cases, humans are used to train artificial-intelligence systems and improve their accuracy. The business of a company called Scale is to supply human labor that produces training data for autonomous vehicles and other AI systems. Its "Scalers" review camera or sensor data and label the cars, pedestrians, and other objects in it; with enough human-proofread examples, the AI can begin to learn to recognize those objects on its own.
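To make the workflow concrete, here is a minimal sketch of what such human-produced labels might look like before they are fed to a training pipeline. The field names, the `BoundingBox` type, and the record layout are all illustrative assumptions, not Scale's actual format:

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    # Pixel coordinates of the labeled region: top-left corner, width, height
    x: int
    y: int
    w: int
    h: int
    label: str  # what the human reviewer saw, e.g. "car" or "pedestrian"

def to_training_record(image_id: str, boxes: list) -> dict:
    """Pack one frame's human annotations into a record a model could train on."""
    return {
        "image_id": image_id,
        "annotations": [
            {"bbox": [b.x, b.y, b.w, b.h], "label": b.label} for b in boxes
        ],
    }

# A human reviewer tags two objects in one camera frame
record = to_training_record(
    "frame_000123.jpg",
    [BoundingBox(40, 60, 120, 80, "car"),
     BoundingBox(300, 90, 30, 70, "pedestrian")],
)
```

The point of the structure is simply that each record pairs raw sensor data with human judgments; accumulate enough of these and a supervised model can be trained to reproduce the judgments without the humans.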
In other cases, companies keep up the pretense indefinitely, telling investors and users that they have developed scalable AI technology while secretly continuing to rely on human intelligence.
Pseudo-AI feature 3: Calling preset programs AI
Today, most products on the market carry a "smart" label. Conversational "intelligent" robots for children, for example, are really just toys running preset scripts; AI training courses sold as packaged bundles are really just programming courses; stock-trading software hyped as miraculous AI merely picks stocks with quantitative methods. Wei Zhe, former CEO at Alibaba, has even concluded that pseudo AI may account for as much as 90 or even 99 percent of the current market.
Simply put, these products are just "wearing a vest": whatever label is fashionable, they put it on, as if sticking an "artificial intelligence" label on a product makes it artificial intelligence. There is as yet no precise definition of the artificial-intelligence industry; the most widely accepted view is that an AI system must be capable of learning autonomously. It is therefore inevitable that some products exploit this loophole. But doing so disrupts the market and casts a shadow over the technology's development.
Users' attitude toward artificial intelligence: transparency
Alison Darcy, psychologist and founder of the chatbot Woebot, said: "Many times, what is behind artificial intelligence is people rather than algorithms. Building a good AI system requires a lot of data, and sometimes investors want to know whether there is enough demand for a service in a field before they invest."
But she said this approach is not appropriate for a psychological-support service like Woebot. "As psychologists, we are guided by ethical principles, and not deceiving people is obviously one of them."
That said, studies have shown that people find it easier to open up to a machine than to a human doctor. A team from the University of Southern California tested this with a virtual therapist named Ellie, and found that veterans with post-traumatic stress disorder were more willing to talk about their symptoms when they knew that Ellie was an AI system.
Still, some people believe companies should always be transparent about how their services operate.
"I don't like it. It's dishonest to me; it's deception. That's not something I want from a service I'm using," said one worker employed by a company that pretended its service was powered by artificial intelligence while actually hiring humans to do the work. "And on the workers' side, it feels like we're being pushed behind the scenes. I don't like a company using my labor and then lying to customers about what's really going on."
This moral dilemma has also prompted reflection on AI systems themselves. Take Google Duplex: a robotic assistant that can hold eerily realistic conversations with humans to book a restaurant table or a hair-salon appointment. For many people, that is an unsettling experience.
Darcy said, "In their demo version, I felt deceived." After user feedback, Google changed the way Duplex speaks so that it identifies itself before the conversation begins. Even so, some unease remains: what happens if, say, the assistant's voice is made to imitate a celebrity or a politician on a call?
As artificial intelligence develops, many people have already come to fear it. Without transparency in how AI technology is applied, the technology will not genuinely improve people's lives. Moreover, whether it is calling a simple preset program "artificial intelligence" or rebranding ordinary automation equipment as AI, this kind of pseudo-innovation hype will only do ever greater damage to real artificial intelligence.
Artificial intelligence requires clear-headed, objective judgment and solid, sustained effort. For now, the top priority for the development of artificial intelligence is to establish a more pragmatic environment for it.