One theory among evolutionary researchers holds that dogs evolved from wild beasts because only the animals that developed social intelligence could survive alongside humans. Thousands of years ago, wolves that lingered around human settlements gradually learned to read human intentions and emotions; in effect, their brains began adapting to ours. Over time their behavior, and even their appearance, became less ferocious, more attuned to human feelings, more symbiotic. At that point, they became dogs.
Dog evolution matters here because humans now live with another species, one more dangerous and powerful than any canine: the algorithm. Facebook's content is determined by algorithms; so is Amazon's, Spotify's, and Netflix's. Right now, some algorithm may be controlling the temperature of my house through the thermostat. If you interact with the digital world (and who doesn't?), you are connected to algorithms. To design products that are human-aware and user-friendly, we need to make sure these code systems understand our needs and intentions.
The evolution of the algorithm is part of human evolution.
The technology writer Christopher Steiner describes an algorithm as "a giant decision tree composed of one binary decision after another": a set of instructions executed in sequence to arrive at an ideal result. Information goes in, is processed by a known procedure, and an answer comes out.
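Steiner's description can be sketched as a short chain of binary decisions. This is a toy illustration only; the loan-approval rule and its thresholds are invented for the example:

```python
def decide(income, debt, years_employed):
    """A toy 'algorithm' in Steiner's sense: one binary decision
    after another, executed in sequence until an answer falls out."""
    if income < 30000:          # first binary split
        return "deny"
    if debt / income > 0.5:     # second binary split
        return "deny"
    if years_employed < 2:      # third binary split
        return "review"
    return "approve"

print(decide(50000, 10000, 5))  # → approve
```

Each `if` is one fork of the decision tree; the known procedure turns the input into exactly one output.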
The algorithm's situation is of course not quite the dog's: algorithms were invented by humans. But like the wolves at the start of domestication, we do not really understand them, and they are rarely coded to respond in ways humans find natural. Algorithms that interact with humans (which is probably every system humans use, including the stock market) should evolve to be not only effective but also understandable.
One side of the domestication story should not be overlooked: humans also evolved to live with dogs, and dogs changed us in turn. Dogs became part of the human ecosystem. There is evidence that dogs and humans co-evolved in how their brains process serotonin (a neurotransmitter). Given enough time, algorithms may have a similar effect on us, changing the way we think. Unlike dogs, algorithms may not change us at the genetic level, but they are already changing our behavior.
What algorithms do best
Algorithms are particularly good at five things: performing repetitive tasks quickly, making logical judgments between options, making predictive analyses, evaluating historical data, and discovering overlooked connections. All of these are among the things humans do worst.
If your work competes with an algorithm, as in high-frequency stock trading, you are likely to lose. Algorithms work at speeds humans cannot approach; even their slowest decisions are nearly instantaneous. Algorithms operate in milliseconds, a hummingbird's sense of time. Much has been written about the wealth created by high-frequency trading. The New York and Chicago exchanges will soon be connected at near light speed: 15 milliseconds, round trip. Only algorithms can make that speed useful.
This processing speed lets algorithms choose among decisions, often based on predictions drawn from logical analysis of data: a given set of conditions frequently leads to a given result. These predictions are not always correct. But because algorithms can process far more data, far faster, than humans ever could, they can make predictions sooner and act on the results.
Algorithms also excel at evaluating past events and historical data sets to sharpen predictions and suggest possible actions. Humans now generate massive amounts of data: big data in large systems, small-scale data on personal devices and in the quantified self. We need algorithmic help to figure out what all that data means and where its value lies.
All of these are strengths of algorithms, but they can become weaknesses the moment humans come into contact with them.
Awkward algorithmic interactions
Algorithms bring new and sometimes disorienting experiences. On one side, some algorithms work so well they seem like magic: you get exactly the right recommendation, or the fastest route from home to the office. You feel as if a powerful force is working on your behalf: call it the Genie Reaction.
On the other side is the frustration of algorithmic stupidity (the FAIL), which usually comes from an algorithm ignoring context: information about the environment or the subject at hand is unknown to it, or it cannot tell the difference. The navigation system routing you into a traffic jam may not know about the accident ahead; one set-top box famously classified a straight male viewer as gay and recommended content accordingly.
Beyond good and bad recommendations, stranger scenarios come with living alongside algorithms. At the end of Star Wars: A New Hope, Luke switches off his targeting computer and aims by feel. In the same way, we can trust our instincts and decide for ourselves whether we want an algorithm's help. It may be inconvenient, but it is sometimes exhilarating. Ignoring a recommendation or a suggested route to "beat the algorithm" can be a delightful new pastime, though it carries its own anxieties: what if Luke had missed? What if iTunes's Genius recommendation was actually good? What if the other route home really was faster?
Algorithms can also put people in uncomfortable, impersonal situations. A route that looks reasonable on the map may send you through three congested streets: technically passable, but only barely, and something almost no one would choose. Few people want to be an algorithm's lab rat, yet it happens from time to time.
In the same vein, there is a rift of values: what an algorithm values may not match what a person values at all. Most algorithms optimize only for efficiency and convenience. A navigation algorithm that thinks it can save you a minute will send you off the main road through a series of left turns, never asking whether you know the area, and never weighing the hassle of repeated turns against simply staying straight. Sometimes the extra minute is not worth it at all, but there is no way to make the algorithm understand that.
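One way to read this rift of values: the algorithm minimizes a cost function that contains only travel time, while the human's implicit cost function also charges for turns and unfamiliar roads. A minimal sketch, with all routes and weights invented for illustration:

```python
def algorithm_cost(route):
    # The navigation algorithm's objective: travel time, nothing else.
    return route["minutes"]

def human_cost(route, turn_penalty=0.5, unfamiliar_penalty=3.0):
    # A human's implicit objective also charges for each turn and
    # for leaving familiar roads. Weights are hypothetical.
    cost = route["minutes"]
    cost += turn_penalty * route["turns"]
    if not route["familiar"]:
        cost += unfamiliar_penalty
    return cost

main_road = {"minutes": 20, "turns": 1, "familiar": True}
shortcut  = {"minutes": 19, "turns": 6, "familiar": False}

# The algorithm prefers the shortcut; the human prefers the main road.
print(algorithm_cost(shortcut) < algorithm_cost(main_road))  # → True
print(human_cost(shortcut) > human_cost(main_road))          # → True
```

The two objectives disagree on the very same data, which is the rift: neither is computing wrongly, they are simply valuing different things.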
Algorithms: aliens among us
Ian Bogost writes in Alien Phenomenology, "We don't need to travel to other planets to find different species; they live among us, in the form of algorithms." Algorithms are not human: they neither care about nor respond to human intentions and feelings, unless, like the ancient wolves, they evolve to meet human needs.
But unlike the wolves, algorithms do not have thousands of years to evolve, and the consequences of getting them wrong are serious. The 2010 Flash Crash, in which the Dow fell nearly 1,000 points within minutes, was one example. Imagine the same thing happening in a power grid or a driverless car.
Evolving together
One way to speed up this evolution is to tell algorithms what human needs and values are: to build awareness of humans, and of the algorithms' own limits, into our code. Tell the algorithm what the context is, what our intentions are, how we feel, or let it detect these from our past and current behavior. For example: if the user has never driven this route, prefer main roads; if the user seems anxious, don't offer too many options. And when the algorithm judges wrong, we need ways to tell it so: this is not music I like, this is not the experience I wanted.
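The two example rules above can be sketched as context checks layered over a route recommender. Everything here, the field names, the contexts, the candidate routes, is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Context:
    route_is_familiar: bool   # has the user driven this route before?
    user_seems_anxious: bool  # inferred from recent behavior

def adjust_options(options, ctx):
    """Apply the two human-aware rules from the text to a list of
    candidate routes (dicts with 'name' and 'main_road' keys)."""
    # Rule 1: on unfamiliar routes, prefer main roads (if any exist).
    if not ctx.route_is_familiar:
        options = [o for o in options if o["main_road"]] or options
    # Rule 2: for an anxious user, don't offer too many choices.
    if ctx.user_seems_anxious:
        options = options[:1]
    return options

routes = [
    {"name": "shortcut", "main_road": False},
    {"name": "highway", "main_road": True},
    {"name": "avenue", "main_road": True},
]

ctx = Context(route_is_familiar=False, user_seems_anxious=True)
print([o["name"] for o in adjust_options(routes, ctx)])  # → ['highway']
```

The point is not the rules themselves but where they live: human context becomes an explicit input to the code, rather than something the algorithm ignores.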
An algorithm's feedback also needs to be tuned to human cognitive capacity. We cannot take in information at the speed of a code system, and we don't need all the data, only the meaningful points. Telling me there is an accident 20 miles ahead on my route is not, by itself, helpful; what I need is the algorithm's conclusion: that it is likely to slow my drive.
These things exist as code, and the ghosts in these machines are becoming stranger than their creators. As algorithms begin to take over our critical systems, humans need to make sure they understand us the way dogs do. If they can, perhaps one day we will think of the algorithm as man's best friend.
Why we should tame algorithms the way we domesticated dogs