I have more than ten years of experience with large, complex agile projects. Generally speaking, people plan and budget based on predictions of development outcomes. Often the development team estimates the stories, yet the overall project budget is set independently of those estimates. In complex projects especially, this frequently leads to unexpected, and unwelcome, results. Confronted with these problems, studying Daniel Kahneman's work and the Beyond Budgeting movement helped me better understand how planning, estimating, and budgeting relate to one another, and why traditional methods do not work. Of course, we all know how these things work in small projects, and we know how to plan and steer a project through iterations. But how do we decide whether to approve or reject a project? How do we define a budget to start a large project? And how do we know whether our small iterations (across many feature teams) add up to the long-term project goals? Note that I am not talking about small or ordinary projects, but about large ones: 50 to 300 developers working for 3 to 5 years, or product (or product-line) development of a similar scale.
Predictions aren't reliable.
Daniel Kahneman, the Nobel laureate in economics (though he is a psychologist), concluded that much of what happens is governed by chance. One of his examples comes from history: the embryo that developed into Adolf Hitler had a 50% chance of being born female. Who knows what direction history would have taken at that branch point, and what impact it would have had on the twentieth century? The point is that complex things cannot be predicted.
In addition, Kahneman collected related work from colleagues in the field to support his view. For example, Philip Tetlock, a psychologist at the University of Pennsylvania, collected more than 80,000 political and economic predictions. Note that the people making these predictions earn their living from them, so we are talking about real experts! The problem: their predictions were even worse than random guessing would have been. And when their predictions were proven wrong, few admitted it. Most experts produced excuses or reasons why they were actually right and only the timing was off, without offering any indication of what the right timing would be.
In another study, Terrance Odean, a professor at the University of California, Berkeley, analyzed the records of about 10,000 brokerage accounts, covering 163,000 trades. The interesting thing about stock trades is that every transaction requires someone who believes that selling the stock is the better move and someone else who believes that buying it is, and both sides of the trade generally consider themselves experts in the field. In his analysis, Odean found that, on average, the shares that were sold went on to perform 3.2% better than the shares that were bought.
The last example draws on Kahneman's own experience. He once served in the Israeli army, where he and his team created a test to assess candidates for a military career. You can picture the scene: after an interview, the candidates had to solve some very difficult problems together, build something, and overcome obstacles, and in the process it became "obvious" who led, who resisted, and who was a good team player. By observing applicants in this test, Kahneman and his team always developed high confidence in their judgments about an applicant's further qualification. Every few months, however, the feedback they received from the commanders showed that their assessments were only slightly better than blind guesses. Interestingly, Kahneman and his colleagues neither adjusted the method nor changed the conclusions they drew from their observations in response to this feedback. When you watch an applicant lead the others through the test, it still seems clear that this is the perfect candidate for a military career. They were so confident in their impressions that changing their conclusions seemed impossible.
So, what can we learn from these studies? There are actually two very important lessons to keep in mind:
First, prediction will always be error-prone, because the world, or any complex endeavor, is unpredictable.
Second, high confidence (the kind an expert typically displays, whether recommending soldiers or estimating a project) does not imply accuracy; it only means the expert has constructed a coherent story. Confidence (sometimes called intuition) can be trusted only in a stable, predictable environment.
There is actually a third lesson from Kahneman's research, one I did not take the trouble to verify against other studies, because it is the foundation of iterative development: unlike long-term predictions, short-term predictions based on past behavior, patterns, and results can be quite accurate.