The NFL (No Free Lunch) theorem says that, without reference to a specific problem, no learning algorithm is better than any other, and none is even better than random guessing. There is no universally applicable "optimal" learning algorithm: any learner must make assumptions (an inductive bias) about the problem domain, and the resulting classifier must be matched to that domain.
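For reference, a common formal statement of the theorem (the notation here is assumed for illustration, not taken from the original text) is that for any two learning algorithms $\mathfrak{L}_a$ and $\mathfrak{L}_b$, the expected off-training-set error $E_{ote}$, summed over all possible target functions $f$ given training data $X$, is identical:

$$\sum_f E_{ote}(\mathfrak{L}_a \mid X, f) = \sum_f E_{ote}(\mathfrak{L}_b \mid X, f)$$

In other words, any advantage $\mathfrak{L}_a$ has on some problems is exactly offset by its disadvantage on others.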
However, the premise of the NFL theorem is that all problems occur with equal probability, or that all problems are equally important. In practice, we face specific data drawn from specific distributions and need to solve specific problems, so we only care about how well a model handles the problem at hand, not how it would perform on every other problem. Comparing different models is only meaningful with respect to a specific problem.
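A minimal sketch of what such a problem-specific comparison looks like, assuming scikit-learn is available; the dataset and the two models below are illustrative choices, not taken from the original text:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# One specific problem: the built-in breast-cancer dataset.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
}

# The scores below are only meaningful for *this* dataset; on a
# different problem the ranking of the two models may be reversed.
for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: test accuracy = {acc:.3f}")
```

Whichever model wins here tells us nothing about which would win on other data; that is exactly the point the NFL theorem makes.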
This holds not only for machine learning but for algorithm design in general: without considering the actual problem to be solved, it is hard to judge the merits of one algorithm over another.