In my view, many trends of the past few years have been pointing in the same direction, and it's time to sum them up. There is nothing new in this article; it is just a synthesis of my thoughts and observations. I don't claim to paint the complete picture here (even though the background of that picture comes entirely from my own painful lessons). Let's get started.
Years of neglecting chunking
Even after 16 years, I still clearly remember how the [Booch OOAD] book discussed using both layering and chunking (some readers may prefer the synonym "modules"). At the time I was teaching a course on object-oriented analysis and design that used the book as its textbook. I found layering easy to discuss, because we were in constant contact with it back then (and it continued to influence us deeply afterwards), but chunking was harder to talk about.
In the real projects I was involved in we did apply chunking, but it is hard to demonstrate chunking with short examples. And as I remember it, we chunked mainly for technical reasons; it never came as naturally as layering.
The feeling of not quite "making the most of it" has stayed with me, but I let it slide.
Misuse of layering
When I wrote my first book [Nilsson NED] in 2001, my blind indulgence in layering reached its peak. The layering diagram in Figure 1 is already a simplified version.
Figure 1. Typical layering in the past (simplified)
As Figure 1 shows, the middle tier consists of a façade layer, a business logic layer, and a data access layer. On the data tier there is also a public stored procedure (sproc) layer, a private stored procedure layer, and sometimes a layer of views on top of the tables... (The UI part was layered too, as you can imagine.)
I called this layering approach my "default architecture," and, as the name suggests, it was where I started every new project. Of course, the specifics of the architecture changed over time, but the point is that I had the preconceived idea that no project could do without this rigorous layering.
I observed that:
With the database schema, the views, the private stored procedures, the public stored procedures, the data access layer, and so on all written by hand (to make sure everything came out exactly as expected, for performance and the like), there was often little time left for the business layer. So the business layer was usually very thin, doing little more than passing data from its left hand to its right, with a few placeholder comments thrown in; see the sketch after these observations.
As Figure 1 suggests, the data store is heavily defended: to get from the UI to the data store you must pass through quite a few protective layers. In most cases, however, the database schema was badly exposed anyway, because the Recordset pattern was used to move data from the database to the UI. And once the UI had finished modifying the data, it was written more or less straight back to the database... So much for protection.
Each layer was introduced on the grounds that "having it is better than not," rather than because it had been proven necessary. A perfect example of violating the YAGNI principle, right?
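To make the "left hand to right hand" point concrete, here is a minimal sketch of that kind of business layer. All the names are hypothetical, and Java merely stands in for whichever platform you use: the layer adds no rules of its own and simply shuttles untyped record data between the facade/UI and the data access layer.

```java
import java.util.List;
import java.util.Map;

// Hypothetical "thin" business layer: no business rules, it only forwards
// untyped record data between the facade/UI above and the data access below.
public class CustomerService {

    // Stub standing in for the hand-written data access layer and its sprocs.
    static class CustomerDataAccess {
        List<Map<String, Object>> fetchCustomers() { return List.of(); }
        void writeCustomers(List<Map<String, Object>> rows) { /* straight to the database */ }
    }

    private final CustomerDataAccess dataAccess = new CustomerDataAccess();

    // TODO: real business rules should go here... someday.
    public List<Map<String, Object>> getCustomers() {
        return dataAccess.fetchCustomers();      // left hand...
    }

    public void saveCustomers(List<Map<String, Object>> changedRows) {
        dataAccess.writeCustomers(changedRows);  // ...to right hand
    }
}
```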
Today, my typical starting point for layering is quite different. I begin with just a single layer, the domain model, and tackle first the part we really care about: the solution to the business problem.
Then layers are added only when necessary; for example, if a user interface is needed, a UI layer is added. Where it makes sense, the UI layer can sit directly on top of the domain model; where it doesn't, we look at what else is needed.
Next, if we want to persist the domain model to a relational database, that part has to enter the picture. Usually an object/relational mapper handles it well, automatically covering perhaps 80% of the mapping between the domain model and the database. Take a look at Figure 2.
Figure 2. Typical layering nowadays
I also try to make the database schema a "result" of the domain model by generating the schema automatically from it. That avoids a lot of heavy lifting and frees up time for the part we really care about: the domain model itself. (The same approach can be applied to the UI.)
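As an illustration, here is a minimal sketch of a domain-model-first class, assuming a JPA-style O/R mapper such as Hibernate (the class and its fields are made up). From mapping metadata like this the mapper can generate the table automatically, for example via Hibernate's hibernate.hbm2ddl.auto setting, so the schema really does become a result of the model.

```java
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;

// A plain domain class in the domain model layer. With a JPA-style O/R mapper
// (Hibernate, for example) the corresponding table can be generated from this
// class, e.g. via hibernate.hbm2ddl.auto, so the schema follows the model.
@Entity
public class Invoice {

    @Id
    @GeneratedValue
    private Long id;

    private String customerName;
    private int totalAmount;   // state expressed in business terms, not storage terms

    // Domain behaviour lives next to the state it protects.
    public void addLine(int amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        totalAmount += amount;
    }

    public int getTotalAmount() {
        return totalAmount;
    }
}
```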
This way I no longer cling to a rigid layering scheme and can focus instead on finding the right chunks. And because the whole solution is divided into small chunks, the layering within each chunk can be more flexible, since the scale is smaller.
Splitting teams by technology
While writing the previous section, I was reminded of the software architecture workshop held in Lillehammer, Norway, in 2005 [Fowler LayeringPrinciples]. At that meeting we discussed how to split large teams into smaller ones.
Both before and after that meeting I have had experience with splitting teams, often by technology but also by function. I especially remember one project where the split by function worked poorly for one team member: he lacked experience, so he had to struggle with almost every aspect of his piece.
In another project, splitting by technology looked very successful, but the members of that project were all very experienced. Even in that successful project, quite a few things would have gone more smoothly with a split by function; small changes cutting across the technical units would have involved fewer people, for example.
In short, I prefer to split teams by function. Everyone will still have some part they are particularly good at, but when you don't have to coordinate with and wait for someone else to do the work, most of it gets done faster.
Enterprise domain models
A few years ago it was very popular to try to create a single, all-encompassing data model for the whole enterprise. The idea was that once such a data model had been found and described, enormous business value could be created from it. So the message communicated to the business people was:
"Just give us two years of uninterrupted time, and we'll give you a well-defined model from which everything you want can be created effortlessly."
In my opinion, the enterprise data model was a big failure. There are certainly many reasons, but the following are probably the most important:
Even in mid-sized companies, the data model becomes far too large.
The data model tries to describe a moving target in a static way, and a large target is much harder to capture than a small one.
Large models are more or less generalized and thus lose contextual information.
I personally believe strongly that a big task is too much to chew in one bite, and that context is king.
As I said, attempts to build an enterprise data model usually failed. Now I'm seeing a somewhat ironic scenario: projects are repeating almost exactly the same attempt, only this time with an enterprise domain model instead. The arguments haven't changed, and I don't think the outcome, or the reasons behind it, will change either...
To be fair, the more common scenario is not literally chasing an "enterprise domain model" but simply creating one single, large domain model. The team then realizes that the big model needs to be split, but finds that a satisfactory way to split it is not easy to come by.
In any case, I strongly recommend splitting the domain model as soon as it shows signs of growing too large. And don't forget that a model has a context.
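As a purely hypothetical illustration of what a split by context can look like, here is a minimal sketch in which the same real-world customer is modeled twice, once per context, and each small model keeps only the data and rules its own context cares about.

```java
// In the ordering context, a customer is mostly a credit limit to check.
class OrderingCustomer {
    private final long customerId;
    private final int creditLimit;

    OrderingCustomer(long customerId, int creditLimit) {
        this.customerId = customerId;
        this.creditLimit = creditLimit;
    }

    boolean canPlaceOrder(int orderTotal) {
        return orderTotal <= creditLimit;
    }
}

// In the support context, the same real-world customer is a contact to reach.
class SupportCustomer {
    private final long customerId;
    private final String email;
    private boolean hasOpenTicket;

    SupportCustomer(long customerId, String email) {
        this.customerId = customerId;
        this.email = email;
    }

    void openTicket() {
        hasOpenTicket = true;
    }
}
```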
Interestingly, I've also heard that the first step in some very large SOA projects is to establish an enterprise document model. Will it be any more successful than the enterprise data/domain model? If I had to bet, I'd bet against it.