Introduction
In a paper called "Dynamic Business Applications Imperative," John R. Rymer, a senior analyst at Forrester, identified a fatal flaw in today's applications:
Today's applications force people to find their own way to map isolated pockets of information and functionality to their tasks and processes, and force IT staff to spend large budgets just keeping up with changing markets, policies, rules, and business models.
Over the next five years, IT's main goal should be to invent a new generation of enterprise software that adapts to the business and the way its people work, and that evolves as the business evolves.
Forrester calls this new generation dynamic business applications, emphasizing close alignment with business processes and the people who do the work (designed for people) and adaptability to business change (built for change). At this stage, the requirements for dynamic business applications are clearer than the design practices needed to create them. The tools are already at hand: pioneers in service-oriented architecture (SOA), business process management (BPM), and business rules, including independent software vendors (ISVs), have begun to show us this approach. Now is the time to start the journey.
In this two-part article, we look at the development of these dynamic business applications (DBAs) from the perspective of architecture and methodology, with some historical context. Our goal is to arrive at an approach that makes applications easy to adapt to business change and other necessary modifications. As companies focus on flexibility in the 21st century, DBAs are key to the success of both business and IT in the coming decades.
Figure 1. Flexibility and efficiency--the two main drivers of the enterprise in the 21st century
What does dynamic mean to us?
In the field of software engineering, many frameworks and products claim to be adaptive. Before we can judge how well a solution adapts to change, we need a reliable definition of how systems change, that is, of their dynamics.
Early object-oriented methodologies recognized that, to keep system analysis neutral, it must be based on two types of real-world input [1]:
Real-world entities--capturing real-world entities and the relationships between them helps analysts begin to look at requirements in a systematic, structured, objective way rather than from a technical, subjective point of view
Real-world events--system behavior is driven only by the occurrence of events that change the state of the real world
In such a context, we can always identify one or more core entities for each system being analyzed. Each core entity has three associated elements: events, states, and a lifecycle. Each event represents a state change, and the ordered sequence of an entity's normal states constitutes its lifecycle. There is, however, an important difference between events that trigger state changes as part of the normal process and events that trigger state changes but are not part of the normal process. For example, after a product order has been submitted, the set of expected events includes payment processing and order shipment. When a user changes the order, or when the enterprise changes the price, we cannot assume those actions are part of the normal process, so they do not belong to the lifecycle of the entity, in this case the order. The lifecycle of a core entity instance uniquely defines what the system handles in normal operation; all other event types, such as changes or intermediate steps, are treated differently.
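As a minimal sketch of this distinction, consider the following Java fragment. All names here (Order, OrderState, and the methods) are hypothetical and not taken from any particular product: the lifecycle methods advance the order along its normal states, while an out-of-band change such as a price update modifies the entity without touching its lifecycle.

Listing 1. Lifecycle events versus out-of-band events for an order entity

enum OrderState { SUBMITTED, PAID, SHIPPED }

class Order {
    private OrderState state = OrderState.SUBMITTED;
    private double price;

    Order(double price) { this.price = price; }

    // Lifecycle event: part of the normal process; moves the order to its next state.
    void recordPayment() {
        if (state != OrderState.SUBMITTED) {
            throw new IllegalStateException("order is not awaiting payment");
        }
        state = OrderState.PAID;
    }

    // Lifecycle event: only valid once payment has been recorded.
    void ship() {
        if (state != OrderState.PAID) {
            throw new IllegalStateException("order has not been paid");
        }
        state = OrderState.SHIPPED;
    }

    // Out-of-band event: changes the entity but is not part of its lifecycle,
    // so the state is deliberately left untouched.
    void changePrice(double newPrice) {
        this.price = newPrice;
    }

    OrderState currentState() { return state; }
}

Guarding each lifecycle method with the current state keeps the normal path explicit, which is exactly the ordering that the lifecycle expresses.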
This scenario is familiar to many engineers: a system model contains a core entity structure with a set of events that form the entity's lifecycle. Such a model is clear to both analysts and designers and is easy to understand. Modeling tools such as finite state machines, entity diagrams, entity state transitions, and data flow diagrams have been refined over nearly 20 years to support this approach. Billions of lines of software for complex systems such as the Airbus A380, or the F-22, the world's most advanced fighter, have been written this way. Visualizing the entity lifecycle with an object flow diagram, the underlying model for capturing events and state transitions, is the key to this approach. In this sense the model can be considered static, because the entire system state is determined at any point on the timeline.
Figure 2. Event models, state changes, and lifecycles are the core of normal operations
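To make the static nature of such a model concrete, the following sketch (again with hypothetical names, not drawn from the article's figures) pins the entire lifecycle down in a transition table fixed at design time, so the states an order can reach at any point on the timeline are fully determined.

Listing 2. A design-time transition table as a static lifecycle model

import java.util.EnumMap;
import java.util.Map;

// Every legal transition is enumerated up front; nothing outside the table
// can ever advance the lifecycle. Names are illustrative only.
final class OrderLifecycle {
    enum State { SUBMITTED, PAID, SHIPPED, CLOSED }
    enum Event { PAYMENT_RECEIVED, ORDER_SHIPPED, DELIVERY_CONFIRMED }

    private static final Map<State, Map<Event, State>> TRANSITIONS =
            new EnumMap<>(State.class);
    static {
        TRANSITIONS.put(State.SUBMITTED, Map.of(Event.PAYMENT_RECEIVED, State.PAID));
        TRANSITIONS.put(State.PAID, Map.of(Event.ORDER_SHIPPED, State.SHIPPED));
        TRANSITIONS.put(State.SHIPPED, Map.of(Event.DELIVERY_CONFIRMED, State.CLOSED));
    }

    private State current = State.SUBMITTED;

    // Fires a lifecycle event; anything outside the table is rejected.
    State fire(Event event) {
        State next = TRANSITIONS.getOrDefault(current, Map.of()).get(event);
        if (next == null) {
            throw new IllegalStateException(event + " is not allowed in state " + current);
        }
        current = next;
        return current;
    }
}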
The relationships among normal events, states, and lifecycles, and the distinction between normal events and other event types, are the basis for understanding the dynamic operational framework proposed here. As James Martin and James Odell wrote long ago about object-oriented analysis and design, analysts, designers, and implementers should use the same system model, rather than analysts thinking in data flow diagrams, designers in structure charts, and programmers in Java and SQL. In that shared context, the analyst identifies object types and thinks about the events that change an object's state. End users share the same understanding: they too should think in terms of object types, events, changes in object state, and the business rules that trigger and control events. Martin and Odell emphasized the importance of object flow diagrams to system designers: "Event schemas are appropriate for describing processes in terms of events, triggers, conditions, and actions. But this is not a good way to describe a large, complex process. A system domain is often too large or too complex to be expressed as events and triggers. In addition, sometimes only a high-level awareness is necessary. This is especially true for strategic-level planning. In such cases, an object flow diagram is useful. Object flow diagrams (OFDs) are similar to data flow diagrams (DFDs) in that they describe the interfaces between activities. In a DFD, this interface passes data. In object technology, we are no longer limited to passing data. Instead, the diagram should represent any kind of thing that is passed from one activity to another: a report, parts, finished goods, designs, services, hardware, software, or data. In short, an OFD shows the objects that are produced and the activities that produce and exchange them."
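As a rough illustration of the object-flow idea, and not of any notation Martin and Odell prescribe, activities can be modeled as units that consume and produce objects of any kind (a design, a finished good, a report) rather than only data. The types and names below are invented for the example.

Listing 3. Activities producing and exchanging objects, in the spirit of an OFD

import java.util.function.Function;

// An activity consumes one kind of object and produces another.
interface Activity<I, O> extends Function<I, O> { }

record Design(String spec) { }
record FinishedGood(Design design, String serialNumber) { }
record Report(String summary) { }

class ObjectFlowDemo {
    public static void main(String[] args) {
        // Each activity consumes the object produced by the previous one.
        Activity<String, Design> engineering = spec -> new Design(spec);
        Activity<Design, FinishedGood> manufacturing =
                d -> new FinishedGood(d, "SN-0001");
        Activity<FinishedGood, Report> qualityControl =
                g -> new Report("Inspected " + g.serialNumber());

        Report report = engineering
                .andThen(manufacturing)
                .andThen(qualityControl)
                .apply("pump housing, rev B");
        System.out.println(report.summary());
    }
}

The point of the sketch is only that the things flowing between activities are typed objects of any kind, which is what an OFD depicts at the diagram level.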
To capture the flow of information associated with a business operation, business analysts complement the OO methodology with value stream mapping. Value stream mapping originated at Toyota and is closely associated with lean manufacturing. The U.S. Environmental Protection Agency defines it as "a lean process-mapping method used to understand the sequence of activities and information flows used to produce a product or deliver a service." The key words here are "product" and "service": they point to the unifying role that well-defined information flows play throughout the enterprise.
Combining these two concepts, the process flow diagram and the value stream map, yields the foundation of a framework that can easily be translated into OO terms (Figure 2) and that represents the business scope of the whole enterprise.
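One way to picture that translation into OO is sketched below, under the assumption that each step in the stream hands over both a work product (the object flow) and the information that accompanies it; the types here are hypothetical and chosen only for illustration.

Listing 4. A value-stream step carrying both the object flow and the information flow

import java.util.List;

record WorkProduct(String name) { }
record Information(String message) { }
record HandOff(WorkProduct product, Information info) { }

// Each step transforms the work product and the information together.
interface ValueStreamStep {
    HandOff perform(HandOff input);
}

class ValueStreamDemo {
    public static void main(String[] args) {
        ValueStreamStep assemble = in -> new HandOff(
                new WorkProduct("assembled unit"),
                new Information("assembly complete for " + in.product().name()));
        ValueStreamStep pack = in -> new HandOff(
                new WorkProduct("packed unit"),
                new Information("ready to ship: " + in.info().message()));

        List<ValueStreamStep> stream = List.of(assemble, pack);
        HandOff current = new HandOff(new WorkProduct("raw parts"),
                new Information("order 42 released"));
        for (ValueStreamStep step : stream) {
            current = step.perform(current);
        }
        System.out.println(current.product().name() + " / " + current.info().message());
    }
}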