10x Efficiency Improvement -- Building a Basic Web R&D System

Source: Internet
Author: User
Tags: web

1 Introduction

The basic web R&D system refers to the combination of technologies, tools, and organizational structures that first-line engineers use directly in web development. Past discussions of improving enterprise R&D efficiency mostly revolved around "encapsulating the underlying core technologies into infrastructure through cloud computing, cloud storage, and the like". In practice, however, we found that with:

    • The internet penetrating every industry, with business exploding
    • Fierce competition among enterprises, with ever higher demands on speed and quality
    • First-line engineering teams growing, and management costs growing with them

Against this backdrop, beyond the underlying core technologies, the efficiency of first-line web R&D has gradually become an important factor in winning the battle.

In reality, however, because first-line R&D work is seen as replaceable, it receives too little attention and lacks unified, in-depth planning and management. In fact, treating the application framework, testing, deployment, monitoring, and the other areas first-line engineers touch directly as one complete system, and building integrated infrastructure for it, can bring a huge performance improvement to the enterprise's business development.

In the "Moon Phase" chapter, we will cover the components of the basic web R&D system and delve into its key technologies and problems. The "Tides" chapter will introduce how, in cooperation with the R&D system, adjustments to the organizational structure and management means can further unlock the team's potential and create a more efficient organization.

In addition, we hope this content can also provide some career-planning guidance for first-line engineers.

2 Moon Phase

The R&D system we are going to discuss covers two dimensions, the "development process" and the "systems". It can be depicted in one big picture:

As the picture shows, taking all of this as a whole matches the current internet-company model of "core technology" + "web development capability", which can produce applications quickly. Among the systems, "user", "permission", and "process" form the iron triangle found in almost every system, so we include them in the basic R&D system as well.

Now let's look at each part. From the process point of view, the keys to improving efficiency are "tooling" and "automation", and these are the two points from which we will cut in.

2.1 Design

The first link is design. Combining design and coding is currently the most imaginative and also the least mature area in the industry. Current attempts at automating it can be summed up in two categories:

The first category: agree on rules with the designer, and convert the design draft according to those rules. The key to this approach is that "the rules must be simple enough for designers to accept, and visual relationships must be fully transformable into relationships in the program". Here is an example of a "visual relationship" versus a "common program relationship". Consider a scene on a web page:

It is easily understood as a tab component with a button embedded in it, a "nesting relationship" that might be written in the program with HTML like this:

    <Tabs>
      <Tabs.Pane title="tab1">
        <Button />
      </Tabs.Pane>
      <Tabs.Pane title="tab2" />
    </Tabs>

In modern design tools, however, layer information represents only the front-to-back visual stacking order.

This mismatch means a designer can express the same visual effect in ten thousand different ways. Therefore, to ensure the relationship is identified correctly, you must agree with the designer to create layers in a specific way. The problem is that the convention itself has no practical meaning for the designer; it is merely a constraint on them. Besides nesting relationships, positional relationships have the same problem: the design draft a current design tool outputs is only the visual effect at one specific size, while the actual product's size varies by device and even changes dynamically with browser window zooming:

How to express this kind of change is yet another constraint on designers. The optimistic view is that, technically speaking, it is always possible to implement.

The second kind of attempt builds dedicated design tools, as the game industry does, which fundamentally solves the problems above.

(React Studio)

The idea is simple: since designers can express the same thing in many ways, why not constrain them from the tool side? Although it is equally a constraint, the burden on the designer is much smaller: there is nothing extra to memorize, just follow the tool's guidance. We can even offer advanced features that prevent certain human errors to attract designers. The only drawback of this approach is the one-time learning cost.

Although current automation solutions only cover the step from "visual draft" to "static program view" and do not include the automation of interaction logic, they are still of great significance. Statistics on front-end programmers' work show that more than half of their time goes to "adjusting sizes, positions, pixels, and color values", and the better the front-end engineer, the larger this proportion. That is because engineers can shorten logic-writing time by an order of magnitude by improving their own skills, while the time spent writing styles is inherently hard to shorten by that magnitude.

If the R&D system can automatically convert designs into usable code, traditional web page development will undoubtedly improve greatly. While a dedicated tool may be the final direction, in today's reality it can meet resistance because it increases the designer's burden, so it may be more appropriate to transition by establishing conventions inside existing design tools. In a convention-based scheme, the focus becomes how to keep the conventions from burdening designers too much while still solving the rule-transformation problems described above. The solution in practice is to compensate designers through advanced capabilities in tooling. The details have been described in "The outlook for automatically generating usable pages from design drafts", so we will not repeat them here.

2.2 Development, testing and monitoring

We discuss these three links together because they have upstream-downstream relationships in technology decisions. In the past, when a large team planned an R&D system, development, testing, and monitoring were often planned by different teams, and each team wanted to build a platform. Slowly it became clear that this idea is problematic: a platform must account for access from many different upstreams, paying the cost either of abstracting its services low enough or of adapting to each upstream individually. Among the three links, the runtime framework (application framework) used in development is the core of tooling and automation: invest a little more in the runtime framework, and the development cost of the testing and monitoring behind it can drop very low.

A front-end example: when building a platform for assembling pages visually, we designed a solution that unifies the data of all components into a single tree.

In traditional React, the page's unique state is expressed by the combination of every component's state and props, scattered among the components and hard to collect. In this design, a global state tree expresses the page's state; store every changed state tree, and the page's dynamic changes can be played back. Going further, using this feature we implemented a "record as test case" function within 200 lines of code. Users do not have to write any obscure test-case code: while debugging their own pages, once everything feels fine, they can save the debugging process as a test case.
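To make the idea concrete, here is a minimal sketch, assuming a framework that notifies a recorder on every state-tree change; the article does not show its 200-line implementation, so all names below are illustrative:

    // Illustrative only: snapshot the global state tree on every change,
    // then save the debugging session as a replayable test case.
    type StateTree = Record<string, unknown>;

    class Recorder {
      private snapshots: StateTree[] = [];

      // Assumed hook: the framework calls this whenever the state tree changes.
      onChange(tree: StateTree): void {
        this.snapshots.push(structuredClone(tree)); // deep copy keeps history intact
      }

      saveAsTestCase(name: string): { name: string; steps: StateTree[] } {
        return { name, steps: this.snapshots };
      }
    }

    // Replay: render each recorded tree and compare against the stored output.
    function replay(steps: StateTree[], render: (t: StateTree) => string): string[] {
      return steps.map(render);
    }

Replaying the stored snapshots through the page's render function reproduces the dynamic behavior, which is what makes "save the debugging process as a test case" possible.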

An example at the interface layer: our runtime framework's interfaces use GraphQL, and we have said goodbye to hand-written interface contracts; all of them are generated by ticking fields in a visual view.

This solves two common problems in R&D:

    • It eliminates the spelling and other errors that can occur when interfaces are contracted by hand.
    • It automatically tracks every page that consumes each interface; once an interface changes, downstream consumers can be notified automatically, and adaptation code can even be generated so that downstream is unaffected.

This also greatly reduces pressure in testing. The previous approach was basically to scan code for interface errors, which consumed a lot of resources; now that seems unnecessary.
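To illustrate why declared GraphQL queries make consumer tracking nearly free, here is a sketch (the page routes and queries are invented) that statically indexes which pages consume which fields using the graphql-js parser:

    // Sketch: because each page's data needs are a declarative GraphQL document,
    // a build step can parse every query and index field -> consuming pages.
    import { parse, visit } from "graphql";

    const pageQueries: Record<string, string> = {
      "/orders": "query { orders { id total buyer { name } } }",
      "/users": "query { users { id name } }",
    };

    function indexConsumers(queries: Record<string, string>): Map<string, string[]> {
      const index = new Map<string, string[]>();
      for (const [page, source] of Object.entries(queries)) {
        visit(parse(source), {
          Field(node) {
            const field = node.name.value;
            index.set(field, [...(index.get(field) ?? []), page]);
          },
        });
      }
      return index;
    }

    // When the backend wants to change "buyer", the affected pages are known:
    console.log(indexConsumers(pageQueries).get("buyer")); // [ '/orders' ]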

The two examples above looked at the benefits from the development perspective; now let's look at testing and monitoring on their own.

The testing field has a hotspot: UI automation. Current UI automation has two schemes, screenshot comparison and DOM tree comparison.

(Screenster)

Both schemes share a common shortcoming: they cannot correctly identify the type of a change. For example, a requirement swaps the positions of two elements in the view while the logic stays the same, so the test platform should not raise an alarm. Without human intervention, the two schemes can hardly make that judgment, because both judge from the "final rendering result". If instead our tests are designed around the runtime framework, it becomes easy. Take the component-tree scheme described above: whether the page has any logical change is bound to the state tree, because the page's state is the state tree and the logic operates on that same tree. So as long as the tree has not changed, we can assume the page has not changed, without triggering an alarm.
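A sketch of that judgment, assuming the page state lives in one serializable tree: swapping two elements visually leaves the tree equal, so no alarm fires.

    // Sketch: detect logical change by diffing state trees, not rendered output.
    function deepEqual(a: unknown, b: unknown): boolean {
      if (a === b) return true;
      if (typeof a !== "object" || typeof b !== "object" || a === null || b === null) {
        return false;
      }
      const keysA = Object.keys(a as object);
      const keysB = Object.keys(b as object);
      if (keysA.length !== keysB.length) return false;
      return keysA.every((k) => deepEqual((a as any)[k], (b as any)[k]));
    }

    function shouldAlarm(baseline: object, current: object): boolean {
      return !deepEqual(baseline, current); // only a state change counts as a change
    }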

Beyond identifying changes, with runtime-framework-oriented design we can implement more advanced capabilities, such as restoring a browser-side error scene on the server side of a B/S architecture. In past debugging we usually had to talk with testers and manually restore the scene step by step; if a program can reproduce the error scene automatically, debugging speed and quality will undoubtedly improve. The key to this capability is that all data representing the page's state can be exposed externally and reset from outside. The moment some variable that determines page state is trapped inside a function, cannot be taken out, and cannot be serialized and passed to the server, the capability fails. This clearly needs application-framework support; the state-tree design in the earlier example was partly chosen to support it.
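Under the same assumption (all page state is held by the framework and is serializable), capturing and restoring an error scene is a JSON round trip; the interface names below are invented. The moment any state hides in a closure, captureScene stops being complete:

    // Sketch: error-scene capture and restore.
    interface Framework<S> {
      getState(): S;
      setState(next: S): void;
    }

    // On the user's browser, when an error occurs:
    function captureScene<S>(app: Framework<S>): string {
      return JSON.stringify(app.getState()); // breaks if state is trapped in closures
    }

    // On the developer's machine, or a server-side replica in a B/S setup:
    function restoreScene<S>(app: Framework<S>, captured: string): void {
      app.setState(JSON.parse(captured) as S);
    }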

Now look at the hotspot in monitoring, "codeless tracking", which has the same flavor as the automated testing above. "Codeless" means no tracking code is embedded by hand; a visual technique is usually used to "mark" what to track instead.

(GrowingIO)

The problem with some current industry solutions is, once again, failure to identify changes correctly. For example, if an element on the page changes position, can its tracking point remain unaffected? In this area, we also designed the solution at the application-framework level:

Our pages nest components in a template-like way, a structure we call the component tree, which can be parsed statically and is therefore easy to visualize. If the user wants a component to change, they must give the component a unique name and use that name in the logic code to manipulate its data.

With this premise, "codeless tracking" is achieved without any extra input. A tracking point is in essence a statistic about some logical function, so it is always attached to a logically relevant component, which therefore has a unique name. Then no matter how the component changes, as long as it is not deleted, the tracking information is unaffected. Meanwhile, if a tracked component is renamed or deleted, we can raise an alarm automatically. These features, too, were implemented in less than 200 lines of code.

(Tracking point names)
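A sketch of how naming makes tracking robust (the article's real implementation is not shown; names below are invented): events are keyed by the component's unique name rather than its layout position, and a build-time check alarms when a tracked name disappears from the statically parsed component tree.

    // Sketch: tracking keyed by component name, not DOM position.
    type TrackEvent = { component: string; action: string; at: number };

    const eventLog: TrackEvent[] = [];

    // Assumed hook: the framework calls this for every named-component interaction.
    function track(componentName: string, action: string): void {
      eventLog.push({ component: componentName, action, at: Date.now() });
    }

    // Build-time check: alarm when a tracked name no longer exists in the tree.
    function findBrokenTrackingPoints(tracked: string[], tree: Set<string>): string[] {
      return tracked.filter((name) => !tree.has(name));
    }

    // findBrokenTrackingPoints(["submitButton"], new Set(["loginForm"])) -> ["submitButton"]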

In conclusion, the two angles of testing and monitoring show that with just a little help from the runtime framework, these capabilities can be implemented at very small cost. When downstream links are developed against a well-defined upstream R&D framework, they need not worry about compatibility with assorted frameworks, and their capabilities can go further and reach more advanced functionality.

2.3 Framework Core Technology

In online exchanges we found that many teams' investment in frameworks stops at writing gadgets and wrapping open-source frameworks. Because the direction is unclear, they do not know where to invest or how much return to expect, so they dare not go deep. In fact, there are traces to follow for which directions and which technologies deserve investment. The source of those traces, as mentioned in "The ideal application framework", is the nature of a program: data and logic.

2.3.1 Data

Let's talk about data first. A framework's data means the data structures of the objects the framework holds at runtime. As long as the framework can answer two questions about them, it can show great power:

    • Where is the data?
    • What is the data's life cycle?

Knowing where the data is is the basic condition for managing it. Any state of the application can be seen as an expression of internal data. Only when the framework controls all of the data do functions such as scene restoration become possible. This has two guiding implications for our R&D:

First, when using a web framework that already has inversion of control and dependency injection, fully follow the framework's conventions and let the framework manage objects such as services end to end. Where the framework's syntax is cumbersome to write, it can be generated automatically by command-line or IDE tools.

Second, when we extend or create a framework, unified management of data should be the most basic bottom line, because it is the foundation of upstream and downstream automation. The test-recording capability in the previous section rests on a unified data source. A more interesting example: front-end Ajax requests have traditionally been independent API calls, a center-less mode. The problems this pattern can cause:

Request A is issued and, because of network problems, does not return for a while, so the user resends it as request A1. A1 returns quickly and its callback prompts success. Then the original request A times out and its callback prompts failure, so the last thing the user sees is a failure message.

Once a saga is introduced, all asynchronous operations are routed uniformly into a communication pipeline and can be managed across requests. Cancelling a single asynchronous request, or declaring multiple requests independent, racing, or take-the-last, can all be implemented easily. Centralized request management can also be visualized:

(Kuker)
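As an illustration of "only the last request counts", here is a minimal redux-saga sketch of the A/A1 scenario above (the endpoint and action names are invented): takeLatest cancels the in-flight saga when the request is re-issued, so the user can never see the stale failure.

    // Sketch: redux-saga turns the A/A1 race into a one-line policy.
    import { call, put, takeLatest } from "redux-saga/effects";

    function* fetchUser(action: { type: string; payload: { id: string } }) {
      try {
        const response = yield call(fetch, `/api/users/${action.payload.id}`);
        yield put({ type: "USER_FETCH_SUCCEEDED", payload: response });
      } catch (e) {
        yield put({ type: "USER_FETCH_FAILED", message: (e as Error).message });
      }
    }

    // Re-dispatching USER_FETCH_REQUESTED cancels the earlier fetchUser saga,
    // so a timed-out request A can no longer overwrite A1's success message.
    export function* rootSaga() {
      yield takeLatest("USER_FETCH_REQUESTED", fetchUser);
    }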

On to the second question: what is the data's life cycle? A life cycle is usually triggered by an external event, or automatically when execution reaches a certain stage. For deep development, there are two basic capabilities the framework must provide, namely support for:

    • Manually driving the life cycle
    • Copying and replacing internal data

Manually driving the life cycle matters for features such as automated testing; especially in basic tests, you need the ability to fully simulate external trigger conditions. Copying and replacing internal data provides the basis for advanced functions such as recording, scene restoration, and collaboration. Based on this capability, we are now trying to implement "users send their own error scene to the developer to reproduce". Note that in some languages copying objects is very expensive; it may be worth considering whether an immutable data format is better, or whether conventions and markers can provide cheap copying. Space is limited, so we will not expand further here. Readers interested in data issues in frameworks can search for topics such as Single Source of Truth and Shared Mutable State; the industry has many great discussions.
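A sketch of what these two capabilities buy in testing, with an invented framework interface: the test restores a recorded scene by replacing internal data, then drives the life cycle by hand instead of waiting for a real user event.

    // Sketch: manual life-cycle driving plus data replacement in a test.
    interface App<S> {
      getState(): S;
      replaceState(next: S): void;                     // capability 2: copy/replace data
      trigger(event: string, payload?: unknown): void; // capability 1: drive by hand
    }

    function testCheckout(app: App<{ cart: string[]; paid: boolean }>): void {
      app.replaceState({ cart: ["book"], paid: false }); // restore a recorded scene
      app.trigger("CLICK_PAY");                          // simulate the external trigger
      console.assert(app.getState().paid, "pay flow should mark the state as paid");
    }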

2.3.2 Logic

With data covered, we finally come to the most interesting part: logic. A framework is, in a sense, a way of providing logical expression. In improving the efficiency of logical expression, frameworks develop through two stages:

    • Provide patterns or techniques that let users write low-coupling, easy-to-reuse, easy-to-extend code, improving efficiency while the code is written. Examples: MVC, IoC.
    • Design a more reasonable DSL for a specific scene, enabling two-way conversion between code, graphs, and other representations, and automating the whole R&D chain.

Most frameworks today are in the first stage, whether server-side MVC or front-end MVVM. But there are a few second-stage attempts. For example, flow-based programming tries to express business logic entirely as data flow. Its code can be analyzed naturally into graphs, and can even be observed at runtime:

(NoFlo)

The event-based representation of business logic mentioned in "The ideal application framework" is also a DSL. These attempts are still some distance from the final goal: the ideal end state should allow flow charts, sequence diagrams, decision trees, and the other expressions commonly used in the business domain to convert to and from code in both directions. Although the gap is real and the road looks long, some relatively stable scenes already offer good experiences. The CMS framework Drupal is a good example. It defines the entire process of data publishing and a corresponding hook system, letting developers modify or add functionality at the hooks in the form of modules, and it once built a very prosperous community. More notably, many community modules are visual: the end user writes no code at all and completes the function by following the module's visual guidance. This amounts to two-way conversion between a DSL and code.

Whichever stage, the key technology is inseparable from the ability to analyze semantics. To be blunt, that is the ability to "know what the code is doing".

For a first-stage framework, the most influential factor is not "how much code is written" but whether code written under the framework's concepts is easy to understand and maintain. This is evident in larger projects with more participants. And whether the code's "semantics" are clear to some extent directly determines whether we can improve maintainability through technical means. For example, in a system that uses dependency injection, if every injection declaration clearly states what is injected, we can obtain a dependency graph through language-level support or simple string matching. Conversely, if the injected information is vague, perhaps a function or a model, without any obvious constraints, we may have to go through the syntax tree to find the injection entry points for analysis, multiplying the cost of implementation.

(Rekit Studio Dependency Analysis Chart)
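A sketch of the cheap end of that spectrum: if every injection is declared through one explicit, uniform form (the @inject convention here is invented), a dependency graph falls out of simple string matching, with no syntax-tree work at all.

    // Sketch: explicit injection declarations make dependency analysis a text scan.
    import * as fs from "fs";

    const INJECT_RE = /@inject\("([^"]+)"\)/g; // assumed declaration form

    function dependencyGraph(files: string[]): Map<string, string[]> {
      const graph = new Map<string, string[]>();
      for (const file of files) {
        const source = fs.readFileSync(file, "utf8");
        graph.set(file, [...source.matchAll(INJECT_RE)].map((m) => m[1]));
      }
      return graph;
    }

    // dependencyGraph(["src/OrderPage.ts"]) -> Map { "src/OrderPage.ts" => ["UserService"] }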

More important than dependency analysis is call-relationship analysis, which is particularly useful for understanding processes and especially for troubleshooting. A simple example: in a data-driven front-end framework, the view is entirely an expression of the data, so when the view is wrong, dynamically displaying the business stack (not the function stack) that modified the data is very useful; no step-by-step breakpoints are needed.
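A sketch of the idea, assuming every mutation goes through the framework: each data change records the business action that caused it, so a wrong view is traced by querying the stack rather than stepping through breakpoints.

    // Sketch: a "business stack" of who changed what, and why.
    type Mutation = { path: string; value: unknown; businessAction: string };

    const businessStack: Mutation[] = [];

    function mutate(state: Record<string, unknown>, path: string,
                    value: unknown, businessAction: string): void {
      businessStack.push({ path, value, businessAction }); // record provenance first
      state[path] = value;
    }

    // The price on screen looks wrong? Ask the stack:
    // businessStack.filter((m) => m.path === "price")
    //   -> [{ path: "price", value: -1, businessAction: "applyCoupon" }]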

Of course, this is harder to achieve, for two reasons. First, dependency analysis is usually static, done before running, while call relationships are generally dynamic, occurring at runtime. Second, dependencies are usually declared and easy to read out, while calls live in active statements, where you meet conditional judgments, loops, and even variables passed across function and class scopes, all much harder to analyze. The difficulty, in essence, is semantic ambiguity. When designing a framework, or extending our own, we can provide explicit semantics as far as possible through three points:

    • Turn active statements into passive declarations wherever possible
    • Fragment code into the smallest meaningful units
    • Eliminate side effects in user code and its dependence on variables in external scopes

The first two points are easy to understand: passive, declarative code structures are easier to analyze. In practice it takes much experience to ensure the declaration format covers all scenarios and stays easy to write, but it brings the most significant benefits. GraphQL is one of the best examples: the server expresses data structures and their relationships in a declarative structure, the client expresses the data it wants to acquire in a declarative structure, and the backend can use a unified engine to generate the calls, eliminating the time spent writing interfaces. The second point, fragmenting as much as possible, means that when we guide users to write code, concepts such as life cycles should be split as small as possible so the semantics are more fine-grained. Any resulting tedium can be solved with tools or syntactic sugar.

The third and most important point, eliminating side effects, means a user code fragment can run at any time without affecting the external environment. Eliminating dependence on variables in outer scopes means passing the data and services to be used in as parameters as far as possible. There are two benefits. For complex, hard-to-parse call relationships, you can wrap the object you want to observe and pass it in, dynamically obtaining the call relationships. And because fragments have no side effects, you can trial-run them before running the whole and cheaply obtain much information. Although syntax-tree tools are popular now, getting enough semantics entirely through parsing is still a lot of work. The three points above can be seen as a fast, inexpensive way to get there, and they work very well in practice.
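A sketch of the first benefit: because the fragment takes its dependency as a parameter and has no side effects, wrapping that dependency in a Proxy and trial-running the fragment yields its call relationships dynamically.

    // Sketch: observe a fragment's calls by wrapping its injected dependency.
    function traceCalls<T extends object>(target: T, log: string[]): T {
      return new Proxy(target, {
        get(obj, prop) {
          const value = Reflect.get(obj, prop);
          if (typeof value === "function") {
            return (...args: unknown[]) => {
              log.push(`${String(prop)}(${JSON.stringify(args)})`);
              return value.apply(obj, args);
            };
          }
          return value;
        },
      });
    }

    // A side-effect-free fragment with its dependency passed in:
    const fragment = (api: { getUser: (id: string) => string }) => api.getUser("42");

    const calls: string[] = [];
    fragment(traceCalls({ getUser: (id: string) => `user-${id}` }, calls));
    console.log(calls); // [ 'getUser(["42"])' ]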

2.3.3 Putting it together

Finally, it is worth mentioning that the principles and techniques under "data" and "logic" above are not independent of each other. In the two articles "Front-end services: the death and life of page-building tools" and "The Stone of Babel: an enterprise-class front-end component library", many of the techniques we use are in fact hybrids, for instance supporting both data-modification traceability and component-attribute visualization, and some depend on others. Compared to the two sources of "data" and "logic", though, those details are not important: once you have a grip on the problems these two sources face, the rest can be deduced.

The same applies to how open-source frameworks are used. For serious enterprise production, we should find the source of the problems the business faces and absorb the advanced ideas that solve them, but implement them ourselves, just as a programming language implements language features, rather than stopping at the level of wrapping open-source frameworks. To fit the widest possible range of scenes and gain a larger user base, an open-source framework gives a universal solution; once the business develops far enough and becomes unique enough, this universality becomes a huge burden biting into R&D efficiency. By then, the costs and risks of in-house development, migration, and adaptation have grown very large. As the previous sections show, once you grasp the few cores of framework development and start investing from a small scene, the cost is not high. Most important is the system formed by long-term accumulation: the resulting capabilities such as "process automation" and "lower downstream implementation costs" can keep helping the enterprise improve R&D efficiency.

2.4 General-purpose subsystems and core interfaces

From the process point of view, efficiency gains mainly depend on automation; from the system point of view, they mainly come from reusing capabilities. "User", "process", and "permission" exist in almost every business system, which is why the three are included in the scope of the basic R&D system. We will not go deep into each system's specific problems here; just two points:

First, treat them as products or subsystems, and do not platformize prematurely; this benefits packaging the whole system in the future. In the past, internet companies were accustomed to running these public systems as platforms that business systems accessed. But in recent years internet business has entered new areas where markets and users must often be isolated. The ability to replicate the system as a whole then becomes very important, so treating the three as subsystems or sub-products from the outset provides more flexibility for subsequent development.

Second, the three are not unrelated pieces, so do not agonize over decoupling them; establishing core system access specifications is what matters most. A permission system, whether RBAC or DAC, cannot do without the user system's support, and the process system depends on both permissions and users. If the runtime framework is a motherboard, the three should be complementary parts on that motherboard, providing pins together with the core system access. Likewise, to package the system as a whole, core system access specifications should be developed early.

3 Tides

In the "Moon phase" chapter, we discuss the structure and some problems of web basic research and development system. Compared to the specific technology itself, this package is more important for the growth and remodeling of the organization. In this chapter we will start with two enlightening questions from a large team and step into how to create a more efficient research and development organization program. Although the problem arises in large teams, there are still two points for small teams to learn from:

    • A small team may meet the same problems once its business grows it into a large team.
    • The problems themselves sprout during growth; correct guidance early on saves manpower and helps the company develop.
3.1 A forest of platforms

First of all, we are still talking about the web layer. This phenomenon is most common in large companies with many different businesses, for two reasons. One is that a company at the ten-thousand-person scale is in effect a hundred small companies of a hundred people each; there are bound to be many ideas, and naturally much repetition.

The other, more important reason is that infrastructure changes too fast in the web area, especially on the front end.

Whether it is the APIs supported by the underlying browser or the JavaScript language itself, change comes very quickly. Every change at the bottom inevitably opens vacancies in the various development links for the new technology, and many frameworks and platforms emerge to fill those vacancies. The phenomenon itself is reasonable, but with better unified management, great efficiency can be extracted in two respects.

First, unifying the independent platforms across the development process can greatly accelerate business development. What directly decides business development speed is the business development engineer. What they want most is a single platform that takes care of the whole development process and is as automated as possible. With more platforms, the business development engineer's costs of learning, communication, and collaboration climb steeply. How big is that cost? From first-line experience, it can often even exceed the business development itself.

Second, strengthening research into trends and directions prevents disorderly framework and platform investment, and thus deviation and wasted labor; this too is an efficiency gain. The bigger the team, the more obvious it is. Over the years, the huge ship of "infrastructure change" has crushed many frameworks and platforms, many of which never produced enough benefit to cover their R&D costs. Worse, for reasons such as personal credit, some outdated platforms are never taken offline, blocking the whole team's opportunity to progress together with the infrastructure and consuming even more future manpower.

The R&D system we put forward in "Moon Phase" is the concrete means of achieving these two improvements.

First, with a complete system treating the development process as a whole, implemented through a unified platform, the process can be better automated, reducing first-line engineers' communication and learning costs.

Second, investment in the runtime framework itself includes studying trends and new technologies, which alleviates the problem of being dragged along by infrastructure. And as we saw, once the runtime framework is researched to a certain depth, it greatly reduces the implementation costs of subsequent testing and monitoring, and the domino effect of infrastructure changes. The management lesson here is to be cautious of "platform" thinking. Especially in downstream links, "platform" thinking means adapting to many different upstreams and drafting many specifications, all of which are labor costs. As "Moon Phase" showed, when upstream is clear and unified, downstream can be implemented in a targeted way at very low cost, without "platform-level" investment. Of course, this only argues for reducing the technical investment; the links themselves remain very important.

A forest of platforms also indicates that the team holds a large disorderly force, which arises in the gaps of the division of labor across different parts of the team. Establishing the R&D system as early as possible channels this force into creating real value as early as possible.

3.2 Resource Pools

Many large companies' web development teams are used as resource pools: whichever business needs people gets them. There are two direct causes. First, from a management point of view, web-layer R&D work is relatively replaceable, which makes forming a resource pool possible. Second, web development sits at the bottom of the business decision chain and its staffing is relatively the tightest, so a resource pool that dynamically supports business development is the simplest solution. But the scheme is actually quite inefficient. Let us cut into this topic from the engineers' personal perspective.

First, subjectively, an engineer who is passionate and holds a professional vision of the future differs from a disengaged one by a multiple in efficiency. Apart from character, the formation of these two attitudes largely comes down to career-advancement channels. For one, like any job that can be treated as a resource pool, the work is replaceable and therefore not easily valued. For another, the current growth channels are insufficient. A web engineer's rising channel is either vertical, becoming the system leader as the business grows well enough and the system's importance increases, or horizontal, becoming a manager as the team expands. One is based on the business, the other on headcount; neither has much to do with technology. So talented engineers naturally want to build a framework or a platform and strive to become an important part of the company's technology. Without correct guidance, this becomes the "disorderly force" above; developed in the wrong direction, it becomes a drain on the company. And when such an engineer finds that the effort brings no reward, he may well become the disengaged one.

The ultimate impact falls not only on the engineers; the company also pays management costs. When we communicated with some senior HR people, they explained the problem further. Employees who have worked at the company for four or five years are its mainstay: they understand the company's culture and its problems. If they cannot rise well, two situations follow. Those with the ability to act mostly choose to leave, which not only affects organizational stability but also costs the company its training investment, a great loss. The others stay, but no longer work as hard as before. They have a certain voice in the team yet have lost their passion and no longer play a positive role. When the company needs rapid expansion and rapid change, they become a hidden management cost.

The solution is actually simple. Large companies usually have departments like a "framework group" or "platform group", but they basically stay at the "usable" stage of completing tasks. That is not enough. If establishing the basic web R&D system is made the goal, with the product as the measuring standard, and attention and investment are strengthened, the rising channel for web engineers expands and the disorderly force within the company enters the right field. The value created further powers R&D, forming a positive cycle; the manpower shortage gradually eases, and the resource-pool phenomenon naturally disappears. It is much like an injury: the external swelling is only a symptom of inflammation; treat the inflammation and the swelling subsides on its own. The point is to find the key and treat the inflammation, rather than circling around the external symptoms.

The benefit of building the system is that by improving web R&D efficiency and reducing the headcount a business needs, you can help big companies return to the pace of small, fast steps. Zhang Xiaolong has a speech, "Beware of KPIs and complex processes", that talks about the importance of small teams, which will only grow as competition among internet companies intensifies. Among some of the internet's new giants we have in fact already seen the trend of using "core technology + basic web R&D capability" to produce products quickly and occupy markets. For small companies, a unified basic web R&D system is an important piece of achieving an overtake; for large companies, it is a required course for further exploring efficiency and avoiding falling behind.

4 Postscript

This article was rewritten three times, because facing a group as huge as first-line web engineers, improving effectiveness is never a matter of technology or management alone; all aspects must be synthesized to seek a win for both individuals and companies. First-line web R&D is the first thing many engineers do upon entering the field, yet I have seen many talented people trapped in inefficient, repetitive work, or wasting effort in the wrong direction. This is a loss for both the individual and the company. With direction found in technology and support found in management, effort on multiple fronts can in fact raise effectiveness by more than 10x. Of course, one article is not enough to bring any change; its main goal is to provoke thought. I also hope friends with ideas in this direction will contact me to communicate and push it forward together.

Email: [email protected].

5 Answers to Readers' Questions

Q: Doesn't unified planning of the R&D system undermine competition and innovation?

A: Unified planning does not mean no competition and no innovation. Far from suppressing them, it gives competition and innovation a direction. For example, this article clarifies upstream-downstream relationships in order to guide where R&D forces should be invested; within that investment, competition can still take place.

Q: What is the trend of framework development for improving efficiency?

A: Two directions. One is that frameworks themselves will move toward "enhanced understanding and control of user code". Can users employ tools to analyze the specific semantics of code written under the framework? Can low-level manual errors be detected and corrected automatically? Can the code be translated into forms of expression that humans are used to? When control reaches a certain level and these capabilities are realized, the next step should be automatic code generation.

The other noteworthy direction is that framework capabilities and business-specific properties will precipitate together into the IDE, producing proprietary framework-oriented IDEs and even project-specific IDEs. Analysis diagrams and automatic scanning tools use the IDE as their carrier, much like the dedicated editors of many games. Only by precipitating into the IDE are they most convenient and most targeted. Readers interested in this direction can reach me by email; we already have some preliminary ideas.

Q: The problems mentioned in the article are all from big companies. When should a small team start building a basic R&D system?

A: Building an R&D system can be divided into two stages, planning and implementation. Planning should start from the very beginning. In the early days, when manpower is insufficient, there is no need to force the creation of your own frameworks and tools; use open source. But you should always fit the open-source pieces into your own system, knowing clearly what role each plays, which of its characteristics you use, and what will need to be added in the future. For implementation there is a symbolic marker, the one we mentioned under the disorderly forces: when different parts of the division of labor appear. Some teams expand so fast that manpower never catches up; then it is recommended to put one-fifth to one-third of manpower into building infrastructure. Sharpening the axe does not delay the chopping of firewood, not to mention that the return on this investment is a chainsaw.
