Minimal Version Selection (vgo)

This article is a translation of "Minimal Version Selection", the fourth part of the Go & Versioning series; copyright remains with the original author.

A versioned Go command must decide which module versions to use in each build. I call the list of modules and versions used in a given build the build list. For stable development, today's build list must also be tomorrow's build list. But developers must also be allowed to change the build list: to upgrade all modules, to upgrade one module, or to downgrade one module.

The version selection problem, then, is to define the meaning of these operations and to give algorithms implementing them. The four operations on build lists are:

    1. Construct the current build list.
    2. Upgrade all modules to their latest versions.
    3. Upgrade one module to a specific newer version.
    4. Downgrade one module to a specific older version.

The last two operations specify one module to upgrade or downgrade, but doing so may require upgrading, downgrading, adding, or removing other modules, ideally as few as possible, to satisfy the dependencies.

This article presents minimal version selection, a new, simple approach to the version selection problem. Minimal version selection is easy to understand and predict, which should make it easy to work with. It also produces high-fidelity builds, in which the dependencies a user builds are as close as possible to the ones the package author developed against. It is also efficient to implement, using nothing more complex than recursive graph traversals, so a complete implementation of minimal version selection in Go is only a few hundred lines of code.

Minimal version selection assumes that each module declares its own dependency requirements: a list of minimum versions of other modules. Modules are assumed to follow the import compatibility rule, meaning any newer version of a package should work as well as an older one, so a dependency requirement gives only a minimum version, never a maximum version or a list of incompatible later versions.

Then these four operations are defined as:

    1. To construct the build list for a given target: start the list with the target itself, and then append the build list of each of its requirements. If a module appears in the list more than once, keep only the newest version.
    2. To upgrade all modules to their latest versions: construct the build list, but read each requirement as if it requested the latest version of the named module.
    3. To upgrade one module to a specific newer version: construct the un-upgraded build list, and then append the new module's own build list. If a module appears in the list more than once, keep only the newest version.
    4. To downgrade one module to a specific older version: rewind the required version of each top-level requirement until that requirement's build list no longer refers to a newer version of the downgraded module.

These operations are simple, efficient and easy to implement.

Example

Before we examine minimal version selection in more detail, let's look at why a new approach is needed. Throughout this article, we will use the following set of modules as a running example:

The diagram shows the module requirement graph for one or more versions of seven modules (dashed boxes). Following semantic versioning, all versions of a given module share a major version number. We are developing module A 1, and we will run commands to update its dependency requirements. The diagram shows both A 1's current requirements and the requirements declared by each released version of modules B 1 through F 1.

Because the major version is part of the module's identifier, we must know that we are working with A 1 rather than A 2, but otherwise the exact version of A is unspecified; our work is unreleased. Similarly, different major versions are just different modules: for the purposes of these algorithms, B 1 is no more related to B 2 than it is to C 1. We could replace B 1 through F 1 in the diagram with A 2 through A 7, at a significant loss in clarity but with no change in how the algorithms handle the example. Because all the modules in the example have major version 1, from now on we will omit the major version where possible, shortening A 1 to A. Our current version of A requires B 1.2 and C 1.2. B 1.2 in turn requires D 1.3. An earlier version, B 1.1, required D 1.1. Note that F 1.1 requires G 1.1, but G 1.1 also requires F 1.1. Declaring this kind of cycle can be important when a single piece of functionality moves from one module to another. Our algorithms must not assume that the module requirement graph is acyclic.
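For instance, A's own requirement list could be expressed as a go.mod file roughly like the one below. This is only a sketch: the modules in the example have abstract names, so the module paths shown here are invented, and a version such as B 1.2 would be written as v1.2.0 in a real file.

    module example.com/A

    require (
        example.com/B v1.2.0
        example.com/C v1.2.0
    )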

Low-Fidelity Builds

Go's current version selection is simple to state, but it actually provides two different selection algorithms, neither of which is correct.

The first algorithm is the default behavior of go get: if you have a local copy, use it; otherwise download and use the latest version. This mode can use versions that are too old: if you have B 1.1 installed and run go get to download A, go get will not update to B 1.2, resulting in failed builds or buggy behavior.

The second algorithm is the behavior of go get -u: download and use the latest version of everything. This mode fails by using versions that are too new: if you run go get -u to download A, it will correctly update to B 1.2, but it will also update to C 1.3 and E 1.3, which are not what A requests, may not have been tested, and may not work correctly.

I call both of these outcomes low-fidelity builds: viewed as attempts to reproduce the build that A's author used, these builds differ from it for no good reason. After we have seen the details of the minimal version selection algorithms, we will look at why they produce high-fidelity builds instead.

Algorithms

Now let's look at the algorithms in more detail.

Algorithm 1. Construct the Build List

There are two useful (and equivalent) ways to define build list construction: as a recursive process and as a graph traversal.

The recursive definition of build list construction is as follows. Construct the rough build list for M by starting with an empty list, adding M, and then appending the build list of each of M's requirements. Simplify the rough build list to produce the final build list by keeping only the newest version of any listed module.

The recursive construction of build lists is useful mainly as a mental model. A literal implementation of this definition would be too inefficient, potentially requiring time exponential in the size of an acyclic module requirement graph and running forever on a cyclic one.

An equivalent, more efficient construction is based on graph reachability. The rough build list for M is just the list of all modules reachable in the requirement graph by starting at M and following arrows. This can be computed by a straightforward recursive traversal of the graph, taking care not to visit nodes that have already been visited. For example, A's rough build list is the set of highlighted module versions found by starting at A and following the highlighted arrows.

(The simplification from rough build list to final build list remains the same: keep only the newest version of each listed module.)

Note that this algorithm visits each module in the rough build list only once, and only those modules, so the execution time is proportional to the rough build list size |B| plus the number of arrows that must be traversed (at most |B|²). The algorithm completely ignores versions left out of the rough build list: for example, it loads information about D 1.3, D 1.4, and E 1.2, but it does not load information about D 1.2, E 1.1, or E 1.3. In a dependency management setting, where loading information about each module version may mean a separate network round trip, avoiding unnecessary module versions is an important optimization.
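To make the traversal concrete, here is a minimal Go sketch of Algorithm 1. It assumes a purely in-memory requirement graph (a map from a module version to its declared minimum requirements) and compares version strings lexically; both are simplifications, and the names are invented for illustration. A real implementation would fetch requirements over the network and use proper semantic version comparison.

    package main

    import (
        "fmt"
        "sort"
    )

    // module identifies one version of one module. The major version is part
    // of the module path, so "B" here really means "B (major version 1)".
    type module struct {
        Path    string
        Version string // e.g. "1.2"; compared lexically in this sketch
    }

    // buildList walks every module version reachable from target through the
    // requirement arrows, visiting each node once, then keeps only the newest
    // version of each module path.
    func buildList(target module, reqs map[module][]module) []module {
        rough := map[module]bool{}
        var visit func(m module)
        visit = func(m module) {
            if rough[m] {
                return // already visited; this also breaks requirement cycles
            }
            rough[m] = true
            for _, r := range reqs[m] {
                visit(r)
            }
        }
        visit(target)

        // Simplify: keep only the newest version listed for each module path.
        latest := map[string]string{}
        for m := range rough {
            if m.Version > latest[m.Path] {
                latest[m.Path] = m.Version
            }
        }
        var list []module
        for path, v := range latest {
            list = append(list, module{path, v})
        }
        sort.Slice(list, func(i, j int) bool { return list[i].Path < list[j].Path })
        return list
    }

    func main() {
        // A hypothetical fragment of the running example: A needs B 1.2 and
        // C 1.2; B 1.2 needs D 1.3; C 1.2 needs D 1.4.
        A := module{"A", "1.0"}
        reqs := map[module][]module{
            A:            {{"B", "1.2"}, {"C", "1.2"}},
            {"B", "1.2"}: {{"D", "1.3"}},
            {"C", "1.2"}: {{"D", "1.4"}},
        }
        fmt.Println(buildList(A, reqs)) // [{A 1.0} {B 1.2} {C 1.2} {D 1.4}]
    }

The visited set both memoizes the traversal and keeps requirement cycles, like the one between F 1.1 and G 1.1, from looping forever.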

Algorithm 2. Upgrade all Modules

Upgrading all modules is probably the most common modification to the build list. It is what go get -u does today.

We can compute an upgraded build list by upgrading the module requirement graph and then applying the previous algorithm. In the upgraded module requirement graph, every arrow that points at any version of a module is replaced by an arrow pointing at the latest version of that module. (It would then also be possible to discard all older versions from the graph, but the build list construction doesn't look at them anyway, so there is no need to clean up the graph.)
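As a sketch, the graph rewrite is a single pass over the arrows. The representation below is the same hypothetical map used in the Algorithm 1 sketch (repeated so the snippet stands alone), and the table of latest versions is simply passed in; in a real tool it would come from whatever registry or proxy serves the module versions.

    package mvs

    type module struct {
        Path    string
        Version string
    }

    // upgradeAll returns a copy of the requirement graph in which every
    // requirement arrow points at the latest known version of its module.
    // The ordinary build list construction (Algorithm 1) is then applied to
    // the upgraded graph.
    func upgradeAll(reqs map[module][]module, latest map[string]string) map[module][]module {
        up := make(map[module][]module, len(reqs))
        for m, rs := range reqs {
            var nrs []module
            for _, r := range rs {
                nrs = append(nrs, module{r.Path, latest[r.Path]})
            }
            up[m] = nrs
        }
        return up
    }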

For example, here is the upgraded module requirement graph, with the original build list still marked in yellow and the upgraded build list now marked in red:

While this tells us the upgraded build list, it does not yet tell us how to make future builds use that build list instead of the old one (still marked in yellow). To upgrade the graph we changed the requirements of all modules, but an upgrade performed while developing module A must be recorded only in A's requirement list (in A's go.mod file), in a way that causes Algorithm 1 to produce the build list we want, picking the red modules instead of the yellow ones. To decide what to add to A's requirement list to achieve that effect, we introduce a helper, Algorithm R.

Algorithm R. Compute a Minimal Requirement List

Given a build list compatible with the module requirement graph below the target, we want to compute a requirement list for the target that will induce that build list. It is always sufficient to list every module in the build list other than the target itself. For example, the upgrade considered above could add C 1.3 (replacing C 1.2), D 1.4, E 1.3, F 1.1, and G 1.1 to A's requirement list. But in general not all of these additions are necessary, and we want to list as few additional modules as possible. For example, F 1.1 implies G 1.1 (and vice versa), so we need not list both. At first glance it seems natural to start by adding the module versions marked in red but not yellow (in the new list but missing from the old one). That heuristic would incorrectly drop D 1.4, which is implied by the old requirement on C 1.2 but not by the new requirement on C 1.3.

Instead, it is correct to visit the modules in reverse postorder, that is, to visit a module only after considering all the modules that point into it, and to keep a module only if it is not already implied by modules visited before it.

For acyclic graphs, the result is a unique, minimal set of additions. For cyclic graphs, the reverse-postorder traversal must break the cycles, and then the set of additions is unique and minimal for the modules not involved in cycles. As long as the result is correct and stable, we will accept non-minimal answers in the case of cycles. In this example, the upgrade needs to add C 1.3 (replacing C 1.2), D 1.4, and E 1.3. It can drop F 1.1 (implied by C 1.3) and G 1.1 (also implied by C 1.3).
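Here is a minimal Go sketch of Algorithm R, under the same assumptions as the earlier sketches (in-memory requirement map, lexical version comparison, invented names). It computes a postorder over everything reachable from the build list, then walks that order in reverse, keeping a module only if it is the version chosen in the build list and is not already implied by a requirement kept earlier.

    package mvs

    type module struct {
        Path    string
        Version string
    }

    // minimalReqs returns a minimal requirement list for target that makes
    // the build list construction (Algorithm 1) reproduce the given list.
    func minimalReqs(target module, list []module, reqs map[module][]module) []module {
        // The version chosen in the build list for each module path.
        chosen := map[string]string{}
        for _, m := range list {
            if m.Version > chosen[m.Path] {
                chosen[m.Path] = m.Version
            }
        }

        // Postorder over everything reachable from the build list modules.
        var postorder []module
        seen := map[module]bool{}
        var walk func(m module)
        walk = func(m module) {
            if seen[m] {
                return
            }
            seen[m] = true
            for _, r := range reqs[m] {
                walk(r)
            }
            postorder = append(postorder, m)
        }
        for _, m := range list {
            if m != target {
                walk(m)
            }
        }

        // have marks module versions already implied by requirements kept so far.
        have := map[module]bool{}
        var imply func(m module)
        imply = func(m module) {
            if have[m] {
                return
            }
            have[m] = true
            for _, r := range reqs[m] {
                imply(r)
            }
        }

        // Reverse postorder: a module is considered only after everything that
        // points into it. Keep it only if it is the chosen version of its path
        // and is not already implied.
        var minimal []module
        for i := len(postorder) - 1; i >= 0; i-- {
            m := postorder[i]
            if m.Path == target.Path || m.Version != chosen[m.Path] {
                continue
            }
            if !have[m] {
                minimal = append(minimal, m)
                imply(m)
            }
        }
        return minimal
    }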

Algorithm 3. Upgrade a module

Instead of upgrading all modules, cautious developers typically want to upgrade only one module, with as few other changes to the build list as possible. For example, we may want to upgrade to C 1.3, and we do not want that operation to make unnecessary changes such as upgrading to E 1.3. As in Algorithm 2, we can upgrade one module by upgrading the requirement graph, constructing a build list from it (Algorithm 1), and then reducing that list back to a set of requirements for the top-level module (Algorithm R). To upgrade the requirement graph, we add one new arrow from the top-level module to the upgraded module version.

For example, if we want to change A's build to upgrade to C 1.3, here is the upgraded requirement graph:

As before, the modules of the new build list are marked in red, and those of the old build list are in yellow.

The upgrade's effect on the build list is the minimal way to perform the upgrade, adding the new module version and any implied requirements, but nothing else. Note that when constructing the upgraded graph, we can only add new arrows, not replace or remove old ones. For example, if the new arrow from A to C 1.3 replaced the old arrow from A to C 1.2, the upgraded build list would omit D 1.4. That is, the upgrade of C would downgrade D, an unexpected, unwanted, and non-minimal change. Once we have computed the upgraded build list, we can run Algorithm R (above) to decide how to update A's requirement list. In this case, we would end up replacing C 1.2 with C 1.3, but also adding a new requirement on D 1.4 to avoid the accidental downgrade of D. Note that this selective upgrade only updates other modules to C's minimum requirements: the upgrade of C does not simply fetch the latest version of each of C's dependencies.
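The graph change itself is tiny. A sketch, again under the same hypothetical representation: exactly one arrow is added and nothing is removed, so nothing already in the build can be accidentally downgraded.

    package mvs

    type module struct {
        Path    string
        Version string
    }

    // upgradeOne returns a copy of the requirement graph with one extra arrow
    // from the target to the requested newer module version. The old arrows
    // are kept, so existing requirements cannot be lost; the caller then runs
    // Algorithm 1 on the result and Algorithm R to update the target's go.mod.
    func upgradeOne(reqs map[module][]module, target, upgraded module) map[module][]module {
        up := make(map[module][]module, len(reqs))
        for m, rs := range reqs {
            up[m] = append([]module(nil), rs...) // copy, leaving reqs untouched
        }
        up[target] = append(up[target], upgraded)
        return up
    }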

Algorithm 4. Downgrade a module

We may also discover, perhaps after upgrading all modules, that the latest version of some module is buggy and must be avoided. In that case, we need to be able to downgrade to an earlier version of the module. Downgrading one module may require downgrading other modules, but we want to downgrade as few other modules as possible. Like upgrades, downgrades must make their changes to the build list by modifying the target's requirement list. Unlike upgrades, downgrades must work by removing requirements, not adding them. This observation leads to a very simple downgrade algorithm that considers each of the target's requirements individually. If a requirement is incompatible with the proposed downgrade, that is, if the requirement's build list includes a now-disallowed version of the module, then try successively older versions of that requirement until finding one that is compatible with the downgrade.

For example, starting with the original build list, suppose we discover that there is a problem with D 1.4, one actually introduced in D 1.3, and so we decide to downgrade to D 1.2. Our target module A depends on B 1.2 and C 1.2. To downgrade from D 1.4 to D 1.2, we must find earlier versions of B and C that do not require (directly or indirectly) versions of D later than D 1.2.

Although we could consider each requirement separately, it is more efficient to consider the module requirement graph as a whole. In our example, the downgrade rule amounts to crossing out the disallowed versions of D and then following arrows backwards from the crossed-out modules to find and cross out other modules that are no longer usable. At the end, the latest versions of A's requirements that remain can be recorded as the new requirements.

In this case, downgrading to D 1.2 implies downgrading to B 1.1 and C 1.1. To avoid an unnecessary downgrade to E 1.1, we must also add a new requirement on E 1.2. We can apply Algorithm R to find the minimal set of new requirements to write to go.mod.

Note that if we had first upgraded to C 1.3, then the downgrade to D 1.2 would have continued to use C 1.3, which does not use any version of D at all. But downgrades are constrained to only downgrade packages, never upgrade them; if an upgrade before the downgrade is needed, the user must ask for it explicitly.
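A sketch of the per-requirement rule described above, with the same caveats as the earlier sketches: the build-list construction, the list of older versions, and the test for disallowed versions are passed in as functions, and the names are invented for illustration. After winding the requirements back, the real command would still add requirements such as E 1.2 (via Algorithm R) to avoid collateral downgrades.

    package mvs

    type module struct {
        Path    string
        Version string
    }

    // downgrade rewinds each of the target's top-level requirements until its
    // own build list no longer contains a disallowed (too-new) version of the
    // module being downgraded. previous lists older versions of a requirement,
    // newest first; buildList is Algorithm 1; disallowed reports whether a
    // module version must be avoided.
    func downgrade(topReqs []module, disallowed func(module) bool,
        previous func(module) []module, buildList func(module) []module) []module {

        ok := func(m module) bool {
            for _, r := range buildList(m) {
                if disallowed(r) {
                    return false
                }
            }
            return true
        }

        var out []module
        for _, r := range topReqs {
            if ok(r) {
                out = append(out, r)
                continue
            }
            // Wind this requirement back to the newest earlier version whose
            // build list avoids the disallowed versions. If none works, the
            // requirement is dropped entirely.
            for _, older := range previous(r) {
                if ok(older) {
                    out = append(out, older)
                    break
                }
            }
        }
        return out
    }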

Theory

Minimal version selection is very simple. It achieves simplicity by eliminating all flexibility about what the answer must be: the build list is exactly the versions specified in the requirements. A real system needs more flexibility, for example the ability to exclude certain module versions or to replace others. Before we add those, it is worth examining the theoretical basis for the current system's simplicity, so that we understand which kinds of extensions preserve it and which do not.

If you are familiar with the way most other systems approach version selection, or if you remember my Version SAT article from a year ago, probably the most striking feature of minimal version selection is that it does not solve general Boolean satisfiability, or SAT. As I explained in that earlier article, it takes very little for a version search to turn into an instance of SAT; version searches in these systems are inherently intricate problems for which we know no efficient general solutions. If we want to avoid that fate, we need to know where the boundaries are, where not to step as we explore the design space. Conveniently, Schaefer's Dichotomy Theorem describes those boundaries precisely. It identifies six restricted classes of Boolean formulas for which satisfiability can be decided in polynomial time, and it proves that for any class of formulas beyond those, satisfiability is NP-complete. To avoid NP-completeness, we need to limit the version selection problem to stay within one of Schaefer's restricted classes.

As it turns out, minimal version selection lies in the intersection of three of the six tractable SAT subproblems: 2-SAT, Horn-SAT, and Dual-Horn-SAT. The formula corresponding to a minimal version selection build is the AND of a set of clauses, each of which is either a single positive literal (this version must be installed, for example during an upgrade), a single negative literal (this version is not available, for example during a downgrade), or the OR of one negative and one positive literal (an implication: if this version is installed, then this other version must also be installed). The formula is a 2-CNF formula, because each clause has at most two variables. The formula is also a Horn formula, because each clause has at most one positive literal. The formula is also a dual-Horn formula, because each clause has at most one negative literal. That is, every satisfiability problem posed by minimal version selection can be solved by your choice of three different efficient algorithms. It is even simpler and more efficient to specialize further, as we did above, taking advantage of the very limited structure of these problems.
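Written out, the three clause shapes look like the display below. This is only an illustrative formalization; the propositional variable x_M is invented notation meaning that module version M is selected in the build.

    \[
        x_M
        \qquad\qquad
        \lnot\, x_M
        \qquad\qquad
        \lnot\, x_M \lor x_N \;\;(\text{equivalently } x_M \Rightarrow x_N)
    \]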

Although 2-SAT is the most famous example of a SAT subproblem with an efficient solution, the fact that these problems are both Horn and dual-Horn formulas is more interesting. Every Horn formula has a unique satisfying assignment with the fewest variables set to true. This proves that there is a unique minimal answer for constructing a build list, as well as for each upgrade. That unique minimal upgrade does not use a newer version of a given module unless absolutely necessary. Conversely, every dual-Horn formula has a unique satisfying assignment with the fewest variables set to false. This proves that there is a unique minimal answer for each downgrade. That unique minimal downgrade does not use an older version of a given module unless absolutely necessary. If we want to extend minimal version selection, for example with the ability to exclude certain modules, we can keep these uniqueness and minimality properties only by continuing to use constraints expressible as both Horn and dual-Horn formulas.

(Digression: the problems minimal version selection solves are NL-complete: they are in NL because they are a subset of 2-SAT, and they are NL-hard because st-connectivity can be trivially transformed into a minimal version selection build list construction problem. It is pleasant that we have replaced an NP-complete problem with an NL-complete one, but there is little practical value in knowing that: being in NL only guarantees a polynomial-time solution, and we already have a linear-time one.)

Excluding Modules

Minimal version selection always selects the minimum (oldest) version of a module that satisfies the overall requirements of the build. If that version is problematic in some way, an upgrade or downgrade operation can modify the top-level target's requirement list to force the selection of a different version.

It can also be useful to record explicitly that a version is problematic, to avoid reintroducing it in a future upgrade or downgrade operation. But we want to do that in a way that preserves the uniqueness and minimality properties of the previous section, so we must use constraints expressible as both Horn and dual-Horn formulas. That means build constraints can only be unconditional positive assertions (X: X must be installed), unconditional negative assertions (¬Y: Y must not be installed), and positive implications (X → Z, equivalently ¬X ∨ Z: if X is installed, then Z must also be installed). Negative implications (X → ¬Y, equivalently ¬X ∨ ¬Y: if X is installed, then Y must not be installed) cannot be added as constraints without breaking the form. Module exclusions must therefore be unconditional: they must be decided independently of the choices made during build list construction.

What we can do is allow a module to declare its own local list of excluded module versions. By local, I mean that the list is consulted only for builds within that module; a larger build that uses the module only as a dependency ignores the exclusion list. In our example, if A's build consulted D 1.3's exclusion list, the exact set of exclusions would depend on whether the build selected, say, D 1.3 or D 1.4, making the exclusions conditional and leading to an NP-complete search problem. Consulting only the top-level module's exclusion list keeps the exclusions unconditional. Note that it would be fine to consult exclusion lists from other sources, such as a global exclusion list loaded over the network, as long as the decision to use the list is made before the build begins and the list content does not depend on which modules are selected during the build.

Despite all this emphasis on unconditional exclusions, it might seem that we already have conditional exclusions: C 1.2 requires D 1.4 and therefore implicitly excludes D 1.3. But our algorithms do not treat this as an exclusion. When Algorithm 1 runs, it adds both D 1.3 (for B) and D 1.4 (for C) to the rough build list, along with their own minimum requirements. The final simplification pass removes D 1.3 only because D 1.4 is present. The distinction between declaring an incompatibility and declaring a minimum requirement is critical here. Declaring that C 1.2 must not be built with D 1.3 only describes how to fail. Declaring that C 1.2 must be built with D 1.4 instead describes how to succeed.

Exclusions, then, must be unconditional. Knowing that fact is important, but it does not tell us how to implement exclusions. A simple answer is to add exclusions as build constraints, with clauses like "D 1.3 must not be installed." Unfortunately, adding that clause alone would make modules that require D 1.3, such as B 1.2, uninstallable. We need to express somehow that B 1.2 can choose D 1.4 instead. The simple way to do that is to revise the build constraint, changing "B 1.2 → D 1.3" to "B 1.2 → D 1.3 ∨ D 1.4", and in general allowing all future versions of D. But that clause (equivalently, ¬B 1.2 ∨ D 1.3 ∨ D 1.4) has two positive literals, so the overall build formula is no longer a Horn formula. It is still a dual-Horn formula, so we could still define a linear-time build list construction, but that construction, and therefore the question of how to perform an upgrade, would no longer be guaranteed to have a unique, minimal answer.

Instead of implementing exclusions as new constraints, we can implement them by changing existing ones. That is, we can modify the requirement graph, just as we did for upgrades and downgrades. If a particular module version is excluded, we remove it from the module requirement graph, and we also change any existing requirement on that version to require the next newer version instead. For example, if we exclude D 1.3, then we also update B 1.2 to require D 1.4:

If the excluded version is the newest version of a module, then any modules requiring that version need to be removed as well, as in the downgrade algorithm. For example, if G 1.1 were excluded, then C 1.3 would need to be removed too.

Once the exclusions have been applied to the module requirement graph, the algorithms proceed as before.
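A minimal sketch of that preprocessing step, using the same hypothetical graph representation as before. The nextNewer function stands in for a lookup of the next newer published version, and the removal of modules that require an excluded newest version (the G 1.1 and C 1.3 case above) is left out for brevity.

    package mvs

    type module struct {
        Path    string
        Version string
    }

    // exclude returns a copy of the requirement graph with the excluded
    // version removed: its own node is dropped, and every arrow that pointed
    // at it is redirected to the next newer version of the same module.
    func exclude(reqs map[module][]module, excluded module, nextNewer func(module) module) map[module][]module {
        out := make(map[module][]module, len(reqs))
        for m, rs := range reqs {
            if m == excluded {
                continue // drop the excluded version's own requirements
            }
            var nrs []module
            for _, r := range rs {
                if r == excluded {
                    r = nextNewer(r) // e.g. rewrite B 1.2 -> D 1.3 into B 1.2 -> D 1.4
                }
                nrs = append(nrs, r)
            }
            out[m] = nrs
        }
        return out
    }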

Replacing Modules

During development of A, suppose we find a bug in D 1.4 and want to test a potential fix. We need some way to replace D 1.4 in our build with an unreleased copy U. We can allow a module to declare this as a replacement: "proceed as if D 1.4's source code and requirements have been replaced by U's."

Replacements, like exclusions, can be implemented by modifying the module requirement graph in a preprocessing step, not by adding complexity to the algorithms that process the graph. Also like exclusions, the replacement list is local to one module. The build of A consults A's replacement list, but not the lists from B 1.2, C 1.2, or any other module in the build. This avoids making replacements conditional, which would be difficult to implement, and it also avoids the possibility of conflicting replacements: what should happen if B 1.2 and C 1.2 specify different replacements for E 1.2? More generally, keeping exclusions and replacements local to one module limits the control that module exerts over other builds.

Who controls your build

The dependencies of a top-level module must be given some ability to control the top-level build. B 1.2 needs to be able to ensure that it is built with D 1.3 or later, not with D 1.2. Otherwise we end up with the current go get's stale dependency failure mode.

At the same time, for builds to remain predictable and understandable, we cannot give dependencies arbitrary, fine-grained control over the top-level build. That leads to conflicts and surprises. For example, suppose B declares that it requires an even version of D, while C declares that it requires a prime version of D. D is frequently updated and is up to D 1.99. Using B or C in isolation, it is always possible to use a relatively recent version of D (D 1.98 or D 1.97, respectively). But when A uses both B and C, the build silently selects the much older (and slower) D 1.2, the only version that is both even and prime. That is an extreme example, but it raises the question: why should the authors of B and C be given that kind of control over A's build? As I write this article, there is an open bug report that the Kubernetes Go client declares a requirement on a specific, two-year-old version of gopkg.in/yaml.v2. When a developer tried to use a new feature of that YAML library in a program that already used the Kubernetes Go client, the code using the new feature failed to compile even after attempting to upgrade to the latest possible version, because "latest" was constrained by the Kubernetes requirement. In this case, using a two-year-old YAML library version may be perfectly reasonable within the context of the Kubernetes code base, and clearly the Kubernetes authors should have complete control over their own builds, but that level of control does not make sense when extended to the builds of other developers.

The design of module requirements, exclusions, and replacements tries to balance competing concerns: giving dependencies enough control to ensure a successful build, without giving them so much control that they can harm the build. Minimum requirements combine without conflict, so it is feasible (even easy) to gather them from all dependencies. But exclusions and replacements can and do conflict, so we allow them to be specified only by the top-level module.

A module author therefore has complete control over that module's build when it is the main program being built, but not complete control over the builds of other users that depend on the module. I believe this distinction will let minimal version selection scale to much larger, more distributed code bases than existing systems.

High-Fidelity Builds

Now back to the issue of high-fidelity builds.

At the start of this article, we saw that when go get builds A, it can use dependencies different from those that A's author used, without any good reason. I called this a low-fidelity build, because it is a poor reproduction of A's original build. With minimal version selection, builds are instead high-fidelity. The module requirements, which are included with the module's source code, uniquely determine how to build it directly. A user's build of A will match the author's build exactly: a reproducible build. But high fidelity means more.

A reproducible build is usually understood as a binary property of a whole-program build: a user's build of a given program either matches the author's exactly or it does not. What about building a library module as part of a larger program? It would be helpful for a user's build of a library to match the author's build whenever possible. Then the user runs the same code (including dependencies) that the author developed and tested with. In a larger project, of course, a user's build of a library may be unable to match the author's build exactly. Another part of the build may force the use of a newer dependency, making the user's build of the library deviate from the author's. Let's refer to a build that deviates from the author's own build only when required to do so as a high-fidelity build.

Consider our initial example:

In this example, the build of A combines B 1.2 with D 1.4, even though B's author was using D 1.3. That change is necessary because A also uses C 1.2, which requires D 1.4. A's build is still a high-fidelity build of B 1.2: it deviates by using D 1.4, but only because it must. In contrast, if the build used E 1.3, as go get -u, Dep, and Cargo typically would, it would be a low-fidelity build: it deviates unnecessarily.

Minimal version selection provides high-fidelity builds by using the oldest version available that meets the requirements. The release of a new version has no effect on the build. In contrast, most other systems, including Cargo and Dep, use the newest version available that satisfies the requirements listed in a "manifest file", so the release of a new version changes their build decisions. To get reproducible builds, these systems add a second mechanism, the "lock file", which lists the specific versions a build should use. The lock file ensures reproducible builds for whole programs, but it is ignored for library modules; the Cargo FAQ explains that this is "precisely because a library should not be deterministically recompiled for all users of the library." It is true that perfect reproducibility is not always possible, but by giving up entirely, the Cargo approach admits unnecessary deviation from the library author's build. That is, it delivers low-fidelity builds. In our example, when A first adds B 1.2 or C 1.2 to its build, Cargo will see that they require E 1.2 or later and will choose E 1.3. Until directed otherwise, however, it seems better to continue building with E 1.2, as the authors of B and C did. Using the oldest allowed version also eliminates the redundancy of having two different files (manifest and lock) that both specify which module versions to use.

Automatically using newer versions also makes it easy for minimum requirements to become wrong. Suppose we start working on A using B 1.1, the latest version at the time, and we record that A requires only B 1.1. But then B 1.2 comes out, and we start using it in our own builds and lock file, without updating the manifest. From then on there is no more development or testing of A with B 1.1. We may start using new features of B 1.2 or depending on its bug fixes, but now A incorrectly lists its minimum requirement as B 1.1. If users always choose versions newer than the minimum requirement, there is not much harm done: they will use B 1.2 as well. But when the system does try to use the declared minimum version, it will break. For example, when a user attempts a limited update of A, the system cannot see that updating to B 1.2 is also required. More generally, whenever the minimum versions (in the manifest) and the built versions (in the lock file) differ, why should we believe that building with the minimum versions will produce a working library? To try to detect this problem, Cargo developers have proposed that cargo publish try a build with the minimum versions of all dependencies before publishing. That will detect when A starts using a new feature of B 1.2 (the build with B 1.1 will fail), but it will not detect when A starts depending on a new bug fix.

The fundamental problem is that preferring the newest allowed version of a module during version selection produces low-fidelity builds. Lock files are a partial solution, targeting whole-program builds; additional build checks like the proposed cargo publish check are another partial solution. A more complete solution is to use the versions of the modules that the author used. That makes the user's build as close to the author's build as possible: a high-fidelity build.

Upgrade speed

Because minimal version selection takes the minimum allowed version of each dependency, it is easy to assume that this leads to the use of very old copies of packages, which in turn could lead to unnecessary bugs or security problems. In practice, however, I think the opposite will happen, because the minimum allowed version is the maximum of all the constraints: the one lever of control made available to every module in a build is the ability to force the use of a newer version of a dependency than would otherwise be used. I expect that users of minimal version selection will end up with programs nearly as up-to-date as those of their friends using more aggressive systems like Cargo.

For example, suppose you are writing a program that depends on a handful of other modules, all of which depend on some very common module, such as gopkg.in/yaml.v2. Your program's build will use the newest YAML version among the ones requested by your module and that handful of dependencies. Even just one conscientious dependency can force your build to update many other dependencies. This is the opposite of the Kubernetes Go client problem I mentioned earlier.

If anything, minimal version selection would suffer from the opposite problem: this "max of the minimums" answer acts as a ratchet that could force dependencies forward too quickly. But I think in practice dependencies will move forward at just the right speed, which ends up being just a little slower than Cargo and its peers.

Upgrade Timing

A key feature of minimal version selection is that upgrades do not happen until a developer asks for them. You won't get an untested version of a module unless you asked for that module to be upgraded.

For example, in Cargo, if package B depends on package C 2.9 and you add B to your build, you do not get C 2.9. You get the latest allowed version at that moment, perhaps C 2.15. Maybe C 2.15 was released only a few minutes ago and the author has not yet been told about an important bug. That's too bad for you and your build. On the other hand, with minimal version selection, module B's go.mod file lists the exact version of C that B's author developed and tested with. You will get that version. Or, some other module in your program may have been developed and tested with a newer version of C; then you will get that version. But you will never get a version of C that no module in the program explicitly requested in its go.mod file. That should mean you only ever get a version of C that worked for someone else, not the very latest version, which may not have worked for anyone.

To be clear, my purpose here is not to pick on Cargo, which I think is a well-designed system. I am using Cargo as an example of a model that many developers are familiar with, to try to convey what is different about minimal version selection.

Minimality

I call this system minimal version selection because the system as a whole appears to be minimal: I do not see how to remove anything more without breaking it. Some people will undoubtedly say that too much has been removed already, but so far it seems perfectly able to handle the real-world cases I have examined. We will find out more by experimenting with the vgo prototype.

The key to minimal version selection is its preference for the minimum allowed version of a module. When I considered how go get -u's "upgrade everything to the latest" approach fits a system that relies on the import compatibility rule, I realized that both the manifest and the lock file exist for the same purpose: to work around that "upgrade everything to the latest" default behavior. The manifest describes which newer versions are not needed, and the lock describes which newer versions are not wanted. Why not change the default instead? Use the minimum version allowed, typically the exact version the author used, and leave the timing of upgrades completely under user control. This approach leads to reproducible builds without lock files and, more generally, to high-fidelity builds that deviate from the author's own build only when required.

More than anything else, I wanted a version selection algorithm that is understandable, predictable, even boring. Where other systems seem to optimize for displaying raw flexibility and power, minimal version selection aims to be invisible. I hope it succeeds.
