Semantic Import Versioning


This article is a translation of Semantic Import Versioning, the third part of the Go & Versioning series; copyright belongs to the original author.

How do you deploy an incompatible change to an existing package? This is the fundamental question that every package management system must answer, and the answer, more than anything else, determines the complexity of the resulting system and how easy or hard the package manager is to use. (It also determines how easy or hard the package manager is to implement, but the user experience matters more.)

To answer it, this article first introduces the import compatibility rule for Go:

If the old package and the new package have the same import path, the new package must be backwards compatible with the old package.

We have advocated this principle from the beginning, but we had never given it a name or stated it so directly.

The import compatibility rule dramatically simplifies the experience of using incompatible versions of a package. When each incompatible version has its own import path, there is no ambiguity about the intended semantics of any given import statement. That makes Go programs easier for both developers and tools to understand.

Developers today expect to describe package versions using semantic versioning, so we adopt it into the model. Specifically, the module my/thing is imported as my/thing during v0, the major version in which breaking changes are expected and no compatibility is promised, and on through v1, the first stable major version. But when a v2 is needed, instead of redefining the meaning of the now-stable my/thing, we give v2 a new name: my/thing/v2.

I call this convention semantic import versioning. It is the result of following the import compatibility rule while using semantic versioning.
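In the vgo prototype (and in the Go modules system that grew out of it), this convention also shows up in the module's own metadata: from v2 on, the module path declared in go.mod carries the major version suffix. A minimal sketch, using the article's placeholder path:

```
// go.mod for major version 2 of the module
module my/thing/v2
```

With this declaration, imports beginning with my/thing/v2 refer to this module, while the plain my/thing path continues to refer to v0/v1.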

A year ago, I believed that putting versions in import paths like this was ugly, undesirable, and perhaps avoidable. But over the past year, I've come to understand just how much clarity and simplicity they bring to a system. In this post I hope to give you a sense of why I changed my mind.

A dependency story

To make the discussion concrete, consider the following story. It is fictional, of course, but it is motivated by a real problem. When dep was released, the team at Google that wrote the OAuth2 package asked me how they should go about introducing some incompatible improvements they had long wanted to make. The more I thought about it, the more I realized that it was not as easy as it sounds, at least not without semantic import versioning.

Prologue

From the point of view of a package manager, there are code authors and code users. Alice, Anna, and Amy are authors of different packages: Alice works at Google and wrote the OAuth2 package; Amy works at Microsoft and wrote the Azure client library; Anna works at Amazon and wrote the AWS client library. Ugo is a user of all these packages. He is building the ultimate cloud application, Unity, which uses all of these packages and more.

As authors, Alice, Anna, and Amy need to be able to write and publish new versions of their packages. Each version of a package specifies a required version for each of its dependencies.

As a user, Ugo needs to be able to build Unity using all these packages. He needs precise control over which versions are used in a particular build, and he needs to be able to update to new versions when he chooses.

Of course, our friends might hope for much more from a package manager, especially around discovery, testing, portability, and helpful diagnostics, but those are not relevant to this story.

As our story opens, Ugo's Unity build dependencies look like this:

Chapter 1

Everyone writes software independently.

At Google, Alice has been designing a new, simpler, easier-to-use API for the OAuth2 package. It can still do everything the old package could, but with half the API surface. She publishes it as OAuth2 r2. (The r here stands for revision. For now, revision numbers carry no meaning beyond ordering; in particular, they are not semantic versions.)

At Microsoft, Amy is away on a long vacation, and her team decides not to make any changes related to OAuth2 r2 until she returns. The Azure package will keep using OAuth2 r1 for now.

At Amazon, Anna finds that using OAuth2 r2 lets her delete a lot of ugly code from the implementation of AWS r1, so she changes AWS to use OAuth2 r2. She also fixes a few bugs and publishes the result as AWS r2.

Ugo gets a bug report about Azure behavior and tracks the bug down to the Azure client library. Before leaving on vacation, Amy had already published a fix for the bug in Azure r2. Ugo adds a test case to Unity, confirms that it fails, and asks the package manager to update to Azure r2.

After the update, Ugo's build looks like this:

He confirms that the new test passes and that all his old tests still pass. He locks in the Azure update and publishes the updated Unity.

Chapter 2

Amazon launches their new cloud service with great fanfare: Amazon Zeta Functions. In preparation for the launch, Anna added Zeta support to the AWS package, which she now publishes as AWS r3.

When Ugo hears about Amazon Zeta, he writes a few test programs and is so excited by how well they work that he skips lunch to update Unity. But today's update does not go as smoothly as the last one. Ugo wants to build Unity with Azure r2 and AWS r3, the latest version of each, to get Zeta support. But Azure r2 needs OAuth2 r1 (not r2), while AWS r3 needs OAuth2 r2 (not r1). The classic diamond dependency, right? Ugo doesn't care what it's called. He just wants to build Unity.

Worse, the situation seems to be no one's fault. Alice wrote a better OAuth2 package. Amy fixed some Azure bugs and went on vacation. Anna decided AWS should use the new OAuth2 (an internal implementation detail) and later added Zeta support. Ugo wants Unity to use the latest Azure and AWS packages. It is hard to say any of them did anything wrong. If the people are not wrong, then perhaps the package manager is. We have been assuming that there can be only one copy of OAuth2 in Ugo's Unity build. Maybe that is the problem: maybe the package manager should allow different versions to be included in a single build. This example seems to show that it must.

Still stuck, Ugo searches StackOverflow and discovers the package manager's -fmultiverse flag, which allows multiple versions of a package, so that his program builds as:

Ugo tries it. It doesn't work. Digging deeper, he finds that both Azure and AWS use a popular OAuth2 middleware library called Moauth, which simplifies some of the OAuth2 processing. Moauth is not a complete API replacement: users still import OAuth2 directly, but they use Moauth to simplify some of the API calls. The details of OAuth2 that Moauth depends on did not change from r1 to r2, so Moauth r1 (the only version that exists) is compatible with both Azure r2 and AWS r3. Moauth r1 works fine in programs that use only Azure or only AWS, but Ugo's Unity build looks like this:

Unity needs two copies of OAuth2, but which copy does Moauth import?

To make the build work, it seems we need two identical copies of Moauth: one importing OAuth2 r1, for Azure, and one importing OAuth2 r2, for AWS. A quick StackOverflow search shows that the package manager has a flag for exactly that: -fclone. Ugo's program builds with this flag as:

This actually works and passes the tests, though Ugo now wonders what other latent problems await. He needs to get home for dinner.

Chapter 3

Amy returns to Microsoft from her vacation. She decides Azure can keep using OAuth2 r1 for a while longer, but she realizes it would help users to be able to pass Moauth tokens directly to the Azure API. She adds that to the Azure package in a backwards-compatible way and publishes Azure r3. On the Amazon side, Anna likes the Azure package's new Moauth-based API and adds a similar API to the AWS package, published as AWS r4.

Ugo sees these changes and decides to update to the latest versions of both Azure and AWS so he can use the Moauth-based APIs. This time he budgets a whole afternoon. First, he tentatively updates the Azure and AWS packages without modifying Unity. The program builds!

Excited, Ugo changes Unity to use the Moauth-based Azure API, and it still builds. But when he also changes Unity to use the Moauth-based AWS API, the build fails. Puzzled, he reverts his Azure changes, leaving only the AWS changes, and the build succeeds. He puts the Azure changes back, and the build fails again. Ugo returns to StackOverflow.

Ugo learns that his -fmultiverse -fclone build works implicitly only when Unity uses a single Moauth-based API (in this case, Azure's):

But once he uses two Moauth-based APIs, the single import "moauth" in Unity becomes ambiguous. Since Unity is the main package, it cannot be cloned the way Moauth itself can:

A StackOverflow comment suggests moving the Moauth imports into two different helper packages and then having Unity import those. Ugo tries it, and incredibly, it works:

Ugo gets home on time. He is not happy with his package manager, but he is now a devoted fan of StackOverflow.

A semantic versioning retelling

Let's wave a magic wand and retell the story using semantic versions, assuming the package manager uses them in place of the original story's r numbers.

Here are the changes:

    • OAuth2 r1 becomes OAuth2 1.0.0
    • Moauth r1 becomes Moauth 1.0.0
    • Azure r1 becomes Azure 1.0.0
    • AWS r1 becomes AWS 1.0.0
    • OAuth2 r2 becomes OAuth2 2.0.0 (partially incompatible API)
    • Azure r2 becomes Azure 1.0.1 (bug fix)
    • AWS r2 becomes AWS 1.0.1 (bug fix; internally uses OAuth2 2.0.0)
    • AWS r3 becomes AWS 1.1.0 (feature update: adds Zeta)
    • Azure r3 becomes Azure 1.1.0 (feature update: adds Moauth API)
    • AWS r4 becomes AWS 1.2.0 (feature update: adds Moauth API)

From here on, the story plays out exactly as before. Ugo hits the same build problems and still has to turn to StackOverflow to learn about build flags and refactoring techniques to keep Unity building. According to semver, though, Ugo should not have had any trouble with these updates: no package that Unity imports changed its major version anywhere in the story. Only OAuth2, deep in Unity's dependency tree, did, and Unity itself does not import OAuth2. Where did things go wrong?

The problem is that the semver spec is really just a way to choose and compare version strings. It says nothing beyond that. In particular, it says nothing about how to handle incompatible changes after you increment the major version.
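To make concrete how little the version string itself promises, here is a toy sketch (a hypothetical helper, not part of semver tooling or any real package manager) of the entire compatibility signal a semver string carries:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// major extracts the major version number from a semver string
// such as "2.0.0".
func major(v string) int {
	m, _ := strconv.Atoi(strings.SplitN(v, ".", 2)[0])
	return m
}

// compatible reports whether semver promises that a client built
// against version a can also use version b: only when the two share
// a nonzero major version. Note what this does NOT cover: nothing
// about how a build containing the incompatible pair should proceed.
func compatible(a, b string) bool {
	return major(a) == major(b) && major(a) != 0
}

func main() {
	fmt.Println(compatible("1.0.0", "1.1.0")) // same major: compatible
	fmt.Println(compatible("1.0.0", "2.0.0")) // major bump: incompatible
	fmt.Println(compatible("0.1.0", "0.2.0")) // v0: no promises at all
}
```

The major numbers are the whole story: semver says 1.0.0 and 2.0.0 are incompatible, but nothing about what a build should do when it needs both.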

The most valuable part of semver is that it encourages backwards-compatible changes whenever possible. Its FAQ correctly notes:

"Incompatible changes should not be introduced lightly to software that has a lot of dependent code. The cost that must be incurred to upgrade can be significant. Having to bump major versions to release incompatible changes means you'll think through the impact of your changes, and evaluate the cost/benefit ratio involved."

I certainly agree that incompatible changes should not be introduced lightly. What I think semver gets wrong is the idea that "having to bump major versions" is the step that will make you "think through the impact of your changes, and evaluate the cost/benefit ratio involved." Quite the opposite: it is far too easy to read semver as implying that as long as you increment the major version when you make an incompatible change, everything else will work itself out. This story shows that it does not.

From Alice's point of view, the OAuth2 API needed backwards-incompatible changes, and when she made them, semver appeared to promise that it was fine to release the incompatible OAuth2 package, provided she gave it version 2.0.0. But that semver-sanctioned change led directly to the series of problems Ugo had building Unity.

Semantic versions are an important way for authors to communicate expectations to users, but that is all they are. On their own, they cannot be expected to solve these larger build problems. Instead, let's look at an approach that does solve the build problem. Then we can consider how semver fits properly into that approach.

An import versioning retelling

Again, let's retell the story, this time obeying the import compatibility rule:

In Go, if the old package and the new package have the same import path, the new package must be backwards compatible with the old package.

Now the plot changes more significantly. The story begins the same way, but in Chapter 1, when Alice decides to create a partially incompatible new OAuth2 API, she cannot use "oauth2" as its import path. Instead, she names the new version Pocoauth and gives it the import path "pocoauth".

Faced with two different OAuth2 packages, Moe (Moauth's author) must write a second version of Moauth for Pocoauth, which he names Pocomoauth and gives the import path "pocomoauth".

When Anna updates the AWS package to the new OAuth2 API, she also changes the import paths in her code from "oauth2" to "pocoauth" and from "moauth" to "pocomoauth". Then, with the release of AWS r2, the story continues as before.

In Chapter 2, when Ugo eagerly adopts Amazon Zeta, everything goes smoothly. The imports in every package's code exactly match the code that needs to be built. He doesn't have to hunt for special flags on StackOverflow, and he is only five minutes late for lunch.

In Chapter 3, Amy adds the Moauth-based API to Azure, while Anna adds an equivalent Pocomoauth-based API to AWS.

When Ugo decides to update Azure and AWS, again there are no problems. His updated program needs no special refactoring:

In this version of the story, Ugo barely has to think about his package manager at all. It just works; he hardly notices it is there.

Compared with the semantic versioning retelling, import versioning changed two key details. First, when Alice introduced her backwards-incompatible OAuth2 API, she had to publish it as a new package (Pocoauth). Second, because Moe's package Moauth exposed types from the OAuth2 package in its own API, Alice's releasing a new package forced Moe to release a new package too (Pocomoauth). Ugo's final Unity build went smoothly because Alice's and Moe's package splits created exactly the structure needed to build and run clients like Unity. Instead of Ugo and users like him wrestling with incomplete package manager workarounds like -fmultiverse -fclone plus extra refactoring, the import compatibility rule pushes a small amount of additional work onto the package authors, and all users benefit.

Having to introduce a new name for each backwards-incompatible API change certainly has a cost, but as the semver FAQ suggests, that cost should encourage authors to think more clearly about the impact of such changes and whether they are truly necessary. And in the case of import versioning, the cost buys a significant benefit for users.

One advantage of import versioning is that package names and import paths are concepts Go developers already understand well. If you tell a package author that making a backwards-incompatible change requires creating a different package with a different import path, the author can reason through the implications for client packages without needing any versioning expertise: clients will need to change their imports one at a time; Moauth will no longer work with the new package; and so on.

Able to better predict the impact on users, authors might make different, better decisions about their changes. Alice might look for a way to introduce the new, cleaner API into the original OAuth2 package alongside the existing API, to avoid a package split. Moe might think more carefully about whether interfaces could let a single Moauth support both OAuth2 and Pocoauth, avoiding the need for a new Pocomoauth package. Amy might decide that updating to Pocoauth and Pocomoauth is worthwhile, rather than exposing the fact that the Azure API uses the outdated OAuth2 and Moauth packages. Anna might try to make the AWS API accept either Moauth or Pocomoauth, to make the transition easier for Azure users.

In contrast, the meaning of a semver "major version bump" is far less tangible and exerts none of the same design pressure on authors. Yes, this approach creates a bit more work for authors, but that work is justified by the significant benefits it delivers to users. In general, this tradeoff makes sense because packages aim to have many more users than authors, and every package hopes to have at least as many users as it has authors.

Semantic import versioning

The previous section shows how import versioning leads to simple, predictable builds during updates. But choosing a brand-new name for every backwards-incompatible change is hard on users and does nothing to help them. Given a choice between OAuth2 and Pocoauth, which should Amy use? There is no way to know without further investigation. Semantic versioning, in contrast, makes this easy: OAuth2 2.0.0 is obviously the intended replacement for OAuth2 1.0.0.

We can use semantic versioning and follow the import compatibility rule by including the major version in the import path. Instead of inventing a cute but unrelated new name like Pocoauth, Alice can call her new API OAuth2 2.0.0, with the new import path "oauth2/v2". The same goes for Moe: Moauth 2.0.0 (imported as "moauth/v2") can be the helper package for OAuth2 2.0.0, just as Moauth 1.0.0 was the helper package for OAuth2 1.0.0.

When Ugo adds Zeta support in Chapter 2, his build looks like this:

Because "moauth" and "moauth/v2" are simply different packages, Ugo knows exactly what to do when he needs Azure's "moauth"-based API and AWS's "moauth/v2"-based API in the same program: import both.

For compatibility with existing Go usage, and as a small incentive against making backwards-incompatible API changes at all, I'll assume that major version 1 is omitted from the import path: import "moauth", not "moauth/v1". Major version 0, which explicitly makes no compatibility promises, is likewise omitted from the import path. The idea is that by using a v0 dependency, the user explicitly acknowledges the possibility of breakage and takes responsibility for dealing with it when choosing to update. (Of course, it is then important that updates not happen automatically; we'll see in the next post how minimal version selection helps with that.)
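For illustration only (this helper is not part of vgo or any real tool), the path convention just described can be written down directly:

```go
package main

import "fmt"

// versionedImportPath applies the semantic import versioning rule:
// major versions 0 and 1 use the bare module path, while v2 and
// later append "/vN" to the path.
func versionedImportPath(modPath string, major int) string {
	if major <= 1 {
		return modPath
	}
	return fmt.Sprintf("%s/v%d", modPath, major)
}

func main() {
	fmt.Println(versionedImportPath("moauth", 0)) // moauth (v0: no promises)
	fmt.Println(versionedImportPath("moauth", 1)) // moauth (v1: stable, bare path)
	fmt.Println(versionedImportPath("moauth", 2)) // moauth/v2
}
```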

Functional names and immutable meanings

Twenty years ago, Rob Pike and I were modifying the internals of the Plan 9 C library, and Rob taught me a rule of thumb: when you change a function's behavior, you also change its name. The old name had one meaning. By giving the new meaning a new name and deleting the old name, we made sure the compiler would complain loudly about every piece of code that needed to be examined and updated, rather than silently compiling it incorrectly. And if people had their own programs using the function, they would get a compile-time failure instead of a long debugging session. In today's world of distributed version control, that last problem is magnified, making names even more important: old code written concurrently, expecting the old semantics, should not be silently given new semantics at merge time.

Of course, deleting an old function only works when you can find all its uses, or when users understand that it is their responsibility to keep up with changes, as in a research system like Plan 9. For exported APIs, it is usually much better to keep the old name and old behavior and only add a new name for the new behavior. In his 2016 talk "Spec-ulation," Rich Hickey made the point that this discipline of only adding new names and behaviors, never removing old names or redefining their meanings, is exactly the functional programming we encourage for individual variables and data structures, applied to API design. The functional approach brings clarity and predictability in small-scale programming, and it is even more effective when applied, as the import compatibility rule does, to whole APIs: dependency hell is really just mutability hell on a large scale. This was only a small point in the talk, but the whole talk is worth watching.

In the early days of go get, when people asked about making backwards-incompatible changes, our response, based on intuition built up over years of watching software change, was to give the import compatibility rule, but without a clear explanation of why this approach was better than not putting the major version in the import path. Go 1.2 added a FAQ entry about package versioning that gave this basic advice (unchanged as of Go 1.10):

Packages intended for public use should try to maintain backwards compatibility as they evolve. The Go 1 compatibility guidelines are a good reference here: don't remove exported names, encourage tagged composite literals, and so on. If different functionality is required, add a new name instead of changing an old one. If a complete break is required, create a new package with a new import path.

One motivation for this post is to show, through a clear and believable example, why following that rule matters so much.

Avoiding singleton problems

One common objection to the semantic import versioning approach is that package authors today expect there to be only one copy of their package in any given build. Allowing multiple major versions of a package in a single program may cause problems from unintentionally duplicated singletons. An example is registering an HTTP handler: if my/thing registers an HTTP handler for /debug/my/thing, then having two copies of the package will lead to duplicate registration, which panics. Another problem would be having two HTTP stacks in one program: clearly only one of them can listen on port 80, and we would not want half the program's handlers registered with a stack that will never be used. Go developers already run into problems like this with vendored packages today.

Migrating to vgo and semantic import versioning clarifies and improves the current situation. Instead of the uncontrolled duplication caused by vendoring, package authors get a guarantee that only one instance of each major version of their package exists in a build. And by putting the major version in the import path, it should be clearer to authors that my/thing and my/thing/v2 are different packages that need to be able to coexist. Maybe that means v2 exports its debugging information on /debug/my/thing/v2. Or maybe it means coordinating: perhaps v2 takes responsibility for registering the handler but provides a hook for v1 to supply information to display on the page. That would mean my/thing importing my/thing/v2, or vice versa; with distinct import paths, that is straightforward and easy to understand. In contrast, if v1 and v2 were both named my/thing, it would be hard even to comprehend what it would mean for a package to import its own import path and get the other meaning.

Automatic API Updates

A key reason to allow both v1 and v2 of a package to coexist in a large program is to let the clients of that package upgrade one at a time while keeping the overall program building. This is a concrete instance of the more general problem of gradual code repair. (See my 2016 article, "Codebase Refactoring (with help from Go)," for more on that problem.)

Beyond keeping programs building, semantic import versioning has a significant advantage for gradual code repair, which I hinted at in earlier sections: one major version of a package can be imported by, and implemented in terms of, another. It is trivial to write the v1 API as a wrapper around the v2 implementation, or vice versa. This lets the versions share code and, through appropriate use of type aliases, even lets clients of v1 and v2 interoperate. It may also help resolve a key technical problem in defining automatic API updates.

Before Go 1, we relied heavily on users running go fix on programs that no longer compiled after updating to a new Go release. But updating code that no longer compiles rules out most of our program analysis tools, which require their inputs to be valid programs. And we have always wondered how to let the authors of Go packages outside the standard library supply fixes specific to their own API updates. Being able to name and work with multiple incompatible versions of a package in a single program suggests a possible answer: if a v1 API function can be implemented as a wrapper around the v2 API, the wrapper implementation can double as the fix specification. For example, suppose v1 of an API has functions EnableFoo and DisableFoo, and v2 replaces them with a single SetFoo(enabled bool). After v2 is released, v1 can be implemented as a wrapper around v2:

```go
package p // v1

import v2 "p/v2"

func EnableFoo() {
	//go:fix
	v2.SetFoo(true)
}

func DisableFoo() {
	//go:fix
	v2.SetFoo(false)
}
```

The special //go:fix comment would indicate to tools that the wrapper body should be inlined into the caller. Running go fix would then rewrite calls to v1's EnableFoo into calls to v2's SetFoo(true). The rewrite is easy to specify and type-check, because it is written in ordinary Go code. Better yet, the rewrite is clearly safe: v1's EnableFoo does nothing but call v2's SetFoo(true), so rewriting the call obviously does not change the meaning of the program.

With some limited symbolic execution, it might even be reasonable to fix API changes in the other direction, from v1's SetFoo to v2's EnableFoo and DisableFoo. The v1 SetFoo implementation would read:

```go
package q // v1

import v2 "q/v2"

func SetFoo(enabled bool) {
	if enabled {
		//go:fix
		v2.EnableFoo()
	} else {
		//go:fix
		v2.DisableFoo()
	}
}
```

go fix would then rewrite SetFoo(true) to EnableFoo() and SetFoo(false) to DisableFoo(). This kind of fix would even apply to API updates within a single major version: for example, v1 might deprecate (but keep) SetFoo while introducing EnableFoo and DisableFoo, and the same fix would help callers move off the deprecated API.

To be clear, none of this is implemented today, but it seems promising, and it is giving different things different names that makes tools like this possible. These examples demonstrate the power of attaching a permanent, immutable meaning to a particular name. We simply have to follow the rule: when you change something, change its name.

Commitment to Compatibility

Semantic import versioning asks more of package authors: they can no longer simply decide to publish v2, walk away from v1, and leave users like Ugo to deal with the consequences. But package authors who do that are hurting their users. In my view, it is a good thing if the system makes it hard to hurt users, so that not hurting them is what authors do naturally.

More generally, Sam Boyer spoke at GopherCon 2017 about how package management mediates our social interactions, the ways we collaborate to build software. We get to decide: do we want to work in a community built around systems that optimize for compatibility, smooth transitions, and working well together? Or in a community built around systems that optimize for creating and describing incompatibility, where it is considered acceptable for authors to break their users' programs? Import versioning, and in particular handling semantic versioning by putting the semantic major version into the import path, is how we ensure we are working in the first kind of community.

Let's work on compatibility.
