Kubeadm Source Analysis (kubernetes offline installation package, three-step installation)


K8s offline installation package: three-step installation, unbelievably simple.

Kubeadm Source Code Analysis

To be honest, the kubeadm code is sincere, but the quality is not very high.

A few key points first, summarizing the core things kubeadm does:

    • kubeadm generates certificates under /etc/kubernetes/pki
    • kubeadm generates static pod YAML configurations, all under /etc/kubernetes/manifests
    • kubeadm generates the kubelet configuration, kubectl configuration, etc. under /etc/kubernetes
    • kubeadm starts DNS via client-go

Kubeadm Init

The code entry point is cmd/kubeadm/app/cmd/init.go; I suggest reading up on Cobra first.

Find the Run function and follow the main flow:

    1. If the certificates do not exist, create them. So if we have our own certificates we can put them under /etc/kubernetes/pki; how the certificates are generated is examined more closely below.

    if res, _ := certsphase.UsingExternalCA(i.cfg); !res {
        if err := certsphase.CreatePKIAssets(i.cfg); err != nil {
            return err
        }
    }

    2. Create the kubeconfig files:

    if err := kubeconfigphase.CreateInitKubeConfigFiles(kubeConfigDir, i.cfg); err != nil {
        return err
    }

    3. Create the manifest files; etcd, apiserver, controller-manager and scheduler are all created here. Note that if you have written an etcd address in the configuration file, the local etcd is not created, so we can point kubeadm at our own etcd cluster instead of the default single-node etcd. Very useful.

    controlplanephase.CreateInitStaticPodManifestFiles(manifestDir, i.cfg)
    if len(i.cfg.Etcd.Endpoints) == 0 {
        if err := etcdphase.CreateLocalEtcdStaticPodManifestFile(manifestDir, i.cfg); err != nil {
            return fmt.Errorf("error creating local etcd static pod manifest file: %v", err)
        }
    }

    4. Wait for the apiserver and kubelet to start successfully. This is where we often hit the image pull error; in fact, kubelet sometimes reports this error for other reasons too, misleading people into thinking the image cannot be pulled.

    if err := waitForAPIAndKubelet(waiter); err != nil {
        ctx := map[string]string{
            "Error":                  fmt.Sprintf("%v", err),
            "APIServerImage":         images.GetCoreImage(kubeadmconstants.KubeAPIServer, i.cfg.GetControlPlaneImageRepository(), i.cfg.KubernetesVersion, i.cfg.UnifiedControlPlaneImage),
            "ControllerManagerImage": images.GetCoreImage(kubeadmconstants.KubeControllerManager, i.cfg.GetControlPlaneImageRepository(), i.cfg.KubernetesVersion, i.cfg.UnifiedControlPlaneImage),
            "SchedulerImage":         images.GetCoreImage(kubeadmconstants.KubeScheduler, i.cfg.GetControlPlaneImageRepository(), i.cfg.KubernetesVersion, i.cfg.UnifiedControlPlaneImage),
        }
        kubeletFailTempl.Execute(out, ctx)
        return fmt.Errorf("couldn't initialize a Kubernetes cluster")
    }

    5. Label the master node and add the master taint. So if you want pods to be scheduled onto the master, remove the taint.

    if err := markmasterphase.MarkMaster(client, i.cfg.NodeName); err != nil {
        return fmt.Errorf("error marking master: %v", err)
    }

    6. Generate the bootstrap token:

    if err := nodebootstraptokenphase.UpdateOrCreateToken(client, i.cfg.Token, false, i.cfg.TokenTTL.Duration, kubeadmconstants.DefaultTokenUsages, []string{kubeadmconstants.NodeBootstrapTokenAuthGroup}, tokenDescription); err != nil {
        return fmt.Errorf("error updating or creating token: %v", err)
    }

    7. Call client-go to create DNS and kube-proxy:

    if err := dnsaddonphase.EnsureDNSAddon(i.cfg, client); err != nil {
        return fmt.Errorf("error ensuring dns addon: %v", err)
    }
    if err := proxyaddonphase.EnsureProxyAddon(i.cfg, client); err != nil {
        return fmt.Errorf("error ensuring proxy addon: %v", err)
    }

One criticism: the code runs through this process without any abstraction. If the author had defined an interface (say RenderConf / Save / Run / Clean) and had the DNS, kube-proxy and other components implement it, problems would be much easier to isolate, for example whether the DNS and kube-proxy configurations were rendered correctly, or whether something else kept the static pods from running. The bug in join is mentioned below.

Certificate generation

These functions are called in a loop; we only need to look at one or two of them, the rest are much the same:

certActions := []func(cfg *kubeadmapi.MasterConfiguration) error{
    CreateCACertAndKeyfiles,
    CreateAPIServerCertAndKeyFiles,
    CreateAPIServerKubeletClientCertAndKeyFiles,
    CreateServiceAccountKeyAndPublicKeyFiles,
    CreateFrontProxyCACertAndKeyFiles,
    CreateFrontProxyClientCertAndKeyFiles,
}

Root certificate generation:

// Returns the root certificate's public and private keys
func NewCACertAndKey() (*x509.Certificate, *rsa.PrivateKey, error) {
    caCert, caKey, err := pkiutil.NewCertificateAuthority()
    if err != nil {
        return nil, nil, fmt.Errorf("failure while generating CA certificate and key: %v", err)
    }
    return caCert, caKey, nil
}

There are two functions in the k8s.io/client-go/util/cert library: one generates the key, one the cert:

key, err := certutil.NewPrivateKey()
config := certutil.Config{
    CommonName: "kubernetes",
}
cert, err := certutil.NewSelfSignedCACert(config, key)

We can also fill in other certificate information in Config:

type Config struct {
    CommonName   string
    Organization []string
    AltNames     AltNames
    Usages       []x509.ExtKeyUsage
}

Private key generation is a thin wrapper around the standard RSA library:

import (
    "crypto/rsa"
    "crypto/x509"
)

func NewPrivateKey() (*rsa.PrivateKey, error) {
    return rsa.GenerateKey(cryptorand.Reader, rsaKeySize)
}

Since it is a self-signed certificate, the root certificate only carries the CommonName; Organization is effectively unset:

func NewSelfSignedCACert(cfg Config, key *rsa.PrivateKey) (*x509.Certificate, error) {
    now := time.Now()
    tmpl := x509.Certificate{
        SerialNumber: new(big.Int).SetInt64(0),
        Subject: pkix.Name{
            CommonName:   cfg.CommonName,
            Organization: cfg.Organization,
        },
        NotBefore:             now.UTC(),
        NotAfter:              now.Add(duration365d * 10).UTC(),
        KeyUsage:              x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
        BasicConstraintsValid: true,
        IsCA:                  true,
    }
    certDERBytes, err := x509.CreateCertificate(cryptorand.Reader, &tmpl, &tmpl, key.Public(), key)
    if err != nil {
        return nil, err
    }
    return x509.ParseCertificate(certDERBytes)
}

After creation it is written to file:

pkiutil.WriteCertAndKey(pkiDir, baseName, cert, key)
certutil.WriteCert(certificatePath, certutil.EncodeCertPEM(cert))

The encoding/pem library is called here to do the encoding:

import "encoding/pem"

func EncodeCertPEM(cert *x509.Certificate) []byte {
    block := pem.Block{
        Type:  CertificateBlockType,
        Bytes: cert.Raw,
    }
    return pem.EncodeToMemory(&block)
}

Then we look at the Apiserver certificate generation:

caCert, caKey, err := loadCertificateAuthorithy(cfg.CertificatesDir, kubeadmconstants.CACertAndKeyBaseName)
// Generate the apiserver certificate from the root certificate
apiCert, apiKey, err := NewAPIServerCertAndKey(cfg, caCert, caKey)

Pay attention to AltNames here, which is the important part: every domain name and address used to reach the master must be added, corresponding to the apiServerCertSANs field in the configuration file. Everything else is no different from the root certificate:

config := certutil.Config{
    CommonName: kubeadmconstants.APIServerCertCommonName,
    AltNames:   *altNames,
    Usages:     []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
}

Create a k8s configuration file

You can see these files being created:

return createKubeConfigFiles(
    outDir,
    cfg,
    kubeadmconstants.AdminKubeConfigFileName,
    kubeadmconstants.KubeletKubeConfigFileName,
    kubeadmconstants.ControllerManagerKubeConfigFileName,
    kubeadmconstants.SchedulerKubeConfigFileName,
)

k8s encapsulates two functions for rendering these configurations.
The difference is whether the kubeconfig file carries a token or a certificate. If you need a token, for example to enter the dashboard or call the API directly, use the token-based configuration.
The generated conf files differ basically only in things like ClientName, and therefore in the resulting certificate: the ClientName is embedded into the certificate, and k8s later uses it as the user when authorizing.

So the point is: this is what we have to do for multi-tenancy, then bind roles to the tenants.

return kubeconfigutil.CreateWithToken(
    spec.APIServer,
    "kubernetes",
    spec.ClientName,
    certutil.EncodeCertPEM(spec.CACert),
    spec.TokenAuth.Token,
), nil

return kubeconfigutil.CreateWithCerts(
    spec.APIServer,
    "kubernetes",
    spec.ClientName,
    certutil.EncodeCertPEM(spec.CACert),
    certutil.EncodePrivateKeyPEM(clientKey),
    certutil.EncodeCertPEM(clientCert),
), nil

Then the Config structure is filled in and finally written to file (details omitted):

import "k8s.io/client-go/tools/clientcmd/api"

return &clientcmdapi.Config{
    Clusters: map[string]*clientcmdapi.Cluster{
        clusterName: {
            Server:                   serverURL,
            CertificateAuthorityData: caCert,
        },
    },
    Contexts: map[string]*clientcmdapi.Context{
        contextName: {
            Cluster:  clusterName,
            AuthInfo: userName,
        },
    },
    AuthInfos:      map[string]*clientcmdapi.AuthInfo{},
    CurrentContext: contextName,
}

Create the static pod YAML files

This returns the pod specs for the apiserver, controller-manager and scheduler:

specs := GetStaticPodSpecs(cfg, k8sVersion)

staticPodSpecs := map[string]v1.Pod{
    kubeadmconstants.KubeAPIServer: staticpodutil.ComponentPod(v1.Container{
        Name:          kubeadmconstants.KubeAPIServer,
        Image:         images.GetCoreImage(kubeadmconstants.KubeAPIServer, cfg.GetControlPlaneImageRepository(), cfg.KubernetesVersion, cfg.UnifiedControlPlaneImage),
        Command:       getAPIServerCommand(cfg, k8sVersion),
        VolumeMounts:  staticpodutil.VolumeMountMapToSlice(mounts.GetVolumeMounts(kubeadmconstants.KubeAPIServer)),
        LivenessProbe: staticpodutil.ComponentProbe(cfg, kubeadmconstants.KubeAPIServer, int(cfg.API.BindPort), "/healthz", v1.URISchemeHTTPS),
        Resources:     staticpodutil.ComponentResources("250m"),
        Env:           getProxyEnvVars(),
    }, mounts.GetVolumes(kubeadmconstants.KubeAPIServer)),
    kubeadmconstants.KubeControllerManager: staticpodutil.ComponentPod(v1.Container{
        Name:          kubeadmconstants.KubeControllerManager,
        Image:         images.GetCoreImage(kubeadmconstants.KubeControllerManager, cfg.GetControlPlaneImageRepository(), cfg.KubernetesVersion, cfg.UnifiedControlPlaneImage),
        Command:       getControllerManagerCommand(cfg, k8sVersion),
        VolumeMounts:  staticpodutil.VolumeMountMapToSlice(mounts.GetVolumeMounts(kubeadmconstants.KubeControllerManager)),
        LivenessProbe: staticpodutil.ComponentProbe(cfg, kubeadmconstants.KubeControllerManager, 10252, "/healthz", v1.URISchemeHTTP),
        Resources:     staticpodutil.ComponentResources("200m"),
        Env:           getProxyEnvVars(),
    }, mounts.GetVolumes(kubeadmconstants.KubeControllerManager)),
    kubeadmconstants.KubeScheduler: staticpodutil.ComponentPod(v1.Container{
        Name:          kubeadmconstants.KubeScheduler,
        Image:         images.GetCoreImage(kubeadmconstants.KubeScheduler, cfg.GetControlPlaneImageRepository(), cfg.KubernetesVersion, cfg.UnifiedControlPlaneImage),
        Command:       getSchedulerCommand(cfg),
        VolumeMounts:  staticpodutil.VolumeMountMapToSlice(mounts.GetVolumeMounts(kubeadmconstants.KubeScheduler)),
        LivenessProbe: staticpodutil.ComponentProbe(cfg, kubeadmconstants.KubeScheduler, 10251, "/healthz", v1.URISchemeHTTP),
        Resources:     staticpodutil.ComponentResources("100m"),
        Env:           getProxyEnvVars(),
    }, mounts.GetVolumes(kubeadmconstants.KubeScheduler)),
}

// Gets the image for a specific version
func GetCoreImage(image, repoPrefix, k8sVersion, overrideImage string) string {
    if overrideImage != "" {
        return overrideImage
    }
    kubernetesImageTag := kubeadmutil.KubernetesVersionToImageTag(k8sVersion)
    etcdImageTag := constants.DefaultEtcdVersion
    etcdImageVersion, err := constants.EtcdSupportedVersion(k8sVersion)
    if err == nil {
        etcdImageTag = etcdImageVersion.String()
    }
    return map[string]string{
        constants.Etcd:                  fmt.Sprintf("%s/%s-%s:%s", repoPrefix, "etcd", runtime.GOARCH, etcdImageTag),
        constants.KubeAPIServer:         fmt.Sprintf("%s/%s-%s:%s", repoPrefix, "kube-apiserver", runtime.GOARCH, kubernetesImageTag),
        constants.KubeControllerManager: fmt.Sprintf("%s/%s-%s:%s", repoPrefix, "kube-controller-manager", runtime.GOARCH, kubernetesImageTag),
        constants.KubeScheduler:         fmt.Sprintf("%s/%s-%s:%s", repoPrefix, "kube-scheduler", runtime.GOARCH, kubernetesImageTag),
    }[image]
}

// Then the pod is written to file, relatively simple:
staticpodutil.WriteStaticPodToDisk(componentName, manifestDir, spec)

Etcd creation is the same; no need to repeat it.

Wait for Kubelet to start successfully

This error is very easy to hit. When you see it, the kubelet is basically not up, and we need to check three things: SELinux, swap, and whether the cgroup driver is consistent with Docker's:

    setenforce 0 && swapoff -a && systemctl restart kubelet

If that does not help, make sure the kubelet cgroup driver matches Docker's: docker info | grep Cg

go func(errC chan error, waiter apiclient.Waiter) {
    // This goroutine can only make kubeadm init fail. If this check succeeds, it won't do anything special
    if err := waiter.WaitForHealthyKubelet(40*time.Second, "http://localhost:10255/healthz"); err != nil {
        errC <- err
    }
}(errorChan, waiter)

go func(errC chan error, waiter apiclient.Waiter) {
    // This goroutine can only make kubeadm init fail. If this check succeeds, it won't do anything special
    if err := waiter.WaitForHealthyKubelet(60*time.Second, "http://localhost:10255/healthz/syncloop"); err != nil {
        errC <- err
    }
}(errorChan, waiter)

Creating DNS and kube-proxy

This is where I found CoreDNS:

if features.Enabled(cfg.FeatureGates, features.CoreDNS) {
    return coreDNSAddon(cfg, client, k8sVersion)
}
return kubeDNSAddon(cfg, client, k8sVersion)

The CoreDNS YAML configuration template is written directly in the code:
/app/phases/addons/dns/manifests.go

    CoreDNSDeployment = `
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      serviceAccountName: coredns
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - key: {{ .MasterTaintKey }}
...

Then the template is rendered and finally the k8s API is called to create the objects. This approach is worth learning from, although a bit clumsy; this part is far less elegant than kubectl.

coreDNSConfigMap := &v1.ConfigMap{}
if err := kuberuntime.DecodeInto(legacyscheme.Codecs.UniversalDecoder(), configBytes, coreDNSConfigMap); err != nil {
    return fmt.Errorf("unable to decode CoreDNS configmap %v", err)
}
// Create the ConfigMap for CoreDNS or update it in case it already exists
if err := apiclient.CreateOrUpdateConfigMap(client, coreDNSConfigMap); err != nil {
    return err
}
coreDNSClusterRoles := &rbac.ClusterRole{}
if err := kuberuntime.DecodeInto(legacyscheme.Codecs.UniversalDecoder(), []byte(CoreDNSClusterRole), coreDNSClusterRoles); err != nil {
    return fmt.Errorf("unable to decode CoreDNS clusterroles %v", err)
}
...

Worth mentioning: the kube-proxy ConfigMap really should allow the apiserver address to be customized, because high availability requires specifying a virtual IP, and having to modify it by hand is very troublesome.
kube-proxy is otherwise not much different; if you want to change it, look at: app/phases/addons/proxy/manifests.go

Kubeadm Join

Kubeadm join is relatively simple; one sentence sums it up: fetch the cluster info, create a kubeconfig (how to create one was covered under kubeadm init), and bring the token along so kubeadm has permission to pull.

return https.RetrieveValidatedClusterInfo(cfg.DiscoveryFile)

// Cluster info content
type Cluster struct {
    // LocationOfOrigin indicates where this object came from. It is used for
    // round tripping config post-merge, but never serialized.
    LocationOfOrigin string
    // Server is the address of the kubernetes cluster (https://hostname:port).
    Server string `json:"server"`
    // InsecureSkipTLSVerify skips the validity check for the server's certificate.
    // This will make your HTTPS connections insecure.
    // +optional
    InsecureSkipTLSVerify bool `json:"insecure-skip-tls-verify,omitempty"`
    // CertificateAuthority is the path to a cert file for the certificate authority.
    // +optional
    CertificateAuthority string `json:"certificate-authority,omitempty"`
    // CertificateAuthorityData contains PEM-encoded certificate authority certificates.
    // Overrides CertificateAuthority
    // +optional
    CertificateAuthorityData []byte `json:"certificate-authority-data,omitempty"`
    // Extensions holds additional information. This is useful for extenders so
    // that reads and writes don't clobber unknown fields
    // +optional
    Extensions map[string]runtime.Object `json:"extensions,omitempty"`
}

return kubeconfigutil.CreateWithToken(
    clusterinfo.Server,
    "kubernetes",
    TokenUser,
    clusterinfo.CertificateAuthorityData,
    cfg.TLSBootstrapToken,
), nil

CreateWithToken was covered above. With this we can generate the kubelet configuration file, and then just start the kubelet.

The problem with kubeadm join is that when rendering the configuration it does not use the apiserver address passed on the command line, but the address from the cluster-info, which is bad for high availability: we may pass in a virtual IP, yet the configuration still ends up with the apiserver's own address.
