10.4.2 Operating Mechanism

The istio-cni.yaml file used to install and configure the Istio CNI plugin serves five purposes.

1) A manifest that deploys the install-cni container as a DaemonSet, defined as follows:


kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: istio-node
  namespace: kube-system
  labels:
    k8s-app: istio-node
spec:
  selector:
    matchLabels:
      k8s-app: istio-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: istio-node
      annotations:
        # This, along with the CriticalAddonsOnly toleration below,
        # marks the pod as a critical add-on, ensuring it gets
        # priority scheduling and that its resources are reserved
        # if it ever gets evicted.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
      # Make sure istio-node gets scheduled on all nodes.
      - effect: NoSchedule
        operator: Exists
      # Mark the pod as a critical add-on for rescheduling.
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoExecute
        operator: Exists
      serviceAccountName: istio-cni
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      containers:
      # This container installs the Istio CNI binaries
      # and CNI network config file on each node.
      - name: install-cni
        image: docker.io/tiswanso/install-cni:v0.1-dev
        imagePullPolicy: Always
        command: ["/install-cni.sh"]
        env:
        # Name of the CNI config file to create.
        - name: CNI_CONF_NAME
          value: "10-flannel.conf"
        # The CNI network config to install on each node.
        - name: CNI_NETWORK_CONFIG
          valueFrom:
            configMapKeyRef:
              name: istio-cni-config
              key: cni_network_config
        volumeMounts:
        - mountPath: /host/opt/cni/bin
          name: cni-bin-dir
        - mountPath: /host/etc/cni/net.d
          name: cni-net-dir
      volumes:
      # Used to install CNI.
      - name: cni-bin-dir
        hostPath:
          path: /opt/cni/bin
      - name: cni-net-dir
        hostPath:
          path: /etc/cni/net.d

2) The configuration object istio-cni-config: a ConfigMap containing the CNI plugin's network configuration, defined as follows:


kind: ConfigMap
apiVersion: v1
metadata:
  name: istio-cni-config
  namespace: kube-system
data:
  # The CNI network configuration to add to the plugin chain on each node.  The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "type": "istio-cni",
      "log_level": "info",
      "kubernetes": {
        "kubeconfig": "__KUBECONFIG_FILEPATH__",
        "cni_bin_dir": "/opt/cni/bin",
        "exclude_namespaces": [ "istio-system" ]
      }
    }
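The special value __KUBECONFIG_FILEPATH__ above is filled in by the install script at runtime before the config is written to /etc/cni/net.d. A minimal Python sketch of that substitution step, assuming simple string replacement (the kubeconfig path used here is illustrative, not taken from the actual install-cni.sh):

```python
import json

# Hedged sketch: render the templated cni_network_config value by
# replacing the placeholder with the path of the generated kubeconfig.
# The path below is illustrative only.
cni_network_config = """{
  "type": "istio-cni",
  "log_level": "info",
  "kubernetes": {
    "kubeconfig": "__KUBECONFIG_FILEPATH__",
    "cni_bin_dir": "/opt/cni/bin",
    "exclude_namespaces": [ "istio-system" ]
  }
}"""

rendered = cni_network_config.replace(
    "__KUBECONFIG_FILEPATH__",
    "/etc/cni/net.d/istio-cni-kubeconfig",  # illustrative path
)
config = json.loads(rendered)  # the rendered text must still be valid JSON
```

Because the placeholder sits inside a quoted JSON string, a plain textual replacement is enough and the result stays parseable.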

3) A ClusterRole named istio-cni, defined as follows:


kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: istio-cni
rules:
- apiGroups: [""]
  resources:
  - pods
  - nodes
  verbs:
  - get

4) A ServiceAccount named istio-cni, defined as follows:


apiVersion: v1
kind: ServiceAccount
metadata:
  name: istio-cni
  namespace: kube-system

5) A ClusterRoleBinding that binds the ClusterRole istio-cni to the ServiceAccount istio-cni, so that the service account has permission to read pod information, defined as follows:


apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: istio-cni
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: istio-cni
subjects:
- kind: ServiceAccount
  name: istio-cni
  namespace: kube-system

As mentioned above, istio-cni.yaml deploys the install-cni container as a DaemonSet. So what exactly does each install-cni container contain, and what logic does it execute? This is the key part of the Istio CNI plugin. Specifically, it:

·Copies the istio-cni binary and the istio-iptables.sh script into the /opt/cni/bin directory.

·Creates a kubeconfig for the service account istio-cni.

·Injects the CNI plugin configuration into the config file named by the CNI_CONF_NAME environment variable. For example, if CNI_CONF_NAME is set to 10-flannel.conf, jq inserts the value of the CNI_NETWORK_CONFIG config item into the plugins list of /etc/cni/net.d/${CNI_CONF_NAME}.
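The jq-based injection in the last step can be approximated in Python as follows. This is a sketch of the merge logic only: the flannel config contents and the kubeconfig path are illustrative, and the real install-cni.sh operates on the actual file under /etc/cni/net.d:

```python
import json

# Hedged sketch: append the istio-cni config to the "plugins" list of an
# existing CNI network config, as jq does in install-cni.sh.
# The existing config below is an illustrative flannel example.
existing = {
    "name": "cbr0",
    "cniVersion": "0.3.1",
    "plugins": [
        {"type": "flannel", "delegate": {"isDefaultGateway": True}},
    ],
}

# Value of CNI_NETWORK_CONFIG after placeholder substitution
# (kubeconfig path is illustrative).
istio_cni = {
    "type": "istio-cni",
    "log_level": "info",
    "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/istio-cni-kubeconfig",
        "cni_bin_dir": "/opt/cni/bin",
        "exclude_namespaces": ["istio-system"],
    },
}

# Insert istio-cni at the end of the plugin chain and serialize the result.
existing["plugins"].append(istio_cni)
merged = json.dumps(existing, indent=2)
```

Placing istio-cni after the primary network plugin matters: by the time it runs, the pod's netns and interfaces already exist, so it only has to program the redirect rules.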

Note that, compared with other pod network controller approaches, the Istio CNI plugin approach effectively avoids the problem of synchronizing netns setup with pod initialization: Kubernetes will not start any containers in a new pod until the entire CNI plugin chain has completed successfully. Moreover, architecturally, a CNI plugin is the more appropriate component to be responsible for container runtime network setup.