10.3 Compiling and Packaging the Adapter

Next, we walk through five steps to compile and package the adapter as a Docker image and deploy it to a Kubernetes cluster so that Istio can use the adapter's functionality.

Step 1: Create a Docker image for the adapter

Let's create a Docker image that packages the adapter, using the following Dockerfile:


FROM golang:1.11 as builder
WORKDIR .
COPY ./ .
RUN CGO_ENABLED=0 GOOS=linux \
  go build -a -installsuffix cgo -v -o bin/mygrpcadapter ./src/istio.io/istio/mixer/adapter/mygrpcadapter/cmd/

FROM alpine:3.8
RUN apk --no-cache add ca-certificates
WORKDIR /bin/
COPY --from=builder /go/bin/mygrpcadapter .
ENTRYPOINT [ "/bin/mygrpcadapter" ]
CMD [ "53814" ]
EXPOSE 53814
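Note the ENTRYPOINT/CMD split above: the listen port ("53814") reaches the adapter binary as its first command-line argument, so it can be overridden at `docker run` time without rebuilding the image. A minimal sketch of how an entrypoint might turn that argument into a TCP listen address (`listenAddrFromArgs` is an illustrative helper, not the adapter's actual code):

```go
package main

import (
	"fmt"
	"net"
)

// listenAddrFromArgs mirrors the Dockerfile contract: the CMD value
// ("53814") arrives as args[1] and becomes the gRPC listen port.
// Hypothetical helper for illustration only.
func listenAddrFromArgs(args []string) string {
	port := "0" // fall back to an OS-assigned port if no argument is given
	if len(args) > 1 {
		port = args[1]
	}
	return ":" + port
}

func main() {
	addr := listenAddrFromArgs([]string{"/bin/mygrpcadapter", "53814"})
	fmt.Println("listen address:", addr) // prints "listen address: :53814"
	if ln, err := net.Listen("tcp", addr); err == nil {
		// the real adapter would serve its gRPC server on this listener
		ln.Close()
	}
}
```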

Build the Docker image and push it to an image registry with the following commands:


docker build -t osswangxining/mygrpcadapter4auth .
docker push osswangxining/mygrpcadapter4auth

Step 2: Deploy the adapter as an in-cluster service

In this mode, the adapter runs in Kubernetes as an in-cluster service named mygrpcadapterservice. Save the following content to a file named cluster_service.yaml:


apiVersion: v1
kind: Service
metadata:
  name: mygrpcadapterservice
  namespace: istio-system
  labels:
    app: mygrpcadapter
spec:
  type: ClusterIP
  ports:
  - name: grpc
    protocol: TCP
    port: 53814
    targetPort: 53814
  selector:
    app: mygrpcadapter
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mygrpcadapter
  namespace: istio-system
  labels:
    app: mygrpcadapter
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mygrpcadapter
      annotations:
        sidecar.istio.io/inject: "false"
        scheduler.alpha.kubernetes.io/critical-pod: ""
    spec:
      containers:
      - name: mygrpcadapter
        image: osswangxining/mygrpcadapter4auth:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 53814

Deploy the adapter service with the following command:


kubectl apply -f cluster_service.yaml

Step 3: Deploy the adapter configuration to Istio

Deploy the configuration definitions required by the adapter to the Kubernetes cluster where Istio is installed by running the following commands:


kubectl apply -f $MIXER_REPO/adapter/mygrpcadapter/testdata/attributes.yaml 
kubectl apply -f $MIXER_REPO/adapter/mygrpcadapter/testdata/template.yaml
kubectl apply -f $MIXER_REPO/adapter/mygrpcadapter/testdata/mygrpcadapter.yaml

Edit $MIXER_REPO/adapter/mygrpcadapter/testdata/sample_operator_cfg.yaml and change the service connection address, as shown below:


spec:
 adapter: mygrpcadapter
 connection:
   #address: "[::]:53814"
   address: "mygrpcadapterservice:53814"

Deploy the adapter configuration by executing the following command:


kubectl apply -f $MIXER_REPO/adapter/mygrpcadapter/testdata/sample_operator_cfg.yaml

Then check the Mixer container logs; you should see Mixer connect to the adapter service:


kubectl -n istio-system logs $(kubectl -n istio-system get pods -l istio=mixer -o jsonpath='{.items[0].metadata.name}') -c mixer
2019-02-17T06:46:29.068210Z info  grpcAdapter Connected to: mygrpcadapterservice:53814
2019-02-17T06:46:29.068320Z info  ccResolverWrapper: sending new addresses to cc: [{mygrpcadapterservice:53814 0  <nil>}]
2019-02-17T06:46:29.068337Z info  ClientConn switching balancer to "pick_first"
2019-02-17T06:46:29.068402Z info  pickfirstBalancer: HandleSubConnStateChange: 0xc4257c7de0, CONNECTING
2019-02-17T06:46:29.070540Z info  pickfirstBalancer: HandleSubConnStateChange: 0xc4257c7de0, READY

Step 4: Deploy a sample application to verify the adapter

As in earlier chapters, we use the httpbin and sleep services as the sample application. Deploy them with the following commands:


$ cd mixer/custom-adapter
$ kubectl create -f httpbin.yaml
$ kubectl create -f sleep.yaml

Once the sample application is up and running, log in to the sleep container and send a request whose header satisfies the condition, i.e., the x-custom-token request header is set to abc. You will get the following result:


/ #  curl -vk -H "x-custom-token: abc" httpbin:8000/get
*   Trying 172.19.3.252...
* TCP_NODELAY set
* Connected to httpbin (172.19.3.252) port 8000 (#0)
> GET /get HTTP/1.1
> Host: httpbin:8000
> User-Agent: curl/7.60.0
> Accept: */*
> x-custom-token: abc
>
< HTTP/1.1 200 OK
< server: envoy
< date: Sun, 17 Feb 2019 06:55:00 GMT
< content-type: application/json
< access-control-allow-origin: *
< access-control-allow-credentials: true
< content-length: 410
< x-envoy-upstream-service-time: 7
<
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Content-Length": "0",
    "Host": "httpbin:8000",
    "User-Agent": "curl/7.60.0",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "ed235048e72e7706",
    "X-B3-Traceid": "ed235048e72e7706",
    "X-Custom-Token": "abc",
    "X-Request-Id": "40f5f583-7a9b-9db6-ac6f-6aafdca6bc24"
  },
  "origin": "127.0.0.1",
  "url": "http://httpbin:8000/get"
}

Now send a request whose header does not satisfy the condition, i.e., the x-custom-token request header is set to a value other than abc. You will get the following result:


#  curl -vk -H "x-custom-token: abc2" httpbin:8000/get
*   Trying 172.19.3.252...
* TCP_NODELAY set
* Connected to httpbin (172.19.3.252) port 8000 (#0)
> GET /get HTTP/1.1
> Host: httpbin:8000
> User-Agent: curl/7.60.0
> Accept: */*
> x-custom-token: abc2
>
< HTTP/1.1 403 Forbidden
< content-length: 69
< content-type: text/plain
< date: Sun, 17 Feb 2019 06:55:11 GMT
< server: envoy
< x-envoy-upstream-service-time: 5
<
* Connection #0 to host httpbin left intact
PERMISSION_DENIED:myhandler4auth.handler.istio-system:Unauthorized...

Similarly, if the required header is not sent at all, i.e., the request contains no x-custom-token header, you will get the following result:


#  curl -vk  httpbin:8000/get
*   Trying 172.19.3.252...
* TCP_NODELAY set
* Connected to httpbin (172.19.3.252) port 8000 (#0)
> GET /get HTTP/1.1
> Host: httpbin:8000
> User-Agent: curl/7.60.0
> Accept: */*
>
< HTTP/1.1 403 Forbidden
< content-length: 69
< content-type: text/plain
< date: Sun, 17 Feb 2019 06:59:45 GMT
< server: envoy
< x-envoy-upstream-service-time: 4
<
* Connection #0 to host httpbin left intact
PERMISSION_DENIED:myhandler4auth.handler.istio-system:Unauthorized...

In the adapter container's logs, you can see messages similar to the following:


2019-02-17T07:05:54.199905Z info  abc
2019-02-17T07:05:54.199918Z info  map[custom_token_header:]
k: custom_token_header v:
2019-02-17T07:05:54.199923Z info  failure; header not provided
2019-02-17T07:06:15.534122Z info  received request {&InstanceMsg{Subject:&SubjectMsg{User:,Groups:,Properties:map[string]*istio_policy_v1beta11.Value{custom_token_header: &Value{Value:&Value_StringValue{StringValue:abc2,},},},},Action:nil,Name:mycheck.instance.istio-system,} &Any{TypeUrl:type.googleapis.com/adapter.mygrpcadapter.config.Params,Value:[10 3 97 98 99],XXX_unrecognized:[],} 15226769987886589660}

2019-02-17T07:06:15.534153Z info  abc
2019-02-17T07:06:15.534165Z info  map[custom_token_header:abc2]
k: custom_token_header v: abc2
2019-02-17T07:06:15.534170Z info  failure; header not provided
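The log lines above reflect the adapter's check: a request is authorized only when the custom_token_header property carries the token configured for the handler ("abc" here). Note that the adapter logs the same "failure; header not provided" message whether the header is missing or carries the wrong value. The decision can be sketched as follows (`checkToken` is an illustrative helper, not the adapter's actual function; the real code reads these values from the Authorization template instance and the handler params):

```go
package main

import "fmt"

// checkToken reproduces the decision seen in the adapter logs: authorize
// only when the custom_token_header property equals the configured token.
func checkToken(configuredToken string, props map[string]string) (bool, string) {
	v, found := props["custom_token_header"]
	if !found || v != configuredToken {
		// same message for "missing" and "wrong value", as in the logs above
		return false, "failure; header not provided"
	}
	return true, "success"
}

func main() {
	for _, props := range []map[string]string{
		{"custom_token_header": "abc"},  // matches -> authorized
		{"custom_token_header": "abc2"}, // wrong value -> denied
		{},                              // header absent -> denied
	} {
		ok, reason := checkToken("abc", props)
		fmt.Println(ok, reason)
	}
}
```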

Step 5: Clean up the environment

At this point, all the steps for developing a custom adapter and packaging it into an Istio environment are complete. To prepare for the following chapters, you can now delete the entire cluster or fully revert the changes, as shown below:


cd $MIXER_REPO/adapter/mygrpcadapter/testdata/
kubectl delete -f .

attributemanifest.config.istio.io "istio-proxy" deleted
attributemanifest.config.istio.io "kubernetes" deleted
adapter.config.istio.io "mygrpcadapter" deleted
handler.config.istio.io "myhandler4auth" deleted
instance.config.istio.io "mycheck" deleted
rule.config.istio.io "myrule" deleted
template.config.istio.io "authorization" deleted