Kubernetes has an operator pattern: an operator is meant to be developed by a domain expert, combining Kubernetes automation with domain knowledge to solve specific problems. CoreOS also developed the Operator Framework to help build Kubernetes operators, but unfortunately it is written in Go, which I don't know. Luckily, I came across Kopf while reading some technical articles: it's a Kubernetes operator framework written in Python, and it's super easy to use.
I'm in charge of a complex gRPC microservice-based data processing system deployed to AWS EKS. The Kubernetes cluster does a fantastic job of service orchestration, but one challenge I face is service discovery. All the gRPC services live in the same EKS cluster, and I rely on a Kubernetes Service as a static gRPC endpoint. This works very well: every time I redeploy a microservice, I don't need to care about the pods' IPs, since the Service load-balances requests to the live pods out of the box. But the Service is just a random IP within the cluster, and it's not easy to tell which IP maps to which service, so we assign each Service IP a meaningful DNS name and use that DNS name in the code as the gRPC endpoint. For example, I have a service `hello-world` like below:
```
NAME          TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-world   NodePort   10.108.222.86   <none>        8000:31022/TCP   15s
```
Every time I deploy a service into the Kubernetes cluster, I have to add a DNS record to Route53:

```
kube-hello-world.vipmind.me -> 10.108.222.86
```
Before I wrote this Kubernetes operator, I had to do that manually or write a Terraform script to apply it. I always wondered whether there was a way to do it automatically.
When I met Kopf, I thought it was time to open that door. My requirements are simple:

- when a new service is deployed to my EKS cluster, create a DNS entry `kube-${service-name}.vipmind.me` pointing to the service IP
- when a service is deleted from my EKS cluster, delete the DNS entry `kube-${service-name}.vipmind.me` pointing to the service IP
I achieved this with only 87 lines of Python code in `kube_service_dns_exporter.py`. Let's break them down to explain the details:
- use the decorator `@kopf.on.create('', 'v1', 'services')` to tell Kopf that I want to monitor API group `''` (or `'core'`), version `'v1'`, kind `'services'`; when any service is created, the function `create_fn` will be called
- the parameters of the handler function are all keyword arguments, and I can get the service metadata, spec, etc. from them
- inside `create_fn` I can extract the service name from `meta['name']` and the service IP from `spec['clusterIP']`
- I pass `service_name` and `service_ip` into the function `route53_dns` to create the DNS record in Route53
- in `route53_dns`, I use boto3's Route53 client to create the DNS record; boto3 is stable now, and it is recommended over the legacy boto for new projects
- after creating the DNS record, I can use the `get_change` function to monitor the propagation of the change
- we can also use the `logger` parameter of `create_fn` to attach events to the Kubernetes service's event list; for example, `logger.info(f"created dns {result[0]} point to {spec['clusterIP']}")` attaches an event like the ones below (a sketch of the whole create path follows the output)
```
kubectl describe svc hello-world -n test
Name:                     hello-world
Namespace:                test
Labels:                   <none>
Annotations:              kopf.zalando.org/last-handled-configuration={"spec": {"ports": [{"name": "web", "protocol": "TCP", "port": 8000, "targetPort": 8000, "nodePort": 31022}], "selector": {"app": "hello-world"}, "clusterIP...
                          kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"hello-world","namespace":"test"},"spec":{"ports":[{"name":"web","port":8000,"p...
Selector:                 app=hello-world
Type:                     NodePort
IP:                       10.108.222.86
Port:                     web  8000/TCP
TargetPort:               8000/TCP
NodePort:                 web  31022/TCP
Endpoints:                172.17.0.4:8000
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason   Age  From  Message
  ----    ------   ---  ----  -------
  Normal  Logging  41s  kopf  creating dns for service hello-world which point to ip 10.108.222.86
  Normal  Logging  4s   kopf  route53 record {'ResponseMetadata': {'RequestId': '4585fae2-2b01-4d5e-b77b-72e628ed1860', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': '4585fae2-2b01-4d5e-b77b-72e628ed1860', 'content-type': 'text/xml', 'content-length': '336', 'date': 'Mon, 07 Oct 2019 09:34:11 GMT'}, 'RetryAttempts': 0}, 'ChangeInfo': {'Id': '/change/C2QQJVYW684DNC', 'Status': 'INSYNC', 'SubmittedAt': datetime.datetime(2019, 10, 7, 9, 33, 36, 564000, tzinfo=tzutc()), 'Comment': 'CREATE dns record for kube-hello-world.vipmind.me point to 10.108.222.86'}}
  Normal  Logging  4s   kopf  created dns kube-hello-world.vipmind.me point to 10.108.222.86
  Normal  Logging  4s   kopf  All handlers succeeded for creation.
  Normal  Logging  4s   kopf  Handler 'create_fn' succeeded.
```
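Putting the bullets above together, here is a minimal sketch of the create path. The decorator, the `route53_dns` helper name, the `get_change` polling, the log messages, and the environment variable names come straight from the walkthrough and the local-run command below; the helper's exact signature, the `A` record type, and the TTL are my assumptions (the original's `result[0]` also suggests the helper returns a sequence, while this sketch returns the DNS name directly), so treat it as an illustration rather than the author's 87-line file.

```python
import os
import time

import boto3
import kopf

# environment variables from the local-run command shown later
hosted_zone_id = os.environ['hosted_zone_id']
domain_name = os.environ['domain_name']
domain_prefix = os.environ['domain_prefix']


def route53_dns(action, service_name, service_ip, logger):
    # assumed signature; 'action' is 'CREATE' or 'DELETE'
    client = boto3.client(
        'route53',
        aws_access_key_id=os.environ['aws_access_key_id'],
        aws_secret_access_key=os.environ['aws_secret_access_key'],
    )
    dns_name = f"{domain_prefix}-{service_name}.{domain_name}"
    result = client.change_resource_record_sets(
        HostedZoneId=hosted_zone_id,
        ChangeBatch={
            'Comment': f"{action} dns record for {dns_name} point to {service_ip}",
            'Changes': [{
                'Action': action,
                'ResourceRecordSet': {
                    'Name': dns_name,
                    'Type': 'A',  # assumption: a plain A record
                    'TTL': 300,   # assumption
                    'ResourceRecords': [{'Value': service_ip}],
                },
            }],
        },
    )
    # use get_change to monitor the propagation of the change
    change_id = result['ChangeInfo']['Id']
    while client.get_change(Id=change_id)['ChangeInfo']['Status'] != 'INSYNC':
        time.sleep(5)
    logger.info(f"route53 record {result}")
    return dns_name


@kopf.on.create('', 'v1', 'services')
def create_fn(meta, spec, logger, **kwargs):
    service_name = meta['name']
    service_ip = spec['clusterIP']
    logger.info(f"creating dns for service {service_name} which point to ip {service_ip}")
    dns_name = route53_dns('CREATE', service_name, service_ip, logger)
    logger.info(f"created dns {dns_name} point to {service_ip}")
```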
- use the decorator `@kopf.on.delete('', 'v1', 'services')` to tell Kopf that I want to monitor API group `''` (or `'core'`), version `'v1'`, kind `'services'`; when any service is deleted, the function `delete_fn` will be called, as in the sketch below
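A minimal sketch of the matching delete handler, reusing the hypothetical `route53_dns` helper from the create sketch above with a `'DELETE'` action:

```python
@kopf.on.delete('', 'v1', 'services')
def delete_fn(meta, spec, logger, **kwargs):
    service_name = meta['name']
    service_ip = spec['clusterIP']
    # Route53 requires the record set to match on DELETE,
    # so the service IP is still needed here
    dns_name = route53_dns('DELETE', service_name, service_ip, logger)
    logger.info(f"deleted dns {dns_name} point to {spec['clusterIP']}")
```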
To test the operator locally, we can run the command below to start the operator and then use minikube for the integration test:
```
export hosted_zone_id=xxxxxxxxxxxx && \
export domain_name=vipmind.me && \
export domain_prefix=kube && \
export aws_access_key_id=xxxxxxxxxxxxxxx && \
export aws_secret_access_key=xxxxxxxxxxxxxxxxxxxx && \
kopf run kube_service_dns_exporter.py --verbose
```
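Note that `kopf run` executes the operator outside the cluster: it talks to whichever cluster your current kubeconfig context points at, so switching kubectl to the minikube context should be enough for the integration test.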
Check out this article to see how to deploy the operator into a Kubernetes cluster.