I'd been looking for a suitable ingress controller to deploy to my EKS environment for years. I have various types of use cases around ingress, and I used to run multiple ingress controllers to serve different purposes:
- I need to expose my gRPC servers inside the VPC (EKS Service with NodePort)
- I need to expose my REST services inside the VPC (EKS Service with NodePort)
- I need to expose my REST services to the internet with SSL support + authentication (Kong Ingress Controller and NGINX Ingress Controller; these two types of ingress controller provide different auth types)
- I need to expose my MySQL-protocol-compatible service inside the VPC (EKS Service with NodePort)
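For the internet-facing REST case, the NGINX Ingress Controller side can look roughly like this. This is only a sketch: the annotation keys are the controller's standard basic-auth ones, but the Ingress name, the Secret name (which would hold an htpasswd file), and the backend Service name are hypothetical.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rest-basic-auth   # hypothetical name
  annotations:
    # NGINX Ingress Controller basic-auth annotations;
    # "basic-auth" is a hypothetical Secret containing an htpasswd file.
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rest-svc   # hypothetical backend Service
                port:
                  number: 80
```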
Now I've met the AWS Load Balancer Controller (v2). I used to use the ALB Ingress Controller (v1), which didn't meet my use cases; one big problem with v1 was that it couldn't watch Services across namespaces. Version 2 has improved significantly: I can use it to create both NLBs and ALBs, either internal or internet-facing.
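To illustrate the ALB side of the controller, here is a sketch of an internet-facing ALB created from an Ingress. The ingress class and annotation keys come from the AWS Load Balancer Controller; the Ingress and Service names are hypothetical.

```yaml
# Sketch: the AWS Load Balancer Controller (v2) provisions an internet-facing ALB
# for this Ingress and routes to pod IPs directly (target-type: ip).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rest-external   # hypothetical name
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-rest-svc   # hypothetical backend Service
                port:
                  number: 80
```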
Use case 1: I have a service written in gRPC that exposes both a gRPC service and a REST service (the REST service comes for free when I use gRPC + Google Cloud Endpoints). I want to expose it inside the VPC so that other services can use the endpoint, even though the pod IPs change whenever I redeploy. Here is the deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: acme-api
spec:
  selector:
    matchLabels:
      app: acme-api
  replicas: 1
  template:
    metadata:
      labels:
        app: acme-api
    spec:
      containers:
        - name: acme-api
          image: us.gcr.io/acme/acme-api:1
          imagePullPolicy: Always
          resources:
            requests:
              memory: 128Mi
          readinessProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - ps -ef | grep server | grep -v "grep"
            initialDelaySeconds: 8
            timeoutSeconds: 10
          livenessProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - ps -ef | grep server | grep -v "grep"
            initialDelaySeconds: 60
            timeoutSeconds: 10
          ports:
            - name: grpc
              containerPort: 8080
          volumeMounts:
            - name: config-volume
              mountPath: /app/.env
              subPath: .env
        - name: esp
          image: gcr.io/endpoints-release/endpoints-runtime:1
          imagePullPolicy: Always
          args: [
            "--http_port=9000",
            "--backend=grpc://127.0.0.1:8080",
            "--service={{.service_name}}",
            "--version={{.service_conf}}",
            "--transcoding_preserve_proto_field_names",
            "--transcoding_always_print_primitive_fields",
            "--service_account_key=/etc/nginx/creds/google-service-account-key-name"
          ]
          ports:
            - name: http
              containerPort: 9000
          volumeMounts:
            - mountPath: /etc/nginx/creds
              name: google-service-account-key
              readOnly: true
      volumes:
        - name: config-volume
          configMap:
            name: acme-cm
        - name: google-service-account-key
          secret:
            secretName: google-service-account-key
The Service YAML:
apiVersion: v1
kind: Service
metadata:
  name: acme-api
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
spec:
  type: LoadBalancer
  ports:
    - name: grpc
      port: 8080
      targetPort: 8080
    - name: http
      port: 9000
      targetPort: 9000
  selector:
    app: acme-api
I deploy a LoadBalancer-type Service with three annotations. The AWS Load Balancer Controller will create an NLB for me and forward any request on port 8080 or 9000 directly to my pods. The annotation service.beta.kubernetes.io/aws-load-balancer-scheme: "internal" means the NLB will be private. Once the NLB is created, I can use its DNS name as the service endpoint in my source code; whenever my deployment/pods get redeployed, the NLB's target group refreshes the registered IP addresses automatically. So I can say goodbye to NodePort + customized DNS names for my node IPs.
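As an illustration, a consumer could receive the NLB's DNS name through an environment variable and dial it from its client code. This is a sketch: the container spec, variable name, and DNS value below are hypothetical; you would substitute the DNS name of the NLB the controller actually created.

```yaml
# Hypothetical consumer pod snippet: the gRPC client reads ACME_API_ENDPOINT.
containers:
  - name: consumer
    image: us.gcr.io/acme/consumer:1   # hypothetical image
    env:
      - name: ACME_API_ENDPOINT
        # hypothetical value; use the real internal NLB DNS name here
        value: "internal-acme-api-example.elb.us-west-2.amazonaws.com:8080"
```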