Kubernetes study notes (Service)

2021-04-09 15:26

Service:

Ways to create a service:
kubectl expose creates a Service resource with the same pod selector as the one
used by the ReplicationController
kubectl create with a service spec written as a YAML manifest
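For the kubectl expose route, a hedged one-liner, assuming a ReplicationController named kubia whose pods listen on port 8080:

 kubectl expose rc kubia --port=80 --target-port=8080 

The manifest that follows is the kubectl create form.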

 apiVersion: v1 
 kind: Service 
 metadata: 
   name: kubia 
 spec: 
   ports: 
     - port: 80 
       targetPort: 8080 
   selector: 
     app: kubia 

kubectl exec [pod name] -- [command name] executes a command within the pod
A Kubernetes Service supports only two kinds of session affinity (None / ClientIP); since it operates on TCP/UDP packets, it knows nothing about cookies
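Session affinity is set directly in the service spec (the default is None); a minimal sketch:

 spec: 
   sessionAffinity: ClientIP 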

Discovering services:

  • Use kubectl exec [pod name] -- env to find the service host/port from environment variables such as KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT
  • Use Kubernetes’ own DNS server, which runs in the kube-dns pod in the kube-system namespace
  • Use the FQDN [service name].[namespace].[configurable cluster domain suffix], or the shorter forms [service name].[namespace] or [service name], to access a service from within a pod (see the example after this list)
  • Ping does not work from within a pod, since the cluster IP is a virtual one
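For example, the kubia service defined above could be reached from inside any pod through its FQDN or the shorter forms; the default namespace and the cluster.local suffix are assumptions here:

 curl http://kubia.default.svc.cluster.local 
 curl http://kubia.default 
 curl http://kubia 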

Service endpoints:

  • A service links to its pods via an Endpoints resource.
  • kubectl get endpoints [service name] to get the endpoints of a service
  • The pod selector is used to build a list of pod IPs and ports, which is then stored in the Endpoints resource
  • Endpoints are not auto-created for a service that has no selector
  • Endpoints can be created manually for external servers, exposing either IP addresses or a host name (both shown below)
 apiVersion: v1 
 kind: Endpoints 
 metadata: 
   name: external-service 
 subsets: 
   - addresses: 
       - ip: 11.11.11.11 
       - ip: 22.22.22.22 
     ports: 
       - port: 80 

 apiVersion: v1 
 kind: Service 
 metadata: 
   name: external-service 
 spec: 
   type: ExternalName 
   externalName: someapi.somecompany.com 
   ports: 
     - port: 80
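For the manual Endpoints above to take effect, a Service with the same name but no pod selector also has to exist; a minimal sketch:

 apiVersion: v1 
 kind: Service 
 metadata: 
   name: external-service 
 spec: 
   ports: 
     - port: 80 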

Exposing services to external clients:
1. Create a NodePort service. EXTERNAL-IP shows <nodes>, indicating the service is accessible through the IP address of any cluster node at [node ip]:[node port], or internally at [cluster ip]:[port]

 spec: 
   type: NodePort 
   ports: 
     - port: 80 
       targetPort: 8080 
       nodePort: 30123 

The firewall needs to be opened to allow access to the node port
The client’s IP is not visible to the pod
Find out a node’s external IP: kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
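With a node’s external IP in hand, the service can be reached directly; a hedged example in which the IP address is only a placeholder:

 curl http://130.211.97.55:30123 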

Set spec.externalTrafficPolicy: Local to instruct Kubernetes to redirect external traffic only to pods running on the node that received the connection, preventing an extra network hop, at the cost of possibly uneven load balancing.
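In the service spec this is a single field; a minimal sketch:

 spec: 
   externalTrafficPolicy: Local 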

2. Create a LoadBalancer service. EXTERNAL-IP is a fixed, externally reachable IP provisioned by the infrastructure; the node port is assigned automatically

spec: 
  type: LoadBalancer 
  ports: 
    - port: 80 
      targetPort: 8080 

No need to open firewall
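Once the infrastructure provisions the load balancer, its address appears in the EXTERNAL-IP column and can be used directly; a hedged example assuming the service is named kubia-loadbalancer and the assigned IP is a placeholder:

 kubectl get svc kubia-loadbalancer 
 curl http://130.211.53.173 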

3. Create an Ingress. A single Ingress can expose multiple services; the host and path of the request determine which service the request is forwarded to. Note that the Ingress controller doesn’t forward the request through the service; it only uses the service to select a pod and then sends the request to that pod directly.

 apiVersion: extensions/v1beta1 
 kind: Ingress 
 metadata: 
   name: kubia 
 spec: 
   rules: 
     - host: kubia.example.com 
       http: 
         paths: 
           - path: / 
             backend: 
               serviceName: kubia-nodeport 
               servicePort: 80
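After the Ingress controller assigns an address, the host name has to resolve to that address, e.g. through DNS or an /etc/hosts entry; a hedged usage sketch:

 kubectl get ingresses 
 curl http://kubia.example.com 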

Readiness probe:

  • Invoked periodically to check whether the pod is ready.
  • Unlike with a liveness probe, a pod failing the readiness check is not killed or restarted; instead it is removed from the service. Once it becomes ready again, it is added back
  • Readiness can be viewed in the READY column of kubectl get pods
  • Always define a readiness probe, so the pod isn’t added to the service before it has finished starting up (see the sketch after this list)
  • There is no need to include shutdown handling logic in the readiness probe; Kubernetes removes the pod from all services as soon as the pod is deleted
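A minimal sketch of an HTTP readiness probe added to the pod template; the image name, path, port and timing values are assumptions:

 spec: 
   containers: 
     - name: kubia 
       image: luksa/kubia 
       readinessProbe: 
         httpGet: 
           path: / 
           port: 8080 
         initialDelaySeconds: 5 
         periodSeconds: 10 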

Headless service:

  • Setting the clusterIP of a service to None creates a headless service (see the sketch after this list)
  • A DNS lookup does not return the cluster IP of the service but each pod’s IP
    Performing a DNS lookup in Kubernetes: kubectl run dnsutils --image=tutum/dnsutils --generator=run-pod/v1 --command -- sleep infinity
    then kubectl exec dnsutils -- nslookup [service name]
  • A headless service still provides load balancing across pods, but through the DNS round-robin mechanism instead of through the service proxy.
  • Use the service spec’s publishNotReadyAddresses field to return a pod’s IP even if the pod is not ready.
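A minimal headless Service sketch; the name kubia-headless is an assumption, and the selector matches the kubia pods used earlier:

 apiVersion: v1 
 kind: Service 
 metadata: 
   name: kubia-headless 
 spec: 
   clusterIP: None 
   ports: 
     - port: 80 
       targetPort: 8080 
   selector: 
     app: kubia 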

Troubleshooting services:

  • Make sure you’re connecting to the service’s cluster IP from within the cluster, not from the outside.
  • Don’t bother pinging the service IP to figure out if the service is accessible (remember, the service’s cluster IP is a virtual IP and pinging it will never work).
  • If you’ve defined a readiness probe, make sure it’s succeeding; otherwise the pod won’t be part of the service.
  • To confirm that a pod is part of the service, examine the corresponding endpoints object with kubectl get endpoints.
  • If you’re trying to access the service through its FQDN or a part of it (for example, myservice.mynamespace.svc.cluster.local or myservice.mynamespace) and it doesn’t work, see if you can access it using its cluster IP instead of the FQDN.
  • Check whether you’re connecting to the port exposed by the service and not the target port.
  • Try connecting to the pod IP directly to confirm your pod is accepting connections on the correct port.
  • If you can’t even access your app through the pod’s IP, make sure your app isn’t only binding to localhost.
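For the endpoints and direct-pod checks, a hedged sequence; the service name kubia is an assumption, and the bracketed values are placeholders:

 kubectl get endpoints kubia 
 kubectl exec [pod with curl] -- curl http://[pod ip]:8080 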

Source: https://blog.51cto.com/shadowisper/2476302

