Saturday, 2019 July 13

I pushed an update this morning (I'm trying to backfill some blog posts I've been woefully late in writing) and discovered that once again a coredns pod had gone south.

% kubectl describe endpoints --namespace=kube-system kube-dns
Name:         kube-dns
Namespace:    kube-system
Labels:       k8s-app=kube-dns
              kubernetes.io/cluster-service=true
              kubernetes.io/name=CoreDNS
Annotations:  <none>
Subsets:
  Addresses:          10.244.1.182
  NotReadyAddresses:  10.244.0.16
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    dns      53    UDP
    dns-tcp  53    TCP
    metrics  9153  TCP

Events:  <none>
%
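
The NotReadyAddresses entry is the tell: one of the two coredns pods is failing its readiness check, so it's been pulled out of the kube-dns service. Mapping that IP back to the offending pod is quick (a rough sketch; pod names will differ per cluster):

% kubectl get pods --namespace=kube-system -o wide | grep 10.244.0.16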

Deleting the bad pod (it's recreated automatically) "fixes" the problem, at least for a while.
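
The delete itself is a one-liner (the coredns-5c98db65d4-x7rz2 name here is made up; substitute whichever pod owns the not-ready address):

% kubectl delete pod --namespace=kube-system coredns-5c98db65d4-x7rz2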

This is not something that used to happen, so I suspect some relatively recent update is triggering it. Now that I know what to look for, I'm going to start paying much closer attention and see if I can track down the root cause.
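
If nothing else, grabbing the failing pod's logs and its recent events before deleting it should help narrow things down; roughly (pod name again made up):

% kubectl logs --namespace=kube-system coredns-5c98db65d4-x7rz2
% kubectl describe pod --namespace=kube-system coredns-5c98db65d4-x7rz2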
