SSL and the OpenShift API

Adventures trying to use the OpenShift and Kubernetes APIs on RHEL, Windows, and from within a pod.

One of the great features of Kubernetes and OpenShift is the API, and the ability to perform any action by making the correct REST call, but as a wise man once said, the three great virtues of a programmer are laziness, impatience, and hubris. Making REST calls explicitly and dealing with the results, even in a language like Python, is long-winded and hard work, so obviously libraries have been written that wrap up the OpenShift API to make life easy.

Sadly, writing great documentation is really not a virtue that comes naturally to programmers, so the OpenShift Python libraries take a little figuring out, especially when you start hitting weird errors or working on platforms that are not well tested.

Installation is pretty straightforward; a simple

pip install openshift

will install both the openshift Python library and the Kubernetes Python library that it builds upon. Alternatively, on RHEL7 a simple

yum install python2-openshift.noarch

should work. Failing that, the code for the openshift library is part of the official OpenShift release and can be found at https://github.com/openshift/openshift-restclient-python
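
If you want the very latest code, pip can also install straight from that repository (assuming git is available on your machine):

pip install git+https://github.com/openshift/openshift-restclient-python.git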

Once you have it installed, using it is simple:

from kubernetes import client, config
from openshift.dynamic import DynamicClient

# Build a client from the credentials in ~/.kube/config
k8s_client = config.new_client_from_config()
client = DynamicClient(k8s_client)

# Fetch the Pod resource type, then list the pods in the default namespace
v1_pod = client.resources.get(api_version='v1', kind='Pod')
podList = v1_pod.get(namespace="default")
for pod in podList['items']:
    print("%s %s" % (pod.metadata.name, pod.status.podIP))

which should take the credentials stored in .kube/config (generated by the oc login command) and use them to fetch all the pods in the cluster's default namespace, listing their names and IPs.
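
Incidentally, if your credentials live somewhere other than the default .kube/config, new_client_from_config() can be pointed elsewhere; a minimal sketch, where the path and context name are just examples:

from kubernetes import config

# Hypothetical kubeconfig path and context name, purely for illustration
k8s_client = config.new_client_from_config(
    config_file="/home/me/ocp/kubeconfig",
    context="dev-cluster",
)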

Sadly, the difference between theory and practice is that in theory they are the same, and in practice they're not. When I actually ran that on my development platform, a slightly left-field mix of Windows 10, Python, and MinGW (don't ask, the customer makes the rules), I was greeted with an ugly SSL error message clearly suggesting a certificate problem:

2019-03-19 17:21:30,191 WARNING Retrying 
(Retry(total=2, connect=None, read=None, redirect=None, status=None))
 after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED]
certificate verify failed: unable to get local issuer certificate (_ssl.c:1056)'))': /version

There is an issue on GitHub that seems to fit, https://github.com/openshift/openshift-restclient-python/issues/198, but no solution.

An experiment with the pure Kubernetes library had the same result.

Several frustrating hours later, it appears that there is something strange in how Python on Windows deals with certificates. Figuring out quite what would have taken a lot longer than I had, and certainly longer than it took to figure out the workaround!

That workaround is pretty simple and obvious: you just turn off SSL verification. It's not exactly great practice, and should never be done in production, but the technique opens the way to a more flexible approach to authorisation. This leaves our little demo program looking like:

import os
from kubernetes import client
from openshift.dynamic import DynamicClient

token = os.getenv("OCP_TOKEN")

# Build the configuration by hand instead of reading .kube/config
k8sconfig = client.Configuration()
k8sconfig.host = "https://example.com:8443/"
k8sconfig.verify_ssl = False  # the workaround: never do this in production
k8sconfig.api_key = {"authorization": "Bearer " + token}

k8s_client = client.ApiClient(k8sconfig)
client = DynamicClient(k8s_client)

v1_pod = client.resources.get(api_version='v1', kind='Pod')
podList = v1_pod.get(namespace="default")
for pod in podList['items']:
    print("%s %s" % (pod.metadata.name, pod.status.podIP))

Complete with reading the token from an environment variable, which you can set with

export OCP_TOKEN=`oc whoami -t`

And there you are, a simple workaround to an annoying problem.
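
One side effect of turning verification off is that urllib3 prints an InsecureRequestWarning on every request. While testing you can silence it, with the same caveat that none of this belongs in production:

import urllib3

# Suppress the warning urllib3 emits when certificate verification is off
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)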

RHEL

Interestingly, while testing out that bit of code I discovered another annoying SSL issue, this time running on RHEL7, where turning off SSL verification leads to the following error:

urllib3.exceptions.SSLError: 
[Errno 2] No such file or directory

This one comes down to the python2-certifi package, which points to a certificate file that it should provide itself but doesn't, and which is needed even with verification turned off. This looks to be fixed in the python3 version, but python2 is still the default on RHEL7, so that's a little annoying. Luckily, this gives us the chance to make our little test program a lot better, by turning verification back on and pointing it at the system certs provided by the ca-certificates package:

import os
from kubernetes import client
from openshift.dynamic import DynamicClient

token = os.getenv("OCP_TOKEN")

k8sconfig = client.Configuration()
k8sconfig.host = "https://example.com:8443/"
# Verification back on, against the system bundle from ca-certificates
k8sconfig.verify_ssl = True
k8sconfig.ssl_ca_cert = "/etc/pki/tls/certs/ca-bundle.crt"
k8sconfig.api_key = {"authorization": "Bearer " + token}

k8s_client = client.ApiClient(k8sconfig)
client = DynamicClient(k8s_client)
v1_pod = client.resources.get(api_version='v1', kind='Pod')
podList = v1_pod.get(namespace="default")
for pod in podList['items']:
    print("%s %s" % (pod.metadata.name, pod.status.podIP))

And there we have it: the different variations of the program we need to get it working on different client platforms.
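
Since the variations only differ in their SSL settings, you could of course pick them at run time instead of keeping separate scripts; a rough sketch, assuming the RHEL7 bundle path above and falling back to disabled verification on Windows:

import platform
from kubernetes import client

def make_config(host, token):
    # Build a Configuration with SSL settings chosen per platform (a sketch)
    k8sconfig = client.Configuration()
    k8sconfig.host = host
    k8sconfig.api_key = {"authorization": "Bearer " + token}
    if platform.system() == "Windows":
        # The Windows certificate workaround: verification off
        k8sconfig.verify_ssl = False
    else:
        # System bundle from the ca-certificates package on RHEL7
        k8sconfig.verify_ssl = True
        k8sconfig.ssl_ca_cert = "/etc/pki/tls/certs/ca-bundle.crt"
    return k8sconfig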

As a final word, the Kubernetes Python library is similar but a little different; the same program would look more like:

import os
from kubernetes import client

token = os.getenv("OCP_TOKEN")

k8sconfig = client.Configuration()
k8sconfig.host = "https://example.com:8443"
k8sconfig.verify_ssl = True
k8sconfig.ssl_ca_cert = "/etc/pki/tls/certs/ca-bundle.crt"
k8sconfig.api_key = {"authorization": "Bearer " + token}

k8sclient = client.ApiClient(k8sconfig)
# The typed CoreV1Api replaces the dynamic client's resources.get()
v1 = client.CoreV1Api(k8sclient)
podList = v1.list_namespaced_pod(namespace="default")
for pod in podList.items:
    print("%s" % (pod.metadata.name))

Postscript

Just for fun, if you want to run this from within an OpenShift container there are a couple of changes worth making. There is a service set up automatically that lets pods contact the API server at https://kubernetes.default.svc, and the CA cert needed to talk to it (along with a service account token) is injected into every pod at run time under /var/run/secrets/kubernetes.io/serviceaccount/. So our little test program becomes:

import os
from kubernetes import client
from openshift.dynamic import DynamicClient

# The injected service account token could equally be read from
# /var/run/secrets/kubernetes.io/serviceaccount/token
token = os.getenv("OCP_TOKEN")

k8sconfig = client.Configuration()
k8sconfig.host = "https://kubernetes.default.svc"
k8sconfig.verify_ssl = True
k8sconfig.ssl_ca_cert = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
k8sconfig.api_key = {"authorization": "Bearer " + token}

k8s_client = client.ApiClient(k8sconfig)
client = DynamicClient(k8s_client)
v1_pod = client.resources.get(api_version='v1', kind='Pod')
podList = v1_pod.get(namespace="default")
for pod in podList['items']:
    print("%s %s" % (pod.metadata.name, pod.status.podIP))
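
For what it's worth, the kubernetes library can also do the in-pod plumbing for you: config.load_incluster_config() reads the injected token and CA cert itself. A sketch of the same program using it:

from kubernetes import client, config
from openshift.dynamic import DynamicClient

# Reads the service account token and CA cert injected into the pod
config.load_incluster_config()
dyn_client = DynamicClient(client.ApiClient())

v1_pod = dyn_client.resources.get(api_version='v1', kind='Pod')
for pod in v1_pod.get(namespace="default")['items']:
    print("%s %s" % (pod.metadata.name, pod.status.podIP))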
