
I just went through this CVE-2020-15157 “ContainerDrip” write-up. I found it very insightful, especially since it comes with some illustrations and a great story with GKE.

CVE-2020-15157: If an attacker publishes a public image with a crafted manifest that directs one of the image layers to be fetched from a web server they control and they trick a user or system into pulling the image, they can obtain the credentials used by ctr/containerd to access that registry. In some cases, this may be the user’s username and password for the registry. In other cases, this may be the credentials attached to the cloud virtual instance which can grant access to other cloud resources in the account.

Interesting! By going through the description of this CVE, you can see that an old version of containerd (i.e. 1.2.x) is impacted. With GKE, if you are using the cos_containerd or ubuntu_containerd node images on an old GKE version (1.16), you might be impacted too.

How do you check if you are impacted? If you are using a containerd node image, you can simply check the containerd version of your GKE cluster by running this command: kubectl get nodes -o wide. As an example, for my own cluster I got:

  • OS-IMAGE: Container-Optimized OS from Google
  • CONTAINER-RUNTIME: containerd://1.4.1
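
For reference, here is the kind of check you could run (a minimal sketch; the jsonpath expression simply extracts the container runtime version reported by each node):

# List nodes with their OS image and container runtime version.
kubectl get nodes -o wide
# Or extract only the container runtime version per node.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'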

So all good here, but nonetheless the goal of this article is to look at the features you could leverage to build a robust security posture.

There are already 2 important aspects to improve your security posture here (both illustrated with a gcloud sketch after this list):

  • Auto-upgrading nodes
    • Keeping the version of Kubernetes up to date is one of the simplest things you can do to improve your security. Kubernetes frequently introduces new security features and provides security patches.
  • cos_containerd
    • cos_containerd is the preferred image for GKE as it has been custom built, optimized, and hardened specifically for running containers.
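
As a quick illustration of these two points, here is a sketch of creating a node pool with the cos_containerd image and node auto-upgrade enabled (the cluster and node pool names below are placeholders):

# A sketch: create a node pool running the cos_containerd image with auto-upgrade enabled
# (my-cluster and my-hardened-pool are placeholder names).
gcloud container node-pools create my-hardened-pool \
    --cluster my-cluster \
    --image-type COS_CONTAINERD \
    --enable-autoupgrade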

I also watched the video below, which is referenced in the first article I mentioned. As I'm improving my knowledge and skills with cloud security principles, such an approach and point of view from a hacker's perspective is really insightful. Below, they are talking about lateral movement and privilege escalation in GCP:

There are 3 important aspects to improve your security posture here (the first and the last are illustrated with a gcloud sketch after this list):

  • Default node service account
    • The Compute Engine default service account is overprivileged by default, as the Editor role allows you to access and edit essentially everything in the project.
    • You could disable automatic grants to default service accounts at the Organization Policy level.
    • Here, I would like to call out this very well written article to see the impact of this and some real-life examples of companies hacked because they didn't pay attention to this least-privilege principle for the identity of their clusters or applications.
  • Workload Identity
    • Workload Identity is the recommended way to access Google Cloud services from applications running within GKE due to its improved security properties and manageability.
  • IAM Recommender
    • IAM recommender helps you enforce the principle of least privilege by ensuring that members have only the permissions that they actually need.
    • Great story here where the 2 authors provided improvements to the product team managing the new IAM Recommender service. Love it!
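
To make the first and last points above more concrete, here is a sketch (ORG_ID and PROJECT_ID are placeholders, and you need the corresponding permissions at the organization and project levels):

# Enforce the Organization Policy disabling automatic role grants to default service accounts.
gcloud resource-manager org-policies enable-enforce \
    constraints/iam.automaticIamGrantsForDefaultServiceAccounts \
    --organization ORG_ID

# List the IAM Recommender recommendations (roles to remove or scope down) for a project.
gcloud recommender recommendations list \
    --project PROJECT_ID \
    --location global \
    --recommender google.iam.policy.Recommender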

Let’s translate this least privilege setup for the identity of your nodes and workloads with few gcloud commands:

projectId=FIXME
clusterName=FIXME

# First we need to enable the Kubernetes Engine API on the current project
gcloud services enable container.googleapis.com
# Delete the default Compute Engine service account if you don't have the Org policy iam.automaticIamGrantsForDefaultServiceAccounts in place
projectNumber="$(gcloud projects describe $projectId --format='get(projectNumber)')"
gcloud iam service-accounts delete $projectNumber-compute@developer.gserviceaccount.com --quiet

# Then we need to create a dedicated service account (instead of the default one with Editor role on the project) with the least privilege:
gcloud services enable cloudresourcemanager.googleapis.com
saId=$clusterName@$projectId.iam.gserviceaccount.com
gcloud iam service-accounts create $clusterName \
    --display-name=$clusterName
roles="roles/logging.logWriter roles/monitoring.metricWriter roles/monitoring.viewer"
for r in $roles; do gcloud projects add-iam-policy-binding $projectId --member "serviceAccount:$saId" --role $r; done

# Now you could create your cluster with this service account:
gcloud container clusters create $clusterName \
    --service-account=$saId

# Interestingly, you could have a different service account per node pool (important if you would like to run different workloads on different node pools); the node pool name below is just an example:
gcloud container node-pools create my-node-pool \
    --cluster $clusterName \
    --service-account=$saId

# And ultimately, you could enable Workload Identity on your cluster (which is even more important for fine-grained identity and authorization for applications).
# For a brand new cluster:
gcloud container clusters create $clusterName \
    --service-account $saId \
    --workload-pool=$projectId.svc.id.goog
# Or for the cluster already created above:
gcloud container clusters update $clusterName \
    --workload-pool=$projectId.svc.id.goog
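
As a quick sanity check (a sketch, assuming your default zone/region is configured for gcloud), you could verify both the node service account and the Workload Identity pool on the cluster:

# Display the service account used by the cluster's default node pool.
gcloud container clusters describe $clusterName --format 'value(nodeConfig.serviceAccount)'
# Display the Workload Identity pool configured on the cluster.
gcloud container clusters describe $clusterName --format 'value(workloadIdentityConfig.workloadPool)'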

Then you could easily follow these instructions to allow your applications to authenticate to Google Cloud using Workload Identity, typically by assigning a Kubernetes service account to the application and configuring it to act as a Google service account.

With Workload Identity, you can configure a Kubernetes service account to act as a Google service account. Any application running as the Kubernetes service account automatically authenticates as the Google service account when accessing Google Cloud APIs. This enables you to assign fine-grained identity and authorization for applications in your cluster.
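
For illustration, here is a minimal sketch of that binding (my-namespace, my-ksa and my-gsa are hypothetical names; $projectId refers to the variable defined earlier):

# Allow the Kubernetes service account to impersonate the Google service account
# (my-namespace, my-ksa and my-gsa are placeholder names).
gcloud iam service-accounts add-iam-policy-binding my-gsa@$projectId.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:$projectId.svc.id.goog[my-namespace/my-ksa]"

# Annotate the Kubernetes service account so GKE knows which Google service account it maps to.
kubectl annotate serviceaccount my-ksa \
    --namespace my-namespace \
    iam.gke.io/gcp-service-account=my-gsa@$projectId.iam.gserviceaccount.com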

Note: there are a few limitations currently with Workload Identity that you should be aware of.

You could learn more about Workload Identity with this session from when it was launched in 2019:

In addition to this, here are 3 other aspects you may want to leverage to further improve and complete your security posture (illustrated with a gcloud sketch after this list):

  • Shielded GKE Nodes
    • Without Shielded GKE Nodes, an attacker can exploit a vulnerability in a Pod to exfiltrate bootstrap credentials and impersonate nodes in your cluster, giving them access to cluster secrets.
  • Private clusters
    • Private clusters give you the ability to isolate nodes from having inbound and outbound connectivity to the public internet. This isolation is achieved as the nodes have internal IP addresses only.
  • Binary Authorization
    • Binary Authorization is a deploy-time security control that ensures only trusted container images are deployed on GKE.
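
To make this concrete, here is a sketch of the corresponding cluster creation flags (the master CIDR range is just an example, Binary Authorization also requires a policy configured on the project, and depending on your gcloud version the Binary Authorization flag may differ):

# A sketch: create a cluster with Shielded GKE Nodes, private nodes and Binary Authorization enabled.
gcloud container clusters create $clusterName \
    --service-account $saId \
    --workload-pool=$projectId.svc.id.goog \
    --enable-shielded-nodes \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.32/28 \
    --enable-binauthz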


That’s a wrap! We discussed about features you could enable on your GKE cluster (especially with Workload Identity) and more importantly the concept of least privilege service account instead of the default one for your GKE clusters. Yeah for sure, you could say to yourself “how an hacker could get my cluster credentials to be able to operate such attack?”. Yeah that’s for sure something which won’t happen every day, but it’s only a matter of worst case scenario + making sure you have different security layers in place to improve your security posture. How do you think data leaks and exfiltrations happen? How do you make sure you could prevent data leaks and exfiltrations in your organization?

Important note: this article used GKE for its illustrations, but they apply to any Kubernetes cluster on any cloud provider, since they have very similar implementations, principles and features.

Hope you enjoyed that one. Sharing is caring, stay safe! ;)