Monday, July 9, 2018

Managed Web Application Scans

Nullable Security is proud to announce that we are now offering Managed Web Application Scans!

This is a recurring service to help our customers maintain a baseline of web application security hygiene. We perform an automated scan of your application to look for common vulnerabilities, and provide you with a concise report of the issues found and how they can be remediated. Our scanning toolkit combines our proprietary scanners with industry-standard, free and open-source application vulnerability discovery tools, all aggregated to give you a good understanding of the common vulnerabilities that may impact your product. Our testing methodology is thorough enough to satisfy PCI DSS 6.5 & 6.6 web security scanning requirements. The process of on-boarding a web application to managed application scans goes something like this:

  1. Planning - We sit down with you to develop a vulnerability testing strategy and interval
  2. Setup - A production-representative test environment is spun up for us
  3. Testing - Security testing is performed as outlined in the Planning phase
  4. Reporting - Finding reports are generated and sent out to the customer

Planning


Nullable tailors all of its tests to our clients' specific needs. Each one of our scans is unique, but automated. Our vulnerability scanning tooling can work with web applications written in:

  • Server Side: Java, C#, Python, Rust, Golang, Ruby
  • Client Side: HTML / CSS / JavaScript

Setup


We prefer to run our security testing on a set of test infrastructure that is exactly representative of the production infrastructure. Testing in production can be done, but the first few tests should be closely monitored for application stability. 

Testing


Our tools and testers also understand the security implications of more complex web development paradigms such as WebSockets, AJAX, and HTML5. The testing techniques and tools will dive deep enough for you to feel confident in your application security posture. We cover all of the OWASP Top 10, and then some. Our tests include, but are not limited to:

  • SQL, PHP, Perl, Ruby, Python, CSS, ... Code Injection
  • OS Command Injection
  • File Path Traversal
  • XML External Entity, LDAP, XPATH Injection
  • Server-Side Include
  • Cross-Site Scripting (DOM, Reflected, Stored)
  • WebSocket Hijacking
  • Flash, Silverlight Cross-Domain Policy
  • Cross-Site Request Forgery
  • SMTP Header Injection
  • TLS Cryptography Audit
  • Session Token Mishandling
  • File Upload Abuse
  • Sensitive Information Disclosure
  • Confused Deputy Issues
  • Mixed Content Issues

Reporting


The reporting phase is one we're always evolving to ensure that the clearest possible picture of the application's security posture is communicated to the customer. Our reports are aggregations of highly automated scans, and vulnerabilities are ranked by severity. There is no manual verification of the report's contents, but we do offer that as an additional service. The report acts as a remediation guide, helping your development teams find and fix the issues discovered in the Testing phase. The vulnerability report is Nullable's key deliverable to our customer.

Cost


Our pricing structure is straightforward and highly affordable. We charge per absolute URL scanned, which covers both web APIs and HTML pages. Our spiders will look for all available inputs into each URL, and will perform 100+ (on average) security tests against it. Most medium-complexity web applications have about 20 to 100 absolute URLs. Subscriptions for larger customers are available by request.

 $10 / URL / Scan 
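
As a worked example, an application with 50 absolute URLs would come to 50 × $10 = $500 per scan.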


Why trust Nullable Security? We are a boutique application security firm with founders who have decades of software engineering and security experience. We've performed security services for two-person startups all the way up to Fortune 10 conglomerates. We are licensed, insured, and mutual NDAs are available. Nullable knows that everyone's business is their own baby, and we want to work with you to protect your assets.


Saturday, August 5, 2017

Statically Compile Tor

The Onion Router (Tor) is a great open-source tool to help protect your privacy online. It's useful for journalists in hostile nations, and allows us the freedom to find information online without being tracked via IP address. It does this by bouncing your connections through multiple encrypted routers on the Tor network to hide the source of the request.


At Nullable Security, we sometimes use Tor as a transport mechanism for client penetration tests. We had a customer who wanted to simulate what an APT-style attack would look like when it originates from, or connects to, the Tor network. Most organizations don't have the level of network awareness to alert on such communications, and Tor traffic tends to go unnoticed. Whenever we use the Tor client in a customer engagement, we always want the software to be as up-to-date and portable as possible. And we of course want to be transparent about the software we're using in their network. This post will describe how we wrap up Tor for easy deployment.

The Tor software comes in three major forms: a browser bundle, an expert bundle, and source code. We'll be focusing on the Windows variants of Tor for this post. Most people use the browser bundle, which is a combination of the Tor client and a hardened version of Firefox. The expert bundle comes with just the Tor client. And the source code contains all the code for Tor and its accessories. We have little use for the full browser bundle because we don't usually need a graphical browser for our pentesting tools. All we really need is a locally listening Tor socket, which the expert bundle gives us. But when you download the Tor expert bundle, you'll see that it contains two top-level folders: Data & Tor. Data contains geoip data, and isn't required for the Tor client to run. The Tor folder contains two executables and eight DLLs. Of the two executables, we only care about tor.exe, which needs six of those DLLs. But we only want to deploy the single tor.exe, and not all the required DLLs. To solve this, we will statically compile the Tor source code, and include all the necessary DLL code in a single Tor binary. This also lets us use the most modern source code for Tor and its dependencies. We'll begin by getting the necessary pieces of software we'll need to compile Tor.
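
If you want to see the import situation for yourself, the objdump that ships with the MinGW binutils package (installed below) can list a PE binary's DLL imports. This is just a convenience check and isn't required for the build; it's also a handy way to confirm later that the statically compiled tor.exe imports only system DLLs.

   objdump -p tor.exe | grep "DLL Name"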


MSYS2 is a software build environment similar to Cygwin, which we'll use to build Tor on Windows 10. Run the MSYS2 installer and accept all the defaults; we used the 64-bit 20161025 version. When the installer finishes, it opens the MSYS command prompt, where we will install some packages needed to compile Tor. The pacman command will be used to fetch updates and dependencies. Run this command twice to ensure everything is updated.

   pacman -Syu

Now pull the build environment.

   pacman -S msys/make msys/perl msys/tar
   pacman -S mingw32/mingw-w64-i686-binutils msys/binutils
   pacman -S mingw32/mingw-w64-i686-gcc
   pacman -S mingw32/mingw-w64-i686-make
   pacman -S msys/pkg-config mingw32/mingw-w64-i686-pkg-config

Open the mingw32 console (C:\msys64\msys2_shell.cmd -mingw32) and enter these commands to download the Tor source code and its dependencies.

   wget https://www.openssl.org/source/openssl-1.0.2l.tar.gz
   wget http://zlib.net/zlib-1.2.11.tar.gz
   wget https://github.com/libevent/libevent/releases/download/release-2.1.8-stable/libevent-2.1.8-stable.tar.gz
   wget https://www.torproject.org/dist/tor-0.3.0.10.tar.gz
   mkdir openssl && mkdir libevent && mkdir zlib && mkdir tor
   tar xvf openssl-1.0.2l.tar.gz -C openssl
   tar xvf zlib-1.2.11.tar.gz -C zlib
   tar xvf libevent-2.1.8-stable.tar.gz -C libevent
   tar xvf tor-0.3.0.10.tar.gz -C tor

Set a few build environment variables

   export INCLUDE_PATH="/mingw32/include:/mingw32/i686-w64-mingw32/include:$INCLUDE_PATH"
   export LIBRARY_PATH="/mingw32/lib:/mingw32/i686-w64-mingw32/lib:$LIBRARY_PATH"
   export BINARY_PATH="/mingw32/bin:/mingw32/i686-w64-mingw32/bin:$BINARY_PATH"

Compile zlib

   cd ~/zlib/zlib-1.2.11
   make -f win32/Makefile.gcc

Compile libevent

   cd ~/libevent/libevent-2.1.8-stable/
   ./configure --prefix="$HOME/libevent/install" --enable-static --disable-shared
   make && make install-strip

Compile OpenSSL

   cd ~/openssl/openssl-1.0.2l
   LDFLAGS="-static" ./Configure no-shared no-zlib no-asm --prefix="$HOME/openssl/install" -static mingw
   make depend && make && make install
   
Compile Tor

   cd ~/tor/tor-0.3.0.10
   export LDFLAGS="-static -L $HOME/openssl/install/lib -L $HOME/libevent/install/lib -L $HOME/zlib/zlib-1.2.11 -L /mingw32/lib -L /mingw32/i686-w64-mingw32/lib"
   export CFLAGS="-I $HOME/openssl/install/include -I $HOME/zlib/zlib-1.2.11 -I $HOME/libevent/install/include"
   export LIBRARY_PATH="$HOME/openssl/install/lib:$HOME/libevent/install/lib:$HOME/zlib/zlib-1.2.11:/mingw32/lib:/mingw32/i686-w64-mingw32/lib"
   export INCLUDE_PATH="$HOME/openssl/install/include:$HOME/zlib/zlib-1.2.11:$HOME/libevent/install/include:/mingw32/include:/mingw32/i686-w64-mingw32/include"
   export BINARY_PATH="/mingw32/bin:/mingw32/i686-w64-mingw32/bin"
   export PKG_CONFIG_PATH="$HOME/openssl/install/lib/pkgconfig:$PKG_CONFIG_PATH"
   export LIBS="-lcrypt32"
   ./configure --disable-gcc-hardening --enable-static-tor --prefix="$HOME/tor/install" --with-libevent-dir="$HOME/libevent/install/lib" --with-openssl-dir="$HOME/openssl/install/lib" --with-zlib-dir="$HOME/zlib/zlib-1.2.11"
   make && make install-strip

And now we have a tor.exe binary in our MSYS2 home build directory. You can find it at the default path of C:\msys64\home\<username>\tor\install\bin. Here's a demonstration of Tor successfully bootstrapping to the network, using only the newly compiled tor.exe binary. And a bonus attribute of your new binary: since it's custom compiled, it can be renamed/packed/crypted and made to listen on any port, so it's more likely to bypass AV signatures.
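
If you want to try the binary yourself, a minimal run might look like the following. SocksPort, DataDirectory, and Log are standard torrc options; the values shown here are only illustrative.

   # torrc -- minimal illustrative config
   SocksPort 9050
   DataDirectory tor-data
   Log notice stdout

   tor.exe -f torrc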



This executable can be passed a standard torrc configuration file, and may be easily embedded in other malware. For blue teamers, the most effective way to detect something like this is to watch for Tor traffic on the wire. Creating firewall blacklists and alerts for traffic to known Tor entry guard nodes will show you which hosts on your network may be running Tor-enabled software. Monitoring for Tor exit node traffic will show you connections originating from the Tor network. You can retrieve IP lists from the Tor Project and other third parties at the links below.
  • https://check.torproject.org/cgi-bin/TorBulkExitList.py
  • https://atlas.torproject.org/#search/flag:Guard
  • https://www.dan.me.uk/tornodes
Sources:
  • http://wiki.ozanh.com/doku.php?id=tor:tor_for_windows
  • https://tor.stackexchange.com/questions/610/compiling-tor-on-windows-what-is-needed

Sunday, July 23, 2017

Fuzzing Nginx

Nginx is a popular web server software used by over 130 million websites. Of the 10,000 busiest websites, most are running on Nginx. Due to its vast deployment, the importance of this software cannot be overstated. Because of this, we have decided to evaluate the ruggedness of the product to search for unknown security vulnerabilities. This post will describe how to set up and use a fuzzing environment to search for bugs in Nginx.

Fuzzing is the technique of sending malformed data to a piece of software in order to understand how it reacts. American Fuzzy Lop (AFL) will be our primary fuzzer. And we'll need a few hacks to make AFL and Nginx play nice. Fuzzing of Nginx appears infrequently, so maybe we'll find some good bugs by doing this. Google's oss-fuzz project will eventually target Nginx, but at the moment they appear to have made little progress.

To begin, we need to set up a Linux environment to test on. For our purposes, we will be running Debian 9 (Stretch) installed in a VM, but this can be done on bare-metal machines or cloud VPSs too. Once Debian is fully patched and running on your machine of choice, you'll need to install/compile a few prerequisite packages. The main fuzzing components we use are AFL, the Preeny desock library, and the Nginx source code.
Let's create our working directory, and get the software we'll need.

    mkdir /opt/fuzz /opt/fuzz/tests /opt/fuzz/results && cd /opt/fuzz
    apt install unzip build-essential clang zlib1g-dev libpcre3 libpcre3-dev libbz2-dev libssl-dev libini-config-dev llvm-3.8 llvm-3.8-dev llvm-3.8-runtime -y
    wget https://nginx.org/download/nginx-1.12.1.tar.gz
    wget http://lcamtuf.coredump.cx/afl/releases/afl-latest.tgz
    wget https://github.com/zardus/preeny/archive/master.zip
    tar xvf nginx-1.12.1.tar.gz
    tar xvf afl-latest.tgz
    unzip master.zip

Now we build AFL and its optional afl-clang-fast wrapper.

    cd /opt/fuzz/afl-2.49b/ && make
    cd ./llvm_mode/ && LLVM_CONFIG=/usr/bin/llvm-config-3.8 make
    cd ../ && sudo make install
    
In order to have Nginx quickly execute our test cases, we need to patch the software to exit after performing exactly one HTTP request. This allows the Nginx process space to be in a clean state for each test case, which will help to correlate bugs and inputs. It will also prevent Nginx from performing other actions that we do not want to test. Begin by editing the file /opt/fuzz/nginx-1.12.1/src/os/unix/ngx_process_cycle.c. Around line 309, there is a call to the ngx_process_events_and_timers() function, which processes the incoming event. We want Nginx to perform this call, and then validate its state, before we exit the program. But interestingly, Nginx considers both the incoming request and the outgoing response to each be a single "event." So to see the output of our fuzzed HTTP request, we need to allow this event processing to happen twice. To get around this, add a counter to the for loop, which will check for two iterations before it exits. A "run_count" variable is initialized before the for loop, and checked after each iteration. The lines to add are shown below.

 309     static volatile int run_count = 0;
 310     for ( ;; ) {
 311         ngx_log_debug0(NGX_LOG_DEBUG_EVENT, cycle->log, 0, "worker cycle     ");
 312
 313         ngx_process_events_and_timers(cycle);

...

 339         if (ngx_reopen) {
 340             ngx_reopen = 0;
 341             ngx_log_error(NGX_LOG_NOTICE, cycle->log, 0, "reopening logs");
 342             ngx_reopen_files(cycle, (ngx_uid_t) -1);
 343         }
 344         if (run_count >= 1) exit(0);
 345         run_count += 1;
 346      }

The next step is to build Nginx, but using the AFL Clang Fast wrapper (afl-clang-fast). This wrapper is a drop-in Clang replacement, which allows AFL to perform instrumentation on the newly compiled Nginx binary. Afl-clang-fast provides true compiler-level instrumentation, instead of the cruder assembly-level rewriting approach taken by afl-gcc and afl-clang. This gives AFL the ability to see which code paths are executed for each of its fuzz test cases. Nginx will also be compiled with the "select_module" enabled, which forces the server to use the select() method to handle connections. This makes the binary easier to profile, as select() is a standard Linux syscall.

    cd /opt/fuzz/nginx-1.12.1
    CC=/usr/local/bin/afl-clang-fast ./configure --prefix=/opt/fuzz/nginx --with-select_module
    make && make install

Nginx needs to be configured so that it's friendly to the single-request-then-exit style of fuzzing we will perform. Edit the file /opt/fuzz/nginx/conf/nginx.conf and add these lines at the top. This prevents Nginx from forking and running as a service.

    master_process off;
    daemon off;

In the same file, add the lines shown below to the "events" config block, and edit the "server" block to make Nginx listen on port 8080, which allows it to run as a non-root user. This also tells Nginx to handle one request at a time, and to use the select() syscall we enabled earlier.

    events {
        worker_connections  1024;
        use select;
        multi_accept off;
    }

    ...

    server {
        listen       8080;
        server_name  localhost;
        ...
    }

The web server is compiled, configured, and nearly ready to fuzz. But there's a limitation in AFL that needs to be worked around. AFL primarily operates on files, and was not designed to fuzz network sockets - which is what we need to talk to Nginx. To get around this, Nginx needs to be able to talk over stdin/stdout so that we can feed in AFL's tests. Preeny is a collection of utilities that takes advantage of LD_PRELOAD hooking to do all kinds of crazy things to other binaries. Specifically, there is a utility called "desock" which will channel a socket to the console. This utility will bridge the gap between Nginx and AFL. Compile Preeny using this command.

    cd /opt/fuzz/preeny-master/ && make


The compilation will create a directory in the preeny-master/ folder named after your machine's architecture. It will contain a file called desock.so, which we use to hook Nginx. Let's copy it to our main fuzz directory for ease of access.

    cp /opt/fuzz/preeny-master/x86_64-pc-linux-gnu/desock.so /opt/fuzz/

The following command launches the target Nginx server with the desock hook preloaded; we'll reuse it later.

    LD_PRELOAD=/opt/fuzz/desock.so /opt/fuzz/nginx/sbin/nginx

After running this command, you'll notice that the terminal seems to hang. This is because Nginx is now waiting for input on stdin. Test this by typing in "GET /" to your terminal, to see Nginx's response:

    GET /
    <!DOCTYPE html>
    <html>
    <head>

    <title>Welcome to nginx!</title>
    ...

The server should close immediately after issuing the response.

Creating and organizing input test cases for AFL is very important to the accuracy and speed of the fuzz job. You want to give AFL good context so that it can learn from those input test cases. Start by creating a single, very simple HTTP test case in the file /opt/fuzz/tests/test1.txt with this HTTP request as its contents. And don't forget to add two newline characters at the end of the file to terminate the HTTP request.

   GET / HTTP/1.1
   Accept: text/html, application/xhtml+xml, */*
   Accept-Language: en-US
   User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36
   Accept-Encoding: gzip, deflate
   Host: website.com
   Connection: Keep-Alive
   Cookie: A=asdf1234


Test this by passing it to a new hooked instance of Nginx.

   cd /opt/fuzz/ 
   LD_PRELOAD=./desock.so ./nginx/sbin/nginx < ./tests/test1.txt

Now that we have a command that runs our fully-instrumented Nginx server, let's feed that to AFL.

    LD_PRELOAD=./desock.so afl-fuzz -i tests -o results nginx/sbin/nginx

And now we let AFL find some bugs..
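
When AFL starts reporting crashes, each crashing input is saved under the results directory. A quick way to triage one is to replay it through the same hooked binary; the file name below is a placeholder, as AFL generates its own names.

    ls results/crashes/
    LD_PRELOAD=./desock.so ./nginx/sbin/nginx < results/crashes/<crash-file>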



Future Work:
  • Implement AFL persistence mode (a rough sketch follows below)
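
For reference, afl-clang-fast's persistent mode wraps the request-handling code in an __AFL_LOOP() loop so that many test cases run inside a single process, avoiding fork/exec overhead. A rough, untested sketch of how that might look in the same worker loop we patched earlier is shown below; the hard part, not shown, is resetting Nginx's per-request state between iterations.

     /* Illustrative sketch only -- requires compiling with afl-clang-fast. */
     while (__AFL_LOOP(1000)) {                    /* up to 1000 cases per process */
         ngx_process_events_and_timers(cycle);     /* read the fuzzed request */
         ngx_process_events_and_timers(cycle);     /* send the response */
     }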

Saturday, July 4, 2015

Format String Exploitation

Format String vulnerabilities are a class of software bug which allows an attacker to perform writes or reads to arbitrary memory addresses. This tutorial will focus on the C programming language, and exploitation of the format string functionality.

Before we begin to understand the nature of this software flaw, we must first know what a format string is. A format string is an ASCII string that contains text and format parameters. For example,
printf("My name is: %s", "nops");
This function call returns the string,
My name is: nops 
The first parameter to this printf function is the format string. This is basically a specifier which tells the program how to format the output. There are several format specifiers which can be used in the format string, with any subsequent parameters serving to populate the format specifiers. Specifiers are always prefixed by the "%" character. Many specifiers exist for differing data types, but the most common include
  • %d - Decimal (signed int) - Output decimal number
  • %s - String - Reads string from memory
  • %x - Hexadecimal - Output hexadecimal number
  • %c - Character - Output character
  • %p - Pointer - Pointer address
  • %n - Number of characters written so far - Stores that count into the variable pointed to by the corresponding argument
Functions that are vulnerable to format string exploits include (but are not limited to) fprintf, printf, sprintf, snprintf, etc. The vulnerability comes in when the programmer passes unsanitized, user-supplied data as the format string. The best way to explain the vulnerability is through an example.
    
    #include <stdio.h>
    #include <string.h>   /* for strcpy() */

    int main(int argc, char * argv[])
    {
        char a[1024];
        strcpy(a, argv[1]);   /* unchecked copy of user input */
        printf(a);            /* user input used directly as the format string */
        printf("\n");
        return 0;
    }

This code takes in a string as a parameter, creates a 1024-character buffer, copies the string into the buffer, then outputs that string via two printf calls. When compiled and run under normal circumstances, the first parameter to this program gets echoed back as expected. (Bonus points if you noticed the buffer overflow vulnerability.)
    user@host:~/# gcc test.c -o test
    user@host:~/# ./test blah
    blah
But if we look at the printf documentation, we see that the first parameter of that call is the format string. And in our sample test code, we can see that argv[1] is eventually passed to printf as that first parameter. So we have user-supplied (i.e. hacker-supplied) data being interpreted as the format string - dangerous. Now let's see some samples of this type of attack.

    user@host:~/# ./test %s
    TERM_PROGRAM=Apple_Terminal

So we entered %s as the attack parameter, and it spit out something about our terminal. Why is that? What's happening is that printf thinks it needs to fetch the next value on the stack and interpret it as a pointer to a string, because we supplied "%s" as the format string (the 'a' variable in the code). Again, we pass more format strings to the vulnerable program to see what happens.

    user@host:~/# ./test %s.%s
    TERM_PROGRAM=Apple_Terminal.(null)

Now we added a second format string parameter, delimited by a "." The next value on the stack happened to be null, so we get the same terminal message from before, plus a null value for the second parameter. This attack is reading values straight off of the stack that may have otherwise been private. This, in and of itself, is dangerous, as the stack might contain passwords, keys, or other secrets that weren't intended to be released. If we try to read too far using this technique, the program will segfault as it tries to dereference invalid memory addresses.

    user@host:~/# ./test %s.%s.%s
    Segmentation fault: 11
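
A gentler way to walk the stack is with "%x", which prints raw stack words as hexadecimal instead of dereferencing them as pointers, so it is much less likely to crash. The exact values you see will vary from system to system and run to run.

    ./test %x.%x.%x.%x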

But we can take this vulnerability one step further, and actually write values into memory. To understand how this works, we must know about two features of the printf specification. First, "%n" is a specifier that stores the number of characters written so far into the integer variable pointed to by the corresponding argument. So,

    int i;
    printf("ABCDE%n", &i);

would cause printf to write 5 (the number of characters it just wrote) into the variable i.

The second feature we need to understand is the "$" operator, which lets us select a specific argument by position: the argument number sits between the "%" and the "$", and the conversion type follows. For example,

    printf("%3$s", 1, "b", "c", 4);

Will display "c". This is because the format string "%3$s" is basically saying "give me the 3rd parameter after this format string, then interpret that parameter as a string." So if we can do something like this,

    printf("AAA%3$n");

then printf will write the value 3 (the number of A's written) to the address given by the 3rd argument to this printf function. But wait, there is no 3rd parameter. Exactly! Remember that printf will use parameters straight off the stack, because it has no inherent knowledge that it shouldn't. (Recent format string exploit mitigations have made printf smarter in this regard.) So what will happen is that printf will write 3 to whatever address is located at that position on the stack.
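
To make these two features concrete, here is a small, harmless program that exercises both %n and the positional "$" syntax on its own arguments. This is just an illustration, not part of the vulnerable example above; the %N$ form is a POSIX extension supported by glibc.

    #include <stdio.h>

    int main(void)
    {
        int count = 0;

        /* %n stores the number of characters printed so far into the
           int pointed to by the matching argument. */
        printf("AAAA%n\n", &count);
        printf("count = %d\n", count);           /* prints: count = 4 */

        /* The "$" syntax picks arguments by position. */
        printf("%2$s %1$s\n", "world", "hello"); /* prints: hello world */

        return 0;
    }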

Ok, cool. We've got a stack data leak and a write-what-where primitive, where the "where" is not yet under our control. We already control what is written; we just have to control where it is written in order to leverage this bug for code execution. At this point, we can only write to the arbitrary locations that happen to be on the stack - not super useful. The next section covers shellcode development, which will give us an attack payload to deliver once we can control where our data is written. If we can overwrite a security flag, a saved return address (EIP), or some other sensitive variable, then we can compromise the security of the system.

(Continued in Part 2 - Coming Soon - Subscribe)


Monday, January 19, 2015

Windows 8 Kernel Debugging

This inaugural post will guide you through setting up a kernel debugging environment using VMWare and WinDbg. We will create an environment which will allow us to poke at the Windows 8 kernel to further study how its internals work.

Installation

We need to start by installing WinDbg on our host machine. We will be using Windows 7 as the host machine for this post, but these instructions should roughly translate to Windows 8 hosts too. There are several options for getting WinDbg, as it comes packaged with Visual Studio, the Windows Software Development Kit (SDK), and the Windows Driver Kit (WDK). The Windows 8.1 SDK allows us to install WinDbg in a stand-alone mode on both Windows 7 and 8 - so that's what we'll get. We don't need the full WDK or Visual Studio for this demonstration.

Download and run the SDK installer. Install it using the default options until you reach the feature-selection step.
Make sure to deselect everything except for the "Debugging Tools for Windows", and continue to walk through the rest of the installation steps. You'll have a couple of WinDbg entries in your start menu if all went well.
Now we need to install VMWare Workstation. Installation should be straightforward, as we do not need to change any of the default settings. We used Workstation 10, but 11 should work just fine. Note that VMWare Workstation requires a license, but if you're into hacking, it's worth it.

The next step is to build and configure the Windows 8 guest VM. This is the OS that we will be attaching our host's WinDbg installation to via a named pipe. We will assume that you have the Windows 8 ISO and license key already. For a refresher on how to install Windows 8 as a VM guest, please refer to this article.

VMWare Configuration

Once the Win 8 guest is created, we will need to add a serial port to it, so that our host's WinDbg can talk to the guest. To do this, start by making sure the Win8 guest is powered down. Right click on the Win 8 VM, and select "Settings". On the "Hardware" tab, click the "Add" button, and select "Serial Port".
On the next page, make sure that "Output to named pipe" is selected.
On the last page, make sure the settings are as follows, and click "Finish."
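
For reference, the named-pipe settings typically look like this; the pipe name just has to match what WinDbg is given later, and \\.\pipe\com_1 is VMWare's default.

   Named pipe: \\.\pipe\com_1
   This end is the server.
   The other end is an application.
   [x] Connect at power on
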
Back on the "Hardware" tab, enable "Yield CPU on poll." This forces the guest VM to yield processor time if the only task is trying to poll the virtual serial port.

Guest Configuration

Now we have to tell the Windows 8 guest that kernel debugging should be enabled, and that it should communicate on the COM port that VMWare created for us. Open a command prompt as an administrator, and run the following commands.

   bcdedit /set {current} debug on
   bcdedit /set {current} debugtype serial
   bcdedit /set {current} debugport 1
   bcdedit /set {current} baudrate 115200

We're making the assumption that the OS assigned the VMWare COM port to 1. You may have to fiddle with the ports on the guest in bcdedit and the Device Manager to find one that isn't in use. Power down the VM.

WinDbg Configuration

The first thing we need to do with a fresh WinDbg install is to set up Windows debug symbols. Symbols are special files generated at compile time for a target binary, and they provide useful debugging metadata like function and variable names. Many Microsoft binaries have symbols that are distributed via Microsoft's symbol server, and we need to tell WinDbg how to connect to it.

Start by creating the folder "C:\Symbols" on the host machine. Open WinDbg on the host machine, and go to "File" → "Symbol File Path". Add the following string to the path.

   SRV*C:\Symbols*http://msdl.microsoft.com/download/symbols

This will tell WinDbg to download and store symbol files in the C:\Symbols directory whenever it's debugging a binary which has symbols available. Select "File" → "Save Workspace" to save the symbol settings.

Now we need to tell WinDbg how to connect to the Windows 8 VM. Open "File" → "Kernel Debugging" and select the "COM" tab, then fill in the settings as follows.
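
For reference, the values on the COM tab should look roughly like this; the port must match the pipe VMWare created, and Resets of 0 with Pipe and Reconnect checked is the usual recommendation for debugging a VM over a named pipe.

   Baud Rate: 115200
   Port:      \\.\pipe\com_1
   Resets:    0
   [x] Pipe
   [x] Reconnect

Equivalently, WinDbg can be launched pre-connected from a command prompt:

   windbg -k com:pipe,port=\\.\pipe\com_1,resets=0,reconnect
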
This tells WinDbg to debug the kernel at the end of the "\\.\pipe\com_1" pipe. VMWare will open this pipe for us when the VM boots. Do not click OK yet.


Putting it Together

We should now have a powered-down VM, and WinDbg ready to start the connection. The "Kernel Debugging" window in WinDbg should still be up. This next step requires a bit of good timing: power up the VM, then click "OK" in WinDbg after the VM POSTs and before the OS boots.

If WinDbg successfully connected to the VM over the COM pipe, then WinDbg should show something like this. Notice the "Debuggee is running..." status. This is confirmation that it's connected to the Windows 8 kernel properly.
You're ready to set breakpoints, analyze memory, and hack away.
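
To sanity-check the session, break into the debugger (Debug → Break) and try a few standard WinDbg commands:

  • .reload - force symbols to be pulled from the symbol server
  • lm - list the modules loaded in the guest kernel
  • !process 0 0 - enumerate the processes running on the guest
  • bp nt!NtCreateFile - set a breakpoint on a kernel function
  • g - let the guest run again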