
I'd like to use the Win32 GetStringTypeW() function inside a PowerShell script.

I figured out the correct signature in C# and the following code worked nicely there:

[DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
private static extern uint GetStringTypeW(uint dwInfoType, string lpSrcStr, int cchSrc, out ushort lpCharType);

private const uint CT_CTYPE1 = 0x0001;

public void MyMethod(string strIn) {
  ushort[] lpCharType = new ushort[strIn.Length];
  GetStringTypeW(CT_CTYPE1, strIn, strIn.Length, out lpCharType[0]);

  // Do stuff with lpCharType
}

The lpCharType array gets filled with 16-bit unsigned integers; one for each character of the string that was passed in. The integers can be checked with bitwise comparisons to find out which types of characters are present in the string.
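As a sketch of what such a bitwise check looks like, here are a few of the CT_CTYPE1 flag values documented in winnls.h (the $charType value below is just an illustrative example, not output from an actual call):

```powershell
# CT_CTYPE1 flag values, as documented in winnls.h
$C1_UPPER = 0x0001  # uppercase letter
$C1_LOWER = 0x0002  # lowercase letter
$C1_DIGIT = 0x0004  # decimal digit
$C1_PUNCT = 0x0010  # punctuation
$C1_ALPHA = 0x0100  # any letter

# Suppose GetStringTypeW() reported this value for one character:
$charType = 0x0101  # C1_ALPHA -bor C1_UPPER

if ($charType -band $C1_UPPER) { 'character is an uppercase letter' }
```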

I translated that C# code to the following in PowerShell:

$MethodDefinition = @'
[DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
public static extern uint GetStringTypeW(uint dwInfoType, string lpSrcStr, int cchSrc, out ushort lpCharType);
'@

$Kernel32 = Add-Type -MemberDefinition $MethodDefinition -Name 'Kernel32' -Namespace 'Win32' -PassThru

[System.UInt32] $CT_CTYPE1 = 0x0001

[System.UInt16[]] $lpCharType = [System.UInt16[]]::new($strIn.Length)

$Kernel32::GetStringTypeW($CT_CTYPE1, $strIn, $strIn.Length, [ref] $lpCharType[0])

# Do stuff with $lpCharType

That just doesn't populate $lpCharType with anything, and depending on how I use that code I can also kill PowerShell entirely with a System.AccessViolationException: Attempted to read or write protected memory.

It seems like there's something weird going on in memory which I don't fully understand, so does anyone have any suggestions on how to make it work?

Note: Interestingly, if I try passing in a single UInt16 parameter instead of an array of them, it gets filled with a proper integer value, so the code kind of works, but of course, it can't hold more than one value, and that doesn't solve the Access Violation.

If I have to, I can add a go-between C# method to the $MethodDefinition to accept a string from PowerShell, call GetStringTypeW(), and return the output, but I was hoping to avoid filling my PowerShell script with C# code if possible.

morbiD
  • `[ref] $lpCharType[0]` doesn't work because PowerShell can't obtain a reference to an element of a collection with value-typed elements. Answer to a different question, same cause: https://stackoverflow.com/a/72364180/7571258 – zett42 May 24 '22 at 14:47
  • Out of curiosity, what is it that `GetStringTypeW` will do for you that the various methods of `Char` (like `GetUnicodeCategory`) will not? – Jeroen Mostert May 24 '22 at 14:52
  • @JeroenMostert I'm mostly just trying to learn something new here, but this has to do with testing strings against Active Directory password complexity requirements and [this article](https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc786468(v=ws.10)) says that's what AD uses, so I figured, why not do the same, rather than trying to approximate it with regexes or something? I also didn't even know GetUnicodeCategory() existed, although that seems to have 29 categories instead of 10, so it might actually require more work to achieve the same result! – morbiD May 24 '22 at 17:04
  • One potential problem with this approach is that the result is still not necessarily what AD itself uses unless you're actually running the code on the domain controller, as the client Windows version may not be using the same Unicode database -- so in that regard I don't think using managed alternatives directly is much worse. For completeness, there's also `NetValidatePasswordPolicy`, which cuts out the middleman (as demonstrated in [this answer](https://stackoverflow.com/a/70147183/4137916)), though that's even harder to call in PowerShell (and might be slow if you're doing it a lot). – Jeroen Mostert May 24 '22 at 17:14
  • @JeroenMostert As it happens I do plan to run this on a DC, but I'd certainly like to take a look at NetValidatePasswordPolicy as well. Thanks for the suggestion! – morbiD May 24 '22 at 17:28

1 Answer


As zett42 points out, in PowerShell you cannot obtain a reference to an individual element of a value-type array, as discussed in this answer.

However, you can simply use an array parameter in your P/Invoke declaration, and pass the array as a whole from PowerShell:

$MethodDefinition = @'
[DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
public static extern uint GetStringTypeW(uint dwInfoType, 
                          string lpSrcStr, 
                          int cchSrc, 
                          ushort[] lpCharType); // Declared as array, w/o `out`
'@

$Kernel32 = Add-Type -MemberDefinition $MethodDefinition -Name 'Kernel32' -Namespace 'Win32' -PassThru

[System.UInt32] $CT_CTYPE1 = 0x0001

$strIn = 'A!'

[uint16[]] $lpCharType = [uint16[]]::new($strIn.Length)

$ok = $Kernel32::GetStringTypeW($CT_CTYPE1, $strIn, -1, $lpCharType)

$lpCharType # -> 897, 528
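For reference, those two result values decompose into the CT_CTYPE1 flags documented in winnls.h (constant values taken from the Windows SDK headers):

```powershell
# 897 = 0x0381 = C1_DEFINED (0x0200) -bor C1_ALPHA (0x0100) -bor
#                C1_XDIGIT (0x0080)  -bor C1_UPPER (0x0001)   -> for 'A'
# 528 = 0x0210 = C1_DEFINED (0x0200) -bor C1_PUNCT (0x0010)   -> for '!'
(0x0200 -bor 0x0100 -bor 0x0080 -bor 0x0001) -eq 897  # True
(0x0200 -bor 0x0010) -eq 528                          # True
```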
mklement0
  • Perfect! Thanks! This is the first time I've messed with P/Invoke so I had no idea you could switch things up like that. I can't say I understand how that works though. How does GetStringTypeW() actually manage to populate the array if it isn't passed in by reference? – morbiD May 24 '22 at 17:09
  • Glad to hear it, @morbiD; an array is a .NET reference type, and the .NET P/Invoke logic seemingly automatically translates an array parameter declaration into a pointer to a given array('s first element). That is, I presume that the WinAPI function automatically receives the memory address of the .NET array, with the array's memory automatically getting pinned for the duration of the call (to ensure that memory isn't moved around during the call). – mklement0 May 24 '22 at 22:04