Purging content after backing up and then restoring a database

How do we create new datasets that contain just a portion of the content from an existing dataset? I’ve been asked this dozens of times over the years. Someone asked again this past week, so I figured I’d bundle up my typical method into a PowerShell script. The script can be run via a scheduled task (there’s an example at the end of this post), letting you routinely execute it for purposes such as refreshing development environments, exporting to less-secured environments, or curating training datasets.

Start by installing the SQL Server PowerShell tools from an administrative PowerShell window by running this command:

Install-Module -Name SqlServer

You will be prompted to allow the install to continue.


It will then initiate the download and attempt to install.


In my case, I’ve already got it installed. If you do as well, you’ll see errors noting that the module already exists; these are safe to ignore.


Now we can start scripting out the process. The first step is to back up the existing database using the Backup-SqlDatabase cmdlet. To make things easier, we use variables for the key parameters (instance name, database name, path to the backup file).

# Backup the source database to a new file
Write-Information "Backing Up $($sourceDatabaseName)"
Backup-SqlDatabase -ServerInstance $serverInstance -Database $sourceDatabaseName -BackupAction Database -CopyOnly -BackupFile $backupFilePath

Next comes the restore of that backup. Unfortunately I cannot use the Restore-SqlDatabase cmdlet, because it does not support restoring over a database that has other active connections. Instead I run a series of statements that set the database to single-user mode, restore the database (with relocated files), and then set it back to multi-user mode. Note that this uses the ExecuteSqlStatement helper function defined a bit further down.

# Restore the database by first getting an exclusive lock with single user access, enable multi-user when done
Write-Warning "Restoring database with exclusive use access"
$ExclusiveLock = "USE [master]; ALTER DATABASE [$($targetDatabaseName)] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;"
$RestoreDatabase = "RESTORE FILELISTONLY FROM disk = '$($backupFilePath)'; RESTORE DATABASE $($targetDatabaseName) FROM disk = '$($backupFilePath)' WITH replace, MOVE '$($sourceDatabaseName)' TO '$($sqlDataPath)\$($targetDatabaseName).mdf', MOVE '$($sourceDatabaseName)_log' TO '$($sqlDataPath)\$($targetDatabaseName)_log.ldf', stats = 5; ALTER DATABASE $($targetDatabaseName) SET MULTI_USER;"
ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement ($ExclusiveLock+$RestoreDatabase)

Next we need to change the recovery model from full to simple, which lets us manage the growth of the log files. This is important because the delete commands we run are written to the transaction log, which we’ll need to shrink periodically (otherwise the database server could run out of disk space).

# Change recovery mode to simple
Write-Information "Setting recovery mode to simple"
Invoke-Sqlcmd -ServerInstance $serverInstance -Database $targetDatabaseName -Query "ALTER DATABASE $($targetDatabaseName) SET RECOVERY SIMPLE"

With the database in simple recovery mode, we can now start purging content from the restored database. Before digging into the logic of the deletes, I’ll create a function that traps errors and can optionally shrink the database incrementally.

# Function below is used so that we trap errors and shrink at certain times
function ExecuteSqlStatement 
{
    param([String]$Instance, [String]$Database, [String]$SqlStatement, [bool]$shrink = $false)
    # $error is a read-only automatic variable in PowerShell, so track failures with our own flag
    $hadError = $false
    # Trap all errors
    try {
        # Execute the statement
        Write-Debug "Executing Statement on $($Instance) in DB $($Database): $($SqlStatement)"
        Invoke-Sqlcmd -ServerInstance $Instance -Database $Database -Query $SqlStatement | Out-Null
        Write-Debug "Statement executed with no exceptions"
    } catch [Exception] {
        Write-Error "Exception Executing Statement: $($_)"
        $hadError = $true
    }
    # When no error and parameter passed, shrink the DB (can slow the process)
    if ( $hadError -eq $false -and $shrink -eq $true ) {
        ShrinkDatabase -SQLInstanceName $serverInstance -DatabaseName $Database -FileGroupName $databaseFileGroup -FileName $databaseFileName -ShrinkSizeMB 48000 -ShrinkIncrementMB 20
    }
}

To implement the incremental shrinking I use a method I found a few years ago, linked here. It works around the painfully slow shrink you get when shrinking a very large database in one go. Your database administrator should pay close attention to how it works and align it with your environment.
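If you’d rather not chase the link, here’s a minimal sketch of the idea, matching the parameters used in the call above (illustrative only, not the linked implementation; $FileGroupName is accepted but unused in this sketch):

# Shrink a data file toward a target size in small increments rather than one giant operation
function ShrinkDatabase
{
    param([String]$SQLInstanceName, [String]$DatabaseName, [String]$FileGroupName,
          [String]$FileName, [int]$ShrinkSizeMB, [int]$ShrinkIncrementMB = 10)
    # Current file size in MB (sys.database_files stores size in 8 KB pages)
    $sizeQuery = "SELECT size / 128 AS SizeMB FROM sys.database_files WHERE name = '$FileName'"
    $currentSizeMB = (Invoke-Sqlcmd -ServerInstance $SQLInstanceName -Database $DatabaseName -Query $sizeQuery).SizeMB
    # Step the file down toward the target a few MB at a time
    while ( $currentSizeMB -gt $ShrinkSizeMB ) {
        $targetSizeMB = [Math]::Max($currentSizeMB - $ShrinkIncrementMB, $ShrinkSizeMB)
        Write-Debug "Shrinking $FileName from $currentSizeMB MB to $targetSizeMB MB"
        Invoke-Sqlcmd -ServerInstance $SQLInstanceName -Database $DatabaseName -Query "DBCC SHRINKFILE ('$FileName', $targetSizeMB)"
        $currentSizeMB = $targetSizeMB
    }
}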

My goal is to remove all records of certain types, so that they aren’t exposed in the restored copy. Unfortunately the out-of-the-box constraints do not cascade deletes to related objects, so we need to delete the supporting objects before deleting the records themselves.

We need to delete:

  • Workflows (and supporting objects)

  • Records (and supporting objects)

  • Record Types

In very large datasets this process could take hours. You can keep database growth in check by adding a “-shrink $true” parameter to any of the delete statements that impact large volumes of data in your environment (electronic revisions, renditions, and locations, for instance).

# Purging content by record type uri
foreach ( $rtyUri in $recTypeUris ) 
{
    Write-Warning "Purging All Records & Supporting Objects for Record Type Uri $($rtyUri)"
    Write-Information " - Purging Workflow Document References"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tswkdocusa where wduDocumentUri in (select uri from tswkdocume where wdcRecordUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri))"
    Write-Information " - Purging Workflow Documents"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tswkdocume where wdcRecordUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri)"
    Write-Information " - Purging Workflow Activity Start Conditions"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tswkstartc where wscActivityUri in (select uri from tswkactivi where wacWorkflowUri in (select uri from tswkworkfl where wrkInitiator in (select uri from tsrecord where rcRecTypeUri=$rtyUri)))"
    Write-Information " - Purging Workflow Activities"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tswkactivi where wacWorkflowUri in (select uri from tswkworkfl where wrkInitiator in (select uri from tsrecord where rcRecTypeUri=$rtyUri))"
    Write-Information " - Purging Workflows"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tswkworkfl where wrkInitiator in (select uri from tsrecord where rcRecTypeUri=$rtyUri)"
    Write-Information " - Purging Communications Detail Words"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tstranswor where twdTransDetailUri in (select tstransdet.uri from tstransdet inner join tstransmit on tstransdet.tdTransUri = tstransmit.uri inner join tsrecord on tstransmit.trRecordUri = tsrecord.uri where tsrecord.rcRecTypeUri = $rtyUri);"
    Write-Information " - Purging Communications Details"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tstransdet where tdTransUri in (select tstransmit.uri from tstransmit inner join tsrecord on tstransmit.trRecordUri = tsrecord.uri where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Communications"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tstransmit where trRecordUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Thesaurus Terms"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsrecterm where rtmRecordUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Relationships"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsreclink where rkRecUri1 in (select uri from tsrecord where rcRecTypeUri=$rtyUri) OR rkRecUri2 in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Actions"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsrecactst where raRecordUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Jurisdictions"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsrecjuris where rjRecordUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Requests"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsrecreque where rqRecordUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Rendition Queue"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsrendqueu where rnqRecUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Renditions"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsrenditio where rrRecordUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Revisions"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tserecvsn where evRecElecUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Documents"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsrecelec where uri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Holds"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tscasereco where crRecordUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Locations"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsrecloc where rlRecUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Records (shrink after)"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsrecord where rcRecTypeUri = $rtyUri" -shrink $true
    Write-Information " - Purging Record Types"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsrectype where uri = $rtyUri"
}

With that out of the way we can now restore the recovery mode back to full.

# Change recovery mode back to full
Write-Information "Setting recovery mode to full"
Invoke-Sqlcmd -ServerInstance $serverInstance -Database $targetDatabaseName -Query "ALTER DATABASE $($targetDatabaseName) SET RECOVERY FULL"

The last step is to restart the workgroup service on the server using this restored database.

# Restart CM
Write-Warning "Restarting Content Manager Service"
Restart-Service -Force -Name $cmServiceName

At the top of my script I have all of the variables defined. To make this work for you, adjust the variables to align with your environment. For instance, you’ll need to update the $recTypeUris array to contain the URIs of the record types you want deleted.

# Variables to be used throughout the script
$serverInstance = "localhost"                   # SQL Server instance name 
$sourceDatabaseName = "CMRamble_93"             # Name of database in SQL Server
$targetDatabaseName = "Restored_cmramble_93"    # Name to restore to in SQL Server
$backupFileName = "$($sourceDatabaseName).bak"  # File name for backup of database
$backupPath = "C:\temp"                         # Folder to back up into (relative to the server)
$backupFilePath = [System.IO.Path]::Combine($backupPath, $backupFileName)   # Use .Net's path join which honors OS
$databaseFileGroup = "PRIMARY"                  # File group of database content, used when shrinking
$databaseFileName = "CMRamble_93"               # Filename within file group, used when shrinking
$sqlDataPath = "D:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA"   # Path to data files, used in restore
$cmServiceName = "TRIMWorkgroup"                # Name of registered service in OS (not display name)
$recTypeUris = @( 3 )                           # Array of uri's for record types to be purged after restore

To find the URIs of your record types, you can customize your view pane in the client and add the unique identifier property.
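Alternatively, you can pull them straight from the database with a quick query (rtyName is an assumed column name here; verify it against your schema):

# List record type uris and names to pick which ones get purged
Invoke-Sqlcmd -ServerInstance $serverInstance -Database $sourceDatabaseName -Query "SELECT uri, rtyName FROM tsrectype"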


Running the script on my local machine took about a minute to complete for a very small dataset. Against a 70 GB database, removing approximately 20 GB of content, it took 15 minutes, though that SQL Server has 64 GB of RAM and two SSDs holding the data and log files.

You can download my full script here: https://github.com/aberrantCode/cm_db_restore_and_purge
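Since I mentioned running this on a schedule, here’s one way to register the script as a weekly task (the script path, task name, and timing are placeholders):

# Register the purge script to run every Sunday at 2 AM
$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-File C:\scripts\restore_and_purge.ps1"
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Sunday -At 2am
Register-ScheduledTask -TaskName "Refresh Dev Database" -Action $action -Trigger $trigger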

Transcribing Audio Records with the Cloud Speech API

In this post I'll show how one might approach enriching their public archives with audio transcriptions provided by Google's Cloud Speech API.

First we can start with a collection of hearing records, each file carrying simple meta-data.

For each of these audio files I'll have to download it, convert it, stage it, pass it to the Speech API, and capture the results. I'll craft this in PowerShell for the moment and later implement it as a cloud function. The results can then be added to the record notes or as an OCR text rendition (or reviewed and then added).


The Speech API will give me chunks of text, each with an associated confidence level.
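Roughly, the JSON response is shaped like this (values are illustrative):

{
  "results": [
    {
      "alternatives": [
        {
          "confidence": 0.92,
          "transcript": "... first chunk of recognized speech ..."
        }
      ]
    }
  ]
}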


These chunks can all be mashed together for a pretty accurate transcription.


Step by Step

My first step is to enable the Speech API within GCP.
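That's a checkbox in the console; if you prefer the CLI, the same thing can be done with gcloud (assuming your project is already set):

gcloud services enable speech.googleapis.com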

 

Then create a storage bucket to house the files. I could skip this and upload each file directly within a request, but staging them in a bucket makes it easier to later work with cloud functions.
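Creating the bucket is a one-liner; the name below matches the $bucketPath variable used later in the script:

gsutil mb gs://speech-api-cm-dev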

To convert the audio from MP3 to WAV format with a single audio channel, I used ffmpeg:

 
ffmpeg -hide_banner -loglevel panic -y -i %inputfile% -ac 1 %outputfile%

I like ffmpeg because it's easy to use on Windows servers & workstations, but there's also a good fluent ffmpeg module available in Node.js, which would allow this to be built as a cloud function. For now, here's my PowerShell function to convert the audio file...

function ConvertTo-Wav {
	Param([string]$inputFile)
	#few variables for local pathing
	$newFileName = [System.IO.Path]::GetFileNameWithoutExtension($inputFile) + ".wav"
	$fileDir = [System.IO.Path]::GetDirectoryName($inputFile)
	$newFilePath = [System.IO.Path]::Combine($fileDir, $newFileName)
	#once is enough
	Write-Debug("ConvertTo-Wav Target File: " + $newFilePath)
	if ( (Test-Path $newFilePath) -eq $false ) {
		#convert using open source ffmpeg
		$convertCmd = "ffmpeg -hide_banner -loglevel panic -y -i `"$inputFile`" -ac 1 `"$newFilePath`""
		Write-Debug ("ConvertTo-Wav Command: " + $convertCmd)
		Invoke-Expression $convertCmd
	}
	return $newFilePath
}
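Usage is just a path in, a converted path out (the file path here is a placeholder):

$wavPath = ConvertTo-Wav -inputFile "C:\temp\speechapi\12345.mp3"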

This function queries Content Manager for the audio records and caches the result to disk (simply to speed up development of the script)...

function Get-CMAudioFilesJson {
	$localResponseFile = [System.IO.Path]::Combine($localTempPath,"audioRecords.json")
	$audioFiles = @()
	if ( (Test-Path $localResponseFile) -eq $false ) {
		#fetch if results not cached to disk
		$searchUri = ($baseCMUri + "/Record?q=container:11532546&format=json&pageSize=100000&properties=Uri,RecordNumber,Url,RecordExtension,RecordDocumentSize")
		Write-Debug "Searching for audio records: $searchUri"
		$response = Invoke-RestMethod -Uri $searchUri -Method Get -ContentType $ApplicationJson
		#flush to disk as raw json
		$response | ConvertTo-Json -Depth 6 | Set-Content $localResponseFile
	} else {
		#load and convert from json
		Write-Debug ("Loading Audio Records from local file: " + $localResponseFile)
		$response = Get-Content -Path $localResponseFile | ConvertFrom-Json
	}
	#if no results just error out
	if ( $response.Results -ne $null ) {
		Write-Debug ("Processing $($response.Results.length) audio records")
		$audioFiles = $response.Results
	} else {
		Write-Debug "Error"
		break
	}
	return $audioFiles
}

A function to submit the record's audio file to the speech api (and capture the results):

function Get-AudioText 
{
    Param($audioRecord)
    #formulate a valid path for the local file system
    $localFileName = (""+$audioRecord.Uri+"."+$audioRecord.RecordExtension.Value)
    $localPath = ($localTempPath + "\" + $localFileName)
    $sourceAudioFileUri = ($audioRecord.Url.Value + "/File/Document")
    $speechApiResultPath = ($localTempPath + "\" + $audioRecord.Uri + ".txt")  
    $speechTextPath = ($localTempPath + "\" + $audioRecord.Uri + "_text.txt")
    #download the audio file if not already done so
    if ( (Test-Path $localPath) -eq $false ) {  
        Invoke-WebRequest -Uri $sourceAudioFileUri -OutFile $localPath
    }
    #convert file if necessary
    if ( ($audioRecord.RecordExtension.Value.ToLower()) -ne "wav" ) {
        $localPath = ConvertTo-Wav $localPath
        $localFileName = [System.IO.Path]::GetfileName($localPath)
        if ( (Test-Path $localPath) -eq $false ) {
            Write-Error "Error Converting $($localPath)"
            return
        }
    }
 
    #transcribe, if not already done so
    Write-Debug ("Checking Speech API Text: "+$speechApiResultPath)
    if ( (Test-Path $speechApiResultPath) -eq $false ) {
        try {
            $bucketFilePath = "$bucketPath/$localFileName"
            Put-BucketFile -bucketFilePath $bucketFilePath -bucketPath $bucketPath -localPath $localPath
            #invoke speech api
            $speechCmd = "gcloud ml speech recognize-long-running $bucketFilePath --language-code=en-US"
            Write-Debug ("Speech API Command: "+$speechCmd)
            Invoke-Expression $speechCmd -OutVariable speechResult | Tee-Object -FilePath $speechApiResultPath
            Write-Debug ("Speech API Result: " + $speechResult)    
        } catch {
            Write-Error $_
        }
    }
 
    #process transcription result
    if ( (Test-Path $speechApiResultPath) -eq $true ) {
        Write-Debug ("Reading Speech Results File: " + $speechApiResultPath)
        #load the gcloud output from disk (the speech commands emit JSON)
        $content = Get-Content -Path $speechApiResultPath -Raw | ConvertFrom-Json
        #remove previous consolidated transcription file
        if ( (Test-Path $speechTextPath) -eq $true ) { Remove-Item $speechTextPath -Force }
        #flush each transcript result to disk
        $content.results | ForEach-Object { $_.alternatives | ForEach-Object { Add-Content $speechTextPath ($_.transcript + ' ') } }
    } else {
        Write-Debug ("No Speech API Results: " + $speechTextPath)
    }
}

And then some logic to parse the search results and invoke the speech api:

#fetch the search results
$audioFiles = Get-CMAudioFilesJson
if ( $audioFiles -eq $null ) {
    Write-Error "No audio files found"
    exit
}
#process each
Write-Debug "Found $($audioFiles.Length) audio files"
foreach ( $audioFile in $audioFiles ) {
    Write-Host "Transcribing $($audioFile.RecordNumber.Value)"
    Get-AudioText  -audioRecord $audioFile
}

Here's the complete script:

Clear-Host
$DebugPreference = "Continue"
 
#variables and such
$AllProtocols = [System.Net.SecurityProtocolType]'Ssl3,Tls,Tls11,Tls12'
[System.Net.ServicePointManager]::SecurityProtocol = $AllProtocols
$ApplicationJson = "application/json"
$baseCMUri = "http://efiles.portlandoregon.gov"
$localTempPath = "C:\temp\speechapi"
$bucketPath = "gs://speech-api-cm-dev"
 
#create local staging area
if ( (Test-Path $localTempPath) -eq $false ) {
    New-Item $localTempPath -Type Directory
}
 
function Get-CMAudioFilesJson {
	$localResponseFile = [System.IO.Path]::Combine($localTempPath,"audioRecords.json")
	$audioFiles = @()
	if ( (Test-Path $localResponseFile) -eq $false ) {
		#fetch if results not cached to disk
		$searchUri = ($baseCMUri + "/Record?q=container:11532546&format=json&pageSize=100000&properties=Uri,RecordNumber,Url,RecordExtension,RecordDocumentSize")
		Write-Debug "Searching for audio records: $searchUri"
		$response = Invoke-RestMethod -Uri $searchUri -Method Get -ContentType $ApplicationJson
		#flush to disk as raw json
		$response | ConvertTo-Json -Depth 6 | Set-Content $localResponseFile
	} else {
		#load and convert from json
		Write-Debug ("Loading Audio Records from local file: " + $localResponseFile)
		$response = Get-Content -Path $localResponseFile | ConvertFrom-Json
	}
	#if no results just error out
	if ( $response.Results -ne $null ) {
		Write-Debug ("Processing $($response.Results.length) audio records")
		$audioFiles = $response.Results
	} else {
		Write-Debug "Error"
		break
	}
	return $audioFiles
}
 
function ConvertTo-Wav {
	Param([string]$inputFile)
	#few variables for local pathing
	$newFileName = [System.IO.Path]::GetFileNameWithoutExtension($inputFile) + ".wav"
	$fileDir = [System.IO.Path]::GetDirectoryName($inputFile)
	$newFilePath = [System.IO.Path]::Combine($fileDir, $newFileName)
	#once is enough
	Write-Debug("ConvertTo-Wav Target File: " + $newFilePath)
	if ( (Test-Path $newFilePath) -eq $false ) {
		#convert using open source ffmpeg
		$convertCmd = "ffmpeg -hide_banner -loglevel panic -y -i `"$inputFile`" -ac 1 `"$newFilePath`""
		Write-Debug ("ConvertTo-Wav Command: " + $convertCmd)
		Invoke-Expression $convertCmd
	}
	return $newFilePath
}
 
function Put-BucketFile {
	Param($bucketFilePath,$bucketPath,$localPath)
	#upload to bucket
    $checkCommand = "gsutil -q stat $bucketFilePath"
    $checkCommand += ';$?'
    Write-Debug ("GCS file check: " + $checkCommand)
    $fileCheck = Invoke-Expression $checkCommand
    #fileCheck is true if it exists, false otherwise
    if (-not $fileCheck ) {
        Write-Debug ("Uploading to bucket: gsutil cp " + $localPath + " " + $bucketPath)
        gsutil cp $localPath $bucketPath
    }
}
 
function Get-AudioText 
{
    Param($audioRecord)
    #formulate a valid path for the local file system
    $localFileName = (""+$audioRecord.Uri+"."+$audioRecord.RecordExtension.Value)
    $localPath = ($localTempPath + "\" + $localFileName)
    $sourceAudioFileUri = ($audioRecord.Url.Value + "/File/Document")
    $speechApiResultPath = ($localTempPath + "\" + $audioRecord.Uri + ".txt")  
    $speechTextPath = ($localTempPath + "\" + $audioRecord.Uri + "_text.txt")
    #download the audio file if not already done so
    if ( (Test-Path $localPath) -eq $false ) {  
        Invoke-WebRequest -Uri $sourceAudioFileUri -OutFile $localPath
    }
    #convert file if necessary
    if ( ($audioRecord.RecordExtension.Value.ToLower()) -ne "wav" ) {
        $localPath = ConvertTo-Wav $localPath
        $localFileName = [System.IO.Path]::GetfileName($localPath)
        if ( (Test-Path $localPath) -eq $false ) {
            Write-Error "Error Converting $($localPath)"
            return
        }
    }
 
    #transcribe, if not already done so
    Write-Debug ("Checking Speech API Text: "+$speechApiResultPath)
    if ( (Test-Path $speechApiResultPath) -eq $false ) {
        try {
            $bucketFilePath = "$bucketPath/$localFileName"
            Put-BucketFile -bucketFilePath $bucketFilePath -bucketPath $bucketPath -localPath $localPath
            #invoke speech api
            $speechCmd = "gcloud ml speech recognize-long-running $bucketFilePath --language-code=en-US"
            Write-Debug ("Speech API Command: "+$speechCmd)
            Invoke-Expression $speechCmd -OutVariable speechResult | Tee-Object -FilePath $speechApiResultPath
            Write-Debug ("Speech API Result: " + $speechResult)    
        } catch {
            Write-Error $_
        }
    }
 
    #process transcription result
    if ( (Test-Path $speechApiResultPath) -eq $true ) {
        Write-Debug ("Reading Speech Results File: " + $speechApiResultPath)
        #load the gcloud output from disk (the speech commands emit JSON)
        $content = Get-Content -Path $speechApiResultPath -Raw | ConvertFrom-Json
        #remove previous consolidated transcription file
        if ( (Test-Path $speechTextPath) -eq $true ) { Remove-Item $speechTextPath -Force }
        #flush each transcript result to disk
        $content.results | ForEach-Object { $_.alternatives | ForEach-Object { Add-Content $speechTextPath ($_.transcript + ' ') } }
    } else {
        Write-Debug ("No Speech API Results: " + $speechTextPath)
    }
}
 
#fetch the search results
$audioFiles = Get-CMAudioFilesJson
if ( $audioFiles -eq $null ) {
    Write-Error "No audio files found"
    exit
}
#process each
Write-Debug "Found $($audioFiles.Length) audio files"
foreach ( $audioFile in $audioFiles ) {
    Write-Host "Transcribing $($audioFile.RecordNumber.Value)"
    Get-AudioText  -audioRecord $audioFile
}

Subscribing a MicroStrategy Report to be delivered to Content Manager

As users work with MicroStrategy they will create records, right? In this post I'll detail one approach for routinely getting those records into Content Manager with no user effort...

First create a new file transmitter named Content Manager:

 

Then create a new Content Manager device:

 
Select the new Content Manager transmitter type. Note that the File Location is dynamic, based on the user's ID.

Then update the user so that they have an address on the new device:

 
Note that user addresses can be automatically attached via a Command Manager script.

We also need to ensure that the user can subscribe to a file (the Content Manager device is of type File).

 
 

Now the user can subscribe a report to a file.

 
 

This one will be scheduled to run every day at 5 AM.

 
 

When the schedule runs, the PDF will be placed within a sub-folder unique to this user.

The Recipient ID in the folder name doesn't really help me. The script will need to know the login of the user so that the report can be registered within CM on their behalf. I can look up the login using Command Manager.

If I execute Command Manager from within PowerShell I won't get a nice table; I'll get some really ugly output instead.
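For reference, the invocation looks something like this (the project source name, user, and paths are placeholders):

# Run a Command Manager script file and capture its log output
cmdmgr -n "MicroStrategy Analytics Modules" -u "administrator" -f "C:\temp\listUsers.scp" -o "C:\temp\listUsers.log"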


I don't want the results file to include everyone; I just need those who have addresses pointing at my Content Manager staging devices. I also want to know the name of the report, formatted so it can be more easily worked with inside PowerShell. For that I'll use a Command Manager procedure...

DisplayPropertyEnum iProperty = DisplayPropertyEnum.EXPRESSION;
ResultSet oUserProperties = executeCapture("LIST ALL PROPERTIES FOR USERS IN GROUP 'EVERYONE';");
ResultSet oPropertySet = null;
oUserProperties.moveFirst();
while (!oUserProperties.isEof()) {
    String sUserLogin = oUserProperties.getFieldValueString(DisplayPropertyEnum.LOGIN);
    String sID = oUserProperties.getFieldValueString(DisplayPropertyEnum.ID);
    String sUserName = oUserProperties.getFieldValueString(DisplayPropertyEnum.FULL_NAME);
    ResultSet oUserAddresses = (ResultSet)oUserProperties.getFieldValue(DisplayPropertyEnum.DS_ADDRESSES_RESULTSET);
    oUserAddresses.moveFirst();
    while (!oUserAddresses.isEof()) {
        String sAddressName = oUserAddresses.getFieldValueString(DisplayPropertyEnum.DS_ADDRESS_NAME);
        if (sAddressName.contains("Content Manager")) {
            ResultSet oProjects = executeCapture("LIST ALL PROJECTS;");
            oProjects.moveFirst();
            while (!oProjects.isEof()) {
                String sProjectName = oProjects.getFieldValueString(DisplayPropertyEnum.NAME);
                ResultSet oSubscriptions = executeCapture("LIST ALL SUBSCRIPTIONS FOR RECIPIENTS USER '" + sUserName + "' FOR PROJECT '" + sProjectName + "';");
                oSubscriptions.moveFirst();
                while (!oSubscriptions.isEof()) {
                    String sContent = oSubscriptions.getFieldValueString(DisplayPropertyEnum.CONTENT);
                    String sSubscriptionType = oSubscriptions.getFieldValueString(DisplayPropertyEnum.SUBSCRIPTION_TYPE);
                    if (sSubscriptionType.contains("File")) {
                        ResultSet oRecipientList = (ResultSet)oSubscriptions.getFieldValue(RECIPIENT_RESULTSET);
                        oRecipientList.moveFirst();
                        while (!oRecipientList.isEof()) {
                            String sRecipientAddress = oRecipientList.getFieldValueString(RECIPIENT_ADDRESS);
                            if (sRecipientAddress.contains(sAddressName)) {
                                printOut("||" + sID + ",\"" + sUserLogin + "\",\"" + sContent+"\"");
                            }
                            oRecipientList.moveNext();
                        }
                    }
                    oSubscriptions.moveNext();
                }
                oProjects.moveNext();
            }
        }
        oUserAddresses.moveNext();
    }
    oUserProperties.moveNext();
}

Executing this writes rows prefixed with double pipes into the log file.


Note that I included two pipe characters in my log output so that I can later find & parse my results. In PowerShell I'll invoke the procedure via Command Manager, redirect the output to a file, load the file, find the lines with the double pipes, extract the data, and convert it from CSV.

function Get-MstrCMUserSubscriptions {
    $command = 'EXECUTE PROCEDURE ListContentManagerUserSubscriptions();'
    $outFile = New-TemporaryFile
    $logFile = New-TemporaryFile
    $userSubscriptions = @("ID,Name,Report")
    try {
        Add-Content -Path $outFile $command
        $cmdmgrCommand = "cmdmgr -n `"$psn`" -u `"$psnUser`" -f `"$outFile`" -o `"$logFile`""
        iex $cmdmgrCommand
        $results = Get-Content -Path $logFile | Where-Object { $_ -like "*||*" }
        foreach ( $result in $results ) {
            $userSubscriptions += ($result.split('|')[2])
        }
    } catch {
        # swallow errors; temp files are cleaned up below
    }
    if ( (Test-Path $outFile) ) { Remove-Item -Path $outFile -Force }
    if ( (Test-Path $logFile) ) { Remove-Item -Path $logFile -Force }
    return $userSubscriptions | ConvertFrom-CSV
}

That effort yields an array I can work with...

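Each parsed object exposes the three columns from the CSV header built in the function (comments here are just for illustration):

$userSubscriptions[0].ID      # MicroStrategy user ID
$userSubscriptions[0].Name    # user login
$userSubscriptions[0].Report  # subscribed report name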

Last step: iterate the array and look for reports in the staging area. As I find one, I submit it to Content Manager via the Service API and remove it from disk.

$userSubscriptions = Get-MstrCMUserSubscriptions
foreach ( $userSubscription in $userSubscriptions ) {
    $stagingPath = ("$stagingRoot\$($userSubscription.ID)")
    $reports = Get-ChildItem $stagingPath -Filter "*$($userSubscription.Report)*"
    foreach ( $report in $reports ) {
        New-CMRecord -UserLogin $userSubscription.Name -Report $userSubscription.Report -File $report.FullName
        Remove-Item $report.FullName -Force
    }
}

My New-CMRecord function includes logic that locates an appropriate folder for the report. Yours could attach a schedule, classification, or other meta-data fetched from MicroStrategy.
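For completeness, here's a hypothetical sketch of what a New-CMRecord function could look like against the Service API; the upload endpoint and record property names are assumptions (check them against your Service API documentation), and $baseCMUri stands in for your Service API base address:

# Hypothetical sketch; endpoint and property names are assumptions, not the author's implementation
function New-CMRecord {
    Param([string]$UserLogin, [string]$Report, [string]$File)
    # $UserLogin would drive impersonation (for example via a trusted header), depending on your setup
    # Upload the file to the Service API (endpoint name assumed)
    $upload = Invoke-RestMethod -Uri "$baseCMUri/UploadFile" -Method Post -InFile $File -ContentType "application/octet-stream"
    # Create the record referencing the uploaded file (property names assumed)
    $body = @{
        RecordRecordType = 2                  # uri of the target record type
        RecordTitle      = $Report
        RecordFilePath   = $upload.FilePath   # path returned by the upload call (assumed)
    } | ConvertTo-Json
    Invoke-RestMethod -Uri "$baseCMUri/Record" -Method Post -Body $body -ContentType "application/json"
}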