Loosely Coupled Record Automation

I want to automate the creation of records in Content Manager so that workflows can be initiated and other content can be captured. As records and workflows are processed by users within Content Manager, I want to export that activity to a graph database. That graph database will highlight relationships between records that might not otherwise be easy to determine. It can also aid in tracing lineage and relationships.

A few years ago I would have created a lightweight C# console app that implements the logic and directly integrates with Content Manager via the .Net or COM SDK. No longer though! Now I want to implement this using as many managed services and serverless components as possible.

This diagram depicts the final solution design…

CM Graph (2).png

What does each component do?

  • Cloud Scheduler — an online cron/task utility that is cheap and easy to use on GCP

  • Cloud Functions — light-weight, containerized bundles of code that can be versioned, deployed, and managed independently of the other components

  • Cloud PubSub — a message broker service that lets you quickly integrate software components. One system publishes to a topic; any number of other systems (zero or more) can subscribe to that topic

  • Service API — REST API end-point that enables integration over HTTP

  • Content Manager Event Server — custom .Net Event Processor Plugin that publishes new record meta-data and workflow state to a PubSub topic

  • Graph Database — enables searching across complex relationships via the Cypher query language (think social network graph)

Why use this approach?

  • Centralized — Putting the scheduler outside of the CM server makes it easier to monitor centrally

  • Separation of Concerns — Separating the “Check Website” logic from the “Saving to CM” logic enables us to re-use that logic for other purposes

  • Asynchronous Processing — Putting PubSub between the functions lets them react in real-time and independently of each other

  • Scaling — cloud functions and PubSub can scale horizontally to handle very large call volumes

  • Error handling — when errors happen in a function we can redirect to an error topic for review (which could kick-off a workflow)

  • Language Freedom — I can use python, node, or Go for the cloud functions; or I can use .Net (via Cloud Run instead of Cloud Functions)

Overall this is a pretty simple undertaking. It will grow much more complex as time progresses, but for now I can get building!


Fetching the records

This is super easy with python! My source is a REST API that returns a bunch of data about firms. For each retrieved firm I’ll publish a message to a topic. Multiple things could then subscribe to that topic and react to the message.

First we’ll create the topic…

2019-06-21_18-33-09.jpg
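If you prefer the command line, the same topic can be created with the gcloud CLI (the topic name here matches the default used in the code below):

gcloud pubsub topics create new_firm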

Next I write the logic in a python module…

import urllib.request as urllib2
import sys
import json
import requests
import gzip
import os
from google.cloud import pubsub
 
project_id = os.getenv('GOOGLE_CLOUD_PROJECT', 'CM-DEV')
topic_name = os.getenv('GOOGLE_CLOUD_TOPIC', 'new_firm')
 
def callback(message_future):
    # When timeout is unspecified, the exception method waits indefinitely.
    if message_future.exception(timeout=30):
        print('Publishing message on {} threw an Exception {}.'.format(
            topic_name, message_future.exception()))
    else:
        print(message_future.result())
 
def downloadFirms(args):
    request_headers = requests.utils.default_headers()
    request_headers.update({
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8', 
        'Accept-Language': 'en-US,en;q=0.9', 
        'Cache-Control': 'max-age=0', 
        'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0' 
    })
    url = "https://api..."
    request = urllib2.Request(url, headers=request_headers)
    html = gzip.decompress(urllib2.urlopen(request).read()).decode('utf-8')
    data = json.loads(html)
    try:
        hits = data['hits']['hits']
        publisher = pubsub.PublisherClient()
        topic_path = publisher.topic_path(project_id, topic_name)
        for firm in hits:
            firm_data = firm['_source']
            firm_name = firm_data['firm_name']
            topic_message = json.dumps(firm_data).encode('utf-8')
            print(firm_name)
            msg = publisher.publish(topic_path, topic_message)
            msg.add_done_callback(callback)
    except Exception as exc:
        print('Error: ', exc)
    finally:
        print('Done!')
 
if __name__ == "__main__":
    downloadFirms(None)

Now that module can be placed into a cloud function…

2019-06-21_18-41-02.jpg
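Note that a Python cloud function declares its third-party dependencies in a requirements.txt deployed alongside the module. For the code above that file would contain roughly the following (pin versions to whatever you’ve tested against):

google-cloud-pubsub
requests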

Don’t forget to pass in the runtime environment variables so that the function posts to the correct topic in the correct project. You may change these during your testing process.

2019-06-21_18-42-55.jpg
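If you deploy from the command line instead, the same variables can be supplied to gcloud; a sketch with illustrative values, assuming the function is deployed as downloadFirms with an HTTP trigger:

gcloud functions deploy downloadFirms --runtime python37 --trigger-http --set-env-vars GOOGLE_CLOUD_PROJECT=CM-DEV,GOOGLE_CLOUD_TOPIC=new_firm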

With that saved we can now review the HTTP end-point address, which we’ll use when scheduling the routine download. Open the cloud function, click the Trigger tab, and copy the URL to your clipboard.

2019-06-21_18-46-06.jpg

In Cloud Scheduler we just need to set the frequency and the URL of the cloud function. I’ll post an empty JSON object because the scheduler requires a request body (even though I won’t consume it within the cloud function).

2019-06-21_18-49-34.jpg
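The same job can also be created from the command line; the job name, schedule, and URL below are placeholders:

gcloud scheduler jobs create http download-firms --schedule="0 6 * * *" --http-method=POST --uri="https://REGION-PROJECT.cloudfunctions.net/downloadFirms" --message-body="{}"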

Next I need a cloud function that subscribes to the topic and does something with the data. For quick demonstration purposes I’ll just write the data out to the console (which will materialize in Stackdriver as a log event).

2019-06-21_19-14-16.jpg
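For reference, here’s a minimal sketch of what that subscriber function can look like; the function name is arbitrary, and it assumes the message body is the JSON published by the download function:

import base64
import json

def handle_new_firm(event, context):
    """Background function triggered by a message on the new_firm topic."""
    # Pub/Sub hands the payload to the function base64-encoded in event['data']
    firm_data = json.loads(base64.b64decode(event['data']).decode('utf-8'))
    # Writing to stdout is enough; it shows up in Stackdriver as a log entry
    print('Received firm: {}'.format(firm_data.get('firm_name')))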

With that created I can now test the download function, which should result in new messages on the topic and then new output in Stackdriver. I can also create log-based metrics from the content of those logs. For instance, I can create a metric for the number of new firms, the number of errors, or the average execution duration (Cloud Functions have a maximum execution time, so this is important to keep an eye on).

2019-06-21_19-17-43.jpg
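As a rough sketch (the metric name and log filter are illustrative, not taken from this project), such a counter metric could be created from the command line like so:

gcloud logging metrics create firm_download_errors --description="Errors logged by the firm download function" --log-filter='resource.type="cloud_function" AND resource.labels.function_name="downloadFirms" AND textPayload:"Error"'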

Now I could just put the “create folder in CM” logic within my existing cloud function, but then I’m tightly-coupling the download of the firms to the registration of folders. That would limit the extent to which I can re-use code and cobble together new feature functionality. Tightly-coupled solutions are harder to maintain, support, and validate.

In the next post we’ll update the cloud function that pushes the firm into the Content Manager dataset!

Purging content after backing-up and then restoring a database

How do we create new datasets that contain just a portion of the content from an existing dataset? I’ve been asked this dozens of times over the years. Someone asked again this past week, so I figured I’d bundle up my typical method into a powershell script. The script can be run via a scheduled task (an example command is shown below), allowing you to routinely execute it for purposes such as refreshing development environments, exporting to less-secured environments, or curating training datasets.
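As an example, a nightly run could be registered like this (the task name, start time, and script path are placeholders):

schtasks /Create /SC DAILY /ST 02:00 /TN "CM DB Refresh" /TR "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\Restore-And-Purge.ps1"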

Start by installing the SQL Server powershell tools from an administrative powershell window by running this command:

Install-Module -Name SqlServer

You will be prompted to allow the install to continue.

2019-04-27_20-24-23.png

It will then initiate the download and attempt to install.

2019-04-27_20-24-36.png

In my case, I’ve already got it installed. If you do as well then you’ll see these types of errors, which are safe to ignore.

2019-04-27_20-26-17.png

Now we can start scripting out the process. The first step is to start the backup of the existing database. You can use the Backup-SqlDatabase command. To make things easier we use variables for the key parameters (like instance name, database name, path to the backup file).

# Backup the source database to a new file
Write-Information "Backing Up $($sourceDatabaseName)"
Backup-SqlDatabase -ServerInstance $serverInstance -Database $sourceDatabaseName -BackupAction Database -CopyOnly -BackupFile $backupFilePath

Next comes the restore of that backup. Unfortunately I cannot use the Restore-SqlDatabase cmdlet, because it does not support restoring over databases that have other active connections. Instead I’ll have to run a series of statements that set the database to single-user mode, restore the database (with relocated files), and then set the database back to multi-user mode.

# Restore the database by first getting an exclusive lock with single user access, enable multi-user when done
Write-Warning "Restoring database with exclusive use access"
$ExclusiveLock = "USE [master]; ALTER DATABASE [$($targetDatabaseName)] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;"
$RestoreDatabase = "RESTORE FILELISTONLY FROM disk = '$($backupFilePath)'; RESTORE DATABASE $($targetDatabaseName) FROM disk = '$($backupFilePath)' WITH replace, MOVE '$($sourceDatabaseName)' TO '$($sqlDataPath)\$($targetDatabaseName).mdf', MOVE '$($sourceDatabaseName)_log' TO '$($sqlDataPath)\$($targetDatabaseName)_log.ldf', stats = 5; ALTER DATABASE $($targetDatabaseName) SET MULTI_USER;"
ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement ($ExclusiveLock+$RestoreDatabase)

Next we need to change the recovery mode from full to simple. Doing so will allow us to manage the growth of the log files. This is important because the delete commands we run will spool changes into the database logs, which we’ll need to shrink as often as possible (otherwise the DB server could potentially run out of space).

# Change recovery mode to simple
Write-Information "Setting recovery mode to simple"
Invoke-Sqlcmd -ServerInstance $serverInstance -Database $targetDatabaseName -Query "ALTER DATABASE $($targetDatabaseName) SET RECOVERY SIMPLE"

With the database in simple recovery mode we can now start purging content from the restored database. Before digging into the logic of the deletes, I’ll need to create a function I can call that traps errors. I’ll also want to be able to incrementally shrink, if necessary.

# Function below is used so that we trap errors and shrink at certain times
function ExecuteSqlStatement 
{
    param([String]$Instance, [String]$Database, [String]$SqlStatement, [bool]$shrink = $false)
    # Use a local flag rather than overwriting PowerShell's automatic $error variable
    $hadError = $false
    # Trap all errors
    try {
        # Execute the statement
        Write-Debug "Executing Statement on $($Instance) in DB $($Database): $($SqlStatement)"
        Invoke-Sqlcmd -ServerInstance $Instance -Database $Database -Query $SqlStatement | Out-Null
        Write-Debug "Statement executed with no exceptions"
    } catch [Exception] {
        Write-Error "Exception Executing Statement: $($_)"
        $hadError = $true
    }
    # When no error and parameter passed, shrink the DB (can slow process)
    if ( $hadError -eq $false -and $shrink -eq $true ) {
        ShrinkDatabase -SQLInstanceName $serverInstance -DatabaseName $Database -FileGroupName $databaseFileGroup -FileName $databaseFileName -ShrinkSizeMB 48000 -ShrinkIncrementMB 20
    }
}

To implement the incremental shrinking I use a method I found a few years ago, linked here. It’s great as it works around the super-slow shrinking process when done on very large datasets. Your database administrator should pay close attention to how it works and align it with your environment.

My goal is to remove all records of certain types, so that they aren’t exposed in the restored copy. Unfortunately the out-of-the-box constraints do not cascade and delete related objects. That means we need to delete them before trying to delete the records.

We need to delete:

  • Workflows (and supporting objects)

  • Records (and supporting objects)

  • Record Types

In very large datasets this process could take hours. You can optimize the performance by adding a “-shrink $true” parameter to any of the delete statements that impact large volumes of data in your org (electronic revisions, renditions, locations for instance).

# Purging content by record type uri
foreach ( $rtyUri in $recTypeUris ) 
{
    Write-Warning "Purging All Records & Supporting Objects for Record Type Uri $($rtyUri)"
    Write-Information " - Purging Workflow Document References"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tswkdocusa where wduDocumentUri in (select uri from tswkdocume where wdcRecordUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri))"
    Write-Information " - Purging Workflow Documents"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tswkdocume where wdcRecordUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri)"
    Write-Information " - Purging Workflow Activity Start Conditions"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tswkstartc where wscActivityUri in (select uri from tswkactivi where wacWorkflowUri in (select uri from tswkworkfl where wrkInitiator in (select uri from tsrecord where rcRecTypeUri=$rtyUri)))"
    Write-Information " - Purging Workflow Activities"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tswkactivi where wacWorkflowUri in (select uri from tswkworkfl where wrkInitiator in (select uri from tsrecord where rcRecTypeUri=$rtyUri))"
    Write-Information " - Purging Workflows"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tswkworkfl where wrkInitiator in (select uri from tsrecord where rcRecTypeUri=$rtyUri)"
    Write-Information " - Purging Communications Detail Words"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tstranswor where twdTransDetailUri in (select tstransdet.uri from tstransdet inner join tstransmit on tstransdet.tdTransUri = tstransmit.uri inner join tsrecord on tstransmit.trRecordUri = tsrecord.uri where tsrecord.rcRecTypeUri = $rtyUri);"
    Write-Information " - Purging Communications Details"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tstransdet where tdTransUri in (select tstransmit.uri from tstransmit inner join tsrecord on tstransmit.trRecordUri = tsrecord.uri where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Communications"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tstransmit where trRecordUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Thesaurus Terms"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsrecterm where rtmRecordUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Relationships"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsreclink where rkRecUri1 in (select uri from tsrecord where rcRecTypeUri=$rtyUri) OR rkRecUri2 in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Actions"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsrecactst where raRecordUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Jurisdictions"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsrecjuris where rjRecordUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Requests"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsrecreque where rqRecordUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Rendition Queue"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsrendqueu where rnqRecUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Renditions"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsrenditio where rrRecordUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Revisions"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tserecvsn where evRecElecUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Documents"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsrecelec where uri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Holds"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tscasereco where crRecordUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Record Locations"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsrecloc where rlRecUri in (select uri from tsrecord where rcRecTypeUri=$rtyUri);"
    Write-Information " - Purging Records (shrink after)"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsrecord where rcRecTypeUri = $rtyUri" -shrink $true
    Write-Information " - Purging Record Types"
    ExecuteSqlStatement -Instance $serverInstance -Database $targetDatabaseName -SqlStatement "delete from tsrectype where uri = $rtyUri"
}

With that out of the way we can now restore the recovery mode back to full.

# Change recovery mode back to full
Write-Information "Setting recovery mode to full"
Invoke-Sqlcmd -ServerInstance $serverInstance -Database $targetDatabaseName -Query "ALTER DATABASE $($targetDatabaseName) SET RECOVERY FULL"

The last step is to restart the workgroup service on the server that uses this restored database.

# Restart CM
Write-Warning "Restarting Content Manager Service"
Restart-Service -Force -Name $cmServiceName

At the top of my script I have all of the variables defined. To make this work for you, you’ll need to adjust the variables to align with your environment. For instance, you’ll need to update the $recTypeUris array to contain the URIs of the record types you want deleted.

# Variables to be used throughout the script
$serverInstance = "localhost"                   # SQL Server instance name 
$sourceDatabaseName = "CMRamble_93"             # Name of database in SQL Server
$targetDatabaseName = "Restored_cmramble_93"    # Name to restore to in SQL Server
$backupFileName = "$($sourceDatabaseName).bak"  # File name for backup of database
$backupPath = "C:\\temp"                        # Folder to back-up into (relative to server)
$backupFilePath = [System.IO.Path]::Combine($backupPath, $backupFileName)   # Use .Net's path join which honors OS
$databaseFileGroup = "PRIMARY"                  # File group of database content, used when shrinking
$databaseFileName = "CMRamble_93"               # Filename within file group, used when shrinking
$sqlDataPath = "D:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA"   # Path to data files, used in restore
$cmServiceName = "TRIMWorkgroup"                # Name of registered service in OS (not display name)
$recTypeUris = @( 3 )                           # Array of uri's for record types to be purged after restore

To find the URIs of your record types, you’ll need to customize your view pane and add in the unique identifier property.

2019-04-27_21-04-13.png

Running the script gives me this output….

2019-04-27_20-17-55.png

On my local machine it took ~1 minute to complete for a super small dataset. When running this against a 70 GB database, removing approximately 20 GB of content, it takes about 15 minutes. That said, my SQL Server has 64 GB of RAM and two SSD drives that hold the SQL Server data & log files.

You can download my full script here: https://github.com/aberrantCode/cm_db_restore_and_purge

Transcribing Audio Records with the Cloud Speech API

In this post I'll show how one might approach enriching their public archives with audio transcriptions provided by Google's Cloud Speech API.

First we can start with this collection of records:

Collection of Hearing Records

Simple meta-data for each file

For each of these audio files I’ll have to download it, convert it, stage it, pass it to the Speech API, and capture the results. I’ll craft this in powershell for the moment and then later implement it as a cloud function. The results can then be added to the notes or as an OCR text rendition (or reviewed and then added).

2018-06-08_20-46-04.png

The speech API will give me chunks of text with an associated confidence level, as shown below:

2018-06-08_20-54-10.png

Which can all be mashed together for a pretty accurate transcription:

2018-06-08_21-17-59.png

Step by Step

My first step is to enable the Speech API within GCP:

 
2018-06-02_1-33-25.png
 

Then create a storage bucket to house the files. I could skip this and upload the audio directly within each request, but staging the files in a bucket makes it easier to later work with cloud functions.
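Creating the bucket is a one-liner with gsutil (the bucket name matches the $bucketPath variable used later in the script):

gsutil mb gs://speech-api-cm-dev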

To convert the audio from mp3 to wav format with a single audio channel I used ffmpeg:

 
ffmpeg -hide_banner -loglevel panic -y -i %inputfile% -ac 1 %outputfile%

I like ffmpeg because it's easy to use on Windows servers & workstations. But there's also a good fluent-ffmpeg module available for Node.js, which would allow this to be built as a cloud function. For now here's my powershell function to convert the audio file...

function ConvertTo-Wav {
	Param([string]$inputFile)
	#few variables for local pathing
	$newFileName = [System.IO.Path]::getfilenamewithoutextension($inputFile) + ".wav"
	$fileDir = [System.IO.Path]::getdirectoryname($inputFile)
	$newFilePath = [System.IO.Path]::combine($fileDir, $newFileName)
	#once is enough
	Write-Debug("ConvertTo-Wav Target File: " + $newFilePath)
	if ( (Test-Path $newFilePath) -eq $false ) {
		#convert using open source ffmpeg
		$convertCmd = "ffmpeg -hide_banner -loglevel panic -y -i `"$inputFile`" -ac 1 `"$newFilePath`""
		Write-Debug ("ConvertTo-Wav Command: " + $convertCmd)
		Invoke-Expression $convertCmd
	}
	return $newFilePath
}

This function queries Content Manager for the audio records and caches the result to disk (simply to make development of the script easier)...

function Get-CMAudioFilesJson {
	$localResponseFile = [System.IO.Path]::Combine($localTempPath,"audioRecords.json")
	$audioFiles = @()
	if ( (Test-Path $localResponseFile) -eq $false ) {
		#fetch if results not cached to disk
		$searchUri = ($baseCMUri + "/Record?q=container:11532546&format=json&pageSize=100000&properties=Uri,RecordNumber,Url,RecordExtension,RecordDocumentSize")
		Write-Debug "Searching for audio records: $searchUri"
		$response = Invoke-RestMethod -Uri $searchUri -Method Get -ContentType $ApplicationJson
		#flush to disk as raw json
		$response | ConvertTo-Json -Depth 6 | Set-Content $localResponseFile
	} else {
		#load and convert from json
		Write-Debug ("Loading Audio Records from local file: " + $localResponseFile)
		$response = Get-Content -Path $localResponseFile -Raw | ConvertFrom-Json
	}
	#if no results just error out
	if ( $response.Results -ne $null ) {
		Write-Debug ("Processing $($response.Results.length) audio records")
		$audioFiles = $response.Results
	} else {
		Write-Debug "Error"
		break
	}
	return $audioFiles
}

A function to submit the record's audio file to the speech api (and capture the results):

function Get-AudioText 
{
    Param($audioRecord)
    #formulate a valid path for the local file system
    $localFileName = (""+$audioRecord.Uri+"."+$audioRecord.RecordExtension.Value)
    $localPath = ($localTempPath + "\" + $localFileName)
    $sourceAudioFileUri = ($audioRecord.Url.Value + "/File/Document")
    $speechApiResultPath = ($localTempPath + "\" + $audioRecord.Uri + ".txt")  
    $speechTextPath = ($localTempPath + "\" + $audioRecord.Uri + "_text.txt")
    #download the audio file if not already done so
    if ( (Test-Path $localPath) -eq $false ) {  
        Invoke-WebRequest -Uri $sourceAudioFileUri -OutFile $localPath
    }
    #convert file if necessary
    if ( ($audioRecord.RecordExtension.Value.ToLower()) -ne "wav" ) {
        $localPath = ConvertTo-Wav $localPath
        $localFileName = [System.IO.Path]::GetfileName($localPath)
        if ( (Test-Path $localPath) -eq $false ) {
            Write-Error "Error Converting $($localPath)"
            return
        }
    }
 
    #transcribe, if not already done so
    Write-Debug ("Checking Speech API Text: "+$speechApiResultPath)
    if ( (Test-Path $speechApiResultPath) -eq $false ) {
        try {
            $bucketFilePath = "$bucketPath/$localFileName"
            Put-BucketFile -bucketFilePath $bucketFilePath -bucketPath $bucketPath -localPath $localPath
            #invoke speech api
            $speechCmd = "gcloud ml speech recognize-long-running $bucketFilePath --language-code=en-US"
            Write-Debug ("Speech API Command: "+$speechCmd)
            Invoke-Expression $speechCmd -OutVariable speechResult | Tee-Object -FilePath $speechApiResultPath
            Write-Debug ("Speech API Result: " + $speechResult)    
        } catch {
            Write-Error $_
        }
    }
 
    #process transcription result
    if ( (Test-Path $speechApiResultPath) -eq $true ) {
        Write-Debug ("Reading Speech Results File: " + $speechApiResultPath)
        #remove previous consolidated transcription file
        if ( (Test-Path $speechTextPath) -eq $true ) { Remove-Item $speechTextPath -Force }
        #parse the speech api response json and flush each transcript result to disk
        $content = Get-Content -Path $speechApiResultPath -Raw | ConvertFrom-Json
        $content.results | ForEach-Object { $_.alternatives | ForEach-Object { Add-Content $speechTextPath ($_.transcript+' ') } }
    } else {
        Write-Debug ("No Speech API Results: " + $speechTextPath)
    }
}

And then some logic to parse the search results and invoke the speech api:

#fetch the search results
$audioFiles = Get-CMAudioFilesJson
if ( $audioFiles -eq $null ) {
    Write-Error "No audio files found"
    exit
}
#process each
Write-Debug "Found $($audioFiles.Length) audio files"
foreach ( $audioFile in $audioFiles ) {
    Write-Host "Transcribing $($audioFile.RecordNumber.Value)"
    Get-AudioText  -audioRecord $audioFile
}

Here's the complete script:

Clear-Host
$DebugPreference = "Continue"
 
#variables and such
$AllProtocols = [System.Net.SecurityProtocolType]'Ssl3,Tls,Tls11,Tls12'
[System.Net.ServicePointManager]::SecurityProtocol = $AllProtocols
$ApplicationJson = "application/json"
$baseCMUri = "http://efiles.portlandoregon.gov"
$localTempPath = "C:\temp\speechapi"
$bucketPath = "gs://speech-api-cm-dev"
 
#create local staging area
if ( (Test-Path $localTempPath) -eq $false ) {
    New-Item $localTempPath -Type Directory
}
 
function Get-CMAudioFilesJson {
	$localResponseFile = [System.IO.Path]::Combine($localTempPath,"audioRecords.json")
	$audioFiles = @()
	if ( (Test-Path $localResponseFile) -eq $false ) {
		#fetch if results not cached to disk
		$searchUri = ($baseCMUri + "/Record?q=container:11532546&format=json&pageSize=100000&properties=Uri,RecordNumber,Url,RecordExtension,RecordDocumentSize")
		Write-Debug "Searching for audio records: $searchUri"
		$response = Invoke-RestMethod -Uri $searchUri -Method Get -ContentType $ApplicationJson
		#flush to disk as raw json
		$response | ConvertTo-Json -Depth 6 | Set-Content $localResponseFile
	} else {
		#load and convert from json
		Write-Debug ("Loading Audio Records from local file: " + $localResponseFile)
		$response = Get-Content -Path $localResponseFile -Raw | ConvertFrom-Json
	}
	#if no results just error out
	if ( $response.Results -ne $null ) {
		Write-Debug ("Processing $($response.Results.length) audio records")
		$audioFiles = $response.Results
	} else {
		Write-Debug "Error"
		break
	}
	return $audioFiles
}
 
function ConvertTo-Wav {
	Param([string]$inputFile)
	#few variables for local pathing
	$newFileName = [System.IO.Path]::getfilenamewithoutextension($inputFile) + ".wav"
	$fileDir = [System.IO.Path]::getdirectoryname($inputFile)
	$newFilePath = [System.IO.Path]::combine($fileDir, $newFileName)
	#once is enough
	Write-Debug("ConvertTo-Wav Target File: " + $newFilePath)
	if ( (Test-Path $newFilePath) -eq $false ) {
		#convert using open source ffmpeg
		$convertCmd = "ffmpeg -hide_banner -loglevel panic -y -i `"$inputFile`" -ac 1 `"$newFilePath`""
		Write-Debug ("ConvertTo-Wav Command: " + $convertCmd)
		Invoke-Expression $convertCmd
	}
	return $newFilePath
}
 
function Put-BucketFile {
	Param($bucketFilePath,$bucketPath,$localPath)
	#upload to bucket
    $checkCommand = "gsutil -q stat $bucketFilePath"
    $checkCommand += ';$?'
    Write-Debug ("GCS file check: " + $checkCommand)
    $fileCheck = Invoke-Expression $checkCommand
    #fileCheck is true if it exists, false otherwise
    if (-not $fileCheck ) {
        Write-Debug ("Uploading to bucket: gsutil cp " + $localPath + " " + $bucketPath)
        gsutil cp $localPath $bucketPath
    }
}
 
function Get-AudioText 
{
    Param($audioRecord)
    #formulate a valid path for the local file system
    $localFileName = (""+$audioRecord.Uri+"."+$audioRecord.RecordExtension.Value)
    $localPath = ($localTempPath + "\" + $localFileName)
    $sourceAudioFileUri = ($audioRecord.Url.Value + "/File/Document")
    $speechApiResultPath = ($localTempPath + "\" + $audioRecord.Uri + ".txt")  
    $speechTextPath = ($localTempPath + "\" + $audioRecord.Uri + "_text.txt")
    #download the audio file if not already done so
    if ( (Test-Path $localPath) -eq $false ) {  
        Invoke-WebRequest -Uri $sourceAudioFileUri -OutFile $localPath
    }
    #convert file if necessary
    if ( ($audioRecord.RecordExtension.Value.ToLower()) -ne "wav" ) {
        $localPath = ConvertTo-Wav $localPath
        $localFileName = [System.IO.Path]::GetfileName($localPath)
        if ( (Test-Path $localPath) -eq $false ) {
            Write-Error "Error Converting $($localPath)"
            return
        }
    }
 
    #transcribe, if not already done so
    Write-Debug ("Checking Speech API Text: "+$speechApiResultPath)
    if ( (Test-Path $speechApiResultPath) -eq $false ) {
        try {
            $bucketFilePath = "$bucketPath/$localFileName"
            Put-BucketFile -bucketFilePath $bucketFilePath -bucketPath $bucketPath -localPath $localPath
            #invoke speech api
            $speechCmd = "gcloud ml speech recognize-long-running $bucketFilePath --language-code=en-US"
            Write-Debug ("Speech API Command: "+$speechCmd)
            Invoke-Expression $speechCmd -OutVariable speechResult | Tee-Object -FilePath $speechApiResultPath
            Write-Debug ("Speech API Result: " + $speechResult)    
        } catch {
            Write-Error $_
        }
    }
 
    #process transcription result
    if ( (Test-Path $speechApiResultPath) -eq $true ) {
        Write-Debug ("Reading Speech Results File: " + $speechApiResultPath)
        #remove previous consolidated transcription file
        if ( (Test-Path $speechTextPath) -eq $true ) { Remove-Item $speechTextPath -Force }
        #parse the speech api response json and flush each transcript result to disk
        $content = Get-Content -Path $speechApiResultPath -Raw | ConvertFrom-Json
        $content.results | ForEach-Object { $_.alternatives | ForEach-Object { Add-Content $speechTextPath ($_.transcript+' ') } }
    } else {
        Write-Debug ("No Speech API Results: " + $speechTextPath)
    }
}
 
#fetch the search results
$audioFiles = Get-CMAudioFilesJson
if ( $audioFiles -eq $null ) {
    Write-Error "No audio files found"
    exit
}
#process each
Write-Debug "Found $($audioFiles.Length) audio files"
foreach ( $audioFile in $audioFiles ) {
    Write-Host "Transcribing $($audioFile.RecordNumber.Value)"
    Get-AudioText  -audioRecord $audioFile
}