Transcribing Audio Records with the Cloud Speech API

In this post I'll show how one might approach enriching a public archive with audio transcriptions provided by Google's Cloud Speech API.

First we can start with this collection of records:

Collection of Hearing Records

Simple meta-data for each file

For each of these audio files I'll have to download it, convert it, stage it, pass it to the Speech API, and capture the results.  I'll craft this in PowerShell for the moment and later implement it as a cloud function.  The results can then be added to the record's notes or attached as an OCR text rendition (or reviewed and then added).

2018-06-08_20-46-04.png

The Speech API will give me chunks of text, each with an associated confidence level, as shown below:

2018-06-08_20-54-10.png
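
For reference, the raw JSON that the gcloud command returns has roughly this shape (the transcript and confidence values here are illustrative, not real results):

{
  "results": [
    {
      "alternatives": [
        {
          "confidence": 0.92,
          "transcript": "good morning this hearing will now come to order"
        }
      ]
    }
  ]
}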

These chunks can then be stitched together into a fairly accurate transcription:

2018-06-08_21-17-59.png

Step by Step

My first step is to enable the Speech API within GCP:

 
2018-06-02_1-33-25.png
 

Then I create a storage bucket to house the files.  I could skip this and upload each file directly within a request, but staging the files in a bucket makes it easier to later work with cloud functions.
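
If you'd rather create the bucket from the command line than the console, gsutil can do it; the bucket name below matches the one used later in the script, but the region is just an assumption:

gsutil mb -l us-west1 gs://speech-api-cm-dev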

To convert the audio from MP3 to WAV format with a single audio channel I used ffmpeg:

 
ffmpeg -hide_banner -loglevel panic -y -i %inputfile% -ac 1 %outputfile%

I like ffmpeg because it's easy to use on Windows servers and workstations.  There's also a good fluent-ffmpeg module available for Node.js, which would allow this to be built as a cloud function.  For now, here's my PowerShell function to convert the audio file...

function ConvertTo-Wav {
	Param([string]$inputFile)
	#few variables for local pathing
	$newFileName = [System.IO.Path]::getfilenamewithoutextension($inputFile) + ".wav"
	$fileDir = [System.IO.Path]::getdirectoryname($inputFile)
	$newFilePath = [System.IO.Path]::combine($fileDir, $newFileName)
	#skip the conversion if the target file already exists
	Write-Debug("ConvertTo-Wav Target File: " + $newFilePath)
	if ( (Test-Path $newFilePath) -eq $false ) {
		#convert using open source ffmpeg
		$convertCmd = "ffmpeg -hide_banner -loglevel panic -y -i `"$inputFile`" -ac 1 `"$newFilePath`""
		Write-Debug ("ConvertTo-Wav Command: " + $convertCmd)
		Invoke-Expression $convertCmd
	}
	return $newFilePath
}
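
A quick usage example (the path is illustrative):

#convert a staged mp3 and capture the new path
$wavPath = ConvertTo-Wav "C:\temp\speechapi\12345.mp3"
Write-Debug "Converted file: $wavPath"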

This function queries for the audio records from Content Manager and caches the result to disk (simply to speed development of the script)...

function Get-CMAudioFilesJson {
	$localResponseFile = [System.IO.Path]::Combine($localTempPath,"audioRecords.json")
	$audioFiles = @()
	if ( (Test-Path $localResponseFile) -eq $false ) {
		#fetch if results not cached to disk
		$searchUri = ($baseCMUri + "/Record?q=container:11532546&format=json&pageSize=100000&properties=Uri,RecordNumber,Url,RecordExtension,RecordDocumentSize")
		Write-Debug "Searching for audio records: $searchUri"
		$response = Invoke-RestMethod -Uri $searchUri -Method Get -ContentType $ApplicationJson
		#flush to disk as raw json
		$response | ConvertTo-Json -Depth 6 | Set-Content $localResponseFile
	} else {
		#load and convert from json
		Write-Debug ("Loading Audio Records from local file: " + $localResponseFile)
		$response = Get-Content -Path $localResponseFile | ConvertFrom-Json
	}
	#if no results just error out
	if ( $response.Results -ne $null ) {
		Write-Debug ("Processing $($response.Results.length) audio records")
		$audioFiles = $response.Results
	} else {
		Write-Debug "No results returned from Content Manager"
		break
	}
	return $audioFiles
}

A function to submit the record's audio file to the Speech API (and capture the results):

function Get-AudioText 
{
    Param($audioRecord)
    #formulate a valid path for the local file system
    $localFileName = (""+$audioRecord.Uri+"."+$audioRecord.RecordExtension.Value)
    $localPath = ($localTempPath + "\" + $localFileName)
    $sourceAudioFileUri = ($audioRecord.Url.Value + "/File/Document")
    $speechApiResultPath = ($localTempPath + "\" + $audioRecord.Uri + ".txt")  
    $speechTextPath = ($localTempPath + "\" + $audioRecord.Uri + "_text.txt")
    #download the audio file if not already done so
    if ( (Test-Path $localPath) -eq $false ) {  
        Invoke-WebRequest -Uri $sourceAudioFileUri -OutFile $localPath
    }
    #convert file if necessary
    if ( ($audioRecord.RecordExtension.Value.ToLower()) -ne "wav" ) {
        $localPath = ConvertTo-Wav $localPath
        $localFileName = [System.IO.Path]::GetfileName($localPath)
        if ( (Test-Path $localPath) -eq $false ) {
            Write-Error "Error Converting $($localPath)"
            return
        }
    }
 
    #transcribe, if not already done so
    Write-Debug ("Checking Speech API Text: "+$speechApiResultPath)
    if ( (Test-Path $speechApiResultPath) -eq $false ) {
        try {
            $bucketFilePath = "$bucketPath/$localFileName"
            Put-BucketFile -bucketFilePath $bucketFilePath -bucketPath $bucketPath -localPath $localPath
            #invoke speech api
            $speechCmd = "gcloud ml speech recognize-long-running $bucketFilePath --language-code=en-US"
            Write-Debug ("Speech API Command: "+$speechCmd)
            Invoke-Expression $speechCmd -OutVariable speechResult | Tee-Object -FilePath $speechApiResultPath
            Write-Debug ("Speech API Result: " + $speechResult)    
        } catch {
            Write-Error $_
        }
    }
 
    #process transcription result
    if ( (Test-Path $speechApiResultPath) -eq $true ) {
        Write-Debug ("Reading Speech Results File: " + $speechApiResultPath)
		#remove previous consolidated transcription file
		if ( (Test-Path $speechTextPath) -eq $true ) { Remove-Item $speechTextPath -Force }
		#parse the json response and flush each transcript result to disk
		$content = Get-Content -Path $speechApiResultPath | ConvertFrom-Json
		$content.results | ForEach-Object { $_.alternatives | ForEach-Object { Add-Content $speechTextPath ($_.transcript + ' ') } }
    } else {
        Write-Debug ("No Speech API Results: " + $speechTextPath)
    }
}

And then some driver logic to fetch the search results and invoke the Speech API for each record:

#fetch the search results
$audioFiles = Get-CMAudioFilesJson
if ( $audioFiles -eq $null ) {
    Write-Error "No audio files found"
    exit
}
#process each
Write-Debug "Found $($audioFiles.Length) audio files"
foreach ( $audioFile in $audioFiles ) {
    Write-Host "Transcribing $($audioFile.RecordNumber.Value)"
    Get-AudioText  -audioRecord $audioFile
}

Here's the complete script:

Clear-Host
$DebugPreference = "Continue"
 
#variables and such
$AllProtocols = [System.Net.SecurityProtocolType]'Ssl3,Tls,Tls11,Tls12'
[System.Net.ServicePointManager]::SecurityProtocol = $AllProtocols
$ApplicationJson = "application/json"
$baseCMUri = "http://efiles.portlandoregon.gov"
$localTempPath = "C:\temp\speechapi"
$bucketPath = "gs://speech-api-cm-dev"
 
#create local staging area
if ( (Test-Path $localTempPath) -eq $false ) {
    New-Item $localTempPath -Type Directory
}
 
function Get-CMAudioFilesJson {
	$localResponseFile = [System.IO.Path]::Combine($localTempPath,"audioRecords.json")
	$audioFiles = @()
	if ( (Test-Path $localResponseFile) -eq $false ) {
		#fetch if results not cached to disk
		$searchUri = ($baseCMUri + "/Record?q=container:11532546&format=json&pageSize=100000&properties=Uri,RecordNumber,Url,RecordExtension,RecordDocumentSize")
		Write-Debug "Searching for audio records: $searchUri"
		$response = Invoke-RestMethod -Uri $searchUri -Method Get -ContentType $ApplicationJson
		#flush to disk as raw json
		$response | ConvertTo-Json -Depth 6 | Set-Content $localResponseFile
	} else {
		#load and convert from json
		Write-Debug ("Loading Audio Records from local file: " + $localResponseFile)
		$response = Get-Content -Path $localResponseFile | ConvertFrom-Json
	}
	#if no results just error out
	if ( $response.Results -ne $null ) {
		Write-Debug ("Processing $($response.Results.length) audio records")
		$audioFiles = $response.Results
	} else {
		Write-Debug "No results returned from Content Manager"
		break
	}
	return $audioFiles
}
 
function ConvertTo-Wav {
	Param([string]$inputFile)
	#few variables for local pathing
	$newFileName = [System.IO.Path]::getfilenamewithoutextension($inputFile) + ".wav"
	$fileDir = [System.IO.Path]::getdirectoryname($inputFile)
	$newFilePath = [System.IO.Path]::combine($fileDir, $newFileName)
	#skip the conversion if the target file already exists
	Write-Debug("ConvertTo-Wav Target File: " + $newFilePath)
	if ( (Test-Path $newFilePath) -eq $false ) {
		#convert using open source ffmpeg
		$convertCmd = "ffmpeg -hide_banner -loglevel panic -y -i `"$inputFile`" -ac 1 `"$newFilePath`""
		Write-Debug ("ConvertTo-Wav Command: " + $convertCmd)
		Invoke-Expression $convertCmd
	}
	return $newFilePath
}
 
function Put-BucketFile {
	Param($bucketFilePath,$bucketPath,$localPath)
	#upload to bucket
    $checkCommand = "gsutil -q stat $bucketFilePath"
    $checkCommand += ';$?'
    Write-Debug ("GCS file check: " + $checkCommand)
    $fileCheck = Invoke-Expression $checkCommand
    #fileCheck is true if it exists, false otherwise
    if (-not $fileCheck ) {
        Write-Debug ("Uploading to bucket: gsutil cp " + $localPath + " " + $bucketPath)
        gsutil cp $localPath $bucketPath
    }
}
 
function Get-AudioText 
{
    Param($audioRecord)
    #formulate a valid path for the local file system
    $localFileName = (""+$audioRecord.Uri+"."+$audioRecord.RecordExtension.Value)
    $localPath = ($localTempPath + "\" + $localFileName)
    $sourceAudioFileUri = ($audioRecord.Url.Value + "/File/Document")
    $speechApiResultPath = ($localTempPath + "\" + $audioRecord.Uri + ".txt")  
    $speechTextPath = ($localTempPath + "\" + $audioRecord.Uri + "_text.txt")
    #download the audio file if not already done so
    if ( (Test-Path $localPath) -eq $false ) {  
        Invoke-WebRequest -Uri $sourceAudioFileUri -OutFile $localPath
    }
    #convert file if necessary
    if ( ($audioRecord.RecordExtension.Value.ToLower()) -ne "wav" ) {
        $localPath = ConvertTo-Wav $localPath
        $localFileName = [System.IO.Path]::GetfileName($localPath)
        if ( (Test-Path $localPath) -eq $false ) {
            Write-Error "Error Converting $($localPath)"
            return
        }
    }
 
    #transcribe, if not already done so
    Write-Debug ("Checking Speech API Text: "+$speechApiResultPath)
    if ( (Test-Path $speechApiResultPath) -eq $false ) {
        try {
            $bucketFilePath = "$bucketPath/$localFileName"
            Put-BucketFile -bucketFilePath $bucketFilePath -bucketPath $bucketPath -localPath $localPath
            #invoke speech api
            $speechCmd = "gcloud ml speech recognize-long-running $bucketFilePath --language-code=en-US"
            Write-Debug ("Speech API Command: "+$speechCmd)
            Invoke-Expression $speechCmd -OutVariable speechResult | Tee-Object -FilePath $speechApiResultPath
            Write-Debug ("Speech API Result: " + $speechResult)    
        } catch {
            Write-Error $_
        }
    }
 
    #process transcription result
    if ( (Test-Path $speechApiResultPath) -eq $true ) {
        Write-Debug ("Reading Speech Results File: " + $speechApiResultPath)
		#remove previous consolidated transcription file
		if ( (Test-Path $speechTextPath) -eq $true ) { Remove-Item $speechTextPath -Force }
		#parse the json response and flush each transcript result to disk
		$content = Get-Content -Path $speechApiResultPath | ConvertFrom-Json
		$content.results | ForEach-Object { $_.alternatives | ForEach-Object { Add-Content $speechTextPath ($_.transcript + ' ') } }
    } else {
        Write-Debug ("No Speech API Results: " + $speechTextPath)
    }
}
 
#fetch the search results
$audioFiles = Get-CMAudioFilesJson
if ( $audioFiles -eq $null ) {
    Write-Error "No audio files found"
    exit
}
#process each
Write-Debug "Found $($audioFiles.Length) audio files"
foreach ( $audioFile in $audioFiles ) {
    Write-Host "Transcribing $($audioFile.RecordNumber.Value)"
    Get-AudioText  -audioRecord $audioFile
}
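
Note that the script assumes ffmpeg, gsutil, and gcloud are all on the PATH, and that the gcloud SDK has already been authenticated against the right project; something along these lines (the project id is a placeholder):

gcloud auth login
gcloud config set project <your-project-id>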

Automating the generation of Tesseract OCR text renditions

Although IDOL will index the contents of PDF documents, it does not perform its own OCR of the content (at least the OEM connector for CM does not).  In the JFK archives this means I can only search on the stamped annotation on each image.  Even if IDOL re-OCR'd documents, I couldn't easily extract the words it finds.  I need to do that when researching records, performing a retention analysis, culling keywords for a record hold, or writing scope notes for categorization purposes.  In the previous post I created a record add-in that generated a plain text file holding OCR content from the Tesseract engine.

Moving forward I want to automate these OCR tasks.  For instance, any time a new document is attached, a new OCR rendition should be generated.  I think it makes sense to take the solution from the previous post and build on it.  The event processor plugin I create should call the same logic as the client add-in.  If this approach works out, I can then add a ServiceAPI plugin to expose the same functionality within that framework.

So I took the code from the last post and added another C# class library.  I added one class that derives from the event processor add-in base class (TrimEventProcessorAddIn).  It required one method be implemented: ProcessEvent.  Within that method I check whether the record is being reindexed, the document has been replaced, the document has been attached, or a rendition has changed.  If so, I call the methods from the TextExtractor library used in the previous post.

using HP.HPTRIM.SDK;
using System;
using System.IO;
using System.Reflection;
 
namespace CMRamble.Ocr.EventProcessorAddin
{
    public class Addin : TrimEventProcessorAddIn
    {
        #region Event Processing
        public override void ProcessEvent(Database db, TrimEvent evt)
        {
            Record record = null;
            RecordRendition rendition;
            if (evt.ObjectType == BaseObjectTypes.Record)
            {
                switch (evt.EventType)
                {
                    case Events.ReindexWords:
                    case Events.DocReplaced:
                    case Events.DocAttached:
                    case Events.DocRenditionRemoved:
                        record = db.FindTrimObjectByUri(BaseObjectTypes.Record, evt.ObjectUri) as Record;
                        RecordController.UpdateOcrRendition(record, AssemblyDirectory);
                        break;
                    case Events.DocRenditionAdded:
                        record = db.FindTrimObjectByUri(BaseObjectTypes.Record, evt.ObjectUri) as Record;
                        var eventRendition = record.ChildRenditions.FindChildByUri(evt.RelatedObjectUri) as RecordRendition;
                        if ( eventRendition != null && eventRendition.TypeOfRendition == RenditionType.Original )
                        {   // if added an original
                            rendition = eventRendition;
                            RecordController.UpdateOcrRendition(record, rendition, Path.Combine(AssemblyDirectory, "tessdata\\"));
                        }
                        break;
                    default:
                        break;
                }
            }
        }
        #endregion
        public static string AssemblyDirectory
        {
            get
            {
                string codeBase = Assembly.GetExecutingAssembly().CodeBase;
                UriBuilder uri = new UriBuilder(codeBase);
                string path = Uri.UnescapeDataString(uri.Path);
                return Path.GetDirectoryName(path);
            }
        }
    }
}
 

Note that I created the AssemblyDirectory property so that the Tesseract OCR path can be located correctly.  Since this is spawned from TRIMEvent.exe, the executing directory is the installation path of Content Manager.  The Tesseract language files are in a different location, though.  To work around this I pass the AssemblyDirectory property into the TextExtractor.

I updated the UpdateOcrRendition method in the RecordController class so that it accepts the assembly path.  If the assembly path is not passed, the value defaults to the original, relative location.  The record add-in can then be updated to match this approach.

2017-11-14_20-53-36.png
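
Since that screenshot may be hard to read, here's a rough sketch of the change; the parameter name and fallback logic are my guesses, based on how the event processor calls the method:

// sketch only: optional assembly path with a relative fallback
public static bool UpdateOcrRendition(Record record, string assemblyPath = "")
{
    // fall back to the original, relative tessdata location when no path is supplied
    string tessData = string.IsNullOrWhiteSpace(assemblyPath)
        ? @"./tessdata"
        : Path.Combine(assemblyPath, "tessdata\\");
    // ... the extract/OCR logic is unchanged, except the resolved path is passed along:
    // ocrFilePath = TextExtractor.ExtractFromFile(extractedFilePath, tessData);
    return true; // placeholder; the real method returns its success flag
}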

Within the TextExtractor class I added a parameter to the required method.  I could then pass it directly into the Tesseract engine during instantiation.

2017-11-14_20-56-41.png
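
Again as a sketch (the full TextExtractor listing later in this document still shows the original hard-coded path):

// sketch only: the tessdata location is now a parameter
public static string ExtractFromFile(string filePath, string tessData = @"./tessdata")
{
    // same page-extraction loop as before; only the engine construction changes:
    // using (var engine = new TesseractEngine(tessData, "eng", EngineMode.Default)) { ... }
    return string.Empty; // placeholder; the real method returns the OCR file path
}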

If you expand upon this concept you can see how it's possible to use different languages or trainer data.  For now I need to go back and add one additional method: in the event processor I reacted to a new rendition being added, but I didn't implement the logic.  So I need to create a record controller method that works for renditions.

public static bool OcrRendition(Record record, RecordRendition sourceRendition, string tessData = @"./tessdata")
{
    bool success = false;
    string extractedFilePath = string.Empty;
    string ocrFilePath = string.Empty;
    try
    {
        // get a temp working location on disk
        var rootDirectory = Path.Combine(Path.GetTempPath(), "cmramble_ocr");
        if (!Directory.Exists(rootDirectory)) Directory.CreateDirectory(rootDirectory);
        // formulate file name to extract, delete if exists for some reason
        extractedFilePath = Path.Combine(rootDirectory, $"{sourceRendition.Uri}.{sourceRendition.Extension}");
        ocrFilePath = Path.Combine(rootDirectory, $"{sourceRendition.Uri}.txt");
        FileHelper.Delete(extractedFilePath);
        FileHelper.Delete(ocrFilePath);
        // fetch document
        var extract = sourceRendition.GetExtractDocument();
        extract.FileName = Path.GetFileName(extractedFilePath);
        extract.DoExtract(Path.GetDirectoryName(extractedFilePath), true, false, "");
        if (!String.IsNullOrWhiteSpace(extract.FileName) && File.Exists(extractedFilePath)) {
            ocrFilePath = TextExtractor.ExtractFromFile(extractedFilePath, tessData);
            // use record extension method that removes existing OCR rendition (if exists)
            record.AddOcrRendition(ocrFilePath);
            record.Save();
            success = true;
        }
    }
    catch (Exception ex)
    {
    }
    finally
    {
        FileHelper.Delete(extractedFilePath);
        FileHelper.Delete(ocrFilePath);
    }
    return success;
}

Duplicating code is never a great idea, I know.  This is just for fun, though, so I'm not going to stress about it.  Now I hit compile and then register my event processor add-in, as shown below.

2017-11-14_21-09-31.png

I then enabled the configuration status and saved/deployed...

2017-11-14_21-10-24.png

Over in the client I removed the OCR rendition by using the custom button on my home ribbon...

2017-11-14_21-13-59.png

When I then monitor the event processor I can see something's been queued!

2017-11-14_21-11-55.png

A few minutes later I've got a new OCR rendition attached.

2017-11-14_21-17-24.png

Progress!  The next thing I need to do is train Tesseract.  Many of these records are typed rather than handwritten, which means I should be able to create a set of trainer data that improves the confidence of the OCR text.  Additionally, I'd like to be able to compare the results from the original PDF with the Tesseract results.

Using Tesseract-OCR within the Client

In a previous post I showed how to generate OCR renditions via PowerShell.  The process worked quite well, and the accuracy was higher than with other solutions.  After that post I went to upload the PowerShell scripts to GitHub and decided to re-run each script against a new dataset.

As I ran the OCR script I noticed a few things I did not like about it:

  1. The script ran fine for hours and then bombed because the search results went stale
  2. I must remember to run the script after each import of records, or no OCR renditions get generated
  3. I had to create a custom property to track whether an OCR rendition was generated

To overcome these challenges I'll need to write some code.  Time to break out Visual Studio and build a new solution.  So let's dive right in!  


I opened up Microsoft Visual Studio 2017 and created a new solution with two projects: a C# class library for the add-in, and a C# class library for the Ocr functionality.  Here I'm splitting the Ocr functionality into a separate project because in the next post I'll create an event processor plug-in.  To make this work I updated the first project to reference the second and set a build dependency between the two.

Next I implemented the ITrimAddIn interface and organized the interface stubs into logical regions, as shown below.  I also created a folder named MenuLinks and created two new classes within: UpdateOcrRendition and RemoveOcrRendition.  Those classes will expose the menu options to the users within the client.

2017-11-14_8-03-16.png

The two menu link classes are defined as follows:

 
using HP.HPTRIM.SDK;
 
namespace CMRamble.Ocr.ClientAddin.MenuLinks
{
    public class UpdateOcrRendition : TrimMenuLink
    {
        public const int LINK_ID = 8002;
        public override int MenuID => LINK_ID;
        public override string Name => "Update Ocr Rendition";
        public override string Description => "Uses the document content to generate OCR text";
        public override bool SupportsTagged => true;
 
    }
}
 
 
using HP.HPTRIM.SDK;
namespace CMRamble.Ocr.ClientAddin.MenuLinks
{
    public class RemoveOcrRendition : TrimMenuLink
    {
        public const int LINK_ID = 8003;
        public override int MenuID => LINK_ID;
        public override string Name => "Remove Ocr Rendition";
        public override string Description => "Remove any Ocr Renditions";
        public override bool SupportsTagged => true;
    }
}
 

Now in the Add-in class I create a local variable to store the array of MenuLinks, update the Initialise interface stub to instantiate that array, and then have the GetMenuLinks method return that array...

private TrimMenuLink[] links;
public override void Initialise(Database db)
{
    links = new TrimMenuLink[2] { new MenuLinks.UpdateOcrRendition(), new MenuLinks.RemoveOcrRendition() };
}
public override TrimMenuLink[] GetMenuLinks()
{
    return links;
}

Next up I need to complete the IsMenuItemEnabled method.  I do this by switching on the command link ID passed into the method and comparing it to the constant values that back my menu link IDs.  If you look closely at the code below, you'll notice that I'm calling "HasOcrRendition" when the link matches my RemoveOcrRendition link.  There is no such method in the out-of-the-box .Net SDK; here I'm calling a static extension method contained in the other library.  I'm doing this because I know I'll need that same capability (knowing whether there is an OCR rendition) across multiple libraries.  It also makes the code easier to read.

public override bool IsMenuItemEnabled(int cmdId, TrimMainObject forObject)
{
    switch (cmdId)
    {
        case MenuLinks.UpdateOcrRendition.LINK_ID:
            return forObject.TrimType == BaseObjectTypes.Record && ((HP.HPTRIM.SDK.Record)forObject).IsElectronic;
        case MenuLinks.RemoveOcrRendition.LINK_ID:
            return forObject.TrimType == BaseObjectTypes.Record && ((Record)forObject).HasOcrRendition();
        default:
            return false;
    }
}

The last two methods I need to implement within my record add-in are the two "ExecuteLink" overloads.  Here I hand the implementation details off to a static class contained within my second project.  Doing so makes this code easy to understand and even easier to maintain.

public override void ExecuteLink(int cmdId, TrimMainObject forObject, ref bool itemWasChanged)
{
    HP.HPTRIM.SDK.Record record = forObject as HP.HPTRIM.SDK.Record;
    if ((HP.HPTRIM.SDK.Record)record != null)
    {
        switch (cmdId)
        {
            case MenuLinks.UpdateOcrRendition.LINK_ID:
                RecordController.UpdateOcrRendition(record);
                break;
            case MenuLinks.RemoveOcrRendition.LINK_ID:
                RecordController.RemoveOcrRendition(record);
                break;
            default:
                break;
        }
    }
}
public override void ExecuteLink(int cmdId, TrimMainObjectSearch forTaggedObjects)
{
    switch (cmdId)
    {
        case MenuLinks.UpdateOcrRendition.LINK_ID:
            RecordController.UpdateOcrRenditions(forTaggedObjects);
            break;
        case MenuLinks.RemoveOcrRendition.LINK_ID:
            RecordController.RemoveOcrRenditions(forTaggedObjects);
            break;
        default:
            break;
    }
}

Now I need to build the desired functionality within the solution's second project.  To start I'll import the Tesseract library via the NuGet package manager.  As of this post the latest stable version was 3.0.2.  Note that I also imported the CM .Net SDK and System.Drawing.
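
From the NuGet Package Manager Console that's simply:

Install-Package Tesseract -Version 3.0.2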

2017-11-14_8-21-48.png

Next I downloaded the latest English language data files and placed them into the required tessdata sub-folder.  I also updated the properties of each so that they copy to the output folder as needed.

2017-11-14_8-29-59.png

Next I implemented the remove OCR rendition feature.  One method works on a single record and a second works on a set of tagged objects (the same approach as in the client add-in).  To keep it super simple I'm not presenting any sort of user interface or options.

#region Remove Ocr Rendition
public static bool RemoveOcrRendition(Record record)
{
    return record.RemoveOcrRendition();
}
public static void RemoveOcrRenditions(TrimMainObjectSearch forTaggedObjects)
{
    foreach (var result in forTaggedObjects)
    {
        HP.HPTRIM.SDK.Record record = result as HP.HPTRIM.SDK.Record;
        if ((HP.HPTRIM.SDK.Record)record != null)
        {
            RemoveOcrRendition(record);
        }
    }
} 
#endregion

I again used an extension method, this time naming it "RemoveOcrRendition".  I created a new class named "RecordExtensions", marked it static, and implemented the functionality.  I also added one last extension method that handles the creation of a new OCR rendition.  The contents of that class are included below.

using HP.HPTRIM.SDK;
namespace CMRamble.Ocr
{
    public static class RecordExtensions
    {
        public static void AddOcrRendition(this Record record, string fileName)
        {
            if (record.HasOcrRendition()) record.RemoveOcrRendition();
            record.ChildRenditions.NewRendition(fileName, RenditionType.Ocr, "Ocr");
        }
        public static bool RemoveOcrRendition(this Record record)
        {
            bool removed = false;
            // iterate backwards so deleting an item doesn't shift the ones still to be checked
            for (int i = (int)record.ChildRenditions.Count - 1; i >= 0; i--)
            {
                RecordRendition rendition = record.ChildRenditions.getItem((uint)i) as RecordRendition;
                if (rendition != null && rendition.TypeOfRendition == RenditionType.Ocr)
                {
                    rendition.Delete();
                    removed = true;
                }
            }
            record.Save();
            return removed;
        }
        public static bool HasOcrRendition(this Record record)
        {
            for (uint i = 0; i < record.ChildRenditions.Count; i++)
            {
                RecordRendition rendition = record.ChildRenditions.getItem(i) as RecordRendition;
                if ((RecordRendition)rendition != null && rendition.TypeOfRendition == RenditionType.Ocr)
                {
                    return true;
                }
            }
            return false;
        }
    }
}

Now that I have the remove OCR rendition functionality complete, I can move on to the update functionality.  In order to OCR the file I must first extract it to disk.  Then I can extract the text by calling the Tesseract library and save the results back as a new OCR rendition.  The code below implements this within the RecordController class (which is invoked by the add-in).

#region Update Ocr Rendition
public static bool UpdateOcrRendition(Record record)
{
    bool success = false;
    string extractedFilePath = string.Empty;
    string ocrFilePath = string.Empty;
    try
    {
        // get a temp working location on disk
        var rootDirectory = Path.Combine(Path.GetTempPath(), "cmramble_ocr");
        if (!Directory.Exists(rootDirectory)) Directory.CreateDirectory(rootDirectory);
        // formulate file name to extract, delete if exists for some reason
        extractedFilePath = Path.Combine(rootDirectory, $"{record.Uri}.{record.Extension}");
        ocrFilePath = Path.Combine(rootDirectory, $"{record.Uri}.txt");
        FileHelper.Delete(extractedFilePath);
        FileHelper.Delete(ocrFilePath);
        // fetch document
        record.GetDocument(extractedFilePath, false, "OCR", string.Empty);
        // get the OCR text
        ocrFilePath = TextExtractor.ExtractFromFile(extractedFilePath);
        // use record extension method that removes existing OCR rendition (if exists)
        record.AddOcrRendition(ocrFilePath);
        record.Save();
        success = true;
    }
    catch (Exception ex)
    {
    }
    finally
    {
        FileHelper.Delete(extractedFilePath);
        FileHelper.Delete(ocrFilePath);
    }
    return success;
}
public static void UpdateOcrRenditions(TrimMainObjectSearch forTaggedObjects)
{
    foreach (var result in forTaggedObjects)
    {
        HP.HPTRIM.SDK.Record record = result as HP.HPTRIM.SDK.Record;
        if ((HP.HPTRIM.SDK.Record)record != null)
        {
            UpdateOcrRendition(record);
        }
    }
}
#endregion

I placed all of the Tesseract logic into a new class named TextExtractor.  Within that class I have one method that takes a file name and returns the name of a file containing all of the OCR text.  If I use Tesseract on a PDF, though, it will give me back the text layers from within the PDF, which defeats my goal: I want Tesseract to OCR the images within the PDF.

To accomplish that I used the Xpdf command line utility pdftopng, which renders each page of the PDF to a PNG image on disk.  I then iterate over each image (just like I did within the original PowerShell script) to generate new OCR content.  As each image is processed, the results are appended to an OCR text file.  That text file is what is returned to the record controller.

using CMRamble.Ocr.Util;
using System;
using System.Diagnostics;
using System.IO;
using System.Linq;
using Tesseract;
namespace CMRamble.Ocr
{
    public static class TextExtractor
    {
        /// <summary>
        /// Exports all images from PDF and then runs OCR over each image, returning the name of the file on disk holding the OCR results
        /// </summary>
        /// <param name="filePath">Source file to be OCR'd</param>
        /// <returns>Name of file containing OCR contents</returns>
        public static string ExtractFromFile(string filePath)
        {
            var ocrFileName = string.Empty;
            var extension = Path.GetExtension(filePath).ToLower();
            if (extension.Equals(".pdf"))
            {   
                // must break out the original images within the PDF and then OCR those
                var localDirectory = Path.Combine(Path.GetDirectoryName(filePath), Path.GetFileNameWithoutExtension(filePath));
                ocrFileName = Path.Combine(Path.GetDirectoryName(filePath), Path.GetFileNameWithoutExtension(filePath) + ".txt");
                FileHelper.Delete(ocrFileName);
                // call xpdf util pdftopng passing PDF and location to place images
                Process p = new Process();
                p.StartInfo.UseShellExecute = false;
                p.StartInfo.RedirectStandardOutput = true;
                p.StartInfo.FileName = "pdftopng";
                p.StartInfo.Arguments = $"\"{filePath}\" \"{localDirectory}\"";
                p.Start();
                string output = p.StandardOutput.ReadToEnd();
                p.WaitForExit();
                // find all the images that were extracted
                // pdftopng names its output <PNG-root>-NNNNNN.png; match only this document's images and sort so pages stay in order
                var images = Directory.GetFiles(Directory.GetParent(localDirectory).FullName, Path.GetFileName(localDirectory) + "-*.png").OrderBy(f => f).ToList();
                foreach (var image in images)
                {
                    // spin up an OCR engine and have it dump text to the OCR text file
                    using (var engine = new TesseractEngine(@"./tessdata", "eng", EngineMode.Default))
                    {
                        using (var img = Pix.LoadFromFile(image))
                        {
                            using (var page = engine.Process(img))
                            {
                                File.AppendAllText(ocrFileName, page.GetText() + Environment.NewLine);
                            }
                        }
                    }
                    // clean-up as we go along
                    File.Delete(image);
                }
            }
            return ocrFileName;
        }
    }
}

All done!  I can now compile the add-in and play with it.  First I added the menu links to my home ribbon.  As you can see below, clicking the remove OCR rendition link changes the number of renditions available.

2017-11-14_8-54-24.gif

Along the same lines, if I click update OCR rendition then the number of renditions increases...

2017-11-14_8-59-56.gif

In the next post I'll incorporate the same functionality within an event processor plugin, so that all records have their content OCR'd via Tesseract.

You can download the full source for this solution here: 

https://github.com/HPECM/Community/tree/master/CMRamble/Ocr