Relocating a Document Store

I've recently migrated my dataset's database into a secure cloud environment.  I've also prepared a workgroup server in the cloud and registered the dataset on that server.  When users try to open electronic documents, though, an error is generated.

download.png

The error message indicates the expected location of the electronic file, relative to the server.  In this case I should be able to browse the file system on the old server and find this directory: "C:\HPE Content Manager\DocumentStore\47".  When I look on the old server, though, I don't see it.
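
A quick check from PowerShell on the old server confirms it (the path is taken straight from the error message):

# Returns False on the old server, confirming the expected folder isn't there
Test-Path 'C:\HPE Content Manager\DocumentStore\47'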

To sort this out I'll need to review the properties of the record and find the document store details.  That information is stored in the Document Details property.  I can view that from the Properties tab of the view pane, as shown below.

2018-04-23_21-32-49.png

To dig in further I'll need to go to the Administration ribbon and click Document Stores.

Then I locate the document store referenced in the record's Document Details property and open its properties dialog.

Well, that doesn't seem right.  The path highlighted does indeed exist.  However, when I migrated my dataset into the cloud I also changed the dataset ID!  The path indicated in the properties of the document store is just the starting path.  Within that path will be a folder for the dataset ID; and then, within that, a folder for the unique ID of the document store.  I can find the store's ID by reviewing the view pane for the document store (you may need to customize the view pane and add that property).

2018-04-25_10-12-40.png

If I go back to the old server I should find a sub-folder within "45" (the old dataset ID) named "2"....

2018-04-25_10-14-40.png

Moving that folder onto my new server, into the path "C:\HPE Content Manager\DocumentStore\47", should resolve the issue.  After dropping the folder into that path, everything works!
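
Here's a rough sketch of that copy in PowerShell, assuming the old server's drive is still reachable over a UNC path (the server name is a placeholder):

# Placeholder paths: \\OLD-SERVER is the original workgroup server, 45 is the old dataset ID,
# 2 is the document store ID, and 47 is the new dataset ID.
$source      = '\\OLD-SERVER\C$\HPE Content Manager\DocumentStore\45\2'
$destination = 'C:\HPE Content Manager\DocumentStore\47\2'

New-Item -ItemType Directory -Path $destination -Force | Out-Null
Copy-Item -Path (Join-Path $source '*') -Destination $destination -Recurse -Force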

The moral of the story: be mindful when changing dataset IDs!

Also, it would be really nice to be able to use a Google Cloud Storage bucket as a document store!

Monitoring CM Cloud Instance Resources with Stackdriver

Having Content Manager in my secure private cloud is saving me a tremendous amount of time!  I'd like to monitor the environment, though, and for that I'll use Stackdriver.  The free tier gives me everything I need for my current usage.  As I ramp up my implementation, though, I'll need to expand that usage, so understanding the pricing model is a must.

First things first: I need to install the Stackdriver Logging agent on my VM by using this command:

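# Download the Logging agent installer and launch it (an elevated PowerShell session is likely required)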
Invoke-WebRequest "https://dl.google.com/cloudagents/windows/StackdriverLogging-v1-8.exe" -OutFile "StackdriverLogging-v1-8.exe";
.\StackdriverLogging-v1-8.exe

Then I ran through the installer as with any other application:

2018-04-24_19-51-34.png
2018-04-24_19-55-13.png

Next I flip over to my Stackdriver homepage and BAM.... everything's already done for me:

2018-04-24_19-51-56.png

Now a natural question would be: "What about Content Manager logs and events?"  This can easily be done!  The logging agent supports Ruby regular expressions, as shown below.

2018-04-24_20-04-41.png
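
As a rough sketch of what that could look like for Content Manager, here's a bit of PowerShell that drops a tail source with a Ruby regex into the agent's configuration.  The log path, pattern, config directory, and service name are all assumptions for illustration, so treat it as a starting point rather than a working recipe.

# Everything below is illustrative: adjust the log path, regex, config location, and service name for your install
$configDir = 'C:\Program Files (x86)\Stackdriver\LoggingAgent\config.d'   # assumed include directory; confirm against fluent.conf
$config = @'
<source>
  @type tail
  path "C:/HPE Content Manager/Logs/*.log"
  pos_file "C:/ProgramData/cm-workgroup.pos"
  tag cm-workgroup
  format /^(?<severity>\w+)\s+(?<message>.*)$/
</source>
'@

# Drop the snippet into the config directory and restart the agent so it takes effect
New-Item -ItemType Directory -Path $configDir -Force | Out-Null
Set-Content -Path (Join-Path $configDir 'cm-workgroup.conf') -Value $config
Restart-Service -Name 'StackdriverLogging'   # service name is an assumption; verify with Get-Service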

I can re-use the custom grok filters I created within Elasticsearch to parse CM logging sources!  A topic for another day!  For now I'll create an alerting policy to keep me in the know.

2018-04-24_20-08-41.png

What I find most useful here is that you can see information about the conditions you're trying to set.  Being able to see historical data for the metric I'm configuring an alert on makes it much easier to pick a sensible threshold.

2018-04-24_20-10-51.png

The rest is pretty self-explanatory!  :)

Migrating On-Premises PostgreSQL to Cloud SQL

In this post I'll cover what it takes to move an on-premises PostgreSQL database into a Cloud SQL instance.  I've already migrated a workgroup server, but I could have just as easily kept my workgroup server on-premises.  At the end of this I'll have all of my Content Manager infrastructure within a secure, private, fully managed cloud environment.

The first task is to generate a SQL dump from PostgreSQL with no extension-related statements.  I'll need to install GnuWin so that I can leverage the sed command.  I executed the pg_dump command, piped the results through sed, and then wrote the output to disk (as shown below).

2018-04-21_22-52-36.png
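
The command I ran looked roughly like this (the database name, user, and output file are placeholders; the sed expressions simply drop the extension-related statements that the Cloud SQL importer can reject):

# Illustrative sketch: dump the database behind the dataset (names are placeholders),
# strip extension statements, and write the filtered dump to disk.
pg_dump -h localhost -U postgres -d DemoDB --no-owner --format=plain |
  sed -e '/^CREATE EXTENSION/d' -e '/^COMMENT ON EXTENSION/d' |
  Out-File -FilePath .\DemoDB.sql -Encoding ascii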

Next I created a bucket named "postgressql-backup" and uploaded the file.  Although I could have done this from the command line, I preferred to use the Cloud Console.

2018-04-21_22-56-00.png

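For reference, the command-line route would have looked something like this (the bucket name is the one above; the dump file name is a placeholder):

# Create the bucket and upload the filtered dump to it
gsutil mb gs://postgressql-backup
gsutil cp .\DemoDB.sql gs://postgressql-backup/
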
Next, I go over to my Cloud SQL instance and select Import.


Then I can click Browse, navigate to the file in the bucket, and select it.


Lastly, I need to select the target database within my instance and then click Import.

2018-04-21_23-06-10.png

After clicking Import, I returned to the Cloud SQL overview dashboard to monitor the progress.  Eventually I'll see a notification once the import has completed.
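
Had I wanted to script this step rather than click through the console, something like the following gcloud command should do it (the instance name is a placeholder; the bucket and database names carry over from earlier):

# Import the uploaded dump into the target database on the Cloud SQL instance
gcloud sql import sql cm-postgres gs://postgressql-backup/DemoDB.sql --database=DemoDB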


Now I connect to my Content Manager Workgroup Server and launch Enterprise Studio.  Here I'm registering a new dataset because I've also decommissioned the original workgroup server; alternatively, I could have modified the connection string of an existing dataset.  I used the settings from the Cloud SQL instance when configuring the connection string, as shown below.

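Before completing the wizard, it's worth confirming the Workgroup Server can actually reach the instance.  A quick check with psql looks something like this (the host IP, user, and database name are placeholders, and the instance needs this server's IP in its authorized networks, or the Cloud SQL Proxy in between):

# Placeholder connection details: swap in the Cloud SQL instance's public IP and your credentials
psql "host=203.0.113.10 port=5432 dbname=DemoDB user=postgres sslmode=require"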

I completed the wizard and was shown a message indicating success.

A quick check of the client shows me everything worked. 


Sweet!  Now my DemoDB dataset fully resides within the cloud!