There was an interesting question over on the forum. I figured I should follow up with some information about the process of copying a dataset using the migration feature when creating a new dataset. This is a relatively new feature for some organizations upgrading, so let's dive right on in!
To get things started, I opened the Enterprise Studio and clicked Create Dataset.
Then I entered the new dataset's name, ID, and RDBMS platform (dataset type)...
After clicking Next, I clicked the KwikSelect icon on the dataset connection string property.
This displays the Data Link Properties dialog, where I can enter the server name, choose the authentication mechanism, and select the database name. I want to configure this to point to the new database, which will end up being a duplicate of an existing dataset. If you select SQL authentication (the "Use a specific user name and password" option), be sure to also check "Allow saving password".
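For reference, the Data Link Properties dialog produces an OLE DB connection string behind the scenes. A sketch of what it typically looks like with Windows authentication (the server and database names here are placeholders, not values from this walkthrough):

```
Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=MyNewDataset;Data Source=MYSQLSERVER
```

With SQL authentication, `Integrated Security=SSPI` is replaced by `User ID=...;Password=...`, and checking "Allow saving password" is what sets `Persist Security Info=True` so the credentials survive in the saved string.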
After configuring the Data Link Properties dialog, I clicked OK and was returned to the new dataset connection dialog. I changed the command timeout to 300. The default value equates to 30 seconds, which is far too low and will just lead to timeout errors for users.
I clicked Next to move on to the Options page of the create new dataset wizard. It prompted me for a working path for bulk load. I pointed it to a share on my workgroup server. Then I clicked Next (skipping the IDOL index staging field).
I provided a path for my new document store and then clicked Next.
On the Storage dialog I simply clicked Next and did not change any of the options. You would only change these options if you have created additional files within the database. Contact your database administrator for an explanation and review of these settings.
Within the initialization page of the wizard I selected the Migrate radio button, which enabled me to select my source dataset and migration style. It's best to select the basic setup data migration style when promoting from development (DEV) to user acceptance testing (UAT), or from UAT to production (PROD). Otherwise I just pick Entire Dataset (you can always purge records or locations later). Note that I have not selected to support Unicode characters, but you may need to (discuss with your DBA).
I clicked Next one last time and was presented with the run parameters page of the wizard. I clicked Finish without changing any options.
Then I had to click OK on this warning dialog.
Finally, I can click Start to kick this copy off!
It completed step one (of four) and then bombed with this error message...
Well, the error is saying there's an access denied error on a file, so I opened Windows Explorer and looked at the file...
The existence of a non-zero file tells me that there were appropriate permissions to create the file. Therefore the issue must be with permissions reading the file, which makes sense given the error states "the file ... could not be opened".
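That line of reasoning can be expressed as a small triage routine. This is just an illustrative sketch of the logic I followed (the function name and messages are my own, not part of any product):

```python
import os

def diagnose_bulk_load_file(path):
    """Rough triage of a bulk-load file error: distinguish a
    create/write permission problem from a read permission problem."""
    if not os.path.exists(path):
        return "missing: the process could not create the file"
    if os.path.getsize(path) == 0:
        return "empty: the file was created but nothing was written"
    try:
        with open(path, "rb"):
            pass
    except PermissionError:
        # The file exists with content, so creation and writing succeeded;
        # the failure must be on the read side.
        return "unreadable: create/write worked, but read was denied"
    return "readable: permissions look fine from this account"
```

In my case the file existed and was non-zero, which is what pointed me at read permissions rather than write permissions.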
Since I'm using Windows as my authentication mechanism, SQL Server must use my credentials when accessing the file. This fails because SQL Server cannot forward my credentials to the remote share (the classic double-hop situation). If I change my bulk load share to one created on the SQL Server itself, then my credentials won't need to be forwarded and the process works correctly. If I changed the identity of the SQL Server service from the virtual network service account (shown below) to a defined domain user, I could enable constrained delegation, which would allow the double-hop of credentials to work without failure.
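A quick way to sanity-check this: a double hop can only succeed over Kerberos, never NTLM. This standard SQL Server DMV query (not from the original walkthrough, but generally available) reports which scheme the current connection negotiated:

```
-- Returns SQL, NTLM, or KERBEROS for the current connection.
SELECT auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```

If this reports NTLM, credential delegation to a remote share cannot work no matter what other settings you change, which is consistent with the failure above.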
If I cannot create a share on the SQL Server, then I must switch from Windows authentication to SQL authentication. That would also require changing the identity of SQL Server from the virtual network service (shown in the picture above) to a domain account, and configuring constrained delegation on that new account. The SQL authentication route is significantly more complicated to configure. Therefore the best option for me is to use a local share (local to the SQL Server, since Windows authentication fails without constrained delegation).
My quick fix is to redo all the steps above, but using "\\apps\bulkload" as my work path for bulk load...
Once I did that, my process worked flawlessly...
I've now got two datasets defined. Next I should configure events, IDOL, and any other settings unique to the new dataset.