Chef is changing its business model. We can no longer run an on-site Chef Manage solution with support unless we pay a significant cost. We have been deploying our multi-server environments with Chef since 10.6.
My question is: can I switch to using PowerShell DSC for upgrades, or are there steps I need to take to migrate deployment solutions?
This is a very intriguing idea, but it presents many challenges. One challenge is that the JSON file attributes differ between Chef and PowerShell DSC. Typically, for upgrades in PowerShell DSC, we take the original JSON file, update it with the "OldVersion" parameter, and update the paths to the new license files and setups. In this case, however, the JSON file would need to be built from scratch and would essentially need to have matching values.
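To illustrate the kind of edit described above, a PowerShell DSC upgrade change to the configuration JSON might look roughly like this (a minimal sketch only — the node name, file paths, version numbers, and exact attribute layout here are placeholders rather than values from a real deployment, and would need to match the actual module's schema):

```json
{
  "AllNodes": [
    { "NodeName": "server.example.com", "Role": ["Server"] }
  ],
  "ConfigData": {
    "Version": "10.9.1",
    "OldVersion": "10.8.1",
    "Server": {
      "Installer": { "Path": "C:\\Setups\\ArcGIS_Server\\Setup.exe" },
      "LicenseFilePath": "C:\\Licenses\\Server.prvc"
    }
  }
}
```

The key points are the "OldVersion" value identifying the currently installed release and the updated paths to the new setups and license files.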
Another hurdle is that in Chef we create an administrator username/password for the ArcGIS Server site and a separate account for Portal, whereas in PowerShell DSC only one account can be specified, so Portal and Server would have the same username/password. This would present a problem when registering the web adaptors, since a token needs to be passed. Perhaps this could be worked around by manually updating both the ArcGIS Server and Portal accounts to use the same username/password, or by having a separate JSON file for each product to be upgraded.
There could be other gotchas and caveats as well, since Chef and PowerShell DSC are very different under the hood. A lot of testing would need to be done before recommending this as an option.
I know Chef recently updated their enterprise licensing agreement. My understanding is that they now charge for redistribution at the enterprise level, but use of the Chef Infra Client is, I believe, still free, at least for testing purposes. Could you expand on why there will be significant costs for your current solution?
We had a Chef rep reach out to us and we had a webinar. Currently we have an on-site Linux server for Chef Manage with over 20 nodes bootstrapped. From my understanding of that meeting, their new business model is this: their hosted environment would remain free for fewer than 20 nodes, but more than 20 would cost per node.
We can keep our current on-prem Chef server but would not be able to update it without licensing costs. That is not an option for us, as governance requires updated software to mitigate risks.
The multiple-admin issue is a non-issue, because in our original setup we used the same admin account for both Portal and the servers.
I am sure I can figure out the new JSON file if I have access to the right examples. My question is: will there be an issue running the correct PowerShell DSC scripts to upgrade after having used Chef up to this point? Would I be better off backing up the Enterprise deployment, uninstalling, and then reinstalling using PowerShell DSC, or will I be OK just upgrading as is?
Apologies for the delayed response, and thank you for sharing those details. The existing environments can still be upgraded using Chef Infra Client. Essentially, all Chef Server is really doing is running Chef Infra Client on each bootstrapped node and copying over the cookbooks and JSON file; Chef Infra Client then does all the work.
So technically, you could still remote into each node, install Chef Infra Client, download the cookbooks, and modify the existing JSON file to perform the upgrades. I would suggest this route rather than migrating to PowerShell DSC for the upgrade, since the migration would not be very straightforward and presents a lot of risk. Not only are the attributes/parameters not one-to-one, but the resources/recipes also vary; for example, Portal settings may get changed during the upgrade, depending on the resources the Portal upgrade uses.
Along with the JSON attributes, you would also need to examine the Chef recipes used in the original deployment, compare them with the DSC resources to see what the differences are, and assess whether any settings or configuration would get changed during the upgrade. If you do decide to go this route, I suggest performing a lot of testing before implementing it in production, and having adequate backups/images to restore from if things don't go as planned.
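To illustrate the Chef local-mode route described above, the modified attributes JSON might look roughly like this (a sketch only — the version number, setup paths, license file, and run list here are placeholders and would need to match your actual deployment and cookbook version):

```json
{
  "arcgis": {
    "version": "10.9.1",
    "run_as_user": "arcgis",
    "server": {
      "install_dir": "C:\\Program Files\\ArcGIS\\Server",
      "setup": "C:\\Setups\\ArcGIS_Server\\Setup.exe",
      "authorization_file": "C:\\Licenses\\Server.prvc"
    }
  },
  "run_list": ["recipe[arcgis-enterprise::server]"]
}
```

With Chef Infra Client installed and the cookbooks staged on the node, the upgrade could then be run in local mode with something like `chef-client -z -j upgrade.json` (no Chef Server involved).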