
Portal for ArcGIS content on S3 compatible storage

12-01-2021 01:49 PM
NicolasGIS
Frequent Contributor

Hello,

In my private cloud (Openstack), I have a private s3 compatible storage.

So far, I have been able to use it successfully in two situations with ArcGIS Enterprise.

To move to an HA deployment, I would like to store the Portal for ArcGIS content directory, the ArcGIS Server directories, and the configuration store on that storage. According to the documentation, it's doable:

"When ArcGIS Enterprise is deployed in the same cloud platform, you can store system directories such as the portal content directory and ArcGIS Server configuration store in the following cloud storage locations:

Amazon Simple Storage Service (S3) or an S3 compatible storage location"

https://enterprise.arcgis.com/en/cloud/latest/intro/cloud-options.htm 

But I can't find how to specify my custom S3 storage. In the examples I found, only well-known storage providers are documented:

https://enterprise.arcgis.com/en/server/latest/cloud/amazon/configure-web-gis-with-shared-content-di...

I tried uploading a JSON specifying an S3 URL and creating a regionsforcloudstorage.dat file, but I always get a 500 error:

com.esri.arcgis.portal.admin.core.PortalException:

com.esri.arcgis.portal.admin.core.PortalException:

com.esri.arcgis.portal.admin.core.PortalException: Cannot write to the mybucket S3 bucket. Please check that the bucket exists. If access keys are used to connect to the bucket, make sure they are correct. If an IAM role is used to connect to the bucket, make sure that the IAM role has write privileges to the bucket.
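
To rule out a plain credentials problem, a quick boto3 check along these lines (endpoint, keys, and bucket are the same placeholders as above) should show whether the keys can write to the bucket outside of Portal; if this fails too, the problem is on the storage side rather than in Portal:

import boto3

# Sanity check outside of Portal: can these keys write to the bucket?
# The endpoint, keys, and bucket name are placeholders for my setup.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.company.com",
    aws_access_key_id="foo",
    aws_secret_access_key="bar",
)
s3.put_object(Bucket="mybucket", Key="portal-write-test.txt", Body=b"test")
print(s3.head_object(Bucket="mybucket", Key="portal-write-test.txt")["ContentLength"])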

Any idea? Maybe @JonathanQuinn?

Thanks,

Nicolas

3 Replies
NicolasGIS
Frequent Contributor

Just a few more details about what I tried:

  • I tried specifying a JSON similar in format to a cloud store registration, that is, including regionEndpointUrl and defaultEndpointsProtocol, like the following:

{
  "type": "cloudStore",
  "provider": "Amazon",
  "connectionString": {
    "accessKeyId": "foo",
    "secretAccessKey": "bar",
    "region": "company-s3",
    "credentialType": "accessKey",
    "regionEndpointUrl": "s3.company.com",
    "defaultEndpointsProtocol": "https"
  },
  "objectStore": "mybucket"
}

but got the 500 above.

  • I tried the same idea that was suggested for webgisdr, that is, creating a "regionsforcloudstorage.dat" file in "Portal\framework\etc":

{
  "regions": [{
    "name": "COMPANY",
    "id": "company-s3",
    "s3endpoint": "s3.company.com"
  }]
}

and specified the following location for my portal content directory:

{
  "type": "cloudStore",
  "provider": "Amazon",
  "connectionString": {
    "accessKeyId": "foo",
    "secretAccessKey": "bar",
    "region": "company-s3",
    "credentialType": "accessKey"
  },
  "objectStore": "mybucket"
}

and got the same 500 error.
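
One mechanical check worth adding: the "id" in regionsforcloudstorage.dat must exactly match the "region" value in the content directory JSON, since that is what ties the two files together. A small sketch (file paths are placeholders for my install):

import json

# Placeholder paths; adjust to the actual install and payload locations.
with open(r"C:\Program Files\ArcGIS\Portal\framework\etc\regionsforcloudstorage.dat") as f:
    region_ids = {r["id"] for r in json.load(f)["regions"]}

with open("content-store.json") as f:
    region = json.load(f)["connectionString"]["region"]

assert region in region_ids, f"region '{region}' is not defined in the .dat file"

Both are "company-s3" in my case, so the 500 must come from something else.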

ChristopherPawlyszyn
Esri Contributor

First, ArcGIS Server requires both DynamoDB and S3 when hosted in AWS cloud storage. DynamoDB is the primary store of config-store information, while items that exceed DynamoDB's size limits are stored in S3, with pointers to those objects kept in the database. This means that mimicking this deployment style on-prem would require a DynamoDB-compatible database as well. DynamoDB Local is not supported for production deployments by AWS, so that may be a non-starter for ArcGIS Server in this deployment pattern.
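
To illustrate the pattern (a schematic sketch of the spill-to-S3 idea, not ArcGIS Server internals): entries under DynamoDB's 400 KB item limit live in the table, while larger ones go to S3 with only a pointer kept in the table.

import boto3

DYNAMO_ITEM_LIMIT = 400 * 1024  # DynamoDB's per-item size cap

dynamodb = boto3.client("dynamodb")
s3 = boto3.client("s3")

def store_entry(table, bucket, key, payload):
    # Small entries are stored directly in the DynamoDB table...
    if len(payload) < DYNAMO_ITEM_LIMIT:
        dynamodb.put_item(TableName=table,
                          Item={"id": {"S": key}, "body": {"B": payload}})
    # ...while large ones spill to S3, leaving only a pointer in the table.
    else:
        s3.put_object(Bucket=bucket, Key=key, Body=payload)
        dynamodb.put_item(TableName=table,
                          Item={"id": {"S": key},
                                "s3pointer": {"S": f"s3://{bucket}/{key}"}})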

I did have success with configuring a private S3-compliant object store for Portal for ArcGIS content, but there are a few additional requirements that we will work to incorporate in our documentation.

Additionally, the entries in regionsforcloudstorage.dat are recognized from the .../Portal/framework/etc/ directory, as you surmised, but they require virtual-hosted-style access to be available.

This article from AWS goes into the differences between virtual-hosted-style and path-style routing, and the eventual deprecation of path-style access.

Amazon S3 Path Deprecation Plan – The Rest of the Story | AWS News Blog
https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/
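
A quick way to test whether an object store (and its DNS) supports virtual-hosted-style access is to force that addressing style in boto3; a minimal sketch, using the placeholder names from the example below:

import boto3
from botocore.config import Config

# Force virtual-hosted-style addressing (bucket.objectstore.domain.com)
# instead of path-style (objectstore.domain.com/bucket). Placeholder names.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.domain.com",
    aws_access_key_id="<accessKey>",
    aws_secret_access_key="<secretKey>",
    config=Config(s3={"addressing_style": "virtual"}),
)
s3.head_bucket(Bucket="portalcontent")  # fails unless virtual hosting and DNS are in place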

 

My sample regionsforcloudstorage.dat contained the following information in both the 'regions' and 'Amazon' sections:

{
  "name": "Custom In-house",
  "id": "custom-in-house",
  "s3endpoint": "objectstore.domain.com",
  "blobStoreEndpoint": "objectstore.domain.com"
}
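
Assembled into a complete file, that gives something like the following, written as a short script for clarity; note that the exact top-level layout of the 'Amazon' section is my assumption here, and the stock AWS file linked below is the authoritative reference:

import json

entry = {
    "name": "Custom In-house",
    "id": "custom-in-house",
    "s3endpoint": "objectstore.domain.com",
    "blobStoreEndpoint": "objectstore.domain.com",
}

# Assumption: the same entry is listed under both top-level sections
# ('regions' and 'Amazon') mentioned above.
dat = {"regions": [entry], "Amazon": [entry]}

with open("regionsforcloudstorage.dat", "w") as f:
    json.dump(dat, f, indent=2)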

I confirmed my object storage was configured to allow access to portalcontent.objectstore.domain.com and DNS was configured properly for both aliases. Then my Portal for ArcGIS site creation JSON was (using a pre-created bucket named 'portalcontent'):

{
  "type": "cloudStore",
  "provider": "Amazon",
  "connectionString": {
    "accessKeyId": "<accessKey>",
    "secretAccessKey": "<secretKey>",
    "region": "custom-in-house",
    "credentialType": "accessKey"
  },
  "objectStore": "portalcontent"
}

 

FYI: here is the latest AWS S3 region file: https://s3.amazonaws.com/esriresources/1091/regionsforcloudstorage.dat


-- Chris Pawlyszyn
NicolasGIS
Frequent Contributor

Thanks @ChristopherPawlyszyn for your detailed explanations. Much appreciated.

"This means that mimicking this deployment style on-prem would require a DynamoDB-compatible database as well"

-> I had in mind using a classic file share directory for the ArcGIS Server config-store, as I do not have a DynamoDB-compatible database at my disposal (will double-check though). Too bad you can't "mix", especially knowing that it's currently not suitable for production! Do you think this may evolve in the future to something more "flexible"?

On the S3 storage for Portal for ArcGIS content, I am happy to read that it is doable! I will have a look and try the "virtual-hosted style". I will share my findings here!

Once again, many thanks!

 
