POST
@WhereMatters I just re-read my post and realise I may have misled you. In our large AWS-hosted HA Enterprise site, we don't use FSx for the ArcGIS Server config store, only for the server directories (and Portal content). We do use DynamoDB for the AGS config store, and for the very reason you suggested: to ensure millisecond read-write consistency. When publishing, it is important that the updates to the config tables happen rapidly, as several individual service requests are managed at that time, and any inconsistency may cause corruption. DynamoDB provides this fast update; we can't be certain that FSx will. So, to summarise: FSx is good for the server directories (arcgiscache, arcgisoutput, arcgissystem) and for file-based registered data stores, but DynamoDB is the best HA solution for the AGS config store. FSx can also be used for Portal content, but it is cheaper to use S3 for this. Sorry if I have caused any confusion.
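To illustrate the consistency point, a strongly consistent DynamoDB read is just a flag on the request. This sketch only builds the call arguments; the table key name and service identifier are hypothetical, and the shape follows what you would pass to a boto3 `Table.get_item(**kwargs)` call:

```python
# Hedged sketch: arguments for a strongly consistent DynamoDB read, as the
# config-store use case requires. The key name "serviceId" is made up for
# illustration; you would pass these kwargs to boto3's Table.get_item.
def consistent_get_kwargs(service_id):
    return {
        "Key": {"serviceId": service_id},
        "ConsistentRead": True,  # read-after-write consistency, at extra read cost
    }
```

The default (`ConsistentRead=False`) is an eventually consistent read, which is exactly the behaviour you cannot tolerate for a config store during publishing.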
Posted 04-25-2021 03:48 PM

POST
Thanks for the update. Glad you are finally back in action!
Posted 04-17-2021 09:08 PM

POST
Hi Adam, how did it go with Support? Was it something fundamentally wrong with the original .prvc?
Posted 04-15-2021 10:27 PM

POST
Hi Dean, Portal's temp directory can be relocated by an administrator via the Portal Admin page. Go to /portaladmin/system/directories/temp, click the "edit" operation, and replace the existing path with your new location (see https://developers.arcgis.com/rest/enterprise-administration/portal/edit-directory.htm). BUT: the new path must still be a local directory on each Portal machine (not a network share), and the update won't create or populate the new folder; you need to do that first, on both machines in an HA site. AND: this will cause a Portal restart, so it is not something to be taken lightly. The good news is that any files in the existing temp directory are exactly that, temporary, and do not need to be copied to the new location. In the same vein, I would suggest there is no reason not to delete any files in the temp directory that have not been auto-deleted, but maybe wait until they are a day old to ensure you are not impacting a running process.
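For reference, the same "edit" operation can be scripted against the Portal Admin REST endpoint. This sketch only constructs the URL and form body rather than sending the request; the host name, token, and new path are placeholders, and the parameter names should be checked against the Edit Directory REST documentation linked above:

```python
# Hedged sketch: build the POST request for the Portal Admin edit-directory
# operation on the temp directory. Host, token, and path are hypothetical;
# verify parameter names against the linked Edit Directory REST doc.
from urllib.parse import urlencode

def build_edit_temp_dir_request(portal_host, token, new_path):
    """Return (url, form_body) for the temp-directory edit operation."""
    url = f"https://{portal_host}:7443/arcgis/portaladmin/system/directories/temp/edit"
    form = {
        "physicalPath": new_path,  # must be local, pre-created on every Portal machine
        "f": "json",
        "token": token,
    }
    return url, urlencode(form)
```

Remember that submitting this request restarts Portal, so in an HA site plan the change for a maintenance window.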
Posted 04-15-2021 07:45 PM

POST
Hi @AdamRepsher_BentEar, in MyEsri, on the "Manage License Files" page you should see the .prvc file you generated for 10.8 Server under the "License Files related to ArcGIS Server and Desktop" option. If you don't, you may need to log out of MyEsri and log in again to get a refreshed listing. Once you can see the file listed, select its Details page, and (if you have the appropriate privileges in MyEsri) there is a "Cancel License File" button. Once you have done this, you should be able to generate a new 10.8.1 .prvc file. If your server does not have access to the Esri licensing site, you may need to go via the "Secure Site Operations" page in MyEsri Licensing; this allows you to upload a simple text file (based on the details in the .prvc file) to retrieve a .ecp file that you can use to authorise without being asked to go to the web from the server machine. Note: if you have an Enterprise geodatabase or a Relational Data Store, you need to ensure that the license file gets updated in these as well. For the Data Store, the upgrade wizard will deal with this. For an Enterprise geodatabase, you need to do it via Desktop or Pro; there is a geoprocessing tool (https://desktop.arcgis.com/en/arcmap/10.8/tools/data-management-toolbox/update-enterprise-geodatabase-license.htm). Use a copy of the keycodes file from the \\Program Files\ESRI\License<release#>\sysgen directory on the ArcGIS Server machine for this.
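Finding the right keycodes file can be fiddly when several release folders exist under the ESRI License directory. A small sketch, assuming the default install location mentioned above (adjust the base path for your machine); it simply picks the newest-sorted release folder that contains a keycodes file:

```python
# Hedged sketch: locate the keycodes file under the ESRI License sysgen folder.
# The base path is the conventional default and may differ on your server.
from pathlib import Path

def find_keycodes(program_files=r"C:\Program Files\ESRI"):
    """Return the keycodes Path from the highest-sorting License* folder, or None."""
    base = Path(program_files)
    # e.g. License10.8\sysgen\keycodes, License10.8.1\sysgen\keycodes
    matches = sorted(base.glob("License*/sysgen/keycodes"))
    return matches[-1] if matches else None
```

Copy the file it finds to a location your Desktop/Pro machine can read before running the Update Enterprise Geodatabase License tool.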
Posted 04-12-2021 10:22 PM

POST
I wouldn't alter the list; it has been put together this way by the Esri Security team to be very specific about what gets blocked. A less specific rule would also catch other endpoints and may prevent, for example, publishing or overwriting services, or sharing Portal items.
Posted 03-29-2021 02:28 PM

POST
Well, your Web Adaptor will handle it when you add any additional machine to either of your Server sites; you still won't need an internal load balancer, which really just adds (marginal) latency to every request. Adding an additional server to provide an additional role (Image, Notebook, etc.) is also managed by adding a new Web Adaptor on whichever tier you decide to use.
Posted 03-23-2021 03:49 PM

POST
The Trust website provides the paper "ArcGIS Enterprise Web Application Filter Rules", which lists suggested endpoints that can be blocked from external access; you can apply these rules at the external load balancer in your proposed layout. Angus's comment about where the Web Adaptors sit is valid, but running all the Web Adaptors on the Portal tier also works, and it removes the need for an internal load balancer: you can put the SSL certificate on the IIS site hosting the Web Adaptors. Unless, that is, you are planning to add an additional virtual machine at the Portal tier in future to provide high availability?
Posted 03-23-2021 03:11 PM

POST
I share Todd's scepticism about performance claims for access to cloud storage from in-house servers. Having said that, Portal content may not be as critical a location: think of full cloud deployments, where AWS S3 or Azure Blob is an approved solution in the Esri templates for Portal content and for map tile caches. These are also "slow" storage and perhaps do well enough. But local storage is always best practice, and is certainly recommended for map tile cache storage in an optimal Server configuration.
Posted 03-23-2021 02:58 PM

POST
Hi Anand, we do have a large ArcGIS Enterprise site with multiple machines that is currently using FSx for shared server directories, including the config store, and it is working well under high loads. However, FSx is not an officially certified solution: AWS has indicated to Esri that it uses DFS technology behind the scenes, and DFS replication has been known not to always provide the sub-millisecond read-write consistency that ArcGIS Server requires for the config store. My understanding, though, is that DFS-R only slows down when used for cross-region duplication, which is not the case when using FSx within one region across multiple availability zones, which generally have very low latency. So far we have seen no issues caused by inconsistent read-writes, and I am happy to suggest FSx as a good solution for HA in an AWS deployment.
Posted 03-21-2021 08:22 PM

POST
Well, under those constraints, may I suggest you reduce the D: drive on the Portal server to (say) 40 GB and provision 60 GB on the Azure share for the Portal content directory. That way, you could bring the data folders back to an enlarged D: drive on the GIS Server. Regarding Web Adaptors, in a non-HA deployment like this I would normally install the Web Adaptors on the Portal server rather than on a separate tier, unless you need DMZ isolation for incoming requests from outside your network. If you have an existing reverse proxy outside the firewall, I would suggest that gives sufficient isolation, passing requests to the Web Adaptors running on the Portal server. Alternatively, there is no real problem with installing the Web Adaptors on your existing web server. Either way, I don't think you need a dedicated Web Adaptor tier.
Posted 03-21-2021 07:42 PM

POST
I back up all that Angus & Craig have said, but I have a question about your architecture. What are you intending to put in the Azure SMB share that you need to access from the in-house servers? This share will almost certainly have relatively high latency for any read/write, so it would not be recommended for the Server system directories. It may be OK for Portal content, but I wouldn't want to use it for registered file data stores used by ArcGIS Server. It may be a good location for backup files, whether for Portal, Server or the Data Stores; if you use webgisdr to create backups of your entire site, you will certainly be looking for space to keep the large outputs.
Posted 03-21-2021 04:47 PM

POST
Hi again, it sounds like you are planning almost exactly the configuration we are moving toward, using the "blue-green" staging pattern. We have an existing HA site at 10.7.1 and now want to move to 10.8.1, but we have no window in which the Portal and its federated services can be unavailable, at least for read-only access. As the Portal holds a lot of content (>400 GB), we are conscious that the upgrade is going to take a long time; we estimate at least 12 hours, probably more, unless we temporarily beef up the Portal server to give more CPU to the javaw.exe processes.

So we are looking at setting up the standby site (in the same region; in Oz, moving out of ap-southeast-2 is bad news in latency terms) to be used as read-only, and giving the user community plenty of pre-warning that the site will be under maintenance and that any new or updated content will be lost if it is created within the nominated window. (If we were at 10.8+ we could put the site into read-only mode, but that is not available in earlier versions.)

We are very familiar with webgisdr; it was the method we used to originally migrate from one AWS account to another, upgrading from 10.6.1 to 10.7.1 along the way. But at that time the site was much smaller and far fewer people were affected by the outage. The reason we are thinking about not using webgisdr this time is that it is very slow due to the large size of the Portal content. The backup of Portal itself takes >4 hours (running in parallel with the Server and Data Store backups, which take slightly less), and then it can take either 12 hours to zip all the individual backups into a single archive (using a file-system backup location), or 12 hours to upload the individual backup files to an S3 bucket (which sounds ridiculous, but at present is apparently unavoidable, at least at 10.7.1). We would then expect maybe another 10-12 hours to import the package to the standby site, ouch. Add the time for the upgrade itself, and we are getting close to a full weekend.

If we can do snapshot restoration to the standby site, we should be able to get the pre-upgrade setup down to a couple of hours or less. We have used Route 53 private zoning to let the standby site think it has the same public & admin URLs as the primary site (using weighting to switch public access between the two), but we found problems when trying to use the same domain for both the internal and external load balancers, so we ended up using AWS Directory Service's own DNS override; this is why we are thinking of using a separate AD domain for the standby site. We are in contact with the AWS engineers regarding the best way to make the standby a replica of the primary site. I don't think DynamoDB Global Tables will provide the isolation we probably need to upgrade one site without affecting the other, but maybe DataSync will help for the FSx transfer. I will keep you posted on our progress. Cheers, David H.
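For anyone following along, a webgisdr run like the one described above is driven by a properties file. A minimal sketch of the file-system-backed configuration; all values are placeholders, and the key names should be checked against the template properties file shipped with your webgisdr installation:

```
# Illustrative webgisdr properties; verify keys against your installed template.
PORTAL_ADMIN_URL = https://portal.example.com:7443/arcgis
PORTAL_ADMIN_USERNAME = admin
PORTAL_ADMIN_PASSWORD = <password>
PORTAL_ADMIN_PASSWORD_ENCRYPTED = false
BACKUP_RESTORE_MODE = backup
SHARED_LOCATION = \\fileserver\webgisdr\shared
BACKUP_STORE_PROVIDER = FileSystem
BACKUP_LOCATION = \\fileserver\webgisdr\backups
```

The zip-to-single-archive step mentioned above happens when the backup is consolidated into BACKUP_LOCATION, which is where the hours disappear on a large Portal.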
Posted 03-15-2021 03:11 PM

POST
Hi, our HA sites (8 of them) are still running with FSx for Portal content and the AGS system directories, with DynamoDB used for the AGS config store. Regarding performance and stability, we have not seen any issue in our environments that we would blame on FSx or DynamoDB, so I stand by FSx as a workable HA solution. We are currently trying to work out a good way to use AWS snapshots to maintain a standby mirror site in a different VPC, to allow blue-green deployment of patches and upgrades without the need for an outage. What we are uncertain of is how to copy the FSx & DynamoDB content to a second site (using a different AD) while keeping it closely in sync with the recovered EC2 instance snapshots. We are trying to avoid any chance of creating "orphan" Portal items and/or AGS service definitions, which theoretically could happen if the FSx replica is at a different timestamp to that of the EC2 snapshot AMIs. We are also aware that the AWS method for maintaining an FSx replica across regions uses DFS-R, which definitely could introduce delays in completing consistent writes/reads (I think this is why FSx is not on the officially endorsed list).
Posted 03-14-2021 09:07 PM

POST
But, in general, I believe it is rarely a good idea to use an individual's schema to hold data that will need to be shared (or used later by some other user). If not using the dbo account, it is generally far better practice to use a specific data owner/schema, e.g. gis_owner, and connect as that login when adding new datasets. You might consider a few individual data owners for different themes or external sources (e.g. land, assets, hydro, etc.). This also provides a naming convention that should help in finding your datasets (remembering the fully qualified name is database.schema.datasetname). Once a feature class is created, grant access to the appropriate "gisreaders" and/or "giseditors" database roles. That way, everyone knows that when connecting with Operating System authentication, their individual AD login, if in the appropriate role, will only see the feature classes they have been authorised to see/edit.
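A minimal sketch of the grants this pattern implies, generated as plain SQL strings for illustration. The database name is hypothetical; the schema and role names echo the examples above, and the exact privileges each role should get are your call:

```python
# Hedged sketch: generate the SQL Server grants implied by the owner/roles
# pattern above. Database name "gisdb" is made up; schema and role names
# follow the post's examples (gis_owner, gisreaders, giseditors).
def grants_for(dataset, schema="gis_owner", database="gisdb"):
    fq = f"{database}.{schema}.{dataset}"  # fully qualified: database.schema.datasetname
    return [
        f"GRANT SELECT ON {fq} TO gisreaders;",
        f"GRANT SELECT, INSERT, UPDATE, DELETE ON {fq} TO giseditors;",
    ]
```

Run the emitted statements as the data owner (or a dbo-privileged login) after creating each new feature class, so readers and editors pick up access through their role membership rather than per-user grants.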
Posted 02-28-2021 02:11 PM