POST
Thanks @JonathanQuinn! This is exactly what I was looking for, and it is super useful! Last night, while testing the new object store, its restore took 14 hours (cf. https://community.esri.com/t5/high-availability-and-disaster-recovery-questions/can-t-backup-arcgis-enterprise-11-4-webgisdr-since/m-p/1567875) but then failed with another odd ArcGIS Server error (something like "ArcSOC PID XYZ not ready"). I restarted the ArcGIS Server service, ran the import site operation with "mode:dr" for ArcGIS Server and for Portal for ArcGIS, and it worked like a charm! That spared me yet another 15 hours. I don't understand why you don't document it; it is super powerful. Thanks again for sharing.
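For anyone landing on this later, the call I mean is the ArcGIS Server Admin API importSite operation, roughly as in the sketch below (Node 18+, run as an ES module). The "mode" parameter is the undocumented piece discussed in this thread; the host, the site file path and the token handling are placeholders.

// Minimal sketch of the Server Admin API importSite call with the "mode"
// parameter mentioned above. That parameter comes from this thread and is not
// in the official importSite documentation; everything else is a placeholder.
const adminToken = "<ArcGIS Server admin token>"; // obtain via generateToken beforehand

const params = new URLSearchParams({
  location: "\\\\fileshare\\backups\\myserver.site", // placeholder path to the exported site file
  mode: "dr",                                        // the undocumented "dr" mode discussed here
  f: "json",
  token: adminToken
});

const response = await fetch("https://gisserver.example.com:6443/arcgis/admin/importSite", {
  method: "POST",
  body: params // sent as application/x-www-form-urlencoded
});
console.log(await response.json());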
12-18-2024 02:12 AM

POST
Hello,

Apologies for my late reply, but the tests are taking ages. I will summarize my testing, but here is my conclusion: everything seems to be working properly, it is just very long and slow.

I found out that Windows Defender was slowing the backup down. I added exclusions for:
- the ArcGIS Data Store backup folder
- the webgisdr temp and backup folders

After that, I was able to complete a webgisdr backup, but performance is really poor compared to the tile cache. Here is a quick overview.

Scenario 1:
- 1 base deployment: Portal for ArcGIS, ArcGIS Server, ArcGIS Data Store (relational, tileCache, objectStore)
- Windows Server 2022, 16 CPU, 60 GB RAM, 160 GB disk
- Attached local D drive: "io3", 300 MB/s, 5 IO operations per gigabyte, with a guaranteed minimum of 500 and a maximum of 2,000 IO operations (both read and write)

Scenario 2 (objectStore on a dedicated VM):
- 1 base deployment: Portal for ArcGIS, ArcGIS Server, ArcGIS Data Store (relational, tileCache)
- Windows Server 2022, 16 CPU, 60 GB RAM, 160 GB disk
- 1 data store deployment: ArcGIS Data Store (objectStore)
- Windows Server 2022, 16 CPU, 60 GB RAM, 160 GB disk
- Attached local D drive: "io3", 300 MB/s, 5 IO operations per gigabyte, with a guaranteed minimum of 500 and a maximum of 2,000 IO operations (both read and write)

The backup now takes:
- 12h30 in the base deployment, where the webgisdr temp and backup folders can be local
- 16h in the scenario with a shared webgisdr temp and backup folder (we can ignore the deletion step, but it is interesting to note that it now takes ages as well)

I am a bit scared now, because this ArcGIS Enterprise deployment is small: the tile cache is only 27 GB. Our production WebGIS currently has a tile cache of 110 GB and I am not sure it will fit within 24 hours. I am about to start that test. It seems to me that with the objectStore we are going back in time to BUG-000139154: https://support.esri.com/en-us/bug/tile-cache-datastore-backup-takes-too-long-and-the-data-bug-000139154. The bad piece of news is that there is only room for one version to fix it (11.5)! Note that a webgisdr backup of the same site on tileCache takes only 30 minutes.

And finally, to answer your questions @Gaius_Kuttappan:
- There is a size reduction after migration (from 27.8 GB to 21.4 GB) and all migrated scene layers load correctly.
- Yes, the data stores validate properly.
- describedatastore reports READWRITE, nothing special.
- Yes, full permissions are correctly set.
- Nothing abnormal in the webgisdr logs; everything runs as expected.

Any thoughts @JonathanQuinn? Thanks!
12-12-2024 02:37 AM

POST
On our side, we kept getting this SMB error even after removing the incorrectly registered data store, which we had thought would solve it. Using Process Monitor, we found that the SMB error came from the ArcSOC process when a new SOC instance was starting. We searched everywhere for this path in ArcGIS Server Manager and the Admin interface and could not find any mention of it. Then we checked the ArcGIS Pro project used for publishing and found that this path was referenced as broken in the "Folders" section, as shown in the image below. We republished each service, the SMB errors went away, and there has been no issue since then (10 days). To be followed.

[Image: broken path in the ArcGIS Pro project's folder connections]

Conclusions:
- I did not think these paths in the ArcGIS Pro publishing project mattered.
- The issue is still unclear to me: the SMB error could occur without necessarily causing a crash.
- Since almost all of our services (about 50) were published with this broken path, maybe several of these services spinning up ArcSOC processes at once could lead to the crash?
12-11-2024 12:33 PM

POST
Thanks for your feedback @AndyGup. It is reproducible with your core sample from the jsapi-resources repository: https://github.com/Esri/jsapi-resources/tree/main/core-samples/jsapi-custom-workers. Run 'npm install', 'npm run build' and serve the build: you end up with the phenomenon described, i.e. many requests to the individual dependency modules of the workers. You don't even need to click "Run spatial Join" to trigger the custom worker. Basically, as soon as you use custom workers with @arcgis/core, you end up with the phenomenon described, that is to say all the dependencies of the Esri Maps SDK workers are loaded individually (mainly FeaturePipelineWorker dependencies for a simple web map). Compare with the CDN version of the app:

require(["esri/Map", "esri/views/MapView", "esri/layers/FeatureLayer", "esri/widgets/Legend"], (ArcGISMap, MapView, FeatureLayer, Legend) => {
const cityLayer = new FeatureLayer({
portalItem: {
id: "e39d04981238498792eb33ea26ba1c09"
}
});
const frsLayer = new FeatureLayer({
portalItem: {
id: "cdff193a3e3743a5bc770e2743f215b3"
}
});
const map = new ArcGISMap({
basemap: "dark-gray-vector",
layers: [cityLayer, frsLayer]
});
const view = new MapView({
container: "viewDiv",
map,
center: [-117.98, 33.96],
zoom: 12
});
const legend = new Legend({ view });
view.ui.add(legend, "top-right");
});

With the CDN build, all the worker code is bundled in 'https://js.arcgis.com/4.31/esri/views/2d/layers/features/FeaturePipelineWorker.js' and there are far fewer requests.
12-04-2024 01:55 AM

POST
I did some additional debugging and compared with a simple web app using the CDN version of the SDK. I found that our workers are basically not bundled, contrary to the CDN version. As a consequence, initializing the FeaturePipelineWorker worker triggers requests for all of that module's dependencies, and there are a lot of them. With the CDN version, I can see that all these dependencies are bundled into "FeaturePipelineWorker.js". Do you perhaps build a separate bundle for each module of "esri/core/workers/registry"? I am still wondering why vite does not do that by default.
12-03-2024 04:58 AM

POST
Just referencing another issue that describes pretty much what we are experiencing, in that case with arcgis-webpack-plugin and the Maps SDK workers rather than vite: https://github.com/Esri/arcgis-webpack-plugin/issues/73. "During runtime, when creating a new feature layer for the first time, there are more than 1500 network requests from web workers (while Chrome limits up to 6 parallel requests), which leads to major delay displaying the feature layer." It is not exactly the same issue: on our side it is our custom workers, rather than the Maps SDK for JavaScript ones, that get requested, but the outcome is the same: too many requests from workers, delaying the display of the map. The issue is different, but it makes me think that something can be done on the bundler side to prevent this?
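One workaround I have in mind on our side (not tested yet, just a sketch): pre-bundle the custom worker entry into a single file with esbuild before the vite build, so the browser fetches one worker script instead of each dependency module. The file paths below are made up, and I still need to check whether the SDK's worker framework accepts a pre-bundled module like this:

// bundle-worker.mjs: hypothetical pre-bundling step, run before "vite build".
// It flattens the custom worker and its imports into one file that the app can
// reference by URL, instead of letting each dependency load separately.
import { build } from "esbuild";

await build({
  entryPoints: ["src/workers/spatialJoinWorker.js"], // made-up path to our custom worker entry
  bundle: true,                                      // inline every import into the single output file
  format: "esm",
  minify: true,
  outfile: "public/workers/spatialJoinWorker.js"     // served as one static file by the app
});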
12-03-2024 02:42 AM

POST
Thanks @AndyGup for your reply! Much appreciated. The app architecture and stack are fairly simple: a map-focused web app built with the latest Esri Maps SDK for JavaScript (@arcgis/core) and the Calcite design system. In many aspects it is similar to the new Map Viewer. But it does make heavy use of custom workers, as described here: https://github.com/Esri/jsapi-resources/tree/main/core-samples/jsapi-custom-workers. All the requests delaying the initial load of the map come from these custom workers. The issue does not seem to be data/map related: I can open the very same web map in the Map Viewer and the phenomenon does not appear. If I check the Map Viewer's worker requests, there are only 4: dojo.js, arcadeUtils.js, FeaturePipelineWorker.js, libtess.js. The Map Viewer seems to be built with dojo: could it be that many worker scripts are bundled into the dojo.js build, for example by forcing it with a dojo build profile or something like that? Should we investigate something equivalent with vite? Thanks
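For reference, our custom worker usage follows the same pattern as that sample, roughly like the sketch below (the worker module path and the method name are illustrative, not our actual code):

import * as workers from "@arcgis/core/core/workers";

// Open a connection to a custom worker module; the SDK runs it inside its own
// worker framework. The module path and method name below are illustrative.
async function runSpatialJoin(targetLayerUrl, joinLayerUrl) {
  const workerUrl = new URL("./workers/spatialJoinWorker.js", import.meta.url).href;
  const connection = await workers.open(workerUrl);
  try {
    // invoke() calls a function exported by the worker module and resolves with its result
    return await connection.invoke("doSpatialJoin", { targetLayerUrl, joinLayerUrl });
  } finally {
    connection.close(); // release the worker once done
  }
}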
11-29-2024 09:37 AM

POST
I have the intuition that there are so many files to copy that it takes ages, but that is just a feeling... Just retrieving the size of the 'ozonedata' folder took a couple of hours because there are so many files. A small comparison:
- before migration, 'nosqldata' folder: 27.7 GB, 265 files, 9 folders
- after migration, 'ozonedata' folder: 21.4 GB, 2 353 27 files, 81 folders

While waiting for the backup, I can tell that the 'local' object store backup works (i.e. the first backup, written to the configured backup location). Using the ArcGIS Data Store 'listbackups' utility, I can see that this step runs for around 9 hours (!), after which the backup is listed as SUCCESS. It then starts backing up over to the SHARED_LOCATION, but the copy of the object store is very slow: it grows very slowly. In the end, I think it only fails because it reaches the hard-coded timeout of 24 hours. It copies slowly but surely, but it is very inefficient due to the large number of files. When it fails after 24 hours, the object store backup folder is only around 9 GB, so there is still a lot left to copy.

I suspected a network drive performance issue, so I moved the webgisdr SHARED_LOCATION and BACKUP folders to a local drive on the machine (it is a base deployment for testing purposes), and it failed as well. I am now requesting a new drive with more IO to see if it helps. It reminds me of this issue related to the TILE_CACHE, where switching to a higher-IO drive had helped: https://community.esri.com/t5/high-availability-and-disaster-recovery-questions/tilecache-datastore-very-slow-to-restore/td-p/1121135/page/2

Thanks
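For the record, a small Node script can grab those file counts and sizes programmatically rather than through the folder properties dialog; a minimal sketch follows, and the path is just an example:

// count-files.mjs: walk a folder and total its file count and size.
import { readdir, stat } from "node:fs/promises";
import { join } from "node:path";

async function walk(dir) {
  let files = 0, bytes = 0;
  for (const entry of await readdir(dir, { withFileTypes: true })) {
    const full = join(dir, entry.name);
    if (entry.isDirectory()) {
      const sub = await walk(full); // recurse into sub-folders
      files += sub.files;
      bytes += sub.bytes;
    } else if (entry.isFile()) {
      files += 1;
      bytes += (await stat(full)).size;
    }
  }
  return { files, bytes };
}

const { files, bytes } = await walk("D:\\arcgisdatastore\\ozonedata"); // example path
console.log(`${files} files, ${(bytes / 1024 ** 3).toFixed(1)} GB`);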
11-27-2024 10:51 PM

POST
Hello,

I am testing the upgrade from ArcGIS Enterprise 11.3 running on Windows Server 2022 to 11.4, and I am facing an issue after migrating my tileCache data store to an object store with the `MigrateSceneServices` utility: https://enterprise.arcgis.com/en/server/latest/publish-services/windows/migrate-scene-services-utility.htm

Here is the workflow:
- From 11.3, upgrade to 11.4: OK
- Export the webgisdr backup: OK
- Import the webgisdr backup: OK
- Upgrade the scene services using the `MigrateSceneServices` utility: OK
- Update webgisdr.properties to INCLUDE_OBJECT_STORE_CACHES = true and INCLUDE_SCENE_TILES_CACHES = false
- Run the webgisdr export

It systematically fails after 24 hours (before, it was done in 1 hour!). The tileCache was about 29 GB and composed of 20 scene services (mainly meshes).

Has anybody else faced the same issue?

Thanks,
Nicolas
/cc @JonathanQuinn
11-27-2024 05:24 AM

POST
Hello,

I am looking for a worldwide DSM in the same spirit as the "Terrain" layer from the Living Atlas: https://elevation.arcgis.com/arcgis/rest/services/WorldElevation/Terrain/ImageServer (cf. https://www.esri.com/arcgis-blog/products/analytics/analytics/introducing-esris-world-elevation-services/), but with "surface" elevation (i.e. "a gridded raster representing the highest visible surface, including vegetation and human-made features, at every pixel", cf. https://support.esri.com/en-us/gis-dictionary/digital-surface-model).

Does it exist in the Living Atlas?

Thanks,
Nicolas
11-20-2024 05:32 AM

POST
Hmm, strange indeed. My issue was mainly with OIDC. For SAML, it is documented that the organization short name is appended on AGOL: "All organization-specific usernames in ArcGIS Online have the organization short name appended to the end." https://enterprise.arcgis.com/en/portal/latest/administer/windows/configuring-a-saml-compliant-identity-provider-with-your-portal.htm#ESRI_SECTION1_1E9996AB78AD47F7BE14B7DD5598BE2F

Strangely, I could only find this piece of information in the ArcGIS Enterprise documentation, not in the AGOL documentation; the Enterprise docs state that the same can be done thanks to the "defaultIDPUsernameSuffix" property. But I wonder why it is not the case with OIDC. What is the logic behind it?
11-20-2024 03:07 AM

POST
Hi Ahmad,

Thanks for your quick reply! Much appreciated. Could you please expand on what you increased on the DB side, so that I can cross-check with our DBA? That said, in my case 90% of my services are served from a local file geodatabase at the root of the server (C:\). I am not sure much can be optimized there, but I will investigate this lead. Thanks!
11-19-2024 12:46 AM

POST
Hello @AhmadAwada1,

Out of curiosity, do you also see this kind of error in 'localhost.0.log' under C:\Program Files\ArcGIS\Server\framework\runtime\tomcat\logs?

WARNING: The web application [arcgis#rest] appears to have started a thread named [Thread-216] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.base@17.0.10/jdk.internal.misc.Unsafe.park(Native Method)
java.base@17.0.10/java.util.concurrent.locks.LockSupport.parkNanos(Unknown Source)
java.base@17.0.10/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(Unknown Source)
java.base@17.0.10/java.util.concurrent.LinkedBlockingQueue.poll(Unknown Source)
com.esri.arcgis.discovery.logging.Logger$c.run(Logger$c.java:555)

I have plenty of those after a reboot, while my site is unavailable. The issue started occurring on my side after upgrading from 11.1 to 11.3. When did it start for you? Thanks!
11-19-2024 12:21 AM

POST
Hello,

Now that group membership has been enabled on AGOL for OIDC providers, we would like to switch our authentication provider from SAML to OIDC. But I noticed a difference in behavior between the SAML and OIDC providers which is a bit confusing.

Let's say my organization username is "guineapig" and my organization name is "MYORG" (i.e. the AGOL URL is https://MYORG.maps.arcgis.com). Currently, if I log in with SAML, my AGOL username is "guineapig_MYORG", which is fine. But if I log in with OIDC, my AGOL username is just "guineapig4". An integer was added at the end because "guineapig" already exists in AGOL as a built-in account, so my username is mapped to "guineapig4" instead of "guineapig" or "guineapig_MYORG". I would prefer the SAML way of adding the organization suffix to make it unique. I have looked everywhere and it does not seem to be configurable. Why is there this difference in behavior? Is it a bug or a feature? I am confused.

Thanks,
Nicolas
11-18-2024 10:41 PM

POST
Well, in my case it is about 100 ArcSOCs per VM (2 VMs), but they have lots of RAM (60 GB). When inspecting activity manually, the VMs do not seem overloaded CPU- or RAM-wise, so I am not sure that is the issue... I am just trying to correlate with the fact that during lunch time (less activity), there was no issue. Do you have monitoring of your CPU or RAM by any chance? I personally don't. I wonder whether it correlates with CPU activity spikes.
11-14-2024 10:09 PM