We have already resolved the issue, but we're still trying to understand why it happened, so I'm posting to see if anyone has encountered this before and has ideas about the cause.
Three days ago, with no change we could identify, our parcel feature classes, which all live in a Parcels feature dataset in our enterprise geodatabase, stopped working. Specifically, when we tried to load a layer into ArcGIS Pro, either directly or through a service, the CPU on the SQL Server machine would immediately spike to 100%, and if the layer was loading as part of a service, the number of instances in use would jump to the maximum within seconds. The parcels layer would then fail to load.
We were able to resolve the issue by deleting the Parcels feature dataset and all of its feature classes from the database and then recreating it. Once recreated, everything worked as it had before the problem started. We did have archiving enabled on the dataset, and I disabled that when recreating it. Notably, the issue started two weeks after the data had last been updated.
Has anyone encountered anything like this before, and if so, did you figure out what caused it? Thanks!
I’ve seen similar “nothing changed, but CPU pegs at 100% the moment Pro/service touches the layer” cases on SQL Server EGDBs. Usually it’s not ArcGIS itself—it’s SQL Server getting a terrible plan or choking on an index/statistics/metadata issue tied to that dataset.
Most common root causes to investigate:
Bad/invalid statistics or plan regression on the parcels tables (SQL suddenly chooses a full scan + heavy joins).
Fix: update statistics on the parcels tables (and related geodatabase system tables); if you can identify the offending query, invalidate its cached plan rather than flushing the whole cache (first sketch after this list).
Spatial index corruption or a missing/fragmented spatial index causing expensive spatial filtering/extent queries at load time.
Fix: rebuild the spatial indexes and fold the parcels tables into regular index maintenance (second sketch below).
Archiving/versioning side effects: archive tables/indexes grow or stats go stale, and the default layer load triggers history/row lineage logic that becomes expensive.
Fix: rebuild indexes and update statistics on both the base and archive tables (third sketch below); if you're also using traditional versioning, run compress/reconcile/post workflows as applicable.
Schema/metadata inconsistency (dataset-level issue): something in the feature dataset/relationship classes/domains got inconsistent, so any layer draw triggers costly validation/lookups.
The fact that recreating the feature dataset fixed it strongly points to this category.
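To make the fixes above concrete, here are minimal T-SQL sketches. All object names (schema dbo, table PARCELS, archive table PARCELS_H, database gisdb) are placeholders; substitute your own. First, the statistics/plan-regression fix:

```sql
-- Refresh statistics on the parcels base table with a full scan
-- (dbo.PARCELS is a placeholder; use your actual schema/table).
UPDATE STATISTICS dbo.PARCELS WITH FULLSCAN;

-- Force recompilation of cached plans that reference the table,
-- instead of flushing the entire plan cache.
EXEC sp_recompile N'dbo.PARCELS';
```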
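For the spatial index item, a sketch to check fragmentation and then rebuild everything on the table (the rebuild covers the spatial index along with the rest):

```sql
-- Check fragmentation on all indexes of the parcels table
-- (placeholder names again).
SELECT i.name,
       ips.index_type_desc,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID(N'dbo.PARCELS'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id;

-- Rebuild every index on the table, including the spatial index.
ALTER INDEX ALL ON dbo.PARCELS REBUILD;
```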
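And for the archiving item, the same maintenance on the archive table. Esri's archive class is conventionally named after the base table with an _H suffix; verify what yours is actually called before running anything:

```sql
-- With archiving enabled, the default (current-moment) query pattern
-- touches the archive table, so it needs maintenance too.
-- dbo.PARCELS_H is an assumed name; confirm it in your geodatabase.
ALTER INDEX ALL ON dbo.PARCELS_H REBUILD;
UPDATE STATISTICS dbo.PARCELS_H WITH FULLSCAN;
```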
What I’d do if it happens again:
Capture the exact SQL being executed when the layer is added, using SQL Profiler or Extended Events (sketch after this list).
Check the execution plan and the top waits to see whether the time is going to CPU or IO (query below).
Run DBCC CHECKDB, and rebuild indexes / update statistics on the parcels and archive tables (snippet below).
Review ArcGIS logs for any repeated “describe / permissions / relationship” lookups that line up with the spike.
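For the capture step, a minimal Extended Events session scoped to the geodatabase (gisdb and the session name are placeholders):

```sql
-- Capture completed batches and RPC calls against the geodatabase
-- while the layer loads; writes to an .xel file on the server.
CREATE EVENT SESSION [capture_parcels_load] ON SERVER
ADD EVENT sqlserver.sql_batch_completed(
    WHERE sqlserver.database_name = N'gisdb'),
ADD EVENT sqlserver.rpc_completed(
    WHERE sqlserver.database_name = N'gisdb')
ADD TARGET package0.event_file(SET filename = N'capture_parcels_load');
GO
ALTER EVENT SESSION [capture_parcels_load] ON SERVER STATE = START;
-- ...reproduce the problem in Pro, then stop the session:
ALTER EVENT SESSION [capture_parcels_load] ON SERVER STATE = STOP;
```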
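To see where the time is going while the CPU is pegged, this DMV query shows what each active session is running and waiting on:

```sql
-- Active requests with their current wait type and statement text.
SELECT r.session_id,
       r.status,
       r.cpu_time,
       r.total_elapsed_time,
       r.wait_type,
       t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;
```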
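And the integrity check itself (database name is a placeholder; run it in a maintenance window, since it's IO-heavy on large databases):

```sql
-- Full consistency check of the geodatabase, suppressing
-- informational messages so only problems are reported.
DBCC CHECKDB (N'gisdb') WITH NO_INFOMSGS;
```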
Recreating “works,” but the underlying cause is usually stats/index/metadata drift—especially with archiving enabled and no data edits for a while.
Thanks! This is super helpful, especially the steps to diagnose the issue if it happens again. We weren't sure where to start with that once we'd isolated the issue to the dataset.