
Parcel Fabric Performance Issues

04-21-2017 07:27 AM
GavinMcDade
New Contributor III

We’re currently in the development and testing phase of implementing the Parcel Fabric for our county, and have encountered EXTREME performance issues that no one seems to have an answer for.

 

As a brief background: We’ve already (months ago) completed a lengthy cleanup process of our parcels – simplifying the line work by removing excessive vertices and planarizing the majority of them to 2-point lines (except for a minority of parcel boundaries following natural features like hydro, etc., which were left as linestrings). This cleanup removed in excess of 10 million extraneous vertices. Our cleaned-up parcel and subdivision data was then further processed and loaded into the ESRI-provided staging schema (FGDB), where any and all topology errors/issues were corrected, then ultimately loaded into a new Fabric (FGDB). This source Fabric FGDB was subsequently loaded into our Development SDE (ArcSDE 10.3.1 | Oracle 11.2.0.3) database via the Copy Parcel Fabric tool.
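For anyone scripting a similar cleanup, the vertex-thinning step can be driven from arcpy – a minimal sketch; the workspace, feature class names, and tolerance below are placeholders, not our actual values:

```python
import arcpy

# Hypothetical staging workspace holding the raw parcel line work.
arcpy.env.workspace = r"C:\data\parcel_cleanup.gdb"

# Remove extraneous vertices while keeping each line's shape within the
# tolerance; POINT_REMOVE is the aggressive vertex-thinning algorithm.
arcpy.cartography.SimplifyLine(
    in_features="ParcelLines_raw",
    out_feature_class="ParcelLines_simplified",
    algorithm="POINT_REMOVE",
    tolerance="0.1 Feet",
)
```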

 

Our first performance issue was encountered when attempting to register the Fabric as versioned. Because the versioning process runs a series of “analyzes” on all of the Fabric objects/tables prior to the registration itself, the process appears to hang – when in reality, it simply takes a very long time to complete. In our case, “a very long time” means that the “Parcels” and “Lines” feature classes each took 4-5 hrs. A cursory look at the tables (and their various created indexes) revealed that the “Lines” feature class alone has 2.9 million lines/records. While somewhat alarming to me, no one at ESRI has suggested that this is beyond the pale, per se.
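For reference, the registration (and an explicit statistics pass afterward) can also be run from arcpy – a rough sketch, with placeholder connection file and dataset names:

```python
import arcpy

# Hypothetical connection file and fabric feature dataset name.
sde = r"C:\connections\dev_oracle.sde"
fabric_dataset = sde + r"\GIS.ParcelEditing"

# This is the step that kicks off the long-running analyze pass described above.
arcpy.management.RegisterAsVersioned(fabric_dataset, "NO_EDITS_TO_BASE")

# Statistics on the base/delta/archive tables can also be refreshed explicitly
# after a large load, rather than waiting on the registration step.
arcpy.management.AnalyzeDatasets(
    sde, "NO_SYSTEM", "GIS.ParcelEditing",
    "ANALYZE_BASE", "ANALYZE_DELTA", "ANALYZE_ARCHIVE",
)
```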

 

After finally getting it registered, I began some rudimentary edit testing, mainly focusing on simple Parcel Fabric Workflows like a parcel split. Immediately, at the point in the Workflow where the Construction tool is activated, the cursor/crosshair exhibits a mind-numbing latency, turning to a pointer w/ an hourglass when moved through the map window (as if an intense operation was underway). If left alone (stopping mouse movement), the crosshair will return after 10-15 secs. … but, will turn back to an hourglass the instant you attempt to move it again. If you move the unresponsive pointer within snapping distance of a construction line feature and wait, it will behave as if snapped once the crosshair returns. From this point, if you SLOWLY move the cursor along the already-snapped feature, it will remain as an active crosshair. If, however, you move too quickly, or move beyond the sticky tolerance of the snap environment (10 pixels), the crosshair will become an hourglass once again. If patient enough to actually begin constructing a line feature during this time, the same behavior will continue the entire time you add vertices and snap to corresponding line features. When finished, you can Build the constructed features as you would normally do.

 

After additional testing, I discovered that if I turn on the ‘Classic Snapping’ environment which allows you to control which layers and feature types in your TOC are available for snapping (as opposed to the newer default snapping environment in which all layers are snap-enabled all the time), and turn OFF all layers from snapping, the performance issue goes away. At this point, the Fabric still enforces snapping to itself (which is necessary), even when all layers are set not to snap. The moment you toggle on the Lines FC, however, is when performance screeches to a halt.

 

Interestingly, this behavior does NOT occur in a FGDB. This leads me to infer that there is some unique interaction between ArcMap, SDE, and the Oracle database when it comes to the data-intensive Lines FC. Despite ESRI having provided several additional ways of directly (via SQL) re-analyzing the Fabric tables and regenerating spatial indexes, etc., no improvement has been achieved, nor has anyone identified (via SDE Intercept and Oracle trace files) any apparent bottleneck… This is NOT an I/O problem, nor does it appear to be a SQL processing issue, either.
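For context, the direct-SQL maintenance amounted to statements along these lines – a sketch only, assuming cx_Oracle; the credentials, schema, and index names are illustrative, not our actual objects:

```python
import cx_Oracle

# Hypothetical connection details.
conn = cx_Oracle.connect("sde_admin", "password", "dbhost:1521/DEVGIS")
cur = conn.cursor()

# Re-gather optimizer statistics on the fabric Lines base table
# (cascade => TRUE also refreshes stats on its indexes).
cur.execute("""
    begin
        dbms_stats.gather_table_stats(
            ownname => 'GIS', tabname => 'LINES', cascade => TRUE);
    end;
""")

# Rebuild the spatial (domain) index on the Lines feature class;
# the index name here is a placeholder.
cur.execute("ALTER INDEX GIS.LINES_SPATIAL_IX REBUILD")
```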

 

I return to my concern over the sheer number of records in the Lines feature class, but have no way of substantiating how/why/if this is a valid concern. ArcMap is quite apparently choking on the ability to interact with the Lines data in real time, but there’s no functional explanation I can offer to back this up, beyond my mere observations. Clearly, the FGDB creates its own brand of spatial indexes and the like on this large feature class, yet runs smoothly just the same. In contrast, Oracle should be even more robust, essentially shrugging at a data table with a mere 3 million records – yet, the performance in this case is abysmal. Thus, I keep coming back to something that the application is doing, and not simply the database itself.

 

If anyone has any similar experiences with, or insight into, this issue, we’d appreciate it greatly!

 

Gavin

14 Replies
GavinMcDade
New Contributor III

*** UPDATE ***

 

Having neglected to update this now-aging thread, I’ll provide a brief bookend for those interested:

 

The ENTIRE problem was effectively eliminated when we upgraded our ArcSDE to 10.5.1… That being said, we have NO explanation for why this was/is the case, nor do we really (pragmatically) care at this point – although, I would personally love to know what the issue was, in the event a similar situation arises down the road.

 

ESRI’s extensive testing in their duplicated environment clearly yielded dissimilar results, as they were unable to reproduce our degraded performance even slightly. At no point was our ArcSDE 10.3.1 environment pointed to by ESRI as a potential issue when interacting with the then-current LGIM/Parcel Fabric, so our only (now untestable) hypothesis is that some deeper combination of ArcSDE version and the bottomless pit of tuned settings in our Oracle environment may have worked in a negative synergistic fashion to cripple itself. Something, whatever it was, about our environment did not play nicely together – but, reversed itself with nothing more than the ArcSDE 10.5.1 upgrade.

RaymondCrew
New Contributor II

Thank you, Gavin, for the update. Further evidence that I need to move my group fully to 10.5.x instead of a mixture of 10.3.x and 10.1.x.

GavinMcDade
New Contributor III

Indeed. I would think 10.1.x, especially, would create some issues for you, as simple things like several geoprocessing tools (and their attendant Python syntax) have changed. We typically run in a somewhat mixed environment for a “brief” period of time, as our rollout of Desktop upgrades is never exactly concurrent with our ability to test and upgrade SDE itself. That said, we’re never more than a single version apart at any one time (this includes the task of updating scripts, etc. to meet the geoprocessing version). Nonetheless, we’ve never encountered an issue like this one – but, then again, we’ve never worked with something as intricate (from a database admin POV) as the Fabric before. There’s a lot of stuff happening under the covers in Fabricland.

Good luck! 

ScottTaylor8
New Contributor II

Hello all,

I came across this post and thought I would share our experience in getting our ArcGIS installation to perform.

A lot of what Gavin mentioned in his original post also happened to us. We migrated from another GIS technology to Esri. The result was a fabric with, for example, 190M records in the Fabric Lines table. Our experience of registering the Fabric as versioned was also a multi-hour process, but we had seen this in testing, so we were expecting it.

Once the system was live and we saw the poor performance, we took a number of steps.

  • We set up PerfQAAnalyzer, which allowed us to repeatably test screen display performance (for a given set of extents) and objectively evaluate our changes. (At our worst, we had screen refreshes in the 30+ second range.)
  • We verified the performance of each component in the system (Citrix servers [we run a virtualized ArcMap setup], network, database, custom code). Through this, we were able to home in on the database server as the bottleneck (we are running Oracle 12c).
  • We ran a number of database tuning exercises that resulted in a number of tweaks. However, the changes that gave us the best results were the following (a sketch of the first two appears after this list):
    • Turning off Oracle's Parallel Degree Policy. Oracle was not great at allocating CPU resources to multiple sessions; one session would tend to suck up all the CPU resources to the detriment of the others.
    • We were missing some attribute indexes.
    • Changing our Statistics strategy. (This was probably our most impactful change.)
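A minimal sketch of those first two tweaks, assuming cx_Oracle; the credentials, table, and column names are illustrative and will differ in your schema:

```python
import cx_Oracle

# Hypothetical connection details.
conn = cx_Oracle.connect("dba_user", "password", "dbhost:1521/PRODGIS")
cur = conn.cursor()

# Disable automatic degree-of-parallelism decisions so a single session
# can no longer monopolize the CPU at the expense of the others.
cur.execute("ALTER SYSTEM SET parallel_degree_policy = MANUAL")

# Example of adding one of the missing attribute indexes
# (table and column names are illustrative).
cur.execute("CREATE INDEX GIS.LINES_PARCELID_IX ON GIS.LINES (PARCELID)")
```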

   

We found that our delta tables fluctuated greatly in size over the course of the day (then reset at night when we did a Compress). As a result, over the course of the day, as Adds and Deletes were created, the statistics gathered after the previous night's Compress became of less and less value to Oracle. Oracle recommends that, for dynamically changing tables like these, you delete and lock the stats. This essentially wipes out the stats for the delta tables; Oracle then uses dynamic sampling to optimize its queries. This was also not helpful.

So, instead, we went another way. We manually set the stats at a high-water mark (i.e., the max number of rows we were seeing in each delta table over time) and locked them. Setting the stats allowed the Oracle Profiler to then recommend tuning changes for each query (when dynamic sampling was used, Oracle couldn't provide any tuning results). We then watched for expensive SQL statements, ran Oracle Profiler, and enacted its recommendations.
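In DBMS_STATS terms, the approach was roughly the following – a sketch only; the delta table names and row counts are illustrative:

```python
import cx_Oracle

# Hypothetical connection details.
conn = cx_Oracle.connect("dba_user", "password", "dbhost:1521/PRODGIS")
cur = conn.cursor()

# For each delta (Adds/Deletes) table: wipe the stale stats, set them to a
# high-water-mark row count, then lock them so later gathers can't overwrite.
for table, hwm_rows in (("A123", 250000), ("D123", 250000)):
    cur.execute(
        """
        begin
            dbms_stats.delete_table_stats(ownname => :own, tabname => :tab);
            dbms_stats.set_table_stats(ownname => :own, tabname => :tab,
                                       numrows => :n);
            dbms_stats.lock_table_stats(ownname => :own, tabname => :tab);
        end;
        """,
        own="GIS", tab=table, n=hwm_rows,
    )
```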

The result was screen refreshes that were now in the 1-3s range instead of 30+ seconds. Our CPU usage on the DB server dropped and we saw general performance improvements across the board.

I'm glad to hear an upgrade to 10.5 helped (this all took place on ArcSDE 10.2.2 and ArcMap v10.4). It looks like some SQL improvements may have been put in place which would be great.

This might not be indicative of everyone's problem, but it's how we got to acceptable performance levels.

Good luck.

ColinLindeman1
New Contributor II

Just another area to look at (for others finding this thread) is the number of records in the Job, Vector and Adjustment tables.

Recently we were up to around 80,000 job and around 16,000,000 vector/adjustment records. After truncating those tables using the 'Delete Fabric Records' add in which can be found at http://www.arcgis.com/home/item.html?id=52ab3234fc0b44068567d06e5d6f9175 we noticed that the parcels loaded quicker when panning around in ArcMap. Your results may vary or you may want to keep those table records for tracking and history purposes in which case you probably do not want to truncate those tables.