Hi All,
We recently implemented the Utility Network (migrating from the Geometric Network) for our water distribution system. We've mostly built out our pressure tier (we have hundreds of pressure zones) and have been experimenting with different isolation trace configurations at that tier. For the vast majority of our subnetworks, isolation traces run very fast. The exception is our very large gravity zones, which each have hundreds of thousands of features and a large number of controllers. In these areas, isolation traces come back with no results and a "Trace completed with warnings" message (WARNING 002535). That message seems too vague to troubleshoot with.

We don't have any dirty areas or topology errors, and the 'is dirty' flag on the subnetworks in question shows False. In the trace config, we're using condition barriers and filter barriers as relevant to our data. We have tried both selection and aggregated geometry outputs with the same results. Watching server resources while the trace runs, we don't see any significant strain on RAM or processors. The Java heap size is set to 512. We're on UN version 5 on Server 10.9.1, with patches applied.

There's no practical way to break these large subnetworks down into smaller areas. If anyone else has faced similar issues working with large subnetworks, please let me know how you dealt with them. Also, if anyone has ideas on how to troubleshoot, please let me know.
thanks,
Please log a case with support. There were several bugs with isolation traces that match the behavior you are seeing (one of them is BUG-000179357), and support can help determine what your options are. The fix for these bugs has been backported as far back as 11.1, but since you're on 10.9.1, that may limit your available options. If you were to upgrade to 11.1 patch 5, 11.3 patch 2, or 11.5 patch 2, this issue would be fixed.
What filter barrier are you using? Is it possible that the area you are isolating doesn't have enough equipment that meets the criteria of your filter barrier to isolate it from all your sources?
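To make that failure mode concrete: during an isolation trace, the engine walks outward from the start point and can only stop at devices that satisfy the filter barrier; if it can reach any source without being stopped, the area is reported as not isolatable. Here is a toy graph model of that logic in plain Python (this is an illustration of the concept, not the UN trace engine; the node names and the breadth-first walk are my own assumptions):

```python
from collections import deque

def isolation_trace(edges, start, sources, is_isolating):
    """Toy isolation trace: walk outward from `start` over an undirected
    graph. A node where is_isolating(node) is True acts like a closable
    valve that meets the filter barrier and stops the walk. If any source
    is still reachable, the area cannot be isolated."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    seen, queue = {start}, deque([start])
    reached_source = False
    while queue:
        node = queue.popleft()
        if node in sources:
            reached_source = True
        for nxt in adj.get(node, []):
            if nxt in seen:
                continue
            seen.add(nxt)
            if is_isolating(nxt):
                continue  # valve satisfies the filter barrier: stop here
            queue.append(nxt)
    return not reached_source, seen

# Source S feeds a cul-de-sac (P1, P2) through valve V.
edges = [("S", "V"), ("V", "P1"), ("P1", "P2")]

# Valve V meets the filter barrier -> the area is isolatable.
print(isolation_trace(edges, "P2", {"S"}, lambda n: n == "V")[0])  # True

# No valve meets the filter barrier -> the trace reaches the source.
print(isolation_trace(edges, "P2", {"S"}, lambda n: False)[0])     # False
```

The second call is the situation Robert is describing: the valves exist in the data, but none of them pass the filter criteria, so the trace runs to the sources and returns nothing to isolate.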
Sounds like the same isolation trace works for smaller subnetworks but fails for larger ones.
If you have the UN available on a gdb, I would try this same trace on a mobile gdb and see if you get the same outcome. This might help isolate the problem to the server and confirm it's not a data issue.
Hi Robert,
I'm using filters like category isolation, device status open, lifecycle status in service, and closable yes. If I do a subnetwork trace to select all the features in the subnetwork and then run an attribute query against the selected devices, I get thousands of valves.
thanks,
Can you find an area you can confirm is isolatable (visually) and post a screenshot? The easiest way to do this would be in the large pressure zone, in something like a cul-de-sac.
I want to see an area of your network that you are sure you should be able to isolate but where the software isn't properly identifying those features. I have seen customer datasets where portions of the network are not isolatable because valves are either missing or not properly connected to the system (a GIS data problem), or even situations in which the network wasn't designed to be isolatable (an engineering problem).
Edit: For testing purposes, just rely on the network category; this rules out any issues with the Closeable field being populated.
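The reason this relaxation is a useful test: a valve that is correct in every respect except an unpopulated (or wrongly coded) Closeable value silently drops out of the full filter. A small sketch of that effect (the field names and domain values here are made up for illustration; your schema's fields and coded values will differ):

```python
# Hypothetical devices with made-up field names and domain values.
devices = [
    {"assetid": "V-101", "category": "Isolating", "closable": "Yes"},
    {"assetid": "V-102", "category": "Isolating", "closable": None},  # field never populated
]

def passes_full_filter(d):
    # Full filter barrier: category AND closable must both match.
    return d["category"] == "Isolating" and d["closable"] == "Yes"

def passes_category_only(d):
    # Relaxed filter for testing: network category only.
    return d["category"] == "Isolating"

print([d["assetid"] for d in devices if passes_full_filter(d)])    # ['V-101']
print([d["assetid"] for d in devices if passes_category_only(d)])  # ['V-101', 'V-102']
```

If the category-only trace succeeds where the full filter fails, that points at attribute population rather than connectivity.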
Thank you @RobertKrisher and @gis_KIWI4 for your ideas. I've done some more testing and will try to outline what I've found so far:
The fact that the mobile gdb traces so fast and with the expected results tells me that, at least, it's not a data issue in the sense of our features being drawn incorrectly. I'm not sure what else to make of that yet, though. One thing about exporting to a mobile gdb is that it requires dropping and re-enabling the topology and re-updating all the subnetworks; I'm not sure if that plays into this at all. I could try deploying the mobile gdb into our test enterprise system and see if it maintains the quick and correct traces. On the other hand, the fact that we got wonky results with the mobile gdb on a poorly spec'd VM kind of makes this smell like a resource problem. Please let me know what you think. Thanks again.
We have experienced similar results with the supply tier after we upgraded from 10.8.1 to 11.5 and upgraded the UNM from 4 to 7. Our isolation traces mostly resolve (sometimes with no results, but no error either), but now take 18 minutes after the upgrade. Previously they resolved in under 1 minute on average on 10.8.1 and UNM 4. We are currently working with Esri Australia and Esri Inc on a resolution for this issue.
@KymBeard07 If you let support know about the bug number I've mentioned above, they should be able to get you more information.