POST
The .NET Batch Tracing Core Host method appears to be no faster than the Python / GP execution. I'm pointing to my production Enterprise feature service for the UN and starting points (mid-points of water mains), rather than the Naperville FGDB. You can see in my screenshot below that it's returning elements just fine. It looks no different than it would in Python, except Python is a little easier to watch/debug, and I've already invested time into the custom actions around the tracing. Using the same trace configuration (pointing to the water pressure tier), it takes ~7 seconds (sometimes barely a second) per trace. Unless you suspect something isn't configured correctly in my .NET solution, I have to assume Python is still just as good as anything. Running the trace directly from the REST endpoint takes the same time as well.

As for manually inspecting results, I don't think I'm able to parse the results to handle it all in one trace. My condition barriers are defined by a single field in our valves feature class ("Is_PP_Divider" = Yes), not a closed vs. open status, and our filter barriers are Category IS_EQUAL_TO SPECIFIC_VALUE Isolating. So there is no way to further dissect the results when this is all I have available. There are no properties that help me pick out the barrier (isolating) valves vs. the contents (isolated):

elements (networkSourceId, globalId, objectId, terminalId, assetGroupCode, assetTypeCode)
sourceMapping
resultTypes

I think I'll abandon the .NET method and return to my Python solution...which still seems to require multiple traces.
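For anyone following along, here is a minimal sketch of what can be separated from the elements alone. The networkSourceId values are hypothetical placeholders (check the sourceMapping in your own response), and as noted, nothing in these properties distinguishes the isolating valves from the merely isolated ones:

```python
# Sketch: split trace "elements" into device vs. line features by networkSourceId.
# The source IDs below (3 = WaterDevice, 7 = WaterLine) are placeholders only;
# look them up in the sourceMapping of your own trace response.

DEVICE_SOURCE_ID = 3   # hypothetical: WaterDevice
LINE_SOURCE_ID = 7     # hypothetical: WaterLine

def split_elements(trace_results):
    """Return (device_oids, line_oids) from a dict containing an 'elements' list."""
    elements = trace_results.get("elements", [])
    devices = [e["objectId"] for e in elements
               if e["networkSourceId"] == DEVICE_SOURCE_ID]
    lines = [e["objectId"] for e in elements
             if e["networkSourceId"] == LINE_SOURCE_ID]
    return devices, lines
```

Even with this split, the response still carries no flag separating the barrier valves from the isolated ones, which is the core problem described above.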
Posted 10 hours ago

POST
The terms "contents" vs. "barriers" are easier to write. Got it. As for Option 2, I have the Core Host tracing solution downloaded and have been trying to plug in my own UN connection properties. I can somewhat follow along with what the C#/.NET code is doing. I borrowed the example JSON files to create my own config file (see below). Originally, the error was about the geodatabase not being found, but now it's failing on "dataset was not found". I've tried all variations imaginable. I realize you were pointing me to the "Partition_Water_Isolation" JSON example, but I think my workflow and the ultimate goal will require me to use "type": "Trace", as in the Trace_Electric_Customers.json example.

I don't mean to go too far with troubleshooting here, so unless you have a quick suggestion, I'm wondering if there is any example from Esri that shows how to connect to a UN feature service using the .NET Pro SDK. All of the examples I've seen so far use file geodatabases. Surely I don't use an .sde file path, since there is mention of a portal URL and user/password. The documentation for OpenDataset doesn't help me. Once I can properly connect, I should be able to figure out the rest of the trace looping and work with the JSON response to inspect elements.

My current Water_Trace.json file (with sensitive domains/paths and user/pwd values removed):

{
"analysisName": "Isolation Trace",
"type": "Trace",
"networkSourceName": "UtilNet",
"assetGroupCode": 1,
"domainNetworkName": "Water",
"tierName": "Water Pressure",
"definitionQuery": "",
"namedTraceConfiguration": "Isolate Lines and Valves",
"inputWorkspace": "https://myserver.domain/arcgis/rest/services/.../Water_UN/FeatureServer",
"sourceUtilityNetwork": "UtilNet",
"outputFile": "C:\\UtilityNewtorkTracing\\ArlWaterTraceResults.csv",
"outputWorkspace": "C:\\UtilityNewtorkTracing\\BatchTracing.gdb",
"outputPoints": "Pressure_Points",
"outputPolylines": "Pressure_Lines",
"outputPolygons": "Pressure_Polygons",
"outputTable": "Pressure_Analysis",
"portalUrl": "https://portalserver.domain/agsportal",
"portalUser": "Domain\\username",
"portalPassword": "password"
}

Full error message:

12/30/2025 7:50:58 AM: Reading configuration file: D:\OtherProjects\BatchTracingCoreHost\JSON Configurations\Trace_Water.json
12/30/2025 7:50:58 AM: Isolation Trace - Performing analysis
ArcGIS.Core.Data.Exceptions.GeodatabaseDatasetException: The dataset was not found. ---> System.Runtime.InteropServices.COMException (0x80040301): 0x80040301
   at ArcGIS.Core.Internal.IGeodatabaseIOP.Geodatabase_GetUtilityNetwork(IntPtr workspaceHandle, String name)
   at ArcGIS.Core.Data.GeodatabaseCore.OpenDatasetCore[T](String name)
   --- End of inner exception stack trace ---
   at ArcGIS.Core.Data.GeodatabaseCore.OpenDatasetCore[T](String name)
   at ArcGIS.Core.Data.GeodatabaseCore.OpenDataset[T](String name)
   at ArcGIS.Core.Data.Geodatabase.OpenDataset[T](String name)
   at BatchTracingCoreHost.Classes.BatchTrace.BatchTraceUsingPaths(String inputWorkspacePath, String utilityNetworkClassName, String networkSourceName, Int32 assetGroupCode, String definitionQuery, String sourceFieldName, String outputWorkspacePath, String polygonClassName, String polylineClassName, String pointClassName, String outputTableName, Int32 functionFieldCount, String analysisName, String namedConfigurationName, String terminalName) in D:\OtherProjects\BatchTracingCoreHost\Classes\BatchTrace.cs:line 78
12/30/2025 7:52:17 AM: Isolation Trace - Analysis complete
12/30/2025 7:52:17 AM: All analysis complete
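Not a fix for the "dataset was not found" error itself, but while iterating on the config file it can help to fail fast on missing keys before launching the host app. A minimal sketch; the key list simply mirrors the example config above and is not the tool's authoritative schema:

```python
import json

# Sketch: sanity-check a batch-trace config file before handing it to the host
# app. REQUIRED_KEYS mirrors the example config shown above -- adjust it to
# whatever keys your version of the BatchTracingCoreHost sample actually reads.
REQUIRED_KEYS = [
    "analysisName", "type", "inputWorkspace", "sourceUtilityNetwork",
    "portalUrl", "portalUser", "portalPassword",
]

def check_config(path):
    """Return (config_dict, list_of_missing_keys)."""
    with open(path, encoding="utf-8") as f:
        cfg = json.load(f)
    missing = [k for k in REQUIRED_KEYS if k not in cfg]
    return cfg, missing
```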
Posted Tuesday

POST
Yes, I have already tried...and ditched...Option 1, hoping the Connected trace approach might work for round 2 (to get double isolation valves). The problem there is that if the round 1 trace gets all isolated features (the lines and valves being isolated), then I don't have the isolating valve(s) defined yet. It's the inverse if I get only isolating valves in the round 1 trace: I won't have all the isolated lines, which would be best for cost-of-outage analysis later on, i.e. how many and which type of customers would be impacted. So basically, without having isolating valves in round 1 I cannot go further. A connected trace would go both inside and outside the isolating group, selecting more valves than I want.

It will take 2-3 traces to accomplish what I want: isolated valves/lines, isolating valves, and the second outer layer of isolating valves. With my SQL Server table capturing valve OIDs for both isolated and isolating, it looks like only 2 rounds of tracing could be enough, with the rest of the output dependent on selecting rows from the SQL table in a particular way.

No matter what, though, your Option 2 may be worth starting over with the Pro .NET SDK. At least I have most of the general logic worked out. The Python / GP execution time is my worst headache right now. Thanks. I hope to report back with a final solution eventually.
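For what it's worth, the "second layer from the SQL table" idea can be sketched with plain set operations. The lookup structure here is hypothetical (line OID mapped to its pre-traced isolated lines and isolating valves, as the SQL rows would hold); a neighbouring zone is taken to be any pre-traced row that shares at least one valve with the first zone:

```python
# Sketch: derive second-layer isolating valves from pre-traced rows, no re-trace.
# results: {line_oid: {"isolated_lines": set, "isolating_valves": set}}
# (hypothetical in-memory stand-in for the SQL table described above)

def double_isolation(start_line, results):
    first = results[start_line]
    inner_valves = first["isolating_valves"]
    inner_lines = first["isolated_lines"]
    outer_valves = set()
    for row in results.values():
        # A neighbouring isolation zone shares at least one boundary valve
        # with the first zone but covers a different set of lines.
        if row["isolated_lines"] != inner_lines and row["isolating_valves"] & inner_valves:
            outer_valves |= row["isolating_valves"]
    # Drop the first-level valves, leaving only the outer layer.
    return outer_valves - inner_valves
```

This only works once every line's isolation result is already in the table, which is exactly the pre-traced arrangement being discussed.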
POST
@RobertKrisher and @MikeMillerGIS: One last question as it relates to my bulk tracing solution. I made some good progress but have now hit a wall. For roughly 70,000 water main features, it may take around 24-36 hours to trace everything and store the ObjectIDs of all the isolated lines. However, I also need the isolating valves. I cannot find any way to perform a single trace and extract the valves that are actually isolating plus the full set of water mains being isolated.

Here's the reason I want both sets. Besides knowing which valves to turn off in case of an emergency, having all isolated lines will allow me to essentially trace one more level out, i.e. a "double isolation trace". This is nice to offer the field crews for critical situations like a busted 24" main. Having the second layer of isolating valves will potentially save the crews time if they cannot shut off water based on the immediate (1st level) valves to close. So rather than run a bunch of isolation traces from all the results of the first initial trace, I could use the database row to quickly and easily return the isolating valves for those secondary lines.

Here's a visual that may help to explain. My starting trace point is the green point in the middle. The valve on the south end is the only isolating valve. The other two circled in red are what I wish to exclude...yet I want to include all those isolated line elements that are selected in the map. Is it possible to capture this combination in one run, or am I forced to trace the entire system twice if I want to be able to select double isolation valves directly from my database table of pre-traced lines? The two valves circled above should not be included in my results.

Update after posting: It looks like this question has already been answered: https://community.esri.com/t5/arcgis-utility-network-questions/identify-isolating-valves/m-p/1649732#M5753

Is there any way to take advantage of a Connected trace or some other type of faster trace than a second Isolation? That's the main concern: avoiding two isolation traces for the same feature.
Posted 2 weeks ago

POST
Right, that's definitely an important workflow for editors. While that's applicable for typical edits to lines and valves, sometimes a pressure plane expansion can mean a slow-motion updating of features as the construction project and engineering discussions go on for weeks or months. In such rare cases where a broken boundary in GIS is going to be intentional (and of course temporary), I suppose we could create a no-edit/static version on the side before breaking the boundaries -- a snapshot while the pressure planes are still sealed -- and use it for re-tracing if an updated bulk trace were needed. Hopefully there wouldn't be a need to run an isolation trace for that small expansion/modification area under construction; for everywhere else, this once again shows the value of having a pre-traced system, so that you don't have to worry about the Default version having issues at any given moment.
Posted 2 weeks ago

POST
Glad you asked these questions. I have been using the System tier, thinking it wouldn't really make much of a difference for tracing performance. I was clearly wrong.

Tracing times:

ObjectID | "Water System" tier | "Pressure Plane" tier
723      | 18 sec              | 8 sec
771      | 19 sec              | 9 sec
11705    | 19 sec              | 6 sec
3333     | 19 sec              | 6 sec

The main reason I stuck with the System tier until now is that I was erring on the side of caution: what if our pressure plane divider valve features (the only parameter defining the condition barrier for pressure subnetworks: Is_PP_Divider is equal to Yes) were modified by a Technician and it messed up the pressure plane boundaries so they were no longer sealed? I'm pretty sure tracing would fail with "Multiple subnetwork controllers with different subnetwork names found." I suppose it's safe enough to rely on the Pressure Plane tier for isolation tracing. This drop in execution time seems worth the low risk of a rare edit made to pressure plane valves (marked with a PP_Divider field). We'll need to work out how best to handle significant edits for pressure plane expansion projects, of which we've only had one or two in the last 10 years.

As for controllers, again, glad you brought this up. I hadn't gotten around to adding more than the bare minimum to define each subnetwork on each tier. I'll keep going with adding more controllers for storage towers, as there are 3-4 more that I could add to each of the two largest pressure planes. This would likely help as well, eh? We have another treatment plant that I could add on the system tier. This will only put me in a better place for bulk tracing. Thanks!
Posted 2 weeks ago

POST
Thanks, I had to fix my input parameters in a few places, including adding "percentAlong" for my line features (midpoints) as starting points, which I mistakenly removed while experimenting. Now my same general workflow (a Python script) is working with the API for Python to insert rows into a database one at a time, and it's enhanced in that I can point to a version that is free of dirty areas.

VersionMgr = arcgis.features._version.VersionManager(url=urlVersionMgmtServer, gis=gis, flc=restFeatLyrWaterUtilityNetwork)
VersionForTracing = VersionMgr.get(version=BranchVersionName)
VersionForTracing.start_reading()
UtilNetMgr = arcgis.features._utility.UtilityNetworkManager(url=urlWaterUN, version=VersionForTracing, gis=gis)
...
...
trace_results = UtilNetMgr.trace(locations=TraceLocations, trace_type="isolation", configuration=traceConfiguration, result_types=resultTypes)
Lots to say still, but I don't want to take up your valuable time. It seems like Esri could dedicate a white paper to this topic of bulk network tracing: different strategies, when it makes sense to do so, optimization, etc.

In his reply above, @MikeMillerGIS seems surprised by the 15-30 sec execution time for my traces. Yes, this is the most depressing part. It takes equally long per trace whether I run it through the API for Python (UtilityNetworkManager) or even directly from the server at the REST endpoint page (.../UtilityNetworkServer/trace) by plugging in a trace config ID and a water main Global ID as the starting point. Maybe our geodatabase needs some fine tuning since our deployment 2 months ago?

As for skip logic, I think I'm able to handle this fine while inserting into my SQL table. My script didn't finish after ~36 hours, but it got pretty far along. The problem is that more than a few hundred traces interspersed throughout the iterations were giving results with nearly all water lines included! That slowed it down a lot, no doubt. I'll have to figure out what's going on there.

As for @MikeMillerGIS's caution that "a larger area isolates pipes that could be isolated with a different set of valves", again I think that can be resolved with my output database by simply looking at duplicates for a given input water line ObjectID and choosing the one with the longest or shortest value (depending on what makes the most sense).

I will check into the Pro SDK batch tracer. I'm open to whatever tool works the fastest. So that leads me to ask: would it really perform any better than a direct REST endpoint trace or the Python API? I suspect not, and that in the end I'm still dealing with a limitation on the server/GDB side of things. Maybe you have some final thoughts? Otherwise, I'll close this as answered, although others may find their way to this thread and have more to say.
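As an illustration of the restart-safe looping described above, here is a minimal sketch using a local SQLite cache keyed by the starting line's ObjectID. trace_line() is a hypothetical stand-in for the real UtilNetMgr.trace(...) call plus result parsing:

```python
import sqlite3

# Sketch of a restartable bulk-trace loop: each result goes into a local SQLite
# table keyed by the starting line's ObjectID, so a re-run after a dropped
# connection skips anything already traced. trace_line(oid) is a hypothetical
# stand-in that should return (isolated_line_oids, isolating_valve_oids).

def bulk_trace(line_oids, trace_line, db_path="trace_cache.db"):
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS iso_results (
                       start_oid INTEGER PRIMARY KEY,
                       line_oids TEXT,
                       valve_oids TEXT)""")
    done = {r[0] for r in con.execute("SELECT start_oid FROM iso_results")}
    for oid in line_oids:
        if oid in done:
            continue                      # restart-safe: skip completed traces
        lines, valves = trace_line(oid)   # e.g. parsed from trace_results["elements"]
        con.execute("INSERT INTO iso_results VALUES (?, ?, ?)",
                    (oid, ",".join(map(str, lines)), ",".join(map(str, valves))))
        con.commit()                      # commit per trace so a crash loses nothing
    con.close()
```

Committing per trace is deliberately conservative; batching commits would be faster but risks losing in-flight results on a crash.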
Posted 3 weeks ago

POST
Hi Mike. I do see BatchTrace has the capability to reduce trace runs by taking advantage of a "Group By" field from the output isolation starting points created by BuildStartingPoints. That may help for what I'm hoping to accomplish. First, here is the output produced with debug enabled. Secure paths are obviously obfuscated in the text below. You can see it's not 15 sec this time but 30 sec.

I currently have my own arcpy trace routine going. It's about halfway through (~40,000 traces) since starting yesterday. Maybe it'll finish in under 36 hours with a somewhat complete set of trace results. (At least a few hundred had a giant set of results, which I skip and move on from, though I'm making a note of them.) I'll get back with more soon, if possible, but I still don't think BatchTrace is an optimal method if I want the ability to restart where I left off when a connection is broken midstream. My looping routine will check what's already entered into my SQL table and keep going, so it's pretty safe in that regard.
Start Time: Sunday, December 14, 2025 8:16:53 AM
ArcGIS Pro 3.3.3.52636
udms 3.3.4
Executing from ArcGIS Pro, 9 map(s), activeMap = True
****PARAMETERS****
Input Utility Network: Network Utility Network
Trace Locations: Iso_Pts
Result Types: ['ELEMENTS']
Trace Configuration Name or Field: Trace Config:: Isolate Lines and Valves
Expression: None
Output Folder: C:\my_network_path_here\WaterTraceIsolationResults\IsolateLinesValves_Output_2
Group Field: GroupBy
Store Summary Information on Starting Points: None
Fields to update: None
Calculate on Starting Point Features: None
JSON Result file folder: None
Aggregated GDB: None
Historical Date Field: None
Stat Table: None
Default Terminal ID: None
Code Block: None
****ENVIRONMENTS****
udms.logic.batch_trace(
    utility_network=<arcpy._mp.Layer object at 0x0000021D2A6EEC10>,
    trace_locations=<arcpy._mp.Layer object at 0x0000021D2A6EF490>,
    result_types=['ELEMENTS'],
    output_folder='C:\\my_network_path_here\\WaterTraceIsolationResults\\IsolateLinesValves_Output_2',
    summary_store_field=None,
    field_mapping=None,
    key_field='GroupBy',
    expression=None,
    trace_config='Trace Config:: Isolate Lines and Valves',
    calc_on_start=None,
    history_field=None,
    default_terminal_id=None,
    user_code=None,
)
In Path: Network Utility Network
URL: https://domain/UN_service/FeatureServer
workspace: https://domain/UN_service/FeatureServer
Path: https://domain/UN_service/FeatureServer
UN Loaded
Collecting Trace Configs
Validating the inputs
Trace Config:: Isolate Lines and Valves
Setting up parameters
Opening Data Element
Verifying calc fields
Verifying Lookup Fields and Target Tables
Getting Starting Points
Getting Trace Info
Tracing
dict_keys(['Isolate Lines and Valves', 'Isolation, PP Tier, Select Valves', 'Connected, all valves including dead-ends, mains only'])
Tracing 1/2
Trace 141_::_Isolate Lines and Valves about to be run
[{'element': 'table', 'data': [['Function', 'Network Attribute', 'Filter', 'Operator', 'Filter Value', 'Result']], 'elementProps': {'striped': 'true', '0': {'align': 'left', 'pad': '30px'}, '1': {'align': 'left', 'pad': '30px'}, '2': {'align': 'left', 'pad': '30px'}, '3': {'align': 'left', 'pad': '30px'}, '4': {'align': 'left', 'pad': '30px'}, '5': {'align': 'right', 'pad': '30px'}}}]
Trace 141_::_Isolate Lines and Valves completed in 29.701016500010155 seconds
Tracing 2/2
Trace 5706_::_Isolate Lines and Valves about to be run
[{'element': 'table', 'data': [['Function', 'Network Attribute', 'Filter', 'Operator', 'Filter Value', 'Result']], 'elementProps': {'striped': 'true', '0': {'align': 'left', 'pad': '30px'}, '1': {'align': 'left', 'pad': '30px'}, '2': {'align': 'left', 'pad': '30px'}, '3': {'align': 'left', 'pad': '30px'}, '4': {'align': 'left', 'pad': '30px'}, '5': {'align': 'right', 'pad': '30px'}}}]
Trace 5706_::_Isolate Lines and Valves completed in 32.126651099999435 seconds
udms.logic.batch_trace 67.6632496000093
Succeeded at Sunday, December 14, 2025 8:18:05 AM (Elapsed Time: 1 minutes 11 seconds)
Posted 3 weeks ago

POST
After reading through various documentation and searching the Community board, I have yet to find a summary that thoroughly explains, with full examples, what's possible for bulk tracing a UN. I'm on Enterprise 11.3, UN version 7, Pro 3.3.3. My goal is to use either the Python API or ArcPy for the following, while looping through each of our water UN's ~77,000 line features:

- Run an isolation trace on each feature, one at a time, and process the results for each trace. All I need are the "elements", no geometry.
- For each starting water line segment ObjectID, extract the isolated valve ObjectIDs and isolated line ObjectIDs, then insert them into a SQL table as comma-delimited values for easy database retrieval (e.g. "3815, 3940, 9914, 2147"). In this way, I merely run a database query instead of executing an actual trace in the front-end application/script: "...where ObjectID In (3815, 3940, 9914, 2147)".
- This script would be run every so often (~4 times/year, maybe?).

Benefits of this approach:

- No worries about dirty areas preventing the UN trace from executing for end users. Tracing ahead of time assures me that nobody will see an error saying the trace cannot run. Our Technicians post to Default throughout the day. Even though they are in the habit of validating Default to clear out everything, including the harmless "Feature has been modified" dirty areas, they may forget, or there may be a real error that isn't resolved at any given moment.
- Lightning-fast results for the front-end application. That's for a single-level isolation alone, but with this type of arrangement I could perform a double-level isolation very quickly, which could be really beneficial in cases of a large main break so the crew knows for sure which valves to close to guarantee water flow is blocked. Double-level isolation may be rare, but it almost assures you that if the GIS data is off, at least you have a safety net for identifying critical valves to close.
- The results can now be used for other asset management scripts/workflows that would not otherwise be feasible if you were analyzing the entire system and had to execute a trace for each feature. It could take days to run continuously, which is unrealistic, when a simple database query for each line segment would require a tiny fraction of that time.

What works, what doesn't, where I lack knowledge:

Before going into specifics, my frustration is centered around the fact that it's hard to find a method that allows me to dynamically define my starting point (as the mid-point of each water main) for each iteration and then retrieve results in memory...preferably while running against a dirty-area-free version that is safe from interruptions.

ArcPy Trace "arcpy.un.Trace(...)" - currently the only method that works well enough, if not ideal. I reference a starting points file geodatabase (on C:\...) as the template. It has a single point feature. Using an UpdateCursor, I simply set the FeatureGlobalID of the current water main. It successfully completes the trace on a small scale so far, but I have to output the results to a physical JSON file, from which I then pull the "elements" properties, delete the file, and continue looping. In ArcPy I cannot seem to reference a version other than SDE.Default. I've tried appending syntax like "?gdbversion=MyUser@Domain.TraceTesting" (both with and without a forward slash before the "?") to the URL for the UN layer, but it doesn't seem to take.

ArcGIS API for Python (arcgis.features.managers module) - either I cannot get the syntax right, or even if I could, I'm not sure it'll handle the per-feature input as the arcpy trace does (using a FGDB point). I'm fairly confident my input parameters are fed in correctly with the "trace" method:

TraceLocations = [{
"traceLocationType": "startingPoint",
"globalId": GlobalID, ## example: "{288D22C3-301A-44D1-81BA-E66F094413D9}"
}]
traceConfiguration = {
"includeContainers": True,
"includeContent": False,
"includeStructures": False,
"includeBarriers": True,
...etc.
}
resultTypes=[{"type":"elements","includeGeometry":False,"includePropagatedValues":False,"networkAttributeNames":[],"diagramTemplateName":"",
"resultTypeFields":[]}]
trace_results = UtilNetMgr.trace(locations=TraceLocations, trace_type="isolation", configuration=traceConfiguration, result_types=resultTypes)

It's supposed to produce a dictionary like {"traceResults": {"elements": [...]}, "success": true}. Here's how it looks when my trace completes. With all the variations I've tried, I never see "traceResults" or "elements" returned.

REST API "requests.post(service_url, data=payload, headers=headers)" - it doesn't allow me to define a starting point dynamically while looping through my water line features. I can get it to run from the REST endpoint using the Global ID of a water valve (device), but I cannot seem to get this approach to work as explained above. Can I reference local data? I don't want to store starting points in my enterprise geodatabase, since they change all the time with continual edits to our system.

BatchTrace (Utility-Data-Management-Support-Tools) - isn't a viable solution if you're trying to handle results for each feature. In theory it sounds good, but practically speaking it's highly inefficient and unrealistic. The tracing still takes 15-20 seconds per feature, which would mean many days of running.

---------------

Here's my strategy to be most efficient: instead of having to trace all 77,000 features, what I'll actually trace will be much less - perhaps as little as 1/10 of this total. For each line traced, I'm capturing all the lines being isolated in that run. Therefore, I already know that the full group of lines is covered by a certain combination of barriers (valves). So I can then insert all those rows into my SQL table before moving on to a new isolation area, if that makes sense. I really just need to trace one line segment for each isolation area/group. A group could entail 2 lines total or 18 lines, but it cuts down on a lot of processing.
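The dedup strategy in the last paragraph can be sketched like this. run_isolation() is a hypothetical stand-in for the actual isolation trace; real code would also write each zone's rows to the SQL table:

```python
# Sketch of the "one trace per isolation zone" strategy: once a line shows up
# in any earlier trace's isolated set, it never needs its own trace.
# run_isolation(oid) is a hypothetical stand-in returning
# (isolated_line_oids, isolating_valve_oids).

def trace_zones(all_line_oids, run_isolation):
    covered = set()
    zones = []
    for oid in all_line_oids:
        if oid in covered:
            continue  # already part of a previously traced isolation zone
        isolated_lines, isolating_valves = run_isolation(oid)
        zones.append((isolated_lines, isolating_valves))
        covered |= set(isolated_lines)
        covered.add(oid)
    return zones
```

If the average zone holds ~10 lines, this cuts ~77,000 traces down to roughly 7,700, which matches the 1/10 estimate above.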
Posted 3 weeks ago

POST
Oh yes. Funny, I would have eventually realized this after digging through all of my own code, because I actually already use fetch() in all cases for retrieving HTML elements and JSON objects. Apparently, I hadn't actually been relying on the "appHtml" path in my Dojo loader file for some time. In case anyone else comes across this discussion, here's what I mean. You have a separate HTML file in your project (in this case under the /html folder) that you want to leverage for constructing a custom interface.

fetch('html/redlineAttributes.html')
.then(function (response) {
return response.text();
})
.then(function (htmlRedlineAttributes) {
createRedlineAttributesfDialog(htmlRedlineAttributes);
});

A function to create the dialog, while querying HTML elements by ID or class name...

function createRedlineAttributesfDialog(htmlRedlineAttributes) {
const parser = new DOMParser();
const renderedHtml = parser.parseFromString(htmlRedlineAttributes, 'text/html').documentElement;
const redlineAttributesDialogAttachNode = document.getElementById("redlineAttributesDialogAttachNode");
const redlineAttributesDialogContainer = document.getElementById("redlineAttributesDialogContainer");
const dialogRedlineAttributes = renderedHtml.querySelector(".dialogRedlineAttributes");
    ...etc.
}

In my case I display the dialog in a separate function.

this.showRedlineAttributesDialog = function (curGraphic) {
appAttributes.graphicID = curGraphic.graphicPairID;
const redlineAttributesDialogContainer = document.getElementById("redlineAttributesDialogContainer");
const redlineAttributesDialogAttachNode = document.getElementById("redlineAttributesDialogAttachNode");
redlineAttributesDialogContainer.style.visibility = "hidden";
redlineAttributesDialogContainer.style.display = "block";
if (redlineAttributesDialogAttachNode.offsetHeight) {
redlineAttributesDialogAttachNode.style.top = "5px";
}
Posted 11-26-2025 12:49 PM

POST
|
Thanks, Joel. After talking to my colleagues and reading your post, I realized it's probably going to work out once I get moving into ESM. I was mixing a few things together in my mind. If I can do something like the below to replace AMD, then I should be fine.

import appPrintPDF from "./js/printPDF";
appPrintPDF.initialize();
import identifyPopupHTML from './html/identifypopup.html';

The website you provided will be a helpful reference, too.
Posted 11-26-2025 12:21 PM

POST
I have a large web application, used by most of our department, that I've been maintaining since the early versions of the ArcGIS API / Maps SDK for JavaScript. Along with looking into the Widget-to-Component transition, there is one really important concept for which I need clarification and guidance. Will my dojoConfig.js file (entire code pasted below) continue to work after 4.x? (I'm currently on 4.27...you guys move too fast for some of us!) I realize the Dojo framework itself has long been removed, but this piece continues to work and appears to be supported through the last 4.x version (4.34). As I understand it, it's basically a RequireJS remnant that I still need for defining custom paths in the AMD structure, in order to break up large amounts of code and to have separate classes and HTML templates.

Either it will become unsupported altogether in 5.x (immediately or within a few versions), or maybe there's no good way to weave this into Web Components. In either case it would leave me scrambling to find a way to reference all my files in the Visual Studio (.NET Framework) project. I asked the Community about this a couple of years ago but never got a good answer. Converting to Components alone will take a substantial amount of rewriting, so I don't want to waste any time with my approach over the next 6 months. Do I need to rearrange my code altogether, and if so, how is that best accomplished in today's modern JS design (preferably without relying on React/Angular/Vue frameworks)? I don't have much time to research these things, having just finished overseeing a double UN deployment, and I'm loaded with many other projects. I'm only knowledgeable in so many directions.

dojoConfig.js:

let locationPath = location.pathname.replace(/\/[^/]+$/, '');
window.dojoConfig = {
async: true, parseOnLoad: false, packages: [
{ name: "appJavascript", location: locationPath + "/js" },
{ name: "appJavascriptClasses", location: locationPath + "/js/classes" },
{ name: "appJavascriptChartJS", location: locationPath + "/js/chart_js" },
{ name: "appHtml", location: locationPath + "/html" }
],
has: {
"esri-native-promise": true
}
}

This is used to call JavaScript files from the "appJavascript/..." path:

define(["esri/config", "esri/geometry/SpatialReference", "esri/geometry/Extent", "esri/Map", "esri/views/MapView",
"appJavascript/buttonHandlers", "appJavascript/layersLegend", "appJavascript/measure", "appJavascript/redline",
], function (esriConfig, SpatialReference, Extent, Map, MapView,
appButtonHandlers, appLayersLegend, appMeasure, appRedline
) {

Here's a portion of my file structure to get an idea of the size:

Here's what else I know:
- The 4.25 release notes say that the Dojo loader is still included in the CDN build.
- The AMD npm package and esri-loader are deprecated at 4.29.
It seems like only a short matter of time before the Dojo loader is obsolete.
Posted 11-24-2025 07:01 AM

POST
Ah yes, so simple. Using "Connected" for the type, I then refined my barrier conditions, and for the output I specified that I only want certain valve asset groups/types. The final selection was only a set of valves and nothing else. Seems like this accomplishes the goal. Thanks!
Posted 11-14-2025 12:04 PM

POST
Thanks, Patrick (and Mike). You both are of course right in the overall sense. I cannot argue with what you're saying and how isolation works in this case. I seem to have forgotten that "isolation" traces are about the network as a whole (from sources outward to dead-ends, essentially). In real life, though, I'm assuming our Field Operations staff would want to turn off this cul-de-sac valve along with the others to fully isolate the pipe segment(s) involved, but that's an internal question I have for our engineering and operations management.
Posted 11-14-2025 12:03 PM

POST
|
It selects the valve when Include Isolated Features is checked, so the trace is going right past this device feature. It's just odd, because this is one of over 21,000 valves with Asset Group = System, Asset Type = System. I didn't edit this feature. It shows a last-modified date the same as nearly every other valve: the day we migrated over last month. That's why it has me nervous. If this seemingly random valve is overlooked, then why are the others getting selected in my spot checks so far?
Posted 11-14-2025 07:38 AM