POST
@EdwardBlair you'll want to migrate the feature-linked annotation class to use global IDs.
POST
@Andy_Morgan if you want to get the nested trace groups then you'll need to do secondary processing on the results. The downside of tracking which lines you've already isolated is that you can miss nested groups. If your deadline is in two days, I'd be prepared to roll out with this limitation and come back to a more precise solution in the next round. Question: do features that belong to a nested outage need to be excluded from higher-level outages? If the answer is yes, then you're looking at a basic network partitioning scheme, and you'll want to do the secondary processing on the isolation trace results. However, because those nested groups may have their own nested groups, it's not enough to just look at object IDs; you need to do an analysis of the network graph. This leads you down the path of needing to identify barriers. Once you get that working, it means you can analyze an entire pressure zone using a single trace. I'd recommend you run the isolation trace first, get the results, then do all your secondary processing on that result set. This would let you ...
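To illustrate the secondary-processing idea, here's a minimal, hypothetical sketch (not Utility Network API code): assuming you have already flattened the isolation trace results into a plain edge list of feature ID pairs plus a set of barrier feature IDs, removing the barriers and taking connected components of what remains yields the isolation groups, nested ones included. The `isolation_groups` helper and its inputs are invented for this example.

```python
# Hypothetical sketch: deriving isolation groups from flattened trace results.
# `edges` is a list of (feature_id, feature_id) connections and `barriers` is
# a set of barrier feature ids -- neither comes from the real trace API as-is.

from collections import defaultdict

def isolation_groups(edges, barriers):
    """Return connected components of the network graph after removing
    barrier features. Each component is one isolation group."""
    graph = defaultdict(set)
    nodes = set()
    for a, b in edges:
        nodes.update((a, b))
        if a in barriers or b in barriers:
            continue  # barriers partition the graph
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for start in nodes - set(barriers):
        if start in seen:
            continue
        stack, component = [start], set()
        while stack:  # iterative depth-first search
            n = stack.pop()
            if n in component:
                continue
            component.add(n)
            stack.extend(graph[n] - component)
        seen |= component
        groups.append(component)
    return groups
```

With edges 1-2-3-4 and feature 3 as a barrier, this yields two groups, {1, 2} and {4}, which is the partitioning behavior described above.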
POST
@Joshua_Moreno Are you doing this in a local geodatabase or an enterprise geodatabase? Can you show a screenshot of enabling the network topology succeeding (with the parameters) and update subnetwork failing? If you're enabling the network topology against an enterprise service, it's possible you're doing this in a version other than default. It's also possible that your Enable Network Topology GP tool is set to "only generate errors", which will create topology errors but not enable the network topology for tracing.
POST
@Andy_Morgan if you were already using the ArcGIS API for Python then there won't be much of a difference, but if you were calling the Trace GP tool, there should have been a noticeable difference because of the GP overhead. In terms of parsing the results to differentiate, it makes the code quite a bit more complex (at the current release) but would add less than a second to your overall processing time. If you think the time investment and complexity is worth it, I can point you in the right direction, but it isn't for the faint of heart. We have some items on our roadmap to make identifying the barriers of a trace easier, but at this point you'd need to analyze the network in memory to determine this (a topic that there are usually presentations on at the Developer and Tech summit).
POST
@VenkataKondepati I'm going to focus on the guidance portion of your question. I have concerns about modeling an unbalanced electrical network using a hierarchical electric domain, but since you are no longer with that company it doesn't seem appropriate to discuss it here.

If you maintain an error-free network, you should be able to validate the network topology using the full extent on a nightly basis for a network of that size, as long as you have reasonable editing workflows in place that aren't causing a large number of dirty areas to be created. If you carry a large number of topology/subnetwork errors in default, that will make things more challenging. This is why it is important to have a plan to resolve errors during the implementation, where they can be addressed through automation, configuration, etc. In particular, if your data is not clean and you have Manage IsDirty set to true on your subnetworks, this can introduce significant cost to the validate network topology operation as it attempts to discover and mark subnetworks as dirty. If you do have a large volume of dirty areas, you could use some of the tools in the Utility Data Management Support tools to create quad trees and validate the topology using those polygons.
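The quad-tree idea above can be sketched in a few lines. This is illustrative only: the actual support tools generate polygon features, while this hypothetical `quad_tiles` helper just quarters an `(xmin, ymin, xmax, ymax)` extent to a fixed depth, producing the tiles you would validate one at a time.

```python
# Illustrative sketch: splitting a network extent into quad-tree tiles so
# that Validate Network Topology can be run against smaller areas. The
# function name and tuple-based extent are assumptions for this example.

def quad_tiles(extent, depth):
    """Recursively quarter an (xmin, ymin, xmax, ymax) extent `depth`
    times and return the list of leaf tiles."""
    if depth == 0:
        return [extent]
    xmin, ymin, xmax, ymax = extent
    xmid, ymid = (xmin + xmax) / 2, (ymin + ymax) / 2
    quads = [
        (xmin, ymin, xmid, ymid), (xmid, ymin, xmax, ymid),  # bottom half
        (xmin, ymid, xmid, ymax), (xmid, ymid, xmax, ymax),  # top half
    ]
    return [tile for q in quads for tile in quad_tiles(q, depth - 1)]
```

A depth of 2 turns one extent into 16 tiles; each level of depth multiplies the tile count by four, so you can tune the depth to keep the number of dirty areas per validate manageable.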
BLOG
This article was written to help administrators, IT staff, or other technical staff supporting a utility network deployment understand how to interpret log information specific to its workflows. This article is very technical in places and requires a strong understanding of the concepts and technologies used to describe the utility network. To understand this article you must first have read, and be familiar with, the Utility Network Diagnostics article. That article describes how to capture log information for different utility network operations. This article builds on those concepts by providing tips for interpreting these logs and showing how performance metrics can be extracted to assess the performance of your system. These logs are not a replacement for tools like ArcGIS Monitor, but they allow you to dive in and investigate potential performance bottlenecks.

Why is this important? Knowing how to precisely measure and assess performance is an important skill, because it allows you to quantify the impacts of your data modeling or architectural decisions. You can find a collection of resources that discuss the impact of these decisions in the conclusion of this article.

Note: This article displays screenshots of different charts and logs but does not include any sample data to work with. The charts use different data sets and scales and should not be used to draw any conclusions; you should perform testing with your own data using the techniques described in this article. The logs shown in this article are from ArcGIS Enterprise 12.1, so newer and older releases may show different messages. The specific wording and structure of the log files change with each release, which is why it is important to understand how to interpret logs instead of memorizing specific messages.

Another important takeaway from this article is to think about how logs are used. They are an important troubleshooting tool, and they can also demonstrate the impact that data modeling decisions, configuration, or even architecture can have on performance and the end user experience. Server logs provide detailed information that allows you to measure the performance of specific operations, and in many cases also allows you to correlate performance impacts to specific subnetworks or to the number of features affected by an operation.

For an example of why this is important, consider the situation where a user complains that subnetworks are updating too slowly. A typical performance assessment would run the update subnetwork operation against all the subnetworks in the system, using a single process, to create a graph of the response times returned by the server. While it is interesting to see the distribution of response times, it does not provide any insight into why some responses are better than others, or what requires further investigation. Instead, this approach treats each response as equal, which in the case of subnetworks is not true.

By parsing the log files, you can produce graphs that offer more actionable insight. One easy way to identify problematic subnetworks for review is to create a graph that pairs the name of each subnetwork from the logs with its response time. This allows you to find a specific subnetwork in the data for investigation while also giving you a sense of how the overall system is behaving, making it easier to identify your best and worst performing subnetworks, as well as any outliers. With a little more work, you can also parse the number of features in each subnetwork from the log files. This allows you to create a chart that uses the number of features in each subnetwork on the X-axis, making it easier to see the correlation between network size and response time.
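As a minimal sketch of this log-parsing approach, the snippet below pairs subnetwork names with response times. The log line format here is invented for illustration; adapt the pattern to the actual messages in your UpdateSubnetworkLog, which vary by release.

```python
# Hedged sketch: pairing subnetwork names with update times for charting.
# The "Subnetwork '<name>' updated in <n> seconds" format is an assumption,
# not the real log wording -- match the regex to your own log files.

import re

LINE = re.compile(r"Subnetwork '(?P<name>[^']+)' updated in (?P<secs>[\d.]+) seconds")

def parse_update_times(log_text):
    """Return [(subnetwork_name, seconds), ...] ready to plot."""
    return [(m["name"], float(m["secs"])) for m in LINE.finditer(log_text)]

sample = """
Subnetwork 'RMT003' updated in 12.50 seconds
Subnetwork 'RMT007' updated in 0.84 seconds
"""
```

Feeding the resulting pairs into any charting library gives you the name-vs-time graph described above, with outliers immediately attributable to a specific subnetwork.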
This tells you that, most of the time, the subnetworks that take longer to update are larger. It also lets you see that a few smaller networks are taking longer than expected, allowing you to focus your attention on them. You can take this approach even further by parsing more detailed information from the logs to see the timings associated with specific operations within each subnetwork. This lets you see which operations, and which subnetworks, are taking the most time, and investigate what steps can be taken to improve those specific operations.

In this article you will learn the key considerations when interpreting the times in the following logs:

- Tracing uses the TraceLog
- Update Subnetwork uses the UpdateSubnetworkLog
- Export Subnetwork uses the ExportSubnetworkLog
- Enable Network Topology and Validate Network Topology use the BuildLog

Before discussing the individual logs, let us discuss the importance of using these tools to help isolate and troubleshoot performance issues.

Isolating performance

When troubleshooting a performance issue in ArcGIS Enterprise, the issue could be caused by a wide variety of architectural, configuration, or even data issues. If the issue you are investigating is focused on the performance of a single utility network operation that doesn't require versioning, a good way to reduce the number of dependencies and isolate the issue is to first copy the utility network to a new mobile geodatabase. By working with a local mobile geodatabase, you can focus on the performance of the utility network itself without the additional variables associated with the architecture of the ArcGIS Enterprise deployment. This allows you to focus on a single-user, non-versioned workflow.

The performance of the utility network in a local environment is not equivalent to one in an enterprise environment. However, it does allow you to determine whether a performance issue is caused by the data and configuration of the utility network, or by the architecture and configuration of the environment. If the performance issue is not reproducible in the local environment, you can still use the information gained from local tests to inform your investigation in the enterprise environment. You can compare the times and steps of the detailed logs between the local geodatabase and the enterprise geodatabase to see whether any step is taking longer; this could indicate an issue with the database that you can dig into by evaluating the execution plans using the tools available for your DBMS. You can also compare the time for each operation reported by the ArcGIS Server logs with the response times reported by the client. Large gaps or inconsistent response times between the two could indicate a communication issue between the client and server.

The graphics below show what timings are captured by different logs. Measuring client response time includes the total time spent completing the request at the application, server, and data tiers of the architecture. This is often not useful for identifying the underlying cause of a performance issue, but it remains important for a full understanding of the context and workflow needed to reproduce the issue. The ArcGIS Server logs allow you to focus on the time spent in the server and data tiers, while also providing the context for each request in addition to the time it took. Reviewing performance at the database tier also provides useful insight into performance issues, especially when they are database-related, but lacks the context of the client or application tier. Copying data to a local mobile geodatabase and capturing logs using the Diagnostic Monitor in ArcGIS Pro is a powerful way to isolate performance issues, because it allows you to control the workflow and context of each operation while measuring its performance with as few dependencies as possible. You can see a diagram of the scenario below.

Now that you understand how and why to isolate performance issues when testing, we will look at how to analyze the utility network logs for performance information.

Trace Log

The Trace Log has four important sections:

- Environment
- Trace Parameters
- Steps and times
- Network index statistics

When assessing the overall performance of a system, it is common to look at the overall time each trace took compared to the number of elements returned. The most common test scenario is to run a subnetwork trace for each subnetwork in the utility network, then measure the time each trace took versus the number of elements in each subnetwork. A trace on the same subnetwork will return different performance numbers depending on whether it is a user-configured trace or the trace used by the export subnetwork or update subnetwork operations. When measuring the performance of traces, you should look at the performance of subnetwork traces using your standard configuration during update subnetwork. If you plan to use export subnetwork, you should also measure the performance of the trace during export subnetwork using your expected configuration. When a particular subnetwork is underperforming, you can then look at the time associated with the individual steps for each subnetwork versus the number of elements in each subnetwork. When looking at the various steps associated with a trace operation, you will start to understand why certain traces take longer, and the significant impact the configuration of a trace has on overall performance. As an example, if you use a chart like this to analyze the performance of trace after adding functions or specific result types, you will be able to measure the performance cost of those specific changes, with each displayed as its own operation with an associated cost.

Environment and Trace Parameters

The configuration section helps you understand the context of the trace. It communicates the type of trace, the version, the starting points, and the trace configuration, along with the result types specified for the trace. All of these parameters affect the behavior of the trace; changing any one of them would produce a different result and could affect performance.

Steps and times

This section of the log includes detailed information about all the operations performed during the trace, and how long each took. Some steps also include information about how many features were involved in that step of the trace. When analyzing the time, first look at how much time the total trace took, then look at the size of the results. The number of elements traversed and returned is at the end of the steps and times, with the following lines: The Total Trace Time is easy to understand; this is the total time taken to perform the trace. The number of elements discovered requires a bit more consideration, as it includes the number of elements traversed along with the number of elements in the result. This is important because many traces traverse many elements yet return only a subset of them as results. Even though a small number of features may be returned, the trace may require many features to be analyzed. Examples include upstream or downstream traces, traces with a filter barrier configured, and traces run during update subnetwork to discover features in multiple subnetworks. Because of this, traversed elements are often a better predictor of performance than the result size.
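A hedged sketch of pulling that summary out of a TraceLog follows. The exact wording of the "Total Trace Time" and element-count lines differs between releases, so the patterns below are placeholders to adapt to your own logs; the `trace_summary` helper is invented for this example.

```python
# Illustrative sketch: extracting total trace time plus traversed/returned
# element counts from a trace log. The line formats are assumptions.

import re

def trace_summary(log_text):
    """Return the trace totals; traversed elements are usually a better
    predictor of cost than the number of elements in the result."""
    time_m = re.search(r"Total Trace Time:\s*([\d.]+)", log_text)
    trav_m = re.search(r"Elements traversed:\s*(\d+)", log_text)
    res_m = re.search(r"Elements in result:\s*(\d+)", log_text)
    return {
        "seconds": float(time_m.group(1)) if time_m else None,
        "traversed": int(trav_m.group(1)) if trav_m else None,
        "returned": int(res_m.group(1)) if res_m else None,
    }
```

Charting `traversed` against `seconds` across all subnetworks gives a more honest picture of trace cost than result size alone, for the reasons described above.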
You will also want to review the individual steps and their times during the trace. These details show you where the time is spent.

Network index statistics

The network index statistics let you know how much network information was loaded from the database during the analysis. These logs are typically only used by support and development to diagnose specific issues. This section contains the following statistics:

- Topology table statistics – a summary of how many rows were read from the network topology.
- Associations table statistics – a summary of how many associations were read.
- Weight engine statistics – a summary of how many network attributes were read.
- Memory manager statistics – a summary of the memory used.

If you do choose to dig into the statistics in this report, you may notice that not all network attributes are reported in the Weight engine statistics section. This is because network attributes stored in-line are stored inside the topology tables and are included when the connectivity for the feature is accessed from the database. Each out-of-line network attribute read from the network index has a small cost associated with it, usually no more than a few milliseconds for a small network. However, when many out-of-line network attributes are read, or when the network is much larger, the cost can become noticeable. This is why you should consider storing the network attributes needed by most of your traces in-line, especially those referenced by your subnetwork definition. Why aren't all network attributes stored in-line? Because there is a limited amount of storage available for in-line network attributes, you must decide which attributes are most important and ensure those are stored in-line. You will also notice that network attributes stored in-line do not appear in the network index statistics.

When comparing results between two tests, you can also look at cache misses to determine how much information was available in memory (cached) as opposed to read from the database. When comparing performance between different traces or operations, it is important to pay attention to the number of cache misses: a trace run entirely from memory (hot cache) will perform better than the same trace that must load connectivity information from the database (cold cache). There is not much you can do to control this in user workflows, but it is important to consider when setting up a consistent testing methodology so you can accurately compare the results of different tests. The most conservative and consistent way to measure results is to ensure that all tests are performed against a cold cache.

Update Subnetwork Log

When assessing the performance of update subnetwork, start by looking at the Update Subnetwork Log created during the update subnetwork operation. You will also often want to review the Trace Log associated with each subnetwork, since the trace can account for most of the time spent updating a subnetwork.

Note: When reviewing the Trace Log for update subnetwork, you may notice the steps and times are different than if you just run a trace. Depending on your network configuration, you will see more time spent finding elements in multiple subnetworks (for networks with propagation), time spent retrieving geometry to calculate the subnetwork line, and time spent calculating functions for the aggregated line.

Like the Trace Log, the update subnetwork log has four sections:

- Environment
- Subnetwork Parameters
- Steps and times
- Network index statistics

When assessing the performance of update subnetwork, consider the following pieces of information:

- How long did the subnetwork take to update?
- How large was the subnetwork being updated?
- How many features were updated?

The first two are easily discovered in the log; the last requires reading the log to find out how many features changed. When assessing overall system performance, you will typically look at the amount of time it takes to update each subnetwork in the system versus the number of features in the subnetwork. When performing this analysis you can consider three different scenarios:

- How long does it take to perform the first update subnetwork operation?
- How long does it take to update a subnetwork when nothing has changed?
- How long does it take to update a subnetwork when a reasonable number of features have changed?

You will typically want to focus on how much time it takes to run update subnetwork against a reasonable number of edits, since this is what users will experience in their daily workflows. The first update subnetwork is important to consider because it is the most time-consuming and must be performed when the system is deployed. The no-change scenario is interesting because it is the best-case scenario for performance. When you find a subnetwork that is underperforming, look at the time of individual steps to identify whether a particular step takes most of the time. If you compare the time spent running a trace during an update subnetwork operation with a normal subnetwork trace, you will usually find that the trace run during update subnetwork takes longer. This is because update subnetwork does added work to load geometries to aggregate, calculate summary functions, and in some cases find elements in multiple subnetworks.

Environment and Subnetwork Parameters

When reviewing the configuration section of the log, pay close attention to the following configurations:

- Version Name
- Edit Mode
- Tier Name

The version name and edit mode are important because the behavior of update subnetwork, and the time it takes, can differ depending on the edit mode used and whether the operation happened in default or a named version. You can learn more about these considerations in the Understanding Subnetworks: Edit Mode article. In short, when the edit mode is set to 'with events' you incur added performance costs from attribute rules, and when the edit mode is 'without events' in a named version, you may not be updating all the features in the subnetwork. It is also important to consider the tier name when looking at performance, because the trace configuration of the tier controls the behavior of update subnetwork with regard to creating or updating the subnetwork line, network diagrams, etc.

Steps and times

When looking at the steps and times, you are primarily concerned with the following sections:

- Trace
- The various update steps (Connectivity, Content, etc.)
- Managing the subnetwork line
- Managing the network diagrams
- Total

Look at the Trace to see how much time the trace took and, most importantly, how many features were discovered to be part of the subnetwork. Next, look at how much time was spent updating the various attributes in the database that track the persisted subnetwork information (subnetwork name, is connected, etc.). The amount of time spent managing the subnetwork line and network diagrams is usually relatively small, but when they take a significant amount of time you may need to review the configurations for those items. The Total line tells you the total time it took to update the subnetwork.

Network index statistics

The considerations for reviewing the network index statistics for update subnetwork are the same as for the Trace Log.

Export Log

When assessing the performance of export subnetwork, you are considering three things:

- What result types, attributes, etc. were exported?
- How long did it take to gather all the information requested for export?
- How long did it take to generate the file?

To answer these questions, you will primarily look at the Export Log.
There is also a TraceLog generated for the trace that runs as part of the export subnetwork operation, if you need a more detailed breakdown of the time spent during that step. The Export Log has five sections:

- Environment
- Subnetwork Parameters
- Export Parameters
- Steps and their times
- Network index statistics

When assessing the overall performance of export subnetwork, you want to compare the amount of time it took to run export subnetwork with the number of features being exported. However, unlike the TraceLog and UpdateSubnetworkLog, the Export Log does not include a count of how many features were returned by the trace; that count can be extracted from the TraceLog. When you find a subnetwork underperforming during the export operation, look at where the time is spent in the export log. If most of the time is spent on the trace, review the trace log. When doing this, it is often useful to compare the time spent during the trace in export with the time spent during a normal subnetwork trace (one that does not include any result types or functions). This approach lets you identify how much time the trace in export subnetwork spends getting each result type (connectivity, feature elements, etc.) along with how much time was spent running the trace itself. The trace during export will always take longer than a regular trace because it must read additional information from the database. Looking at the details of the trace log during export lets you see how much time is spent getting each result type. This is why it is important to export only the attributes and other information that are necessary, because the cost of exporting unnecessary information can be high.

Environment and parameters

Export subnetwork has many options that control what you can export. However, the more information you include in the export, the longer the trace used to gather that information will take. The more information you include, the larger the files will be and the longer it will take to generate and download them. You can see what information a user chose to include in their export by looking at the export parameters section of the report. This lets you see which result types were included, along with how many network attributes, result fields (for features), and related record fields (for related records) were selected. Including many attributes from features and related records requires added queries to the database, which can add time to the trace, and selecting many attributes can drastically increase the file size. Including all the network attributes for a subnetwork can double the file size and the amount of time it takes to export the subnetwork; including attributes from many different tables has an even greater negative effect on performance.

Steps and times

There are fewer steps to analyze in the export subnetwork log. In most cases, the trace is the highest cost during export subnetwork. However, if you see a large amount of time spent on the Process/Write JSON steps, this is an indicator that the file is large and is taking a long time to serialize, download, and persist.

Network index statistics

The considerations for reviewing the network index statistics for export subnetwork are the same as for the Trace Log.

Build Log

The format of the build log is different from the rest of the utility network diagnostic logs because it is designed to be an incremental log generated during potentially long-running sessions. Because of this, each line in the build log reports the amount of time it took to complete the step, the total amount of time taken up to that point, and the amount of memory used at that moment.
The same log file format is used for all three build operations:

- Enable Network Topology
- Disable Network Topology
- Validate Network Topology

Because of this, you may notice that the numbering of steps in some logs appears to skip certain steps; not all steps apply to all operations. When reviewing the logs, focus on the following sections:

- Environment
- Steps and their times
- Build network setup
- Post build processing
- Network index statistics

When evaluating performance, consider what kind of build is occurring, how many network attributes are processed, how much memory and disk space is available, whether any analysis was performed to identify subnetworks affected by the validate (post-build processing), and how many features are processed. Most of this information is available in the environment and build network setup sections of the log.

For Enable Network Topology and Disable Network Topology, you are primarily concerned with throughput, memory usage, and disk usage. The more the build can rely on memory, the faster it will go, but for large datasets this is not realistic. In those cases, the build will start writing information out to disk, at which point you need to ensure that the configured disk is fast (ideally an SSD) and that there is enough disk space to hold the temporary files.

For Validate Network Topology, you are primarily concerned with the total time it took to build the network, since Validate Network Topology is typically called by a user and you want to minimize the amount of time they are waiting. If you notice a large amount of time spent on post-build processing, familiarize yourself with the Understanding Subnetworks: Status article. This behavior is controlled by the subnetwork definition for each tier in the network and can be modified even after you've deployed your utility network. Utility networks configured to maintain a status field on their subnetworks must perform post-build processing during Validate Network Topology to find the subnetwork(s) affected by the validate so they can be marked as dirty.

Environment and build network setup

The extent validated is shown in the environment section of the build log. The extent is only meaningful when evaluating Validate Network Topology, because Enable Network Topology and Disable Network Topology always run on the full extent of the network. The build network steps, along with the name of the log file, show the type of build performed. You can also see how many network attributes, how much memory, and how much disk space were available when the process started. If the amount of memory used during the build exceeds the amount of memory available, the process will start writing to disk and become slower.

Steps and their times

There are many steps during the network build process, and while they are not all described here, keep the following questions in mind while reviewing the logs:

- How much information was processed during this step?
- How much information was created during this step?
- How much time/memory did this step use?

Each step typically reports the total amount of time it took to complete in the last message for the step. You can find the total time the entire operation took by looking at the last line of the log file. To identify the amount of memory used, compare the total memory reported in the first log message for the step with the total memory reported in the last log message for the step.

Network index statistics

Because the network build process populates the network index, this section can be interesting for understanding how much information the system tables read, wrote, or created during the process. However, there is not much you can do to influence these numbers once you have deployed a utility network.
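That first-message/last-message memory comparison is easy to script. The build log line format below is invented for illustration, so match the pattern to your actual log; the `memory_delta_per_step` helper is an assumption, not a real tool.

```python
# Hedged sketch: per-step memory growth from an incremental build log.
# Assumes each line carries a step number and a running memory figure;
# the "Step N ... memory used: N MB" wording is made up for this example.

import re

STEP = re.compile(r"Step (?P<step>\d+).*memory used: (?P<mb>\d+) MB")

def memory_delta_per_step(lines):
    """Map step number -> (first_mb, last_mb, delta_mb)."""
    first, last = {}, {}
    for line in lines:
        m = STEP.search(line)
        if not m:
            continue
        step, mb = m["step"], int(m["mb"])
        first.setdefault(step, mb)  # keep the first reading per step
        last[step] = mb             # overwrite so the last reading wins
    return {s: (first[s], last[s], last[s] - first[s]) for s in first}
```

Steps with a large delta are the ones approaching the memory limit, after which the build spills to disk and slows down.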
If you are early on in a project, you will be able to see the impacts of how many network attributes you have and can use the opportunity to ensure that you require all the network attributes that in the model you are building. If you do not need the network attributes for any workflows, and you are early in a project and can still remove them from your model, consider doing this. You can always add a network attribute to a network, but once it is deployed, it cannot be removed. If you are considering whether to continue to model related records as related records, or to model then as nonspatial objects with connectivity and/or containment, you will be able to see the time it takes to build the network to incorporate those additional nonspatial objects into the network. Conclusion Now that you have read this article, you should be familiar with how to interpret the logs for the four major operations of the utility network. You can begin to run performance tests and interpret the results at a granular level. As you design your architecture and make important data modeling decisions, you can measure the impact of these decisions on your performance. For a more systemic approach to capturing and measuring performance numbers, you may want to use a tool like Extract Log Files to combine logs in a database. The graphics in this article were created using an approach where performance numbers were extracted from each log file using a regular expression and used to create visualizations. These log files are important because they give a precise measurement of the amount of time that the server is performing specific operations. They are extremely valuable for assessing a single operation; however, they do not provide the whole picture. Performance analysis must be performed holistically, so it considers not only the performance of the utility network, but the impact that the entire architecture has on performance as well as how the system performs under load. 
For examples of a more holistic approach to testing and design, visit the ArcGIS Architecture Center. For examples of how modeling related records can affect performance, refer to the Modeling related data in a utility network article. For examples of how the edit mode configuration of your subnetworks can affect update subnetwork performance, read the Understanding Subnetwork Edit Mode article. For examples of how the status management configuration of your subnetworks can affect validate network topology performance, read the Understanding Subnetwork Status article.
2 weeks ago
BLOG
One of the unique features of the utility network is its ability to model connectivity using a combination of spatial and nonspatial objects. This capability has been used by many customers to create their digital twins because it allows them to include detailed information about network assets in the connectivity model. This article describes several of the most common techniques for modeling these assets, and the pros and cons associated with each technique. You can find a presentation discussing the approaches taken by electric utilities in the Electric Utilities: Deep Dive presentation from the ArcGIS Solutions team.

There are different patterns used to capture these details, and when customers are implementing their models, they often ask what the best practice for capturing these details is. Start by looking at your requirements. Downstream systems expect more details to be stored in GIS to meet the evolving requirements of Operations and Engineering. Locations that contain equipment critical to operating the network are now being modeled at a higher level of detail to meet those requirements. Instead of modeling a station (pump, substation, etc.) as a single point on the map, all the major equipment required to analyze and operate the real-world network is stored in the GIS as separate features that are connected to the network model used by the digital twin. This makes sense for these larger structures, but we can also take this approach down to any location where multiple pieces of equipment are represented by a single point on the map. If you have an apartment complex with a meter bank, do you need to draw every customer meter and pipe? Is it sufficient to continue to represent the meter bank as a point with all the customers related to it? The truth is there is no single best practice; the answer depends on your requirements and on the type of information being modeled.
What we will do in this article is explain the different approaches and provide the pros/cons of each so you can draw your own conclusions based on your specific situation. It is important to note that most projects use a hybrid approach in their implementation, choosing different patterns based on the type of location. Each of these approaches models every significant location of the network using a feature, typically a device. The difference is how the individual components of that location are modeled.

- Related records – The details of the location are stored in non-network tables that are related to the device.
- Nonspatial content – The details of the location are represented as nonspatial objects that are contained within the device.
- Nonspatial connectivity – The details of the location are represented as nonspatial objects that are contained within the device and connected with each other, and connectivity passing through the device passes through all its content.

With the definitions out of the way, let's look at the pros and cons of each approach, with examples.

Related records

The related records approach is the way this challenge has traditionally been solved using GIS. The GIS would show a single point on the map, like a substation or a transformer, and the details about the equipment at that location would be shown as related records in the GIS. These related records are versioned, non-network features that exist in the same geodatabase as the features in the utility network. Below you can see a picture of a Transformer (Electric Device) with a related Distribution Transformer Unit (Transformer Unit).

Note: The pros/cons and alternative approaches outlined in this document are meant to discuss the representation of physical assets in the field, not related data like inspection records.
Pros

The benefit of continuing to model features this way is that it keeps the amount of data stored in the network to a minimum, which means tracing and validating are faster, and it doesn't impact the way you manage your network data. If you're worried about how to extract these details for network analysis, there is even an option to include specific related records in your network exports. Additionally, because related records do not belong to the network, you don't need to worry about edits made to these features impacting the network topology.

Cons

However, this benefit is also a drawback. Because related records are not part of the network, they are not included in any of the network analysis or validation that the utility network performs. The network rules and restrictions you rely on to ensure the data is correct do not apply to related records and must be enforced using other mechanisms like relationship rules, data reviewer checks, etc. Additionally, exporting values from related records is slower than exporting network attributes, because network attribute information is included in the trace results automatically while information from related records must be queried from the tables. Finally, because related records are not part of the utility network, there are no system-maintained fields that describe their association status or subnetwork information. This information can only be determined by looking at the related network features.
| Pros | Cons |
|---|---|
| Editing workflows are not impacted | Related records cannot be included in analysis |
| Adds, deletes, and updates to related records do not create dirty areas | Related records are not part of the subnetwork |
| Related records can be exported as related features | Exporting related records is slower than if they were content |
| | Locating and analyzing units spatially requires additional work |

Nonspatial content

Modeling additional details using nonspatial content involves migrating all the important related records to nonspatial objects in the utility network (junction objects and edge objects), then turning the corresponding relationships into containment associations. This allows you to turn tabular records into nonspatial network objects that can be included in analysis and exports, and can participate as content of a subnetwork. Below you can see a Medium Voltage Transformer (Electric Device) that contains an Overhead Single Phase Transformer (Electric Junction Object).

Pros

But what does it mean to participate as content of a subnetwork? Features that are discovered while tracing a subnetwork are part of the connectivity for that subnetwork. Features that have an association with one of those connected features can be included in that subnetwork as content, containers, and structures. This allows associated features to appear in the subnetwork when traced, to be included in calculations for that subnetwork, and to be easily exported with the subnetwork. Nonspatial content is also validated and included in the network, so it benefits from all the rules and configuration of the utility network. The workflows and tools for creating and maintaining nonspatial objects as content are different from those for maintaining related records, but they are similar enough that they shouldn't pose a significant additional editing burden.

Cons

The most obvious downside to using nonspatial objects instead of related records is that you are now including more features in your network.
This means tracing and validation have more features to process, so the more features you manage in your network, the longer these operations take. You can mitigate the performance impact on tracing by choosing whether you want content, attachments, and structures in a utility network to be included in your trace results, using the Include Containers/Content/Structures options in the trace configuration. Likewise, you can control this behavior in update subnetwork in two ways. First, you can choose whether you want to include associated data in the trace, using the same options in the Subnetwork Trace Configuration. Second, you can choose whether you want update subnetwork to populate the subnetwork name field on these features using the Update Subnetwork Policy.

The next thing to be aware of is that changes to these nonspatial objects that affect the network must be validated. You need to consider the workflows you have for editing these nonspatial objects and be aware of their impacts. If you have field crews that are installing or replacing thousands of nonspatial objects every day, you'll need a process in place to validate those edits.

Finally, because nonspatial content is not included in the connectivity of the network, it cannot influence the results of a trace. You can't stop a trace when a device contains a nonspatial object in a particular status (open/closed); only the features that are discovered through connectivity can affect the trace.
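The Include Containers/Content/Structures options above correspond to boolean flags in the trace configuration. As a rough sketch, the JSON sent to the REST trace operation might look like the following; the property names follow the utility network REST traceConfiguration, but you should verify the exact names and availability against the REST API reference for your server version.

```python
import json

# A minimal sketch of trace-configuration flags controlling whether
# associated features are returned with the trace results. Property
# names are assumed from the utility network REST traceConfiguration;
# confirm them for your server version before relying on this.
trace_configuration = {
    "includeContainers": True,   # return containers of traced features
    "includeContent": True,      # return content of traced containers
    "includeStructures": False,  # skip structural attachments
    "includeBarriers": True,
}

payload = json.dumps({"traceConfiguration": trace_configuration})
print(payload)
```

Turning these flags off is the simplest lever for reducing trace response size and time when the associated data is not needed for a given workflow.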
| Pros | Cons |
|---|---|
| Workflows are similar to maintaining related records | Adds, deletes, and certain updates create dirty areas |
| Included in functions and analysis | Cannot affect traversability of a trace |
| Validated as network features | Validates require additional time to identify and validate content |
| Easily exported as subnetwork features | Tracing and update subnetwork performance can be affected |
| | Locating and analyzing units spatially requires additional work |

Nonspatial connectivity

The final method of representing these additional details is to use nonspatial objects that are part of the connectivity model. This is done when you need to control the tracing and analysis of the network using features that aren't practical to represent and maintain spatially. The most obvious example is communication networks, where a single cable can contain thousands of connections on either end. In these situations, the approach is the same as with nonspatial content, but you also create connections between the spatial and nonspatial objects, which allows analysis of a continuous set of connectivity for the subnetwork through both spatial and nonspatial objects.

Pros

The main benefit of this approach is that the nonspatial objects can fully control the analysis of the network. This makes it possible to do more advanced, fine-grained calculations like determining demands on a particular wavelength of light or phase of electricity. In addition, it means that nonspatial objects have their own state, allowing them to act as barriers to tracing or to model a mixture of in-service, proposed, and abandoned equipment at the same location. The other benefit of modeling these details as nonspatial objects is that, because they do not have any geometry, you do not need to worry about how they will be drawn on the map, or how you will manage the cartographic clarity of a location that contains thousands of objects at large scales.
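The "own state acting as a barrier" idea above is typically expressed as a condition barrier in the trace configuration. Below is a sketch of what that might look like as REST traceConfiguration JSON; the attribute name "Device status" and the value 0 are hypothetical, and the nesting and property names are assumed from the utility network REST API, so verify them against your server's documentation.

```python
import json

# Sketch: stop the trace at any object whose "Device status" network
# attribute equals 0 (open). "Device status" and 0 are hypothetical
# values; the property names mirror the utility network REST
# traceConfiguration's conditionBarriers and should be verified.
condition_barrier = {
    "name": "Device status",     # hypothetical network attribute name
    "type": "networkAttribute",
    "operator": "equal",
    "value": 0,                  # 0 = open in this hypothetical domain
    "combineUsingOr": False,
    "isSpecificValue": True,
}

trace_configuration = {
    "traversability": {"conditionBarriers": [condition_barrier]}
}
print(json.dumps(trace_configuration, indent=2))
```

Because nonspatial junction objects carry network attributes like any other network feature, a condition barrier like this applies to them just as it does to spatial devices, which is exactly what the related-records and content approaches cannot offer.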
Cons

Even though you don't need to worry about maintaining a spatial representation of this data, one of the largest challenges of maintaining large nonspatial datasets is that you need to create processes or tools that make it easier to view and maintain the connectivity of these nonspatial objects. Creating and editing relatively simple collections of nonspatial objects, like three-phase switches, does require more work than maintaining them as related objects, but is still reasonable. Maintaining complex connectivity structures like entire stations, treatment plants, or fiber splice enclosures requires special tools and workflows.

| Pros | Cons |
|---|---|
| Included in tracing, functions, and analysis | Workflows are more complex than maintaining related records |
| Validated as network features | Data is harder to understand from a simple web or mobile application |
| Easily exported as subnetwork features | Adds, deletes, and certain updates create dirty areas |
| Content is part of the subnetwork | Validates require additional time to identify and validate content |
| | Tracing and update subnetwork performance is affected |
| | Locating and analyzing units spatially requires additional work |

Note: The performance impact of tracing nonspatial objects can be mitigated by enabling and using Cluster Keys on your network features. Once this is done, the main performance impact of this approach is that more objects are included in every network, which increases the amount of time it takes to analyze or update them.

Conclusion

Now that you've read this article, you are familiar with the three basic strategies for modeling related network data using a utility network. You are aware of the pros/cons of these strategies and know that you can use different strategies for different objects.
The following table summarizes some of the key pros/cons from the sections in this article:

| Requirement | Related Records | Content | Connectivity |
|---|---|---|---|
| Analyze with trace | | | Best |
| Trace performance | Best | | |
| Update subnetwork performance | Best | | |
| Extract network information | Yes | Yes | Yes |
| Network content validated | No | Yes | Yes |
| Validate Network Topology performance | Best | | |
| Editing experience | Best | | |
2 weeks ago
POST
@VenkataKondepati First, make sure you have a case logged with support on this. Second, are you saying that your entire dataset has 500k features, or that you have certain subnetworks with hundreds of thousands of features? Third, can you give a screenshot of the tool that is running slowly and the parameters you used to run it? "Batch validation" isn't a core product or tool, so I'm trying to figure out exactly which tool you are referring to. Is this a workflow or script you've created?
2 weeks ago
POST
The only time a network category is required for an asset type to show up in Set Subnetwork Definition is when you want to make something a subnetwork controller (in which case there are several criteria). Adding an asset type and setting the subnetwork definition both require the network topology to be disabled, so you do not need to validate the network topology for the new asset type to appear. You can find more information about these kinds of constraints on the Utility network management tasks page. The most likely issue was that the tool hadn't refreshed. If you closed and reopened the tool you should have seen the new asset type (if you were running against SDE). If you were running against a service then you would have had schema locking turned off, which means in order to see any schema changes you would have needed to restart the service.
3 weeks ago
POST
Yeah, what you really need is access to an ArcMap environment where you can just copy this data out to a file or mobile geodatabase where you can start working with it using ArcGIS Pro. If you want to preserve the exact look and feel of the annotation, you're going to need access to the binary data stored in the element, since users can make fine adjustments to the annotation in the element that aren't reflected in the properties of the feature or even the symbol definition itself.
3 weeks ago
POST
You can likely figure out how to get the reference scale, but there may be other properties you'll want to get access to like the symbol classes, etc.
3 weeks ago
POST
What happens if you do an arcpy.Describe on the class? I'm not even sure you can access the contents of a personal geodatabase using arcpy in the ArcGIS Pro runtime.
3 weeks ago
POST
Because it is a service, try using the layer ID or layer name of the utility network as it appears in the rest endpoint for the service. The Pro SDK can read branch versioned data through an SDE connection, but we don't allow you to trace or edit data. So, for enterprise this process requires a service.
3 weeks ago
POST
What form and version of ArcGIS is installed in the environment where you're running your Python? This will determine which APIs are available to you.
3 weeks ago
POST
You can use templated attribute rules to give you a head start.
3 weeks ago