DOC
This zip file contains the Arcade scripts to enhance Field Maps for ArcGIS with the following capabilities:
A sample geodatabase compatible with UPDM 2021. This sample schema has been modified to address the specific needs of mobile as-builting.
Arcade calculation expressions to automatically decode ASTM F2897 barcodes when scanned by a mobile device's camera or a Bluetooth scanner. These scripts work in both network-connected and network-disconnected modes.
Incorporates the modifications made to the ASTM F2897 standard in late 2023, including new manufacturer components, material types, and pipe dimensions.
Automatically captures the original GPS data and writes it to feature attributes (GPSX, GPSY, GPSZ).
Arcade pop-up expressions that perform real-time validation of the collected pipe and pipe component information. Validation checks include:
-Verify medium-density and high-density polyethylene components have not exceeded the industry-recommended shelf life.
-Verify the pipe or pipe component manufacturer is a gas organization approved manufacturer.
-Verify the pipe type and size are compliant with gas organization codes and standards.
And more…
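The validation logic in the download is written as Arcade expressions. Purely as an illustration of the shelf-life check described above, here is a Python sketch; the threshold and material codes are placeholders, not values taken from the download.

```python
from datetime import date

# Illustrative shelf-life check. The shipped rules are Arcade pop-up expressions;
# the threshold below is a placeholder, not an industry citation.
SHELF_LIFE_YEARS = 10                     # assumed example value
PE_MATERIALS = {"MDPE", "HDPE"}           # medium- and high-density polyethylene


def shelf_life_warning(material: str, manufacture_date: date, install_date: date) -> str:
    """Return a validation message if a PE component sat on the shelf too long."""
    if material not in PE_MATERIALS:
        return "Shelf-life check not applicable to this material."
    shelf_years = (install_date - manufacture_date).days / 365.25
    if shelf_years > SHELF_LIFE_YEARS:
        return (f"WARNING: manufactured {shelf_years:.1f} years before installation, "
                f"exceeding the {SHELF_LIFE_YEARS}-year shelf life.")
    return "Shelf life OK."


print(shelf_life_warning("MDPE", date(2012, 5, 1), date(2024, 10, 1)))
```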
Thank you
Tom DeWitte
Esri Technical Lead – Natural Gas Industry
10-17-2024

BLOG
What Data Belongs in a GIS for Pipe Utilities
By Tom DeWitte and Tom Coolidge

We, “The Toms of Esri,” have supported pipe organizations for many years. Neither of us has enough fingers and toes to count our years of service to the natural gas and hazardous liquids industries. Over the years, industry professionals have asked us the following foundational question: What data belongs in the GIS? This question gets asked by all levels of staff within a natural gas or hazardous liquid organization. Everyone from entry-level GIS analysts to Chief Information Officers wants to know the answer to this question.
So, what data belongs within a GIS? The simple answer is that all data with a defined location should be stored and managed within a GIS such as ArcGIS. This technically correct, non-specific answer may not quench your desire for a fuller understanding of why a specific dataset should or should not be managed with a GIS. A fuller understanding requires answers to at least four questions:
Question 1: What is the spatial accuracy of the data?
Question 2: How should the location be described?
Question 3: Who needs the data?
Question 4: How will the organization consume the data?
What is the spatial accuracy of the data?
This is the question that non-geospatial-oriented persons struggle with the most. However, the answer is critical to help determine which enterprise system should store that data. Ask a non-geospatial-oriented person where a critical valve is located, and they will likely try to provide you with a street address. When attempting to find a buried asset less than 6 inches in diameter, the street address for a major manufacturing facility may describe a location covering over a square mile! The street address is technically correct, but its lack of spatial accuracy fails to meet the need of the field employee attempting to find the specific valve to close during an emergency event. This example highlights that whichever enterprise system stores this information must be able to store the location to a level of accuracy that meets the needs of the organization’s users. Within the natural gas and hazardous liquids industries, there is an evolving consensus to locate each buried pipe asset to within 18 inches of its absolute location.
How should the location be described?
It is helpful to know that the location of a pipe network buried asset should be defined to within 18 inches of its true location on the surface of our planet. Most enterprise information systems can easily store a latitude, longitude, and elevation value for a single record within their data storage solution. But what happens when more is required to describe the asset? What happens when the asset is linear, such as a 50-foot-long section of pipe? What happens when the asset is polygonal, such as a right-of-way easement? Accurately describing the location of a plastic pipe section that curves around a cul-de-sac can require dozens to hundreds of x, y, and z coordinates. Answering this second question is our first clear separation of capabilities between information systems. Most information systems cannot manage the complex geometries of pipes, pipeline routes, and right-of-way easements. These complex geometries require not only special spatial data types within the data repository, but also the ability to display, query, and analyze them. A tabular display of a plastic pipe segment with dozens of vertices to accurately represent its location is neither a useful nor an easy-to-understand presentation of data.
Who needs the data? Now, we need to look at who within a natural gas or hazardous liquid organization needs this geospatial representation of the asset. An initial answer to this question is anyone in the organization who needs to see and understand the relationship between the assets, the relationship between the assets and nearby hazards, or the relationship between the assets and the environment in which they reside. Every engineer within the organization must understand how the individual assets are combined and connected to create a pipe network. Without this understanding, they cannot model and understand how the gas or liquid flows through the pipe network. Field staff use maps every day to help them understand where the pipes are buried. The geospatial location displayed on the maps informs the field staff how the pipe system runs across a property, neighborhood, and community. The location displayed easily and clearly provides the field staff with an understanding of where a service line connects to a main, where a valve is located along a pipeline, and which portion of a pipe is located within an easement. The finance department is another pipe organization department which depends on accurate and current representations of tax districts and pipe asset locations. The spatial intersection of these two separate datasets is required to be able to accurately tabulate an organization’s tax bill. Every time a pipe segment spans more than one tax district, the finance department needs to know what proportion of the pipe asset resides in each tax district. How will the organization consume the data? There are many subsets of data for which the answers to the first three questions do not clearly and logically define where a dataset should be stored and managed. In these grey-area datasets, the fourth question provides clarity. How will the organization consume the data? Many critical datasets, such as reported leaks, excavation damage, and exposed pipe inspections, often fall into this grey area. Each record within these datasets represents an event that occurred at a specific location. Each record requires a spatial accuracy to define where it occurred on the surface of the planet. Each record is typically defined as a singular coordinate pair (latitude, longitude). Each record is used repeatedly across the organization to support compliance department staff, damage prevention staff, pipe integrity staff, and field staff. Yet, for each of these examples, it is how these departments consume and utilize this information that provides clarity on how it should be stored and managed. The exposed pipe inspection dataset is a great example. In many countries, such as the United States, it is federal law that anytime a natural gas or hazardous liquid pipe is exposed to the atmosphere, it must be inspected. The information collected with this field activity is very valuable to the departments performing distribution integrity analysis, transmission integrity analysis, and main replacement prioritization for capital planning. Digging into how these departments consume this data is where you see that the commercial products typically purchased to perform these valuable analytics require a specific geospatial format for input. Why do they require a specific geospatial format? 
They require a geospatial format because the risk analysis being performed requires an understanding of where the asset resides with respect to risks nearby, as well as an understanding of the consequences of failure to the portions of the community near the asset. If you store this information in a spreadsheet or other non-geospatial structure, you will have to convert the data into a geospatial feature to support the geospatial-based analysis. The decision to store data that needs to be consumed with a geospatial representation in a non-spatial data structure means that your organization will incur additional O&M expenses every time you want to run these mission-critical analyses. By understanding how information is consumed, IT departments can organize, store, and manage their data in the enterprise system, which provides the organization with the lowest operational cost and greatest efficiency for end users. A logical approach Four simple questions about the data can provide organizations with a logical and defensible approach to answer the question of which enterprise system should store a dataset. These four questions can help guide the organization to a decision that is truly in the best interest of the entire organization. PLEASE NOTE: The postings on this site are our own and don’t necessarily represent Esri’s position, strategies, or opinions.
07-10-2024

BLOG
Helping Mappers Get it Right
By Tom DeWitte and Tom Coolidge

Correctly mapping a buried pipe network is difficult. Everyone in your organization depends on the information to be “right.” The geometry must be right, the location must be right, the attributes must be right, and the connectivity and flow must be right. If any of these information components are wrong, your organization incurs inefficiencies, extra expenses, and potential damage to your pipe network. The mapper is the person within the organization typically responsible for creating and maintaining this information. Every day, they come to work and face the challenge of consistently entering a large amount of information correctly, information that the entire organization depends upon to perform its own jobs. To succeed at this challenge, mappers need software tools to allow them to enter this information consistently, accurately, and efficiently. Within ArcGIS, many configurations are available to help the mapper enter and update this information correctly the first time. Some of these capabilities are real-time data quality controls, and some are validations to be checked after feature editing but before posting. Let’s look at the tools and capabilities available to help mappers create the pipe network information correctly the first time.
Getting the Geometry Right
Getting the geometry right is critical to ArcGIS tracing and for hydraulic engineers who import the geometry into their modeling software, such as DNV’s Synergi Gas product. Both ArcGIS tracing and hydraulic modeling software require polylines that are not self-closing and do not have multiple vertices occupying the same coordinate location. Additionally, a polyline representing a pipe segment should never have a cutback of less than 60 degrees. Implementing rules to prevent these types of misconfigured polylines is easy within ArcGIS Pro. A pulldown listing of ready-to-use geometry rules is provided for a geodatabase administrator to select and apply to the desired polyline featureclass. With these geodatabase configurations in place, the mapper will receive real-time feedback when they attempt to create an invalid geometry. Implementing the attribute rule constraint denying polylines with a cutback of less than 60 degrees blocks such a feature from being completed.
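The ready-to-use geometry rules are selected and applied inside ArcGIS Pro, so no scripting is required. Purely to illustrate the kinds of checks involved, here is a small Python sketch that flags self-closing polylines, duplicate vertices, and sharp cutbacks, assuming the cutback is measured as the interior angle at a vertex.

```python
import math


def vertex_angles(coords):
    """Interior angle (in degrees) at each intermediate vertex of a polyline."""
    angles = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(coords, coords[1:], coords[2:]):
        a = math.atan2(y0 - y1, x0 - x1)
        b = math.atan2(y2 - y1, x2 - x1)
        angle = abs(math.degrees(b - a)) % 360
        angles.append(min(angle, 360 - angle))
    return angles


def check_pipe_geometry(coords, min_cutback_deg=60.0):
    """Flag self-closing polylines, duplicate vertices, and sharp cutbacks."""
    issues = []
    if coords[0] == coords[-1]:
        issues.append("polyline is self-closing")
    if len(set(coords)) != len(coords):
        issues.append("multiple vertices occupy the same coordinate")
    sharp = [a for a in vertex_angles(coords) if a < min_cutback_deg]
    if sharp:
        issues.append(f"cutback angle below {min_cutback_deg} degrees: {min(sharp):.1f}")
    return issues or ["geometry OK"]


# A pipe segment that doubles back sharply on itself
print(check_pipe_geometry([(0.0, 0.0), (10.0, 0.0), (2.0, 1.0)]))
```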
Getting the Connectivity Right
Getting the connectivity right is also critical for any use case that requires tracing or an understanding of flow direction. Common use cases requiring connectivity are:
-Hydraulic modeling
-Emergency isolation tracing
-Cathodic protection management
Within pipe utilities such as gas, water, and district energy, there is another aspect of connectivity that the mapper also needs to consider. That is the correct assembly of the pipe network itself. There are many combinations of connectivity that are invalid. For example, it is not valid for a gas service line to tap directly off a gas transmission line. Nor is it considered valid for a cathodic protection rectifier to be connected to a plastic pipe segment. Metallic-only fittings, such as screws, flanges, and weldolets, should not be directly connected to plastic pipe segments. The Esri-provided data models include a rulebase that defines valid connectivity between assets. Mappers using ArcGIS Pro get rule-based snapping tips to show them the valid connection options. This real-time guidance is a display mechanism to help mappers get it right the first time. When a mapper attempts to connect two assets together incorrectly, the snapping tips will not display. In this example, a steel pipe is incorrectly being connected to a plastic pipe. When the mapper correctly attempts to connect a plastic distribution pipe segment to another plastic distribution pipe segment, the snapping tip displays to indicate the valid connection.
Getting the Attributes Right – Calculations
Mappers are continually asked to enter an ever-increasing amount of information about the asset. Implementing attribute rule calculations into the geodatabase is a great way to automate the input of the information and improve its accuracy. Decoding barcodes is a great example of the power of automation while also improving the quality of the data. In this example, an Arcade script was written to automatically populate the data fields: nominal diameter, wall thickness, material, manufacturer, manufacture date, manufacturer lot number, and material component type. That is seven data fields automatically populated from one data field. For the mapper, life is simplified by having this very detailed information auto-populated.
Getting the Attributes Right – Picklists
Most data fields that a mapper is asked to populate have a finite number of valid values. Fields like diameter, material, component type, and even wall thickness have a limited number of valid values. The geodatabase provides two related mechanisms to allow administrators to have ArcGIS provide a picklist from which the mapper can choose. These two mechanisms are coded-value domains and contingent values. Coded-value domains enable an administrator to create a pre-defined list of values from which the mapper can choose. This eliminates misspellings, inconsistent capitalization, and inconsistent abbreviation mistakes that are common with freeform data entry. When applied to a data field, the mapper will be restricted to selecting a value from the list. There is no override option. Contingent values enhance coded-value domains with a dynamic filtering capability that is based on another data field’s value within the record. For the mapper, selecting a coded-value domain value in one data field, such as asset type = Coated Steel, will automatically filter the coded-value domain list of values in another data field, such as pipe material grade. This real-time dynamic filtering can be defined as the unique combination of values across two, three, four, or more data fields.
Getting the Attributes Right – Required Field Constraints
Geodatabase attribute rule constraints can also be configured to prevent the submission of an edit unless specified conditions are met. A common example is required fields. In this example, an Arcade script was written and applied as an attribute rule constraint. This attribute rule script requires that a tee cannot be submitted unless the diameter, diameter2, wallthickness, and wallthickness2 data fields are populated. More advanced logic can be applied to have conditional required fields based on values of other data fields in the same record. With this type of logic in place, mappers will be reminded at the point of submitting the record when a required data field has not been populated.
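The required-field logic described above is implemented as an Arcade attribute rule constraint in the geodatabase; the following Python sketch only mirrors that logic to show the shape of the check.

```python
# Illustrative stand-in for the Arcade attribute rule constraint described above:
# a tee cannot be submitted unless both diameters and both wall thicknesses are set.
REQUIRED_TEE_FIELDS = ["diameter", "diameter2", "wallthickness", "wallthickness2"]


def validate_tee(feature: dict) -> tuple[bool, str]:
    """Return (is_valid, message) for a tee record about to be submitted."""
    missing = [f for f in REQUIRED_TEE_FIELDS if feature.get(f) in (None, "")]
    if missing:
        return False, "Cannot submit tee; populate required fields: " + ", ".join(missing)
    return True, "Edit accepted."


ok, msg = validate_tee({"diameter": 4, "diameter2": 2, "wallthickness": 0.237})
print(ok, msg)   # wallthickness2 is missing, so the edit is rejected
```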
Getting the Attributes Right – Validations
Not all business rule logic can be consistently and correctly applied at the record submission stage. Some business logic needs to wait until the full set of edits for a construction project or repair has been applied. This delayed application of business rules is called validations. Validation checks are initiated by the user with the Validate tool or automatically as a nightly batch process. Common pipe network examples of validations are:
-Verifying against the utility network rule base.
-Verifying that an excess flow valve is coincident with a service pipe.
-Verifying that the valve diameter is the same as the coincident pipe diameter.
These types of multi-feature data quality checks help mappers find mistakes that would be missed with simpler business rule logic applications.
Getting it Right
The geospatial data created and maintained by mappers is critical to the safe and reliable operation of the pipe network. Everyone in the pipe organization, from finance to engineering to field operations, depends on the geospatial data to be correct. The mappers maintaining this enterprise-level data source need robust and multi-tiered business rules to help them enter this information correctly the first time. ArcGIS is the system to provide these capabilities and to aid the mapper in getting it right the first time.
PLEASE NOTE: The postings on this site are our own and don’t necessarily represent Esri’s position, strategies, or opinions.
06-09-2024

BLOG
By Tom DeWitte and Tom Coolidge

Within every pressurized pipe network, whether natural gas, hazardous liquid, water, or district energy, there are one or more pressure zones. Pressure zones are foundational to the engineering and operation of pressurized pipe networks. A formal definition describes a pressure zone as a distinct subset of the pipe network where a minimum and maximum pressure range is maintained by pressure-controlling devices. Yet, knowing this definition of a pressure zone is not the same as understanding a pressure zone.
Understanding a Pressure Zone
Understanding a pressure zone has differing meanings across an organization responsible for managing a pressurized pipe network. System planners require an understanding of the capacity of the pressure zone and the load from customers consuming the commodity traversing the pipe network. Hydraulic engineers need to understand the capacity, load, assets, commodity flow, and the type of customers depending on the provided service. Integrity engineers require an understanding of the pressure zone assets and their characteristics. System control center operators must understand the standard operating pressure and the current maximum allowable operating pressure (MAOP). For many others across the organization, it means visually seeing the extent of the pressure zone. With this wide range of what it means to understand a pressure zone, how does an organization achieve this understanding?
Defining the Pressure Zone
The need to create an inventory of the assets that comprise a specific pressure zone is foundational to all these various aspects of understanding a pressure zone. From a software perspective, this means utilizing software that can determine a pressure zone. This determination is based on how the pipe assets are connected, knowing the location of the pressure regulating devices, and an understanding of the high side and the low side of those pressure regulating devices. With this understanding of pressure zone components, add some software logic to define commodity flow, and you can now define a pressure zone. Within ArcGIS, the capability to determine a pressure zone is called the ArcGIS Utility Network. The result of the application of this capability for pressure zones is the creation of a pressure subnetwork. Defining a pressure subnetwork within ArcGIS generates multiple items of information to aid in understanding the pressure zone. These aids are:
-Creating a persistent graphical representation of the pressure zone.
-Inserting the name of the pressure zone into each asset contained within the pressure zone.
-Tabulating quantities of the identified assets, such as the number of meters and number of valves.
-Performing engineering calculations to determine MAOP for the pressure zone.
This inventorying, tabulating, calculating, and visualizing provide pipe organizations with the foundation to understand the pressure zone.
Understanding a Pressure Zone’s Capacity
A common derivative of the pressure subnetwork representation of the pressure zone is the tabulation of the volume within the pressure zone. Knowing a pressure zone’s pipe volume has historically been the primary challenge in accurately determining the amount of commodity within this subset of networked pipes at standard operating pressure. For an individual pipe segment, tabulating volume is an exercise in remembering the middle school geometry equation for a cylinder. That equation leverages the diameter and length of the pipe. Repeating this geometry calculation thousands of times for each distribution main and service line within the pressure zone is hard for humans. It is a simple calculation for a computer. With the volume automatically tabulated for the pressure zone, the next step is to apply the Ideal Gas Law equation and some commodity-specific values to determine the mass and energy stored within the pressure zone. An Arcade script within a pop-up can easily perform this calculation for both the standard operating and the maximum allowable operating pressure. Having the result of this tabulation readily available whenever a user opens a pressure zone pop-up within a web application or mobile map is a significant time saver for engineers and system control operators.
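The blog describes an Arcade pop-up expression performing this arithmetic; the sketch below shows the same calculation in plain Python, using methane properties and made-up pipe dimensions and pressure as simplifying assumptions.

```python
import math

R = 8.314            # universal gas constant, J/(mol*K)
M_GAS = 0.01604      # kg/mol, methane approximation; real gas composition varies
ATM_PA = 101_325     # atmospheric pressure, Pa


def pipe_volume_m3(diameter_m: float, length_m: float) -> float:
    """Volume of a cylindrical pipe segment: pi * (d/2)^2 * length."""
    return math.pi * (diameter_m / 2.0) ** 2 * length_m


def gas_mass_kg(volume_m3: float, gauge_pressure_pa: float, temp_k: float = 288.15) -> float:
    """Mass of gas stored at the given gauge pressure using the Ideal Gas Law (PV = nRT)."""
    moles = (gauge_pressure_pa + ATM_PA) * volume_m3 / (R * temp_k)
    return moles * M_GAS


# Sum the volume of every pipe segment in the pressure zone, then apply the gas law.
segments = [(0.1143, 250.0), (0.0603, 1200.0)]   # example (diameter m, length m) pairs
zone_volume = sum(pipe_volume_m3(d, l) for d, l in segments)
print(round(gas_mass_kg(zone_volume, gauge_pressure_pa=413_685), 1), "kg of gas at about 60 psig")
```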
Understanding a Pressure Zone’s Assets
Once the ArcGIS Utility Network capability defines a pressure zone, the capability updates the name of the pressure zone in all traced assets (pipe segments, valves, fittings, customer meters, etc.). This assigning of the pressure zone name to each asset simplifies follow-on pressure zone asset queries often used in capital planning, such as:
-Average age of pipe
-Length of pipe by material
-Length of pipe by size
Understanding a Pressure Zone’s Pressure Range
Core to managing a safe and reliable pressurized pipe network is knowing the maximum pressure at which the gas utility can safely operate the pressure zone. To most industries this value is known as MAOP (Maximum Allowable Operating Pressure). This value is asset-specific and will vary across the thousands of assets comprising a single pressure zone. The pressure zone MAOP is defined by the weakest link within all the assets of the pressure zone. In software terms, this is the application of a minimum function against all of the pressure zone assets’ individual MAOP values to find the lowest value. The ArcGIS Utility Network capabilities can automatically tabulate this engineering value as part of defining the pressure zone. With MAOP now automatically defined and maintained by ArcGIS, whenever a new asset is installed within the pressure zone and the asset’s individual MAOP value is defined, the software will revise the pressure zone MAOP to reflect the installation of the new asset. Combining the calculation of MAOP with the tabulation of volume for a pressure zone allows for the calculation of maximum line pack. Line packing is a natural gas industry term for increasing the mass of gas within a given pressure zone. Line packing is achieved by increasing the operating pressure. An increase cannot exceed MAOP. Control room operators use this value to understand how much additional mass can be placed within the pressure zone pipe network in anticipation of a forecasted increase in customer demand.
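Here is a minimal sketch of the weakest-link MAOP calculation and the resulting line pack headroom, with made-up asset values and a fixed zone volume standing in for the tabulated result.

```python
# Pressure zone MAOP is the weakest link: the minimum of every asset's individual MAOP.
asset_maop_psig = {"pipe-001": 99, "pipe-002": 60, "valve-017": 125, "fitting-203": 99}
zone_maop_psig = min(asset_maop_psig.values())
print("Pressure zone MAOP:", zone_maop_psig, "psig")

# Line pack headroom: the extra gas mass the zone can hold between the standard
# operating pressure and MAOP. With a fixed volume and temperature, the Ideal Gas
# Law makes mass scale linearly with absolute pressure.
R, M_GAS, TEMP_K, ATM_PA, PSI_TO_PA = 8.314, 0.01604, 288.15, 101_325, 6894.76
zone_volume_m3 = 6.0                  # stand-in for the volume tabulated from the pipe segments
standard_operating_psig = 45


def gas_mass_kg(gauge_psig: float) -> float:
    pressure_pa = gauge_psig * PSI_TO_PA + ATM_PA
    return pressure_pa * zone_volume_m3 / (R * TEMP_K) * M_GAS


headroom_kg = gas_mass_kg(zone_maop_psig) - gas_mass_kg(standard_operating_psig)
print("Additional mass available for line packing:", round(headroom_kg, 1), "kg")
```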
Understanding a Pressure Zone’s Extent
The ability to see a visual representation of the pressure zone is another product of the ArcGIS Utility Network defining a pressure zone. The ability to see the extent of a pressure zone is key to helping a utility’s staff answer their questions. Typical questions include:
-Emergency events: Which pressure zones will be impacted by wildfires and floods?
-Planning: Which pressure zone provides service to the address of a potential new customer?
-Engineering: When planning a system modification, what is the extent of the pressure zone?
Understanding a Pressure Zone’s Customer Base
With the pressure zone accurately defined, geospatial analysis methods can be applied to identify and inventory the customers within a given pressure zone. As an example, buffering the customer meters that carry the pressure zone name and querying the USA Structures Living Atlas feature layer can tabulate valuable information such as:
-Number of hospitals
-Number of schools
-Number of impaired / limited-mobility customers
-Quantity of each type of customer (residential, commercial, industrial, government)
This provides instant clarity about the customer base being served by the pressure zone. This clarity helps emergency event coordinators, engineers, and planners.
Bring it All Together
Historically, the information mentioned in this article has been scattered across an organization’s many data silos, spreadsheets, and the brains of individual employees. Leveraging ArcGIS and its ArcGIS Utility Network capability to define the pressure zone brings this information together. Aggregating this information and presenting it in a manner that is easy to understand is what turns defining a pressure zone into understanding a pressure zone.
PLEASE NOTE: The postings on this site are our own and don’t necessarily represent Esri’s position, strategies, or opinions.
04-29-2024

POST
Hello neomapper. Great question about how to manage service lines within UPDM. The purpose of PipelineLine is to store ALL pipes (gathering, storage, transmission, distribution, station, service). Based on your message, it looks like this is what you are currently doing. This Utility Network participating featureclass is where both of the service line segment records you mentioned should be stored.
The Service_Summary featureclass was created specifically for natural gas distribution organizations within the United States. Its purpose was to be a simplified duplication of the PipelineLine service lines. US federal reporting requires all natural gas organizations to annually report a summary of their natural gas pipe networks (summarized by material, diameter, age, and type). The challenge with this reporting requirement is that it requires a single definition of material, age, and diameter for the entire service line. Your two service pipe segments have to be merged into a single record. If they have different material, age, or diameter, you have to pick one to represent the entire service line.
In summary, keep managing your service lines within the PipelineLine featureclass. If you work for or support a US-based gas distribution company and are struggling with how to accurately manage a summarization of service lines for US DOT annual reporting, then consider using Service_Summary to store that merged version of your service lines.
Tom DeWitte
Esri Technical Lead supporting Natural Gas and District Energy Industries
04-03-2024

BLOG
Hi Rita, You are correct, there is no relationship class between P_InlineInspection and P_ILISurveyReading in UPDM 2023. There is only the relationship class between P_ILISurveyReading and P_ILISurveyGroup. I noticed in your message you asked about a relationship between P_InlineInspection and P_ILISurveyReading, but you did not mention a relationship between P_ILIInspectionRange and P_ILISurveyReading, or between P_ILIInspectionRange and P_ILISurveyGroup. What is the relational hierarchy that you would recommend exist for the management of pipeline inspections and the data collected as part of those inspections? Tom DeWitte
04-01-2024

BLOG
AutoCAD and ArcGIS Working Together
By Tom DeWitte and Tom Coolidge

In January 2024, Esri released an updated version of its ArcGIS for AutoCAD plug-in for Autodesk® AutoCAD®, AutoCAD Map 3D®, and Civil 3D®. The Esri plug-in allows AutoCAD users to become editors of ArcGIS-managed data. The January 2024 update (version 430) of ArcGIS for AutoCAD enhances the editing experience by supporting branch versioning within AutoCAD. This branch versioning support further strengthens the incorporation of the AutoCAD editor into the data management workflows within the world of ArcGIS.
What do these new tools and capabilities mean for utilities? Within the world of documenting new construction projects for utilities, there is a common practice that is needed but very inefficient. This is the practice of using AutoCAD to document the construction project and ArcGIS to document the current as-built state of the entire utility system. After construction, AutoCAD drawings are created as part of the project closeout activity to document what was installed, retired, and modified. The AutoCAD drawings are often converted into PDF files and stored in a document management system as a historical project snapshot. At the same time, the ArcGIS mapping team is updating the as-built representation of the entire utility system to reflect the newly constructed changes. This information is shared with the organization through web and mobile applications, giving the whole utility organization a current representation of the entire utility network. The problem with this dual documentation workflow is that the AutoCAD editor and the ArcGIS mapper perform the same data edits. The ArcGIS for AutoCAD plug-in provides a solution to this redundant data entry.
Eliminate the Duplication
The ArcGIS for AutoCAD plug-in solves this duplicate data entry issue by allowing the AutoCAD user to update their CAD drawing with the data edits made by the ArcGIS mapping team. Now, a simple “Synchronize” from the ArcGIS ribbon in AutoCAD allows the AutoCAD mapper to update their AutoCAD drawing with the features created by the ArcGIS mapper. Since the ArcGIS data maps an area much larger than the project area of interest to the AutoCAD mapper, a blue project area bounding box is used in AutoCAD to define the geographic extent of data of interest to the AutoCAD user. When the synchronization is complete, the GIS features within the project area are loaded into the AutoCAD drawing. No additional conversion, prep, or translation is required. This synchronization with ArcGIS is more than simply bringing over geometries. It also brings over the features’ attributes, including coded value domain descriptions.
CAD and GIS Working Together
The increasing collaboration between Esri and Autodesk directly benefits utilities. The ability of AutoCAD to consume ArcGIS layers of data and basemaps simplifies the task of creating CAD drawings and improves productivity. Long gone are the days when the CAD community and GIS community were antagonists to each other. Now, we can all work off the same dataset.
PLEASE NOTE: The postings on this site are our own and don’t necessarily represent Esri’s position, strategies, or opinions.
03-04-2024

BLOG
Beware the Ides of March
By Tom DeWitte and Tom Coolidge

Beware the Ides of March. Stated in less Shakespearean form, beware of March 15th. This statement was true for Julius Caesar and is true today for many GIS and compliance professionals across the United States natural gas industry. In Roman times, March 15th was the deadline for settling debts. Today in the United States, March 15th is the deadline for submitting annual reports for gas distribution and transmission companies to the U.S. Department of Transportation (U.S. DOT) Pipeline and Hazardous Materials Safety Administration (PHMSA).
The submittal of the U.S. DOT PHMSA Annual Report for Gas Distribution (Form F 7100.1-1) and its sibling, the U.S. DOT PHMSA Annual Report for Gas Transmission and Gathering Pipeline Systems (F 7100.2-1), is an exercise in data mining. If your data is well organized and attuned to the data summarization requirements of these reports, gathering the information required to complete these forms is manageable. Suppose your data is not attuned to the data query, filtering, and summarization needs of these reports. In that case, this annual effort is scarier than having a best friend named Brutus. Officially, PHMSA estimates that the time required to gather the data needed to complete these forms is 16 hours for Gas Distribution and 47 hours for the Gas Transmission and Gas Gathering form. This is achievable with a well-attuned geospatial dataset such as an ArcGIS data repository organized with the Utility and Pipeline Data Model (UPDM). If your data is poorly organized, this effort can take several months. What does a well-attuned geospatial dataset look like?
Summarizing by Pipe Material
The first indication of a data model well attuned to these reports’ data summarization needs is how material is defined. Part B of the Gas Distribution report does not ask for the specific material grade of the pipe (PE2708, Grade X42, etc.). It asks for a generic categorization of material (Plastic, Coated Steel, Bare Steel, Cast Iron, etc.). Part D of the Gas Transmission and Gas Gathering report similarly asks for a generic categorization of the material instead of the specific grade. The UPDM data model explicitly defines these categorizations of material in a data field called AssetType. This AssetType data field has a set list of values that aligns with the material categorization values requested in the DOT reports. A simple summarization of the assettype and cptraceability data fields will produce the data values needed for section B1 of the Gas Distribution report. A summarization of unique combinations of assettype, cptraceability, and regulatorytype will produce the data values needed for Part D of the Gas Transmission and Gas Gathering report. These summarizations can be easily accomplished in ArcGIS with UPDM-organized data in minutes versus hours, days, and even weeks.
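A minimal pandas sketch of that Part B style summarization is shown below, using a handful of made-up records; in practice the pipelineline rows would be read from the geodatabase (for example with arcpy or the ArcGIS API for Python) rather than typed in, and the field values would come from the UPDM coded value domains.

```python
import pandas as pd

# Hand-entered sample of UPDM pipelineline records (illustrative values only).
mains = pd.DataFrame({
    "assettype":      ["Plastic PE", "Plastic PE", "Coated Steel", "Bare Steel"],
    "cptraceability": ["Traceable", "Traceable", "Traceable", "Not Traceable"],
    "shape_length":   [5280.0, 2640.0, 10560.0, 1320.0],    # lengths in feet
})

# Miles of main by material category and cathodic protection traceability.
part_b = (mains.groupby(["assettype", "cptraceability"])["shape_length"]
               .sum()
               .div(5280)                 # feet to miles
               .rename("miles_of_main")
               .reset_index())
print(part_b)
```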
Summarizing Transmission by SMYS
Part K of the Gas Transmission and Gas Gathering report asks for the miles of transmission pipe based on the operating pressure as a percent of specified minimum yield strength (SMYS) for a given DOT class designation. Storing engineering information as descriptors about the pipe greatly simplifies summarizing this information for the Gas Transmission and Gas Gathering report. When the UPDM data fields for this engineering data are populated, this tabulation can be performed. The specific data fields within the pipelineline layer of UPDM are operatingpressure, SMYS, percentSMYS, dotclass, and shape_length. Similar to the mileage-by-material table, summarizing the unique combinations of percentSMYS and dotclass, then summing the total length of pipe, produces the information needed to populate this table.
Summarizing Leaks by Cause
UPDM’s optimization of data organization to support the reporting needs of the annual DOT reports extends beyond pipes. It is also attuned to the reporting requirements of leaks. Part C of the Gas Distribution report asks for the number and severity of leaks on mains and services. Here again, the P_GasLeaks featureclass in UPDM is intentionally organized to accommodate this categorization of leak cause and severity. The leakcause data field includes a set list of valid values that correlate to the Cause of Leak categories of the reporting table. The revisedleakclass data field provides the leak severity distinction, and the leakstatus data field indicates the status of the leak.
UPDM Alignment with DOT Reporting Was No Accident
The fact that UPDM is so well aligned with the reporting needs of these annual reports is not an accident of data modeling. When UPDM was originally created, these DOT reports were studied. The intent way back in 2009 was to organize this information to not only address the daily engineering and operational needs of natural gas organizations but also to simplify this required annual reporting task. In the 15 years since the original release of UPDM, the gas team at Esri has continued to monitor the evolution of these compliance reports and, when needed, modify UPDM to accommodate changes in reporting. March 15th was a very bad day for Julius Caesar; it does not have to be a bad day for you.
PLEASE NOTE: The postings on this site are our own and don’t necessarily represent Esri’s position, strategies, or opinions.
02-13-2024

BLOG
The data dictionary for UPDM is an online interactive website. You can access the data dictionary with this link. https://solutions.arcgis.com/utilities/data-dictionary/index.html?cacheId=560f98a979504c01bdf15ab032c12aef&rsource=https%3A%2F%2Flinks.esri.com%2FGasAndPipelineReferencingUtilityNetwork%2FDataDictionary%2Fv2.3 Tom DeWitte Esri Technical Lead for Natural Gas and District Energy
02-06-2024

BLOG
Hi Ryan, You can contact me via email at: tdewitte@esri.com. Tom DeWitte Esri Technical Lead, Natural Gas and District Energy Industries
02-01-2024

DOC
Hi Lindsey, You are correct that the Leak Survey sample schema and attribute rules posted in 2021 are just for managing the leak survey data itself. For an example of a schema for managing reported leaks (i.e., leak reports), the UPDM data models include a featureclass named P_GasLeak. For an example of a schema for storing issues identified while performing the leak survey, take a look at the P_IdentifiedIssue featureclass within UPDM. If you want to discuss in greater detail how to use GIS to manage your leak survey program, please give me a call or email me directly. Here is my contact info.
Tom DeWitte
tdewitte@esri.com
Phone: 909 369-8348
12-20-2023

BLOG
Utility and Pipeline Data Model 2023 is Released
By Tom DeWitte and Tom Coolidge

Esri’s Utility and Pipeline Data Model (UPDM) 2023 is available now. This release continues Esri’s practice of maintaining a template data model ready “out-of-the-box” to manage gas and hazardous liquid pipe system data within an Esri geodatabase. This release includes enhancements to keep up with changes in industry practice and implementation feedback received since the previous release.
Updates to Industry Barcode Standard
In mid-2023, the ASTM technical committee overseeing the F2897 barcode standard published an update. F2897 is the Tracking and Traceability encoding system defining the format of the barcode that manufacturers place on their pipe, fittings, valves, and other assets. This update added new materials, new manufacturer components, and new manufacturers to the standard. New materials include PE80, PE100, and Reinforced Epoxy Resin, to name a few of the additions. New manufacturer components include new types of pipe, new types of fittings, and more combination components such as Reducer Socket Fusion with EFV. There were also four new manufacturers added to the standard. These additional manufacturers are:
-Improved Piping Products
-Shawcor Composite Production Systems
-Krah USA LLC
-Hawkeye Industries Inc
All of these modifications have now been baked into UPDM 2023 to continue its support of this important industry standard.
Updates Based on Implementation Feedback
There are now dozens of gas and hazardous liquid organizations in production with UPDM. As each organization goes through its journey of implementing this geospatial system of record, Esri’s industry data model gets tested and retested. These projects are located globally, and our implementation partners continue to provide feedback when a gap is identified. This list of recommendations from implementations is short, as the data model has reached a level of maturity and completeness that limits the need for change. For this release we have:
-Added Storage Facility as a new type of PipelineAssembly.
-Added Olet as a new group to PipelineJunction to cover the type of fitting known as olets. Weldolet, Threadolet, and Sockolet are types of olets that are now a part of our industry data model.
-Added Service Station as an additional type of PipelineDevice to support the emerging use of compressed natural gas and hydrogen as fuels for transportation.
Gas and Pipeline Referencing Utility Network Foundation
For many gas utility and hazardous liquid pipeline enterprises, deploying ArcGIS is more than simply loading the UPDM 2023 data model into an enterprise geodatabase. That’s because ArcGIS leverages the concepts of a service-oriented web GIS. It requires additional steps, such as creating an ArcGIS Pro map configured for publishing the data model, publishing the Pro map to create the required map and feature services, and, perhaps, configuring a location referencing system. To help simplify these additional steps performed with UPDM 2023, Esri has embedded UPDM 2023 into the Gas and Pipeline Referencing Utility Network Foundation. This solution provides UPDM 2023, sample data, and an ArcGIS Pro project configured with tasks and performance-optimized maps. You can access this solution from the Esri ArcGIS for Gas solution site. A full data dictionary of UPDM 2023 is available online. A change log documenting the full list of changes incorporated into UPDM 2023 is also available online.
Esri’s Template Data Model for the Industry Esri first released UPDM in 2015 as a part of a new vision of how a geospatial system of record for pipe systems can be much more than a departmental solution. It can be a foundational enterprise system providing a unified office and mobile workforce with a near real time single source of the truth. This belief that there should be a single source of the truth from which the entire organization can view, query, create and maintain their entire pipe network has driven not only the development of this industry data model, but also the development of our network management and linear referencing capabilities. UPDM is the only industry data model built by Esri in collaboration with Esri’s ArcGIS software development team to support the enterprise needs of the gas and hazardous liquid pipe industries. UPDM is a moderately normalized data model that explicitly represents each physical component of a gas pipe network from the wellhead to the customer meter, or a hazardous liquids pipe network from the wellhead to the terminal or delivery point, in a single database table object. This freely available data model is designed to take full advantage of the capabilities of the geodatabase. The data model is created and tested with ArcGIS products to ensure that it works. This significantly reduces the complexity, time, and cost to implement a spatially enabled hazardous liquid or gas pipe system data repository. Looking ahead to the Future A wise man once said “change is the only constant.” This is a great quote when thinking about UPDM going forward. The Esri development team will continue to enhance the capabilities of ArcGIS. Industry will continue to evolve its practices. To continue adjusting to industry practices and incorporating new ArcGIS capabilities, UPDM will continue to evolve. This evolution will help assure gas utilities and hazardous liquid pipeline operators that their GIS industry-specific data model is current with their needs. PLEASE NOTE: The postings on this site are our own and don’t necessarily represent Esri’s position, strategies, or opinions.
11-20-2023

BLOG
By Tom DeWitte, Kevin Ruggiero, Mike Hirschheimer
Part 5 of 5

Maintaining a local cache of data on a mobile device has historically been a challenge for organizations large and small. The paper medium used for the first 150 years of the utility industry was unable to automatically update the printed copies sitting in a mobile worker’s horse wagon or truck. The first generation of mobile map viewer applications was also static. Updating these digital snapshots of data meant fully replacing the mobile device’s local cache of data. This was time-consuming for the mobile worker, extremely taxing on the data server being asked to extract its contents, and congested most organizations’ communication networks. Neither of these options is ideal for providing utility field workers with the up-to-date information they require to safely perform their jobs. What utilities desire is a mechanism that not only maintains the mobile device’s local cache of data without requiring dedicated time from the mobile worker, but also minimizes the query load against the data servers and minimizes the impact on the communication network. What utilities desire is offline map areas.
Offline Map Areas
Offline Map Areas is the name Esri gives to the capability that enables the ArcGIS system to automatically transmit data changes between a mobile device and the organization’s ArcGIS data repositories for a specified geographic extent. In the previous articles of this blog series, we described the first three steps required to deploy this capability within an organization. In this blog article, we will explain the fourth and final step, deploying to the mobile device. The process of deploying to the mobile device can be divided into two parts: the initial download and the updating of the mobile device’s data cache.
How is the data initially deployed to the device?
When an offline map area is initially defined, a package of the requested data (the layers and tables within the geographic extent) is written to the portal server. This process of packaging an offline map area’s data once and storing it on the portal server addresses the issue of overwhelming the enterprise databases. By allowing mobile devices to be initially provisioned with a pre-packaged set of data, mobile workers, contractors, and mutual aid crews can quickly deploy. Each feature service included within the web map will have its extracted data written as a mobile geodatabase to a unique SQLite database file. Each image service defined within the web map will have its extracted data written to a tile package. Each vector tile service defined within the web map will have its extracted data written to a vector tile package. For example, a web map referencing 2 feature services, 1 hosted feature service, and 1 vector tile service will have a pre-packaged set of 3 SQLite files and 1 vector tile package sitting on the portal server waiting for download. For a typical installation of ArcGIS Enterprise, these files will be stored in the arcgisportal install directory.
NOTE: When an offline map area is created, a companion scheduled process will also be created to update the portal server package. By default, this will be a weekly update.
Requesting the download of an offline map area is a simple process. Within Field Maps, the user clicks the download icon next to the desired offline map area. The simplicity of the download eliminates the historically time-consuming task of manually loading the packaged data onto the device.
When the download is initiated, it will automatically sequence through three steps. The first step is to copy the portal-hosted files to the mobile device. The second step is to register the mobile device with the requested geodatabases. In this step, each geodatabase or Data Store (for hosted feature layers) registers the requesting mobile device and begins tracking the last successful synchronization date and time. This tracking of the last successful synchronization date and time is how all future syncs will know which records to request to update the mobile device with changes made since the last successful sync. The third step is to update the mobile device’s mobile geodatabases (SQLite files) with all changes made since the portal package’s last synchronization date. In this step, all changes made to the enterprise data repositories (enterprise geodatabase and Portal Data Store) since the portal package was last synced, and within the defined geographic extent, are queried and downloaded to the mobile device’s existing mobile geodatabases. This ability to track the date and time of when a mobile device’s synchronization was last successfully completed for each of the data repositories referenced by the web map is how offline map areas minimize the network traffic required to keep the mobile device’s data cache current. With an offline map area now successfully downloaded, registered, and updated, it is ready to be taken into the field for mobile worker use.
Updating the Mobile Device Data Cache
The mobile worker, with their mobile device’s local cache of data, can view and query existing data. The mobile worker can also update existing data, create new data, and delete existing data, if the web map and feature service have been configured to allow these capabilities. With this full set of editing capabilities against the local cache of data, we need to understand how those changes get back to the enterprise data repositories. When we talk about offline map area synchronization, we are talking about only receiving and sending changes to the data. Like step 3 of the initial download, the determination of what needs to be transmitted between the mobile device and the enterprise environment is based on the last successful synchronization datetime and the geographic extent of the offline map area. This datetime value is stored in both the mobile device’s mobile geodatabases and each unique enterprise data repository providing feature data. If no changes have occurred since the last sync, there is no data to send or receive across the network. ArcGIS Field Maps provides the ability to automate this bi-directional synchronization. When auto-sync is turned on, it will automatically initiate a sync when it detects a network connection to the enterprise environment. This auto-sync will occur every 15 minutes if a persistent connection to the network is maintained. For the mobile user this provides a very simple daily workflow. On a typical day the mobile worker would start in the office. While in the office, the mobile worker simply opens Field Maps and selects the web map and offline map area they intend to use for the day. Because the map is opened while connected to the office network, Field Maps automatically initiates a sync. With the beginning-of-workday sync completed, the mobile worker is ready to leave the office and head out to the field. During the day the mobile worker can view, query, and edit against their local offline map area. All edits will be stored on the device.
At the conclusion of the workday, the mobile worker returns to the office. By simply coming into range of the office Wi-Fi, the mobile worker initiates the sync. This is due to Field Maps detecting a connection to the network and automatically initiating the sync. The mobile device never needs to leave the mobile worker’s pocket or pouch. This end-of-day sync will push the day’s edits to the servers and update the mobile data cache with changes posted to the enterprise environment. This automated sync frees the mobile worker from having to dedicate time to managing the local cache of data.
Managing Changes to the Data Structure
With the offline map areas generated and easily deployed to the mobile workforce, administrators will be asking: what happens when the structure of the data changes? The first type of data structure change to consider is the addition or removal of a layer or table in the web map. For these changes, the Update tool within the Field Maps web application is the tool to use. The Update tool is very simple to use: simply click the Update button, and the portal-hosted offline map area package will be updated. When the Update tool is run, it will initiate the updating of those mobile geodatabases within the portal-hosted offline map area package that have had layers or tables added or removed. Additionally, the Update tool process will remove mobile geodatabases whose content is no longer referenced by the web map. When the schema of an existing layer or table is modified, the Recreate tool is used. Examples of schema changes that require the Recreate tool are the addition or deletion of data fields, and modifications to coded value domains, range domains, and contingent value lists. Running the Recreate offline area tool will delete and recreate all the mobile geodatabases contained within the portal-hosted package.
Managing Changes to the Mobile Devices
Administering a fleet of mobile devices and the data caches stored on them can involve situations not everyone thinks about during initial planning and user acceptance testing (UAT). For example, how does an organization deactivate a mobile device from syncing when that device has been lost or stolen? Or how do you unregister a mobile device’s offline map area when the device breaks? A different set of administration tools is needed to manage these types of situations. To manage the offline map area registration data stored within the enterprise geodatabase, we need ArcGIS Pro. ArcGIS Pro provides the tools to visualize and manage all the offline map areas (feature service replicas) registered to an enterprise geodatabase. Accessing these Manage Replicas tools is done via an administrative connection directly to the enterprise geodatabase. Within the Manage Replicas tools is a tab for Feature Service Replicas. This tab will present the list of all mobile device caches registered to the enterprise geodatabase. Searching by the username of the person whose device has failed or is lost, you can find the replicas registered to that mobile user. From this listing, you can select the feature service replica that needs to be removed and unregister it. Unregistering will remove its entries from the enterprise geodatabase and prevent synchronization.
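The workflow above uses the Manage Replicas tools in ArcGIS Pro. For administrators who prefer scripting, a similar cleanup can be sketched with the ArcGIS API for Python, assuming the replicas sync manager (get_list / unregister) exposed on a feature layer collection; the portal URL, credentials, item ID, username, and dictionary keys shown here are placeholders and may vary by release.

```python
from arcgis.gis import GIS
from arcgis.features import FeatureLayerCollection

gis = GIS("https://portal.example.com/portal", "admin_user", "password")   # placeholder credentials
item = gis.content.get("<feature-layer-item-id>")                          # placeholder item id
flc = FeatureLayerCollection.fromitem(item)

# List every replica (offline map area registration) held by this feature service,
# then unregister the ones owned by the user whose device was lost or has failed.
lost_device_owner = "jsmith"                                               # placeholder username
for replica in flc.replicas.get_list():
    if replica.get("replicaOwner") == lost_device_owner:
        print("Unregistering", replica.get("replicaName"), replica.get("replicaID"))
        flc.replicas.unregister(replica.get("replicaID"))
```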
Summary
Unlike the book of paper maps originally stored under the bench of the horse-drawn utility wagon, today’s technology enables utility organizations to provide mobile users with a local cache of up-to-date information. Offline map areas provide the ease of use your mobile users expect from modern computing systems. These offline map areas are maintained with modern data management and communication methodologies that minimize the load on the enterprise geodatabases and the consumption of network bandwidth. All of this is done with a mature set of administrative tools to simplify management of this enterprise system.
About this Blog Series
This is the fifth and final blog in our series on offline map areas. You can access our previous blogs with these links.
-The first blog provided an overview of offline map areas.
-The second blog provided details on preparing your data for offline usage.
-The third blog provided details and options for publishing the selected data repositories.
-The fourth blog provided details on how to automate the creation of the offline map areas.
PLEASE NOTE: The postings on this site are our own and don’t necessarily represent Esri’s position, strategies, or opinions.
11-17-2023

BLOG
Hi Thamer, One of the base capabilities of ArcGIS that I really enjoy as a creator of content for others is the automatic inheritance of domains across desktop, web, and mobile Esri applications. For the data entry application you are looking to create for your engineers, giving them ArcGIS Pro is asking a lot. Have you looked at configuring a web application or configuring a set of smart forms within ArcGIS Field Maps or Survey123? All three of these end-user applications will automatically create picklists from the coded value domains assigned to data fields within the geodatabase. Just like in ArcGIS Pro, where the picklists are auto-generated, the same "creator" experience applies to ArcGIS web applications, Survey123, and ArcGIS Field Maps. Tom DeWitte
11-06-2023

BLOG
Taking Your Maps Offline: Creating Offline Map Areas
By Mike Hirschheimer, Tom DeWitte, Kevin Ruggiero
Part 4 of 5

Your utility wants to set up a process that enables field workers to have access to your pipes, conductors, and cabling geospatial data when not connected to the communication network. The GIS/IT support team has read the ArcGIS documentation for offline map areas and is ready to begin. For an initial proof of concept (POC), the team created new web maps and then used the Field Maps Designer web app tools to manually define the extent of a single offline map area. The feedback from the field has been positive, and now leadership wants to expand to cover the entire service territory, which includes 100 distinct areas. The GIS/IT team is super excited about the project moving forward but secretly cringes when thinking about the effort and hours it will take to manually repeat the steps from the POC for the remaining 100+ areas.
A similar scenario happened to our team earlier this year. Our customer had 100 “operating areas” throughout their service territory, and manually repeating the steps executed in the POC for each offline area wasn’t sustainable. We needed a way to automate the creation of these 100 offline map areas. The ArcGIS API for Python provided the scripting ability that was needed for this automation. In this blog, we’ll discuss best practices for managing the web maps along with using the ArcGIS API for Python to create and report on offline map areas.
Configure the web maps
The data that the field worker sees on their mobile device is driven by the layers, tables, and configuration in the web map. From ArcGIS documentation, we know that a single web map can support up to 16 offline map areas. With 100 areas to define, a minimum of 7 web maps is needed (100 / 16 = 6.25) if the areas were broken up evenly. In reality, it took 13 web maps, as ease of use for the field worker was a major factor when making assignments. Managing 13 web maps can become overwhelming pretty quickly when a single change would need to be applied 13 times. The recommendation is to create a “Master” web map using ArcGIS Pro that includes the facilities, landbase features, the offline-enabled basemap, and other pertinent data feeds. Once the configurations like scale ranges, pop-ups, labels, locators, etc. are set, publish the web map to Portal using the Share tab > Save As Web Map button. Then the “Master” web map can be copied and given a new name as many times as needed to support the offline map area requirements.
NOTE: If using ArcGIS Enterprise 10.9.1 or 11.1, don’t include Subtype Group Layers in your web maps. The offline map area creation tool will fail because it doesn’t understand Subtype Group Layers.
Creating the Offline Map Areas
There are two approaches to creating the offline map areas. The Field Maps Designer web application could be used to manually define polygon extents, or the ArcGIS API for Python could be used to programmatically generate the polygon extents based on the operating area polygon layer most utilities already have. In the Field Maps Designer web app, there are tools to draw a rectangle or a polygon to define the extent and then enter information about the area (name, how often to refresh, levels needed in the basemap). For a POC, this approach is quick and easy but doesn’t scale well when moving into production mode. The programmatic approach uses a Python script to automate the creation of offline map areas.
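The script attached to this blog is the complete, configuration-driven version. As a stripped-down sketch of the idea, the snippet below walks an operating-area polygon layer and creates one offline map area per feature with the ArcGIS API for Python; the portal URL, credentials, item ID, layer URL, field names, where clause, and scale values are all placeholders.

```python
from arcgis.gis import GIS
from arcgis.mapping import WebMap
from arcgis.features import FeatureLayer

gis = GIS("https://portal.example.com/portal", "gis_admin", "password")   # placeholder credentials
wm = WebMap(gis.content.get("<web-map-item-id>"))                         # placeholder web map id
operating_areas = FeatureLayer("<url-to-operating-area-polygon-layer>")   # placeholder layer URL

fset = operating_areas.query(where="STATE = 'Virginia'")                  # placeholder where clause
for feature in fset.features:
    # Use the polygon's bounding box as the offline map area extent.
    xs = [pt[0] for ring in feature.geometry["rings"] for pt in ring]
    ys = [pt[1] for ring in feature.geometry["rings"] for pt in ring]
    extent = {"xmin": min(xs), "ymin": min(ys), "xmax": max(xs), "ymax": max(ys),
              "spatialReference": fset.spatial_reference}
    wm.offline_areas.create(
        area=extent,
        item_properties={"title": f"Offline area - {feature.attributes['AREA_NAME']}"},
        min_scale=36_000,   # placeholder scale range controlling the basemap levels packaged
        max_scale=1_200,
    )
```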
Attached to this blog is a script that creates offline map areas using a polygon feature's geometry. The script uses a configuration file that contains your ArcGIS Enterprise credentials, the URL of your polygon layer of operating areas, the web map IDs, and the instructions for creating the offline map areas. In the sample configuration file, two web maps are defined in the offlineAreaConfig section. When the script runs, offline map areas are created for the polygon features that satisfy the "polygonWhereClause". In this sample, Virginia and Colorado each had fewer than 16 polygons; if either state contained more than 16 areas, the where clause would need to be redefined.

Knowing that every utility's data model will be slightly different, multiple entries in the configuration file are used. This gives the script the flexibility to name the offline map areas using the fields in your data and allows the script to run in your Dev, Test, and Production environments without code changes. Another benefit of using the Python script is the log file, which documents when each offline map area was created and how long it took.

Reporting on the Offline Map Areas

After setting up the configuration file and running the Python script, 100 offline map areas in 13 web maps have been created. A simple way to report on these offline map areas is now needed. The log file has good information but wasn't meant to be a report. Another attached Python script generates a CSV file containing details about web maps with offline map areas. The CSV identifies the contents of each offline map area and the file sizes of every vector tile package and SQLite database. Not only is this report useful for knowing what exists in your Portal, it also shows how much data will be downloaded to a mobile device.
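Independent of the attached reporting script, here is a minimal sketch of one way such a CSV can be assembled with the ArcGIS API for Python. The web map IDs and output path are placeholders, and it assumes the packages for each map area can be reached through the "Area2Package" item relationship and that each package item reports its size in bytes.

```python
# Minimal sketch only: web map IDs and the output path are placeholders, and the
# "Area2Package" relationship lookup is an assumption about how the packages for
# each map area are related to the map area item.
import csv

from arcgis.gis import GIS
from arcgis.mapping import WebMap

gis = GIS("https://yourportal.example.com/portal", "your_user", "your_password")
webmap_ids = ["<web map id 1>", "<web map id 2>"]

with open("offline_map_area_report.csv", "w", newline="") as report:
    writer = csv.writer(report)
    writer.writerow(["Web Map", "Offline Map Area", "Package", "Package Type", "Size (MB)"])

    for webmap_id in webmap_ids:
        webmap_item = gis.content.get(webmap_id)
        webmap = WebMap(webmap_item)

        # offline_areas.list() returns the map area items created for this web map.
        for area_item in webmap.offline_areas.list():
            # Each map area item is related to the packages (vector tile package,
            # SQLite geodatabase) that are downloaded to the mobile device.
            for package in area_item.related_items("Area2Package", direction="forward"):
                size_mb = round((package.size or 0) / (1024 * 1024), 2)
                writer.writerow(
                    [webmap_item.title, area_item.title, package.title, package.type, size_mb]
                )
```

A report like this also makes it easy to spot offline map areas whose packages may be too large to download comfortably in the field.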
What needs to occur if I need to change the web map?

Your utility has been in production for a few months now, and the field workers are actively using the offline capabilities. To be more efficient, they are asking for a couple of changes to the web map: a labeling change on a linear feature, a change to the order of attributes on a specific device, and the need to capture a new attribute on a structure. Making those changes to the web map is easy enough, but that won't propagate the change to the offline map areas. Once an offline map area is created, its schema and its corresponding web map are locked down. New web maps need to be created to make these changes available to the field worker. To apply schema or map changes, the workflow would be something like this:

1. In Portal, rename and/or delete the existing web maps with offline map areas.
2. Make the necessary schema changes (for example, add a field).
3. Using ArcGIS Pro, update the "Master" web map in Portal with the labeling and pop-up configuration.
4. Make the necessary copies of the "Master" web map.
5. Re-run the Python script to create the offline map areas.
6. Notify field users that existing offline map areas should be deleted from their devices and that newly created offline map areas are available for download.

As you can see, this Python script not only helps with the initial creation and deployment of offline map areas, it also streamlines the ongoing support and propagation of changes.

Simplifying a GIS Administrator's Job

The recommendations in this blog are the real-world lessons we learned from assisting a large utility customer with deploying offline map areas, from creating the master web map to using the ArcGIS API for Python to automate the creation of the offline map areas. These lessons removed the GIS administrator's cringe of time-consuming manual deployment processes and instead provided an automated process that let our servers keep working while we went home. If you are interested in using the Python scripts described in this blog, they are available on Esri GitHub so that you don't have to start coding from scratch.

About this Blog Series

This is the fourth blog in our series on offline map areas. In future blog articles we will continue to explain decisions an administrator will need to make during deployment. The first blog provided an overview of offline map areas. The second blog provided details on preparing your data for offline usage. The third blog will provide details and options for publishing the selected data repositories. The fifth and final blog will provide details on the deployment and management of offline map areas for a large mobile workforce.

PLEASE NOTE: The postings on this site are our own and don't necessarily represent Esri's position, strategies, or opinions.