POST
I appreciate all of the help you have provided, but I'm not sure how to go about querying for dirty areas. I can't find any objects in the API documentation that would grant this capability. As far as I can tell, there aren't any methods on the VersionManager, Version, or UtilityNetworkManager objects that provide access to any properties related to dirty areas. Would you mind pointing me in the right direction?
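For what it's worth, one pattern that may work here (an assumption on my part, not something I can point to in the VersionManager or UtilityNetworkManager docs): dirty areas are stored in a system sublayer of the utility network feature service, so they can be queried like any other layer. A minimal sketch, where the layer id is a placeholder you would look up in your own service directory:

```python
# Sketch only: the dirty-area layer id is service-specific (check the
# FeatureServer's service directory page); the id used below is a placeholder.
def dirty_area_count_url(feature_server_url: str, layer_id: int) -> str:
    """Build a REST query URL that returns only the count of dirty-area
    features (a non-zero count suggests the topology needs validating)."""
    return (f"{feature_server_url}/{layer_id}/query"
            f"?where=1%3D1&returnCountOnly=true&f=json")

# With the ArcGIS API for Python this could instead look something like:
#   from arcgis.features import FeatureLayer
#   dirty = FeatureLayer(f"{UN_url}/<dirty_layer_id>", gis)
#   count = dirty.query(where="1=1", return_count_only=True)

print(dirty_area_count_url("https://example.com/FeatureServer", 7))
```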
Posted 09-23-2024 09:03 AM

POST
That seemed to work, at least for the "update is connected" and "update subnetwork" methods. The "validate topology" method still produced an error, but it's unclear to me whether that is expected when no recent edits have been made and no dirty areas are present in the UN: Exception: A dirty area is not present within the validate network topology input extent. A validate network topology process did not occur. (Error Code: 0)
Posted 09-23-2024 08:37 AM

POST
Unfortunately, no. I get a different error and am unable to start an edit session using the start_editing() method (I can start a read session, though). Maybe it's something with my UN configuration or how I am trying to instantiate the VersionManager object: Exception: The operation is not supported by this implementation. (Error Code: 0)
Posted 09-23-2024 07:42 AM

POST
Unfortunately, I tried something similar when troubleshooting earlier and was still unsuccessful. In both my earlier attempts and with the code block you provided, I received the following error: Exception: Unable to resume a service session. (Error Code: 0)
Posted 09-23-2024 07:22 AM

POST
I think my current issue is that I am unable to instantiate the Version object using the UN information provided earlier in the script. The script creates the Version object without complaint, but as soon as I access it I get the following error: "AttributeError: 'PropertyMap' instance has no attribute 'versionName'". I receive the same error when trying to access the utility property of the Version object.
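A side note that may help while debugging: that AttributeError suggests the Version was built against a URL whose property map simply lacks a versionName key (PropertyMap raises AttributeError for missing keys, but supports conversion with dict()). A defensive accessor (a generic sketch, not arcgis-specific) avoids the hard crash and shows which keys are actually present:

```python
def get_prop(props, name, default=None):
    """Safely read a key from a PropertyMap-like mapping.

    arcgis PropertyMap objects can be converted with dict(), whereas
    attribute access raises AttributeError for missing keys.
    """
    try:
        return dict(props).get(name, default)
    except (TypeError, ValueError):
        return default

# Stand-in for version.properties while debugging:
props = {"versionGuid": "{ABC}"}
print(get_prop(props, "versionName", "<missing>"))  # key absent -> <missing>
print(sorted(dict(props).keys()))                   # see what IS there
```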
Posted 09-23-2024 06:58 AM

POST
I am trying to automate some of the UN admin tasks using the Python API, but I am struggling a bit with this task, namely creating the UN object (and hitting the default version). Have either of @jclarke or @RobertKrisher had any success in developing this script? Below is what I have so far.

import arcgis
from arcgis.gis import GIS

gis = GIS('https://myorg.gov/portal/', 'username', 'password')

# utility network URLs
util_url = 'https://myorg.gov/un_fabric/rest/services/Utility_Network/Water_Distribution_Source/UtilityNetworkServer'
network_url = 'https://myorg.gov/un_fabric/rest/services/Utility_Network/Water_Distribution_Source/FeatureServer/9'
UN_url = 'https://myorg.gov/un_fabric/rest/services/Utility_Network/Water_Distribution_Source/FeatureServer'
UN_vers = 'https://myorg.gov/un_fabric/rest/services/Utility_Network/Water_Distribution_Source/VersionManagementServer'

# --------------------------------------------------------
UN_flc = arcgis.features.layer.FeatureLayerCollection(UN_url, gis)
vms_from_flc = UN_flc.versions
default_version = vms_from_flc.all[0]
default_guid = default_version.properties.versionGuid
# --------------------------------------------------------

# returns error "AttributeError: 'PropertyMap' instance has no attribute 'versionName'"
df_version = arcgis.features._version.Version(url=UN_url, flc=UN_flc, gis=gis, session_guid=default_guid)
UN = arcgis.features._utility.UtilityNetworkManager(url=util_url, version=default_version, gis=gis)

envelope = {
    "xmin": 1018472.222250,
    "ymin": 1819952.386750,
    "xmax": 1105416.666750,
    "ymax": 1940603.168750,
    "spatialReference": {
        "wkid": 3435,
        "latestWkid": 102671
    }
}

# returns error "Exception: Unable to resume a service session."
UN.validate_topology(envelope)
# returns error "Exception: Unable to resume a service session."
UN.update_subnetwork('Water', 'Water System', all_subnetwork_tier=True, continue_on_failure=True)
UN.update_is_connected()
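One small piece of this that can be checked locally, before touching the service at all: validate_topology expects an envelope dict, and swapped min/max values or a wrong spatial reference tend to fail in unhelpful ways. A tiny helper (just a sketch, independent of arcgis) to build and sanity-check the extent:

```python
def make_envelope(xmin, ymin, xmax, ymax, wkid):
    """Build a validate-extent envelope dict, failing fast on inverted
    or degenerate bounds instead of at the service call."""
    if xmin >= xmax or ymin >= ymax:
        raise ValueError("envelope bounds are inverted or degenerate")
    return {
        "xmin": xmin, "ymin": ymin,
        "xmax": xmax, "ymax": ymax,
        "spatialReference": {"wkid": wkid},
    }

env = make_envelope(1018472.22225, 1819952.38675,
                    1105416.66675, 1940603.16875, 3435)
# env would then be what gets passed to UN.validate_topology(...)
```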
Posted 09-23-2024 06:44 AM

POST
That is an interesting point and something I also thought about (re: the rule running server-side, so the result is a delay in the execution of the rule). I haven't been able to find anything in Esri's documentation or other GeoNet posts describing similar workflows (using a non-hosted feature service) with the same issues. I tried to build the logic in the Form designer and the Field Maps designer, but in both cases I was unable to get an Arcade expression to work correctly: the expression either errored out or would not execute at all and returned no value. After a bit more digging, I found another GeoNet post attempting what I was looking to implement. That post referenced Esri's GitHub page of example Arcade expressions, one of which was exactly what I was trying to do (https://github.com/Esri/arcade-expressions/blob/master/attribute_rule_calculation/UpdateParentFeature.md). After some testing, the Arcade expression at that URL works perfectly and does not cause any issues or delays in posting data.
Posted 08-30-2024 08:37 AM

POST
I currently have a published (non-hosted) feature service, connected to my enterprise geodatabase, that is used to manage tree inventory data; it is published with ArcGIS Server 10.9.1 and the service is added to ArcGIS Online. The service has three layers: one point feature class, one non-spatial related table, and one attachments table. The related table is a one-to-many relationship, where one point (tree) can have many records (tree maintenance records). I have an attribute rule where the latest (last added) maintenance record's "condition" field is used to populate a "condition" field in the parent point feature class. This rule works fine in ArcGIS Pro, and its results appear almost immediately. The issue arises when testing the workflow in ArcGIS Field Maps: when adding a point and then a maintenance record (or just adding a maintenance record to an existing point), the maintenance record's attributes are not saved/submitted on the first try. Only on the second attempt are the attributes submitted and the attribute rule executed. It's unclear to me whether this is an attribute rule issue, a feature service issue, a data issue, or just a general bug in my version of Field Maps.
Posted 08-29-2024 01:10 PM

POST
Thanks for the information. I was struggling to find anything in Esri's documentation about how attribute rules are affected by Python data inserts.
Posted 08-29-2024 01:01 PM

POST
Looking to get some insight into how attribute rules (specifically calculation rules) are affected by the use of Python. How are attribute rules treated when new records are added to a table (or feature class) with Python? We have a fair number of processes that truncate and append records from one table to another on a nightly basis, and I was wondering whether a calculation rule would be honored when inserting records with the Append GP tool through Python. Would this be a case for setting the attribute rule to batch execution only, to be evaluated some time after the Python script runs?
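To make the distinction I'm asking about concrete: as I understand it (worth verifying against the docs), an immediate calculation rule fires during the insert itself, while a rule flagged for batch evaluation is only marked and must be run later. A small sketch of filtering rule descriptors to find the ones that would need a follow-up evaluation pass; the dict keys here are my own stand-ins, not a documented schema:

```python
def batch_calc_rules(rules):
    """Return names of calculation rules flagged for batch evaluation.

    `rules` is a list of plain dicts standing in for attribute-rule
    descriptors; the 'type' and 'batch' keys are assumed names, not a
    real arcpy/REST schema.
    """
    return [r["name"] for r in rules
            if r.get("type") == "calculation" and r.get("batch")]

rules = [
    {"name": "SetCondition",  "type": "calculation", "batch": False},
    {"name": "NightlyTotals", "type": "calculation", "batch": True},
    {"name": "NoNullSpecies", "type": "constraint",  "batch": False},
]
print(batch_calc_rules(rules))  # ['NightlyTotals']
```

A nightly script would then run its truncate/append steps first and evaluate the batch-flagged rules afterwards as a separate step.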
Posted 08-28-2024 07:52 AM

POST
In case anyone else is having this same issue, I believe I found a solution by using the "Control Event Volume" tool in a separate Real-Time Analytic (RTA) and ingesting its output into the main RTA. Below is the generic workflow I employed for my own project:

RTA "Get First Value":
- Input --> live feed (AVL data from snow plows)
- Control Event Volume --> set to run as long as the RTA is running, with "Max Events Per Interval" equal to 1
- Output --> feature layer "First Value"

RTA "Ingest First Value":
- Input --> live feed (AVL data from snow plows) and feature layer "First Value"
- Join the live feed to "First Value" on TrackID
- Summarize the joined "First Value" fields by MAX
- Calculate new running-total fields: "Material Value" - "First Value" = "Material Total"
- Output --> feature layer "Tracks"
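The subtraction step above can be sketched outside the RTA to make the intent concrete: per TrackID, keep the first observed odometer-style value and report every later reading relative to it, so each event starts at zero. The field names are from my project, not anything the platform requires:

```python
def material_totals(events):
    """Given (track_id, material_value) events in arrival order, return
    (track_id, material_total) pairs, where each total is the reading
    minus the first reading seen for that track (zeroed at event start)."""
    first = {}   # track_id -> first observed value
    out = []
    for track_id, value in events:
        first.setdefault(track_id, value)
        out.append((track_id, value - first[track_id]))
    return out

events = [("plow1", 250.0), ("plow2", 40.0), ("plow1", 265.0), ("plow2", 55.0)]
print(material_totals(events))
# [('plow1', 0.0), ('plow2', 0.0), ('plow1', 15.0), ('plow2', 15.0)]
```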
Posted 01-17-2024 08:54 AM

POST
Hi @DanielFox1, thanks for this information. Regarding the "Parse Dates" toggle, is there a way to parse on weeks? I have an end user trying to visualize data that is recorded on a weekly basis, where the date format is Year-Week.
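In case it helps while waiting for an answer: if the toggle can't parse weeks, the Year-Week strings can be pre-converted to real dates before loading, using Python's ISO-week format codes (%G/%V/%u, standard library only). The "2023-50" shape here is my guess at what the field looks like:

```python
from datetime import datetime, date

def year_week_to_date(s: str) -> date:
    """Convert an ISO 'YYYY-WW' string to the Monday starting that week.

    %G is the ISO year, %V the ISO week number, and %u the ISO weekday
    (1 = Monday), so appending '-1' pins the result to the week's start.
    """
    return datetime.strptime(f"{s}-1", "%G-%V-%u").date()

print(year_week_to_date("2023-50"))  # 2023-12-11
```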
Posted 12-15-2023 09:12 AM

POST
Yes, I would agree that 7 days may be a little extreme. I initially set it to 7 days as more of a fail-safe, to always capture the starting record regardless of how long the event typically lasts. I may change it back to perhaps 1-2 days to see if that changes anything. Regarding the reset of values, I agree that it may not be the "Detect Incidents" tool that's resetting the starting values. It may be how I have the subsequent Winter Fleet Tracking RTA set up to ingest the "First Record" stream or feature layer, as that is where the summation calculation is completed and where I noticed the zero value being "reset". It was unclear to me what type of input I should use in the Winter Fleet Tracking RTA for the "First Record" data (stream layer or feature layer). Do you have any suggestions? My main goal in this post was to find out whether other users were having similar issues with data streams from their AVL systems not being "zeroed out" at the beginning of each "event", and whether anyone had set up a similar workflow to record the "First Record".
Posted 12-06-2023 11:36 AM

POST
Jeff, thanks for the quick response. I forgot to mention in my original post that the "Target Time Window" of 10 minutes is actually an adjustment I made after the first snow event / live test. The original "Target Time Window" was set to 7 days so that the auto close/end condition wouldn't be met until after the event ended. Even with that much longer window, I found that values were being reset after what seemed to be a few hours. I also suspect that a long "Target Time Window" may hurt RTA performance because of the stateful configuration settings and too many features being held in memory, although this is only an assumption. According to the Esri documentation (https://doc.arcgis.com/en/iot/analyze/perform-real-time-analysis.htm), the "Target Time Window" does affect how many features are stored in memory and used in the comparison process, but I couldn't find any information about when features are purged or whether they get set back to their original state (in the case of Detect Incidents, whether the first feature following a purge would have a status of "Started").
Posted 12-06-2023 09:31 AM
Title | Kudos | Posted
---|---|---
 | 1 | 08-30-2024 08:37 AM
 | 2 | 12-12-2022 06:14 AM
 | 2 | 12-02-2022 12:49 PM
 | 1 | 05-05-2022 08:41 AM
 | 1 | 08-10-2021 11:44 AM
Online Status: Offline
Date Last Visited: 10-01-2024 02:38 PM