POST
I also second the methods @abureaux mentioned, and I will add this: it is extremely helpful to include the Flow Run URL in the email you send yourself, so you can one-click jump to that flow run's history and see what failed. The method to get this URL is thankfully very simple. Just put this in a Compose action or a variable, or plug the expression into your email action directly:

concat('https://make.gov.powerautomate.us/environments/', workflow().tags.environmentName, '/flows/', workflow().name, '/runs/', workflow().run.name)

Replace 'https://make.gov.powerautomate.us' with whatever your own Power Automate domain is. That will be the direct URL to review that specific flow run.

FYI, this is the raw output of the workflow() expression. The concat() expression above just grabs the relevant bits to assemble the flow run URL correctly after your Power Automate domain:

{
  "id": "/subscriptions/cc97105b-731d-474d-b89d-251e6750824e/resourceGroups/05B5E599E0E84C08A2129FA883E62107-75C096716CAFE86D98E4DB2EC4DC0B17-ENV/providers/Microsoft.Logic/workflows/af484700-3606-4385-8c8c-c1fe190fdf2c",
  "name": "af484700-3606-4385-8c8c-c1fe190fdf2c",
  "type": "Microsoft.Logic/workflows",
  "location": "usgovtexas",
  "tags": {
    "flowDisplayName": "test: arcgis rest API",
    "capabilities": "Premium",
    "environmentName": "75c09671-6caf-e86d-98e4-db2ec4dc0b17",
    "logicAppName": "af484700-3606-4385-8c8c-c1fe190fdf2c",
    "environmentFlowSuspensionReason": "75c09671:2D6caf:2De86d:2D98e4:2Ddb2ec4dc0b17-None",
    "state": "Enabled",
    "createdTime": "7/3/2025 6:38:25 PM",
    "lastModifiedTime": "7/24/2025 6:23:00 PM",
    "createdBy": "6e1af543-02dd-4eff-979a-c0184fa8e143",
    "triggerType": "Instant"
  },
  "run": {
    "id": "/subscriptions/cc97105b-731d-474d-b89d-251e6750824e/resourceGroups/05B5E599E0E84C08A2129FA883E62107-75C096716CAFE86D98E4DB2EC4DC0B17-ENV/providers/Microsoft.Logic/workflows/af484700-3606-4385-8c8c-c1fe190fdf2c/runs/08584482255021274964740499500CU62",
    "name": "08584482255021274964740499500CU62",
    "type": "Microsoft.Logic/workflows/runs"
  }
}

The concat() expression creates this URL:

"https://make.gov.powerautomate.us/environments/75c09671-6caf-e86d-98e4-db2ec4dc0b17/flows/af484700-3606-4385-8c8c-c1fe190fdf2c/runs/08584482255021274964740499500CU62"
07-24-2025 11:15 AM | Kudos: 0 | Replies: 2 | Views: 471

POST
I have found the ArcGIS connectors unreliable -- lots of authentication errors in the middle of hundreds of other successful runs. (This is a separate issue from the 2-week re-authentication issue.) I have changed my ArcGIS connector actions to HTTP GET or POST actions against the ArcGIS REST API instead. I'm not sure if this is possible for your workflows, but I found it to be more reliable. This relies on getting an AGOL token first to use in the API call. The query output body has a "features" array that contains one or more features (if your query's where clause had a match).

The flow steps:
- Get AGOL token child flow. (I use the Solution feature of Power Automate so I can run child flows and have a migration pipeline from a development/test environment to production. Let me know if you're curious about that.) This is all the child flow is doing -- another API call to get the AGOL token.
- The actual query API call.
- Compose the attributes of the first result. Or you could use an Apply to each action and do something with each feature if you expect to return more than one.
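For anyone prototyping this outside Power Automate, here is a rough Python sketch of the same two REST calls (generateToken, then the layer query). The portal URL defaults to public AGOL and the credentials/layer URL are placeholders, not my actual setup:

```python
import json
import urllib.request
from urllib.parse import urlencode

PORTAL = "https://www.arcgis.com"  # assumption: public AGOL; swap in your org's portal URL

def token_request(username: str, password: str, portal: str = PORTAL):
    """Build the generateToken call (this is all the child flow does)."""
    url = f"{portal}/sharing/rest/generateToken"
    body = {"username": username, "password": password,
            "referer": portal, "f": "json", "expiration": 60}
    return url, body

def query_url(layer_url: str, where: str, token: str, out_fields: str = "*") -> str:
    """Build the GET URL for the actual query call against a hosted feature layer."""
    params = {"where": where, "outFields": out_fields, "f": "json", "token": token}
    return f"{layer_url}/query?{urlencode(params)}"

def post_json(url: str, body: dict) -> dict:
    """Send a form-encoded POST and decode the JSON response (used for the token call)."""
    data = urlencode(body).encode()
    with urllib.request.urlopen(url, data=data) as resp:
        return json.load(resp)
```

The token from the first call goes into the `token` parameter of the second; the query response body then has the "features" array described above.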
07-24-2025 11:06 AM | Kudos: 0 | Replies: 3 | Views: 1918

POST
If you enable the "include survey info" option for the webhook, the outputs of the automated webhook trigger should include most of the attributes, but NOT the standard AGOL hosted feature layer CreationDate and the other "editor tracking" fields. However, you could use the time the webhook was received as the created time, although it may technically be off by 1-3 seconds for the time it took the webhook to reach Power Automate.

If you use Survey123 Connect, you can specify in the XLSForm that the survey should include the built-in "start" and "end" question types. You can name these fields to be whatever attribute name you want them to be in the hosted feature layer. These are separate from CreationDate, although the survey "end" datetime (i.e., when the user hit the submit button) should usually be basically the same as the CreationDate time. They do not show up to the user.

The way I do it: I use the webhook as the trigger, but then use the objectid from the survey to query the hosted feature layer for that objectid so I get the full set of attributes, including CreationDate and EditDate. This would be necessary later on anyway if you have a custom field to store an 'edited' datetime instead of using the native EditDate editor tracking field. Let me know if any of that doesn't make sense; I can try to explain further.
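To illustrate the objectid round-trip in code form, a hypothetical Python sketch (the helper names are mine; the example objectid is from elsewhere in this thread, and AGOL date fields being epoch milliseconds is the key detail):

```python
from datetime import datetime, timezone

def objectid_where(oid: int) -> str:
    """where clause to re-query the hosted feature layer for one submission."""
    return f"objectid = {oid}"

def agol_ms_to_datetime(ms: int) -> datetime:
    """AGOL date fields (e.g. CreationDate, EditDate) are epoch milliseconds; convert to aware UTC datetime."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

print(objectid_where(48847))           # example objectid from this thread
print(agol_ms_to_datetime(1753380000000))
```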
07-24-2025 10:07 AM | Kudos: 0 | Replies: 0 | Views: 442

POST
There's a method to do this with Python and an UpdateCursor, since you can read/write the special SHAPE@ geometry 'field' like you would other attribute fields. You will need a unique ID or other value to positively match features from one feature class to the other to transfer the SHAPE. Do you have such an attribute in each feature class? The Python workflow would be something like this:

import arcpy

# Define the paths to your feature classes
source_fc = "path_to_source_feature_class"
target_fc = "path_to_target_feature_class"

# Define the fields
unique_id_field = "your_unique_id_field"  # This is the common unique ID field
geometry_field = "SHAPE@"  # For accessing geometries

# Create a dictionary from the source feature class
geometry_dict = {}

# Use a SearchCursor to populate the dictionary with geometries from the source_fc
with arcpy.da.SearchCursor(source_fc, [unique_id_field, geometry_field]) as search_cursor:
    for row in search_cursor:
        unique_id = row[0]
        geometry = row[1]
        geometry_dict[unique_id] = geometry

# Use an UpdateCursor to update the geometries in the target_fc
with arcpy.da.UpdateCursor(target_fc, [unique_id_field, geometry_field]) as update_cursor:
    for row in update_cursor:
        unique_id = row[0]
        if unique_id in geometry_dict:
            # Update the geometry in the target_fc if the unique ID matches
            row[1] = geometry_dict[unique_id]
            update_cursor.updateRow(row)
07-22-2025 07:31 AM | Kudos: 1 | Replies: 1 | Views: 1897

POST
I'm not sure if this is relevant, but AGOL stores dates as epoch milliseconds. I noticed your math is using 7 days in seconds, 604800. I wonder if you would need to add three zeroes to that (604800000); otherwise you are only subtracting about 0.007 days.
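Just to make the scale concrete, a quick arithmetic sanity check (plain math, not anyone's actual flow logic):

```python
SECONDS_PER_WEEK = 7 * 24 * 60 * 60    # 604800 -- the value used in the original math
MS_PER_WEEK = SECONDS_PER_WEEK * 1000  # 604800000 -- what epoch-millisecond math needs

# Subtracting only 604800 from an epoch-millisecond timestamp moves it
# back by 604.8 seconds, i.e. about 0.007 days instead of 7 days.
MS_PER_DAY = 24 * 60 * 60 * 1000
print(SECONDS_PER_WEEK / MS_PER_DAY)   # fraction of a day actually subtracted
```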
07-22-2025 06:42 AM | Kudos: 0 | Replies: 0 | Views: 512

POST
I think we need more information. "I added a jpeg of a building plan to the satellite imagery, and I published the map" is too vague, I think -- can you add more detail about your process?
07-18-2025 09:51 AM | Kudos: 0 | Replies: 0 | Views: 254

IDEA
I wonder if you might want to use the REST API with an HTTP connector (premium license required) using the GET method. Here's an example of how I query an AGOL hosted feature layer. The REST_url is just the Feature Service or Hosted Feature Layer REST endpoint, including the /0 on the end to specify the relevant layer id within your service.

The output JSON 'body' of this action will have a 'features' value, which is an array. Each item in this array is the JSON object for a feature that matches your where query parameters; each feature item includes an 'attributes' JSON object and a 'geometry' JSON object. In my case I'm just getting the attributes for a single, specific objectid, so my 'features' array has only 1 item that I need to parse for further processing. But your where query could include both the objectid(s) from the webhook trigger AND whether it matches your Needs_GISupdating = 'Yes' requirement. If there are multiple features, you can create an Apply to each loop to process each feature or send an email for each feature. Or you can parse each item and append it to an array or something if you want to send one email with information about all of the relevant features at once. I can try to assist more if you want to go this route to query your data.

"features": [
  {
    "attributes": {
      "objectid": 48847,
      "otherattributes": "yada yada"
    },
    "geometry": {
      "x": -8771208.883218553,
      "y": 4241626.251971537
    }
  },
  {
    "attributes": {
      "objectid": 48849,
      "otherattributes": "blah blah"
    }
  }
  ... and so on
]
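If it helps to prototype the parsing side outside of Power Automate, here is a small Python sketch that pulls the 'attributes' out of a response shaped like the sample above (the attribute names and values are copied from that sample; the helper name is mine):

```python
sample_body = {
    "features": [
        {"attributes": {"objectid": 48847, "otherattributes": "yada yada"},
         "geometry": {"x": -8771208.883218553, "y": 4241626.251971537}},
        {"attributes": {"objectid": 48849, "otherattributes": "blah blah"}},
    ]
}

def feature_attributes(body: dict) -> list:
    """Return the 'attributes' dict for each feature that matched the where clause."""
    return [f["attributes"] for f in body.get("features", [])]

for attrs in feature_attributes(sample_body):
    print(attrs["objectid"])  # the Apply to each step would process each of these
```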
07-18-2025 09:08 AM | Kudos: 0 | Replies: 0 | Views: 599

POST
If I'm not mistaken (i.e., in my testing/experience in ArcGIS Pro 3.2), the Field Calculate tool is a kind of 'brute force edit' that will not validate values against a domain before applying updates, for both gdb and AGOL sources.

EDIT: I see now that Field Calculate in ArcGIS Pro has an 'Enforce Domains' parameter you can check. That seems to work for me to prevent writing any value that is not in the domain code list. Trying to write the corresponding 'description' value will not work; it seems to only allow the code (case sensitive).
07-17-2025 08:38 AM | Kudos: 1 | Replies: 0 | Views: 1887

POST
I can confirm this is something I do. I use Survey123 Connect. I have 3 surveys for different 'audiences' that technically point to the same hosted feature layer. The hosted feature layer has at least 1 field with a domain/list of code/description values. One of my survey versions has different 'description' values set, and it does show a warning when republishing that these do not match the domain/list schema of the layer, but it can be ignored since the code values are the same; it essentially just provides a different description visually for the form user.
07-17-2025 05:50 AM | Kudos: 2 | Replies: 0 | Views: 1907

POST
Thanks for the secondary datapoint; I am planning on the same strategy as you. Our survey has a hidden field 'Status' that is populated with 'Submitted' by default; this is changed to 'Received' if my flow runs successfully, and 'Error' if the flow fails for some reason. So I think I can just check: if any record is still 'Submitted' and the 'survey_completed_datetime' field (after converting epoch millisecond time to UTC) is more than X minutes from utcNow(), then I can consider it an orphan 😞 that needs to be processed. I sure do wish AGOL had some kind of logging. I wonder if they do on the backend that we don't have access to.
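Roughly, that orphan check looks like this in Python form (the 'Submitted' status value and the epoch-millisecond field come from my description above; the 30-minute threshold and the function name are arbitrary placeholders):

```python
from datetime import datetime, timedelta, timezone

def is_orphan(status, completed_ms, threshold_minutes=30, now=None):
    """A record still 'Submitted' well after its completion time likely never triggered the flow."""
    now = now or datetime.now(timezone.utc)
    # survey_completed_datetime is stored as epoch milliseconds; convert to UTC
    completed = datetime.fromtimestamp(completed_ms / 1000, tz=timezone.utc)
    return status == "Submitted" and now - completed > timedelta(minutes=threshold_minutes)
```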
07-15-2025 12:12 PM | Kudos: 1 | Replies: 0 | Views: 929

POST
In our case the residents of the Town are only 'provided' access to the survey form through an embedded iframe on our Town website, so I'm not sure if there is really any visible mechanism for them to open the survey in the Survey123 app unless they somehow yoinked the url for the survey from the HTML source or browser devtools. We don't provide any link "to open this in a separate page" or anything like that. So I feel like this is not likely our issue, but I can't guarantee it.
07-15-2025 06:30 AM | Kudos: 0 | Replies: 0 | Views: 953

POST
I don't think any of that is relevant to me, unfortunately: the webhooks are firing successfully about 50 times each day, when a new form is submitted. It seems that one or two simply don't fire off the webhook for some reason. To follow up on your first sentence, actually: if a user uses the Survey123 app, the client sends the webhook, rather than AGOL? If using a browser, is the browser client also sending the webhook payload? I assumed this would be coming from AGOL and not the clients. Actually, it must be: the JSON includes the objectid, which isn't created until the submission hits the AGOL server and registers a new record in the hosted feature layer.
07-14-2025 06:55 PM | Kudos: 0 | Replies: 2 | Views: 971

POST
I have a handful of Survey123 forms (AGOL hosted) that have webhooks registered to deliver a payload to a Power Automate cloud flow trigger. 95% or more of the time everything works fine. But I'm noticing this week that there are many that simply don't fire the webhook payload to Power Automate. For example:
- a form submitted at 1:32 PM triggered correctly
- a form submitted at 1:36 PM: no record of any trigger
- a form submitted at 1:37 PM triggered correctly

There have been 6-7 of these instances in the past couple days. I hate that I have to baby this thing and make sure stuff is coming through. The Survey123 forms are for our Town residents to request services; I process them in Power Automate to ingest them into our Town's work order management system, and the customer gets a confirmation email, etc. Is anyone else experiencing something similar now or in the past? I started a ticket with ESRI just now but wanted to reach out to the community as well. Thanks in advance!

P.S. I actually have two identical webhook payloads for each form: one goes to our Testing environment flow (which doesn't follow through on processing requests), one to our Production environment flow. Neither one triggers in these instances.
07-14-2025 11:26 AM | Kudos: 1 | Replies: 6 | Views: 1016

POST
from arcgis.features import FeatureLayer

lyr = FeatureLayer(url=layer_url, gis=gis)  # establish a FeatureLayer object from AGOL source
attachment_manager = lyr.attachments
attachment_list = attachment_manager.get_list(objectid)  # this is a list; each item is a small dict of attachment attributes
if attachment_list:  # ignores a blank (empty) list
    for attachment in attachment_list:
        # download attachment
        attachment_path = attachment_manager.download(objectid, attachment["id"])  # this returns a list, but it is a single string (path)

I've tried to boil my code down to the above. I am experiencing an issue where, for example, a Survey123 form submission (saved to an AGOL hosted feature layer) has 4 attachments and it is taking > 15 minutes to download all of the photos. Each photo is about 9 MB; each one takes > 3 minutes to download. This seems extremely slow. I haven't tried @sakurai's workaround, but I just wanted to put in my experience about the slow download behavior.
03-06-2025 06:53 AM | Kudos: 1 | Replies: 1 | Views: 1845

DOC
@JordanCarmona I still agree, and I'll say it again since ESRI still hasn't provided a satisfactory response on this issue. The best I got was a suggestion to write custom command line scripts to parse the update tool's -c text output.

The -i flag is for installing and is an optional argument, as the referenced documentation mentions. If you only intend to check for available patches, you would not specify that flag. The -c flag is for running the tool in console mode to write output to the terminal. You would need to parse the output with a Batch or PowerShell script and then send an email based on the logic you desire. The other flag of note is -o, which has a default value of 'never', meaning it would never delete installed patches after they were downloaded and applied to the machine. You could also provide 'always' to clean up afterwards and conserve disk space.

I feel like the need to write custom code to parse the unstructured text response from the command line tool for each Enterprise machine is a needlessly complicated burden that ESRI is putting on the user to create and maintain, when just signing up for email notifications seems like such an obvious and normal solution to reach end users for security (and other) patches. This is surely not reinventing the wheel. ESRI previously had email blasts going out based on check-boxes about which ESRI components you are interested in, in order to receive updates via email (https://go.esri.com/emailPreference[...]). I don't think this URL works anymore. That may have been a recent change. Here's what it used to look like last year.

If users writing custom code is the only sure-fire solution that ESRI can offer (the mobile app is definitely not a proper solution), then we might as well write code that directly checks the whole structured .json for changes for all of our desired components at once. Why doesn't ESRI just write a simple tool that does that and provide it to everyone? Are these decisions just geared to forcefully funnel people to the app, despite all of its known limitations?
01-23-2025 05:43 AM | Kudos: 0 | Replies: 0 | Views: 14477
| Title | Kudos | Posted |
|---|---|---|
| | 1 | 03-28-2026 08:15 PM |
| | 1 | 03-08-2026 12:16 PM |
| | 1 | 07-22-2025 07:31 AM |
| | 1 | 12-02-2025 03:04 PM |
| | 1 | 11-19-2025 05:45 AM |