IDEA
Hi @KevinWyckoff1, thank you for the idea and for taking the time to submit it. Another approach for now is to use the Python API to create the task for the data pipeline. This provides more granularity by supporting cron expressions. Once created, you will be able to use the task like other tasks in the Data Pipelines web app: you can view it, stop and start it, review the results, and so on. The interval properties just may not populate correctly if you try to edit them. Here's a blog post with more information. Also, here's an example Python script. Note that the cron expression is in Coordinated Universal Time (UTC), so you may need to adjust the days and times for your time zone. For example, the East Coast is currently 4 hours behind UTC, so 8am-4pm Eastern would be hours 12-20.

from arcgis.gis import GIS

gis = GIS("home")  # ArcGIS Online Notebook initialization
pipeline = gis.content.get("{data pipeline item ID}")
gis.users.me.tasks.create(pipeline, "*/15 8-16 ? * 1-4", "RunDataPipeline", title="Daily workday updates")

# Cron expression, field by field:
# */15 = every 15 minutes
# 8-16 = 8am to 4pm UTC
# ?    = every day of the month
# *    = every month
# 1-4  = Monday (1) to Thursday (4)

2 weeks ago | 0 | 0 | 108

POST

Hi @michelle-maps, the apostrophe being removed is a bug; thank you for taking the time to report it! And I'm glad you found a workaround, thanks for the help @VenkataKondepati. We have an internal issue tracking this, but if you'd like to create a Support case as well, that would be helpful. The case will generate an official bug that others can reference and use to track the status. Here's the Support contact page.

3 weeks ago | 0 | 0 | 58

POST

Hi, thanks for posting! This should work, so it may be a bug. Would you mind opening a case with Support so they can investigate the data pipeline further? Here is their contact page. I did try to reproduce the issue, and it works as expected for me: the attached screenshot filters US states first to those that start with "M", then to those in the Northeast, and it shows "Maine" and "Massachusetts". I also tried using the extent of another dataset. Here's the full data pipeline in case it's helpful; maybe there's a difference between our input datasets or tool configurations.

3 weeks ago | 0 | 0 | 80

POST

Hi @Amanda__Huber, I'd recommend opening a tech support case here to investigate the issue. In general, the error implies that the feature layer queries made by Data Pipelines are failing. Given that the layer works in other apps, my hunch is that the layer's server does not have enough resources to respond to the queries, especially if the layer has many features with complex geometries. This may not happen in other apps because they only need to query subsets of the data: for example, Map Viewer has optimizations to query only the data for the current map extent, with features simplified for the current scale, while Data Pipelines processes the entire layer with full-resolution geometries. That said, it'd be good to work with Support to troubleshoot the specifics. I would also try adding a tool like Select fields: if the fields show up in the form, that verifies Data Pipelines can at least access the layer and that the problem is in the underlying feature queries.
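
If it's helpful while the case gets going, here's a minimal sketch for testing the feature queries directly with the ArcGIS API for Python; the layer URL is a placeholder, and this is just the kind of diagnostic Support may run, not an official one:

from arcgis.gis import GIS
from arcgis.features import FeatureLayer

gis = GIS("home")
layer = FeatureLayer("{feature layer URL}", gis=gis)  # placeholder: your layer's REST URL

# A count-only query is cheap; if it succeeds but larger queries fail,
# that points toward server resources rather than access or permissions
print(layer.query(where="1=1", return_count_only=True))

# Request a small page of full-resolution features, similar to what
# Data Pipelines issues (but over the whole layer)
page = layer.query(where="1=1", result_record_count=100, return_all_records=False)
print(len(page.features))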

07-09-2025 02:10 PM | 1 | 1 | 225

POST

Hi @IB3, from the OpenAPI specification in the documentation you linked, it looks like the API key needs to be included in a header named "X-Api-Key". You can do this by following the documentation here and specifying the following parameters for the service connection:

Authentication type: API key
Parameter location: Header
Parameter name: X-Api-Key

Then for the geometries, do you know what the data schema is, or could you provide an example? It looks like the response format is JSON, and I'm wondering what specific fields and data types that JSON contains. You can get the full schema by adding the URL input, clicking "Preview", then "Schema", then "Expand all". There's a clip of this in the release blog under the "Enhanced schema preview" section.
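
If you want to sanity-check the key and header name outside of Data Pipelines first, here's a minimal Python sketch using the requests library; the URL and key are placeholders for your service's values:

import requests

url = "{service endpoint URL}"  # placeholder for the endpoint in the documentation you linked
headers = {"X-Api-Key": "{your API key}"}  # the key goes in a header, not in the URL

response = requests.get(url, headers=headers)
response.raise_for_status()  # a 401/403 here suggests the key or header name is wrong
print(response.json())  # inspect the fields and data types in the JSON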

07-08-2025 09:49 PM | 0 | 0 | 106

POST

Hi @BlakeMorrison, one-to-one joins create summarized values for all "Join" dataset records that match a single "Target" dataset record, so they don't return the "Join" attribute values themselves. By default, the only summary statistic is the count, but more can be added with the "Summary fields" parameter. One-to-many joins, on the other hand, create a new record for each matching "Join" dataset record, but they don't summarize, so they don't include the count. A couple of ideas to accomplish what you described:

For the zero-match case, use "One to many" and "Left join", then use the Filter by attribute tool to filter to the records where the joined fields are empty (no match) or where they are not empty (match).
For the generic case, add a "One to one" join to create the count, then add a "One to many" join with the original dataset and the "One to one" output as inputs. This should add the count back to the original records (see the pandas sketch below).

Let us know if this helps, and thank you for the question!
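
To make the second idea concrete, here's a rough pandas sketch with made-up data; the Data Pipelines tools differ in the details, so treat it as an illustration of the logic only:

import pandas as pd

# Hypothetical stand-ins for the "Target" and "Join" datasets
target = pd.DataFrame({"id": [1, 2, 3], "name": ["A", "B", "C"]})
join = pd.DataFrame({"target_id": [1, 1, 2], "value": [10, 20, 30]})

# "One to one" equivalent: one summarized count per matched target record
counts = join.groupby("target_id").size().rename("match_count").reset_index()

# "One to many" + left join equivalent: attach the count back to every original record
result = target.merge(counts, left_on="id", right_on="target_id", how="left")
result["match_count"] = result["match_count"].fillna(0).astype(int)

# Filter-by-attribute equivalent: isolate the records with zero matches
print(result[result["match_count"] == 0])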

03-18-2025 05:41 PM | 0 | 0 | 533

IDEA

Hi @RyanKelley_NA, we're still looking at this and have made some foundational improvements, but we don't have a timeline yet. What tools and processing are you using in your data pipelines? The trickiest part is defining whether and how attachments should propagate through integration tools such as Join and Merge, and there is some nuance for the other tools as well. Any details you can share about your workflows and desired functionality would be greatly appreciated! Please feel free to reach out via email as well.

02-18-2025 05:28 PM | 0 | 0 | 577

POST

Hi Royce, thanks for the question! We made some related updates in the release this week:
The "Create" method now has an "Overwrite" option that, when selected, will recreate the entire hosted feature layer if it already exists (doc & considerations)
When switching from "Create" to "Replace" after the data pipeline is run, the "Replace" layer will automatically populate with the output from "Create" (this was inspired by your post)
Our recommendation is to use "Create" while authoring and "Replace" for automated workflows. "Replace" is quite optimized for automation: it preserves the schema to avoid breaking downstream apps, it rolls back on failures, and it causes negligible disruption (via a table swap). That said, "Create" with "Overwrite" enabled can also work if you want the layer schema to change. I would author all of this within one data pipeline, because that makes it easier to switch from "Create" to "Replace" for the newly created layers. Hope this is helpful, and please continue to share feedback as it comes up! Thank you, Max

11-14-2024 05:30 PM | 1 | 0 | 387

POST

Hi Royce, thanks for the question!
To run a subset of the outputs in a data pipeline, select the desired outputs, then click the "Run" button in the context menu that appears over the output element (red arrow).
Multiple outputs can be selected by holding "Shift" and clicking each one individually, or by dragging the mouse across the canvas over the elements with the "Select elements" interaction enabled (gray cursor button in the top right).

10-18-2024 01:09 PM | 1 | 0 | 532

POST

Hi, thanks for the question! To access this data, use "data" as the root property, then add two Unnest tools to flatten out the fields. Note that the root property currently does not support accessing array entries (for example, "data[1]"). Here's a screenshot:
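
Also, for a rough sense of what the root property plus unnesting produce, here's a pandas sketch with a made-up payload; the real field names in your data will differ:

import pandas as pd

# Hypothetical response shaped like {"data": {...nested objects...}}
payload = {
    "data": {
        "station": {"id": "S1", "name": "North"},
        "reading": {"value": 42.5, "units": "mm"},
    }
}

# Root property "data" selects the nested object; flattening yields one
# column per leaf field: station.id, station.name, reading.value, reading.units
flat = pd.json_normalize(payload["data"])
print(flat.columns.tolist())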

07-06-2024 03:34 PM | 1 | 1 | 581

IDEA

Hi @Levon_H, thank you for the idea! Where are your file geodatabases stored? We are actively working on support for reading feature classes from file geodatabases. Like other file inputs, the files will need to be stored in an accessible and supported location (Amazon S3, Azure Storage, or uploaded locally to ArcGIS Online), so I'm curious whether this will fit your use case. Also, if there are more details you can share about your current workflow (is there intermediary processing, is it updating multiple layers in a single service, etc.), that would be very helpful as well!

05-02-2024 12:51 PM | 0 | 0 | 548

IDEA

Hi @maranlk & @ShanaCrosson2, we're starting to investigate this feature. Would you be willing to meet and share more about your use cases? If so, please email me (mpayson@esri.com) and we can schedule a time. Thank you!

05-01-2024 05:01 AM | 0 | 0 | 1140

IDEA

Hi @maranlk, thank you for the idea, and for all the context! Do you have additional ideas or requirements for managing attachments from within Data Pipelines, or for what that experience should look like? For example, in previews, would you expect to be able to view the attachments, or would an indication that attachments exist be sufficient? Any feedback here is greatly appreciated!

01-29-2024 06:43 PM | 0 | 0 | 1367

POST

Hi Romain, thanks for the question! Can you share a link to the public data so we can investigate the structure? Is it a single file with a top-level array of feature collection objects? Such as:

[
  {"type": "FeatureCollection", "features": []},
  {"type": "FeatureCollection", "features": []}
]

Best, Max
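
P.S. In case preprocessing is an option in the meantime, here's a hypothetical Python sketch (file names are placeholders, and it assumes the array structure above) that merges such an array into a single FeatureCollection:

import json

with open("collections.geojson") as f:  # placeholder: a local copy of the data
    collections = json.load(f)  # expected: a top-level array of FeatureCollection objects

merged = {
    "type": "FeatureCollection",
    "features": [feat for fc in collections for feat in fc.get("features", [])],
}

with open("merged.geojson", "w") as f:
    json.dump(merged, f)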

12-08-2023 07:35 AM | 0 | 1 | 2331