Hi Data Pipelines Team,
Because the Living Atlas iNaturalist observations layer is so large, we're trying to filter the data down to just our park boundaries to reduce load times and drawing issues for our wildlife management team.
We've set up our Data Pipeline to filter by our park's extent and then by specific attributes. While configuring the pipeline, we're seeing extreme latency and loading issues, which I'm guessing is due to the size of the iNaturalist observations dataset. Is there a recommended Data Pipelines workflow for working with large datasets like this? For example, I can't even load the field list for the Filter parameters; it just spins:

When I try to preview, it also just spins.
How does Data Pipelines handle datasets this large during configuration? Is it trying to load all of the data in addition to the fields?
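For reference, the attribute portion of the filter is nothing complicated — just a simple expression along these lines (the field names below are placeholders for illustration, not necessarily the actual iNaturalist schema):

```sql
-- Hypothetical attribute filter; field names are placeholders
quality_grade = 'research'
AND iconic_taxon = 'Mammalia'
AND observed_on >= TIMESTAMP '2020-01-01 00:00:00'
```

So the slowdown doesn't seem to come from the complexity of the filter itself, only from the size of the input layer.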
Thanks for any insight!
Best,
Amanda Huber