How to create a time-enabled layer to work with GeoAnalytics Desktop tools for a standalone Python script

Posted 04-21-2021 03:17 PM by ET (New Contributor)

Hello - Any guidance on how to create a time-enabled layer to work with the GeoAnalytics Desktop tools (ArcGIS Pro 2.7.0) in a standalone Python script? I see the code samples for the Reconstruct Tracks and Find Dwell Locations tools, but when I try to run these samples, I cannot find a way to make the input layer time-enabled. I'd like to run my script without having to open a map.

Any suggestions? 


10 Replies
Accepted Solution
by Anonymous User (Not applicable)

Hey ET,

Thank you for your question.

I've added a sample script below that sets time on the input features you want to use. However, you will have to open your map once to prepare the data initially; after that first step in the map, you'll be able to use the data in a standalone Python script.

import arcpy

# To create the .lyrx: add your layer to a map, open the layer properties and
# enable time, then right-click the layer and choose Share As Layer File
input_lyrx = r'C:\data\layer_files\L0ubersf_subset.lyrx'

# Make Feature Layer creates a feature layer from the .lyrx file
make_feature_layer_output = arcpy.MakeFeatureLayer_management(input_lyrx, "make_feature_layer_output")

# Apply Symbology sets the time properties on the feature layer from the .lyrx definition
arcpy.ApplySymbologyFromLayer_management(make_feature_layer_output, input_lyrx)

# The feature layer can now be used as time-enabled points
arcpy.gapro.FindDwellLocations(input_features=make_feature_layer_output,
                               output=r'C:\data\output.gdb\MyOutputLayerName',
                               track_fields="id",
                               distance_method="PLANAR",
                               distance_tolerance="250 Feet",
                               time_tolerance="3 Minutes",
                               output_type="DWELL_CONVEX_HULLS",
                               summary_statistics="latitude SUM;latitude VAR")

 

Please let me know if this works for you or if you have any questions.

Thank you!

Bethany

RainbowUnicorn
New Contributor III

Hey Bethany - is this solution still valid?  Do I *really* have to go into the application to set time on my layers first in order to get Reconstruct Tracks to work in an arcpy script?

I'm using 2.9.3 and was hoping against hope that I could use something as simple as this:

for lyr in maps.listLayers():
    lyr.enableTime("DateTime")
maps.isTimeEnabled = True

to skip having to set time on nearly 100 layers of AIS data.  I want to avoid all that clicking, but alas if that's what's required, I guess I'm launching the program instead of working in my notebook.


by Anonymous User
Not applicable

Hey @RainbowUnicorn (great username!),

Yes, as far as I know, this is still the required workflow.

I'm wondering if you could tell me more about the layers you're working with? Maybe we can find a better solution!

- Are they shapefiles or geodatabase feature layers?

- Did the AIS data come from CSV files originally and do you still have them?

- Is it required for your workflow that you run analysis on each layer separately?

These questions will help me determine whether you could use a Big Data Connection (BDC) instead. It could save you tons of time by accessing all of the CSVs or shapefiles at once, instead of the multiple shp/FGDB layers you describe in your workflow. Most importantly, big data connection datasets can have time set on them directly - no need for extra time-enabling steps in your script; you can just point to the dataset.

Here is some doc on Big Data Connections (conceptual): https://pro.arcgis.com/en/pro-app/latest/help/data/big-data-connections/big-data-connections.htm
And the following doc outlines how to create one: https://pro.arcgis.com/en/pro-app/latest/help/data/big-data-connections/new-big-data-connection-dial...
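
Once the BDC exists, you can point a GeoAnalytics Desktop tool directly at the dataset. Here's a minimal sketch - the .bdc path, dataset name, and track field are all made up, so swap in your own (and double-check the parameter names against the Reconstruct Tracks doc):

import arcpy

# Hypothetical BDC dataset path: the .bdc connection file followed by the dataset name
ais_points = r'C:\data\connections\ais.bdc\vessels_2021'

# Time is defined on the BDC dataset itself, so no lyrx/enable-time step is needed
arcpy.gapro.ReconstructTracks(input_layer=ais_points,
                              output=r'C:\data\output.gdb\VesselTracks',
                              track_fields="MMSI",  # assumed AIS track identifier field
                              method="PLANAR")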

Thanks!

Bethany

RainbowUnicorn
New Contributor III

Thanks for the user name compliment - I totally just threw the first thing that came to mind in the user name field...it being pride month and all might have been a factor.

Sooo...there's a lot of backstory on how I got to these 91 features.  But to answer your questions first:

- Are they shapefiles or geodatabase feature layers?

  • They are feature layers in a file geodatabase.

- Did the AIS data come from CSV files originally and do you still have them?

  • Yes and yes.  I have the cleaned source data in csv format.

- Is it required for your workflow that you run analysis on each layer separately?

  • This is a little harder to answer.  My analysis is focused on visualization of vessel tracks (point and track densities) in relation to current Aids to Navigation (ATON) and established shoals (Handkerchief Shoal, Bearse Shoal), as well as a discrete area-of-interest census of vessels and their characteristics in "the vicinity" of certain ATON or defined channels/vessel routes.  To do this, the products I'm working to deliver are heat maps and Power BI histograms/bar charts.

A little more backstory on my workflow - I began with AIS from Marine Cadastre and from my own internal USCG process (which only provides AIS going back 3 years, on a rolling basis).  I mashed those files up into a uniform schema, ran some validation against authoritative sources for ship type, and then built a BDC for my project years - 2015 through 2021.

The BDC object(s) served me well until I tried to filter and export points by ship type, which wound up choking my machine (soooo slow).  At that point I opted to mash up the csvs in the BDC source directory using pandas, exporting the filtered annual csvs to a new directory and importing them back into the gdb.
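
For anyone following along, that pandas step looked roughly like this - the paths and the ShipType field/value are illustrative, not my actual schema:

import pandas as pd
import glob
import os

# Filter each annual CSV by ship type and write the subset to a new
# directory, ready to be imported into the file geodatabase
src = r'C:\data\ais_csvs'
dst = r'C:\data\ais_csvs_filtered'
os.makedirs(dst, exist_ok=True)
for csv_path in glob.glob(os.path.join(src, '*.csv')):
    df = pd.read_csv(csv_path)
    fishing = df[df['ShipType'] == 30]  # e.g., ship type code 30 = fishing vessels
    fishing.to_csv(os.path.join(dst, os.path.basename(csv_path)), index=False)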

Everything was proceeding well until I hit the snag of having to enable time on the layers for Reconstruct Tracks to work.  And here's the other part - I've got an open support ticket on this issue, with Reconstruct Tracks yelling at me that my input features do not have time enabled.  I can get Reconstruct Tracks to work with small files or big new files, but the second I feed it older features (from projects that have been saved more than a dozen times) or big features (like I'm doing now), it seems to time out while trying to get a grasp on the time extent of the large files.

Always open to smarter ways of accomplishing what I need to deliver to the operational commanders...hoping you might have a silver bullet!

Thanks,

Amilynn 

(aka SemperGumby, BohemianUnicorn, RainbowUnicorn...)

RainbowUnicorn
New Contributor III

I think I might have found something buggy with enabling time on a bunch of layers, which perhaps, just maybe, could possibly have something to do with why Reconstruct Tracks behaves intermittently for me.  If so, that would hopefully solve an annoying bug, while leaving the inability to enable time in arcpy as a moderate inconvenience.  Check out the time properties for these layers in my map:

[screenshot: time properties for one layer, including a Time Format field]

and

[screenshot: time properties for the Tanker layer]

Where's the Time Format field in the Tanker layer?  Hmmmm...

by Anonymous User
Not applicable

Ahh, good observation. This isn't a bug, although I can see how it may appear that way. The Format parameter is hidden for Date-type fields. Date fields "just know" the format, so there's no need to specify it.

RainbowUnicorn
New Contributor III

Thanks for the insight.  The thing I still struggle with is why the Time Format property shows up seemingly at random on some layers and not others, when all the files have the same "DateTime" field and the same DTG format.  Shouldn't the behavior of the time properties be uniform/consistent across the layers, since the source files are consistent?

by Anonymous User
Not applicable

Hmm. So my following comment is regarding this portion of the workflow:

> The BDC object(s) served me well until I tried to filter and export points by ship type, which wound up choking my machine (soooo slow).  At that point I opted to mash up the csvs in the BDC source directory using pandas, exporting the filtered annual csvs to a new directory and importing them back into the gdb.


What if, instead of exporting by ship type, you just added a filter to the BDC dataset for ship type and ran Reconstruct Tracks from there?

You can add filters to a BDC dataset using the tool Update Big Data Connection Dataset Properties (ultra-long tool name - we know haha).

For your ship-type-specific workflow, you can update the filter for each run. Or, if you'd rather maintain a separate dataset per filter, you could use Duplicate Dataset From Big Data Connection.

With this workflow you will be able to:

- skip the export-to-fgdb portion and use the ship type subsets (your duplicated BDC datasets) for analysis

- skip the extra steps for setting time (see the sketch below)
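
To make the dataset-per-filter idea concrete, here's a rough sketch. The .bdc path, dataset names, and track field are hypothetical, and it assumes you've already created one filtered dataset per vessel group with Duplicate Dataset From Big Data Connection:

import os
import arcpy

# Hypothetical duplicated BDC datasets, one per vessel group; the ship-type
# filter (and time) is already defined on each dataset in the .bdc
bdc = r'C:\data\connections\ais.bdc'
datasets = {'fishing': 'ais_fishing',
            'pleasure': 'ais_pleasure',
            'workboats': 'ais_workboats'}

for group, name in datasets.items():
    arcpy.gapro.ReconstructTracks(input_layer=os.path.join(bdc, name),
                                  output=fr'C:\data\output.gdb\Tracks_{group}',
                                  track_fields="MMSI",  # assumed AIS track identifier field
                                  method="PLANAR")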

Let me know if that does/doesn't make sense for your workflow, or if you have any questions.

RainbowUnicorn
New Contributor III

Thanks for the tip. I hadn't considered that route and I'll definitely go check it out. 

Gut feeling - this may or may not be more cumbersome, since the filters are somewhat complex thanks to the wonky ShipTypes assigned to various vessel types by the international authorities who developed the ship type encoding, and the need to translate that encoding to the commonly accepted groupings of vessel types under US maritime convention.  For instance, fishing vessels are easy: ship type code 30.  Pleasure vessels can generally be lumped together as 36 (sailing) and 37 (motorboat), but "work boats" include 50, 51, 53, 54, 55, 58 and 59.
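
The grouping itself is at least easy to express in code; here's a little sketch using the codes above (the ShipType field name is whatever your schema actually uses):

# Vessel groups mapped to AIS ship type codes, per the groupings described above
VESSEL_GROUPS = {
    'fishing': [30],
    'pleasure': [36, 37],
    'workboats': [50, 51, 53, 54, 55, 58, 59],
}

def shiptype_filter(group):
    """Build a SQL-style filter expression for a vessel group."""
    codes = ', '.join(str(c) for c in VESSEL_GROUPS[group])
    return f'ShipType IN ({codes})'

print(shiptype_filter('workboats'))  # ShipType IN (50, 51, 53, 54, 55, 58, 59)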

At the end of the day, I'm just trying to provide waterways trend analysis with a maximum amount of detail for an audience who just wants to see visualizations in order to make decisions, without having to invest too much effort in learning how to filter data and create views themselves.