Creating a pretty simple GP service that returns select records from a large dataset. Only need about 8 of the 25 or so columns from the data in the result.
All of the intermediate datasets, and output dataset, are just in memory feature layers.
I'm trying to find a way to have this final output feature layer only include the 8 necessary columns. I've tried setting the unneeded fields to hidden, which is OK up until I look at the JSON returned from the GP service - the hidden fields are still there. Delete Field only works on feature classes/shapefiles etc., and that kind of persisted data is not needed for our purposes and requires extra code to ensure it is truncated/appended at each run.
Is there a way using MakeFeatureLayer_management (or anything else that can be in_memory) to simply drop fields, or explicitly say what fields to include?
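For reference, the hiding approach I've tried looks roughly like this (layer and field names are placeholders, not our real schema):

import arcpy

# Hide the unneeded fields via field_info when making the layer.
field_info = ("KEEP_A KEEP_A VISIBLE NONE;"
              "KEEP_B KEEP_B VISIBLE NONE;"
              "EXTRA_1 EXTRA_1 HIDDEN NONE;"
              "EXTRA_2 EXTRA_2 HIDDEN NONE")
arcpy.MakeFeatureLayer_management("in_memory\\source_fc", "temp_layer",
                                  field_info=field_info)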
As you know, you can't delete or hide the fields that way, since SQL queries only limit the rows exposed, not the columns.
In ArcGIS Pro, the process of deleting fields is now greatly simplified, since you can batch delete just by providing the field names, without having to cycle through a list:
http://pro.arcgis.com/en/pro-app/tool-reference/data-management/delete-field.htm
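Something like this, assuming an in_memory feature class and made-up field names:

import arcpy

# One call, passing the list of fields to drop.
arcpy.DeleteField_management("in_memory\\temp_fc",
                             ["EXTRA_1", "EXTRA_2", "EXTRA_3"])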
The only other thing I can think of is a numpy/pandas solution, but it depends on the geometry type you need as to whether it would be fast enough, and the JSON output would take some work.
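If you went that route, the column-subsetting part might look something like this (untested sketch; point data assumed, and field names are made up):

import arcpy
import json

# Pull only the needed columns (plus centroid coords) into a numpy array.
arr = arcpy.da.FeatureClassToNumPyArray(
    "in_memory\\source_fc", ["SHAPE@XY", "KEEP_A", "KEEP_B"])

# tolist() converts numpy scalars to plain Python types so json can serialize them.
records = [{"x": xy[0], "y": xy[1], "KEEP_A": a, "KEEP_B": b}
           for xy, a, b in arr.tolist()]
payload = json.dumps(records)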
Are you using a FieldInfo object and it's still keeping the fields around in the final JSON?
Hi Erica -
Yes, that's correct. I have published the GP service from the Python script, where MakeFeatureLayer_management includes "field_info=..." and lists all of the fields, setting the unneeded ones to HIDDEN.
Looking at the REST URL, where it lists the fields, it appears to respect the HIDDEN settings, displaying only those we need. However, on running the tool via REST, the output JSON returned includes all fields.
I won't bother linking you to a FieldInfo help page, then. We use FieldInfo as part of a process for exporting shapefiles with a subset of fields: first making the feature layer and then copying it out to a shapefile (after a couple of other layer-based manipulations that aren't relevant here).
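For what it's worth, that export process looks roughly like this (paths and field names are made up, and the other layer manipulations are left out):

import arcpy

# Hide the fields we don't want, then copy the layer out; the copy only
# carries the visible fields. (Hypothetical paths and field names.)
field_info = ("KEEP_A KEEP_A VISIBLE NONE;"
              "EXTRA_1 EXTRA_1 HIDDEN NONE")
arcpy.MakeFeatureLayer_management(r"C:\data\source.gdb\parcels", "export_layer",
                                  field_info=field_info)
arcpy.CopyFeatures_management("export_layer", r"C:\data\parcels_subset.shp")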
So, what may work for you is creating an intermediate feature class (in_memory) from the feature layer once you have hidden the fields you don't want. Then, turn that feature class back into a feature layer. The hidden fields will go away when you make it into a "permanent" feature class, and then should not exist once it's converted back to a feature layer. (I'll admit that I have not tested this, AND it's a sort of funky workaround... but maybe worth a shot?)
So ......
# Copy the layer (with fields hidden) to an in_memory feature class, then re-layer it:
arcpy.FeatureClassToFeatureClass_conversion("temp_layer", "in_memory", "temp_fc")
arcpy.MakeFeatureLayer_management("in_memory\\temp_fc", "final_layer")
Thanks Erica -
I'll give the in memory FC a shot. Main issue is performance - since it's a GP service we're looking to maximize speed and keep things lightweight. Will follow up with results.
Hi Erica -
I've implemented this in a draft GP service.
It's super straightforward, and it works. However there are a couple of things to report back -
1 - it does indeed add time to the GP tool returning results - went from 3.5 seconds to about 6 seconds in our case.
2 - in this case, I actually needed two FeatureClassToFeatureClass_conversion steps. The first creates an in_memory feature class that you can delete fields from. However, when publishing the GP service with that feature class as the output dataset, it still returns all of the fields, including those that were deleted. Again, the GP service REST endpoint looks correct - the deleted fields are not shown - but they are in the JSON results. So you actually need to create another in_memory feature class after dropping the fields (roughly the sequence sketched at the end of this post) - this one will be 'clean', and when used as the output of the GP tool it only includes the desired fields.
3 - Even though the feature classes are in_memory, they are 'sticky' as far as holding on to data from a previous run of the tool. So, for each in_memory feature class, I added:
if arcpy.Exists(in_memory_fc):
    arcpy.Delete_management(in_memory_fc)
These are at the beginning of the script, to ensure the tool does not just return the same results from the last run.
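For point 2, the sequence that ended up working looks roughly like this (dataset and field names simplified):

import arcpy

# First in_memory copy - this exists as a feature class, so fields can be deleted from it.
arcpy.FeatureClassToFeatureClass_conversion("temp_layer", "in_memory", "working_fc")
arcpy.DeleteField_management("in_memory\\working_fc", ["EXTRA_1", "EXTRA_2", "EXTRA_3"])

# Second in_memory copy, made after the deletes - its schema is 'clean', so when it's
# used as the GP tool output, only the desired fields come back in the JSON.
arcpy.FeatureClassToFeatureClass_conversion("in_memory\\working_fc", "in_memory", "output_fc")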
Thanks -
Allen
Update -
While the tool worked fine in ArcMap, I had a hell of a time getting the GP service to work correctly. Ended up using CopyFeatures_management to create the desired in_memory feature classes instead of FeatureClassToFeatureClass_conversion.
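i.e. something along these lines (names simplified):

import arcpy

# Same idea as before, but copying with CopyFeatures into in_memory
# behaved correctly once the GP service was published.
arcpy.CopyFeatures_management("temp_layer", "in_memory\\output_fc")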