Scripting (Python) Feature Class content into Shapefiles based on a UTM data structure

04-26-2016 02:49 AM
EduardoAbreu-Freire
New Contributor III

Consider a Feature Class (part of an Enterprise GDB), two of whose fields are "UTM_grid" and "OID".

Scripting with Python 2.7 for ArcGIS 10.3.

The "UTM_grid" field was broken down into "UTM_block" + "UTM_sheet", and then a dictionary/list structure UTM_block --> UTM_sheet --> OID was organized with this syntax:

{ 'UTM_block_1' : { 'UTM_sheet_A' : [ oid_1 , oid_2 ], 'UTM_sheet_2' : [ oid_3 , oid_4 , oid_5 ] } }

We need to create **shapefiles** with features (identified by the object ID value - oid) based on the following data structure (OIDs will be organized according to a UTM/location-based structure):

- UTM_block_* will be a folder

- UTM_sheet_* will be its subfolder.

- The list of objects (OIDs) will populate the shapefile.
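For illustration, the mapping from the nested dictionary to that folder layout can be sketched in plain Python (the grid values and OIDs below are made-up sample data, not from the actual FC):

```python
import os

# hypothetical (UTM_grid, OID) pairs as they would come from a cursor
rows = [("B1_SA", 101), ("B1_SA", 102), ("B1_SB", 103)]

dic = {}
for utm_grid, oid in rows:
    block, sheet = utm_grid.split("_", 1)  # split on the first underscore only
    dic.setdefault(block, {}).setdefault(sheet, []).append(oid)

# dic == {'B1': {'SA': [101, 102], 'SB': [103]}}

# each block/sheet pair becomes a folder/subfolder path
paths = [os.path.join(block, sheet) for block in dic for sheet in dic[block]]
```

Each list of OIDs under a sheet key would then drive the export of one shapefile into the corresponding subfolder.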

---

Another point we have to deal with is that the shapefile should keep only some of the fields of the original Feature Class:

We have used fieldmappings and arcpy.FeatureClassToFeatureClass_conversion() to create a non-UTM-organized shapefile from a Feature Class with the fields that we need.

---

Now the problem is how to process the **feature export** to get UTM-organized shapefiles! How can we manage it?

Below is what we have now inside a function:

    with arcpy.da.SearchCursor(feature_class, ['UTM_grid', 'OBJECTID']) as cursor:
        dic = dict()
        for row in cursor:
            UTM_grid_value = row[0]
            oid_value = row[1]
            try:
                b_s_split = UTM_grid_value.split('_')     # split into block & sheet
            except Exception as e:
                print(UTM_grid_value)
                print(e)
            else:
                # organize OIDs as block --> sheet --> [oids]
                dic.setdefault(b_s_split[0], {}).setdefault(b_s_split[1], []).append(oid_value)
                # create folders & subfolders (blocks & sheets) in a directory
                dir_path = "E:\\"     # full path to directory
                dirs = [[b_s_split[0]], [b_s_split[1]]]
                for item in itertools.product(*dirs):
                    if not os.path.isdir(os.path.join(dir_path, *item)):
                        os.makedirs(os.path.join(dir_path, *item))

21 Replies
EduardoAbreu-Freire
New Contributor III

Blake Terhune

Missed the "_" in the field.

FOL_250K returns the correct value.

If you run the snippet above with that variable/FC, fol_250k_value = B3_SULC33U

BlakeTerhune
MVP Regular Contributor

Haha, you're killing me! So if it should look like B3_SULC33U and you want to parse on the first underscore, then you can replace that last section with this (untested):

# Create folders and export features to shapefiles for each UTM value
for utm in distinct_utm:
    delim = "_"
    utm_parse = utm.split(delim, 1)
    if len(utm_parse) == 2:
        block, sheet = utm_parse
        sheet_path = os.path.join(root_dir, block, sheet)
        if not os.path.exists(sheet_path):
            os.makedirs(sheet_path)
        arcpy.FeatureClassToFeatureClass_conversion(
            feature_class,  ## in_features
            sheet_path,  ## out_path
            "{}_{}_{}".format(block, sheet, os.path.basename(feature_class)),  ## out_name
            "FOL_250K = '{}'".format(utm)  ## where_clause
        )
    else:
        raise ValueError("Could not parse {} with {}".format(utm, delim))
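A quick trace of the split(delim, 1) behavior, using the sample value from earlier (the second value with an extra underscore is hypothetical):

```python
# str.split with maxsplit=1 cuts only at the first underscore
utm = "B3_SULC33U"
block, sheet = utm.split("_", 1)
# block == "B3", sheet == "SULC33U"

# a value with more underscores keeps everything after the first one in the sheet
block2, sheet2 = "B3_SUL_C33U".split("_", 1)
# block2 == "B3", sheet2 == "SUL_C33U"
```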

BlakeTerhune
MVP Regular Contributor

Is this close? I'm assuming that you can parse based on the index of "S" in FOL_250K. I'm also assuming you want all fields in the feature class exported to the shapefile.

import arcpy
import os

def main():
    # local variables
    root_dir = r"C:\temp\Msg605039"
    gdb = os.path.join(root_dir, "gdb2shp_selected_fields.gdb")
    feature_class = os.path.join(gdb, "gab_und_crt_dic")

    # Get list of distinct UTM values
    utm_field = "FOL_250K"
    sql_prefix = "DISTINCT {}".format(utm_field)
    sql_suffix = None
    distinct_utm = [
        i[0] for i in arcpy.da.SearchCursor(
            feature_class, utm_field, sql_clause=(sql_prefix, sql_suffix)
        )
    ]

    # Create folders and export features to shapefiles for each UTM value
    for utm in distinct_utm:
        delim = "S"
        delim_index = utm.find(delim)
        if delim_index == -1:
            raise Exception("Could not parse {} with {}".format(utm, delim))
        else:
            block = utm[0:delim_index]
            sheet = utm[delim_index:len(utm)]
            sheet_path = os.path.join(root_dir, block, sheet)
            if not os.path.exists(sheet_path):
                os.makedirs(sheet_path)
            arcpy.FeatureClassToFeatureClass_conversion(
                feature_class,  ## in_features
                sheet_path,  ## out_path
                "{}_{}_{}".format(block, sheet, os.path.basename(feature_class)),  ## out_name
                "FOL_250K = '{}'".format(utm)  ## where_clause
            )

if __name__ == "__main__":
    main()

View solution in original post

EduardoAbreu-Freire
New Contributor III

Blake Terhune

Made 2 adjustments to get what we need:

delim = "_"   (instead of "S")

sheet = utm[delim_index + 1:]   (so the underscore itself is not kept in the sheet name)

BlakeTerhune
MVP Regular Contributor

See my edited reply here.

EduardoAbreu-Freire
New Contributor III

Blake Terhune

That's it, thank you!

I'm glad you survived it.

--> Another question/opinion related to this geoprocessing.

The real FCs we have in the SDE have more fields other than those in the file attached to this post.

The most complex FC has a relationship with a Table. We investigated the FC and Table attributes to find the ones needed to build the join. Creating a layer from the FC, we joined it with the Table.

Then, using fieldmappings, we defined the fields we wanted to keep in the final FC (the one attached), as this snippet shows:

ws = "path to workspace"

fieldmappings = arcpy.FieldMappings()
fieldmappings.addTable(os.path.join(ws, gab_und_crt_dic[0]))
for inputfield in fieldmappings.fields:
    if inputfield.name not in ["ID_UND_CRT", "st_UND_GEOL", "DSCR_UND_CRT", "DSCR_UND_CRT_LG", "FOL_250K",
                               "ID_UND_LITO", "DSCR_UND_GEOL", "DSCR_UND_LITO", "IDE_CRON_INF", "IDE_CRON_SUP"]:
        fieldmappings.removeFieldMap(fieldmappings.findFieldMapIndex(inputfield.name))

# export FC with selected fields to shp
arcpy.FeatureClassToFeatureClass_conversion(
    gab_und_crt_dic[0], "gdb2shp_selected_fields.gdb", "gab_und_crt_dic", field_mapping=fieldmappings
)

Would you take this approach or another way?

BlakeTerhune
MVP Regular Contributor

From the code you posted, you're not actually joining a table, only limiting the fields that are exported from whatever gab_und_crt_dic[0] is. If that's the case, then your code is fine. An alternative method would be to only add the field maps you want from the start instead of adding everything, then going through each one to remove what doesn't belong.

ws = "path to workspace"
fc_path = os.path.join(ws, gab_und_crt_dic[0])
out_fields = [
    "ID_UND_CRT",
    "st_UND_GEOL",
    "DSCR_UND_CRT",
    "DSCR_UND_CRT_LG",
    "FOL_250K",
    "ID_UND_LITO",
    "DSCR_UND_GEOL",
    "DSCR_UND_LITO",
    "IDE_CRON_INF",
    "IDE_CRON_SUP"
]

# Build field mappings with only desired fields
fieldmappings = arcpy.FieldMappings()
for field in out_fields:
    fm_obj = arcpy.FieldMap()
    fm_obj.addInputField(fc_path, field)
    fieldmappings.addFieldMap(fm_obj)

# Export feature class with field mappings
arcpy.FeatureClassToFeatureClass_conversion(
    bla,  ## in_features
    bla,  ## out_path
    bla,  ## out_name
    field_mapping=fieldmappings
)

EduardoAbreu-Freire
New Contributor III

Blake Terhune

Taking the feature class to shapefile processing organized by UTM (sheet) location (folder: block, subfolder: sheet), suppose we need to filter the exported output, in other words, create shapefiles only for a specific location (e.g. Block 1: "B1"). What would be the way to accomplish it? The filter variable would be given as input to the script.

BlakeTerhune
MVP Regular Contributor

Will you only ever filter on block or do you want to also filter on sheet or something else? Will it be a single filter value or a list of filter values?

EduardoAbreu-Freire
New Contributor III

Blake Terhune We could manage a filter on block by introducing an if condition inside the loop "for utm in distinct_utm".

At the block level it makes sense to use a single filter value.

It would also be interesting to contemplate a filter on sheet. How could we manage this? Thank you
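A minimal sketch of such a condition (plain Python; block_filter and sheet_filter are hypothetical script inputs, and the UTM values are made-up samples):

```python
block_filter = "B1"   # single block value given as script input
sheet_filter = None   # or e.g. "SA" to additionally filter on sheet

distinct_utm = ["B1_SA", "B1_SB", "B2_SA"]
selected = []
for utm in distinct_utm:
    block, sheet = utm.split("_", 1)
    # skip UTM values that do not match the requested block (and optional sheet)
    if block != block_filter:
        continue
    if sheet_filter is not None and sheet != sheet_filter:
        continue
    selected.append(utm)
# selected == ["B1_SA", "B1_SB"]
```

Inside the real loop the two `continue` statements would simply skip the folder creation and the FeatureClassToFeatureClass_conversion call for non-matching UTM values.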
