Iterate through SDE to find and export FCs with Attribute Rules with python?

03-16-2022 08:56 AM
Amarz (Occasional Contributor II)

Hello all,

I am wondering if you can use python to iterate through an SDE to locate feature classes that have attribute rules, then export them to a folder for safe keeping?

A follow up to this will be to remove all attribute rules from said feature classes for different workflows.

I appreciate any help, I could only find documentation on how to describe the properties within a single FC.

Thank you.

18 Replies
Amarz (Occasional Contributor II)

Apologies, I edited the initial follow-up because I got the script to run: the SDE connection in the original folder had been removed by accident, and adding it back fixed that problem.

As for the rest,

I updated line 13 from 'fcPath' to 'item'

Swapping out

os.remove(ruleFile)

for

arcpy.env.overwriteOutput = True

still gave me the error:

...Failed to execute. Parameters are not valid.
..._AA.csv already exist.

 print(ruleFile) returns 

NameError: name 'ruleFile' is not defined

arcpy.Describe(aaPath) returns 

<geoprocessing describe data object object at 0x000002734511E690>

Again, your help is greatly appreciated

Brian_Wilson
Occasional Contributor III

I think it's time for you to paste your whole script in here again.

 

Amarz (Occasional Contributor II)
import arcpy
import os

gdb = r"C:\data\gis.sde"
aaPath = r"C:\data\AttributeRules"

def export_attributes(item, ruleFile):
    attRules = arcpy.Describe(item).attributeRules
    if len(attRules) == 0:
        arcpy.AddMessage("No rules found in %s" % item)
        return True # Nothing to do

    if os.path.exists(ruleFile):
        os.remove(ruleFile)
    try:
        print(ruleFile)
        arcpy.management.ExportAttributeRules(item, ruleFile[67:])
    except Exception as e:
        arcpy.AddMessage("FAILED to create %s, %s" % (ruleFile, e))
        return False

    arcpy.AddMessage("SUCCESSFULLY created %s" % ruleFile)
    return True


arcpy.env.workspace = gdb
arcpy.Describe(gdb)
for ds in arcpy.ListDatasets("*"):
    for fc in arcpy.ListFeatureClasses("*", feature_dataset=ds):
        fcPath = os.path.join(ds, fc)
        rules = os.path.join(aaPath, fc+'_AA.csv')
        export_attributes(fcPath, rules)

for fc in arcpy.ListFeatureClasses("*"):
    rules = os.path.join(aaPath, fc+'_AA.csv')
    export_attributes(fc, rules)

for table in arcpy.ListTables("*"):
    rules = os.path.join(aaPath, table+'_AA.csv')
    export_attributes(table, rules)

The only thing that is not working is placing the CSVs in the correct location and overwriting them when re-running the script.

So close.. 

Again, thank you so much for your insight.

Brian_Wilson
Occasional Contributor III

Remove the bracketed expression from line 18, "[67:]"

Make it just say 

arcpy.management.ExportAttributeRules(item, ruleFile)
Amarz (Occasional Contributor II)

The bracketed expression is only there to trim the file name. Otherwise it would attempt to write the exported attribute rule CSV as 'C:\data\gis.sde\&lt;datalayername&gt;_AA.csv', which won't work because of invalid characters. With the index, it removes all of the path before the &lt;datalayername&gt;.

I ended up just adding a shutil.move onto the end, which moves everything now and overwrites existing files. Not sure why that was needed, but adding it to the bottom of the above script works.

 

import glob
import os
import shutil

pattern = "*.csv"
files = glob.glob(os.path.join(src_folder, pattern))

# move the files with a .csv extension
for file in files:
    # extract the file name from the file path
    file_name = os.path.basename(file)
    shutil.move(file, os.path.join(aaPath, file_name))
    print('Moved:', file)

There is probably a cleaner way to do this, but for now this is working.
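One cleaner option is to wrap the move step in a small function. This is just a sketch (the name `move_csvs` is hypothetical, and it assumes the same "source folder, destination folder" setup as above); it also deletes any existing target file first, since shutil.move's overwrite behavior is platform-dependent:

```python
import shutil
from pathlib import Path

def move_csvs(src_folder, dest_folder):
    """Move every *_AA.csv from src_folder into dest_folder, overwriting duplicates."""
    dest = Path(dest_folder)
    dest.mkdir(parents=True, exist_ok=True)
    moved = []
    for csv in Path(src_folder).glob("*_AA.csv"):
        target = dest / csv.name
        if target.exists():
            # remove first so the move overwrites reliably on every platform
            target.unlink()
        shutil.move(str(csv), str(target))
        moved.append(target.name)
    return moved
```

This keeps the glob pattern, the destination handling, and the overwrite logic in one place, so re-running the export script no longer trips over files left behind by a previous run.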

Thank you so much for your help! This simple script will save me hours of work!

 

Brian_Wilson
Occasional Contributor III

I assumed something like that, and should have said you want to use os.path.split() for that, along with os.path.splitext():

>>> import os
>>> p=r'C:\path\to\file\my name.csv'
>>> os.path.split(p)
('C:\\path\\to\\file', 'my name.csv')
>>> os.path.splitext(p)
('C:\\path\\to\\file\\my name', '.csv')

 

I usually do something like this. A lot of tasks need to convert a file from, say, a TIFF to a PDF, so I often need to cleanly pull the extension off the filename and then put the new file in a new place...

def myfunc(pathname, targetdir):
    (path, filename) = os.path.split(pathname)
    (name, ext) = os.path.splitext(filename)
    newname = os.path.join(targetdir, name + '.csv')
    return newname

It's also completely OS-independent, since there are no slashes, upper- or lower-case checks, or drive letters involved. I usually skip checking for really weird invalid paths and use "try" blocks to catch those, or just let it fail and fix the problems as they come up.

 

Sometimes on big jobs I use a "try" block to catch all the weird missing or broken files in a list and print them at the end; it's faster to process, say, 10000 files and then look at a list of 5 broken ones than it is to run the job 5 times dealing with one broken file at a time.
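That collect-the-failures pattern can be sketched like this (a minimal example; `process_batch` and the `process_one` callback are hypothetical stand-ins for whatever per-file work you are doing):

```python
def process_batch(files, process_one):
    """Run process_one on every file, collecting failures instead of stopping."""
    failures = []
    for f in files:
        try:
            process_one(f)
        except Exception as e:
            # record the bad file and keep going
            failures.append((f, str(e)))
    # report all the problems once, at the end of the run
    for f, err in failures:
        print("FAILED: %s (%s)" % (f, err))
    return failures
```

One broken file then costs you a line in the final report instead of a whole re-run.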

Amarz (Occasional Contributor II)

Hello @Brian_Wilson, your script is an amazing help and greatly reduces the time spent on this task!

I was wondering, is it possible to take the list of exported '_AA.csv' files and make a companion script that goes back through the list and imports each one where applicable?

For example, search through the folder for 'Zoning_AA.csv' and import the attributes into the 'Zoning' feature class?

Brian_Wilson
Occasional Contributor III

Thinking out loud here (sort of): the CSV filenames won't include a dataset, so you (probably) don't need to worry about that; you just need to find all the *_AA.csv files in a folder and run an import?

from glob import glob
import arcpy

for csv in glob('*_AA.csv'):
    table = csv[:-7]  # strip the '_AA.csv' suffix to get the table name
    print("Importing \"%s\" to \"%s\"." % (csv, table))
    arcpy.management.ImportAttributeRules(table, csv)

 

NOTE I did not test this... I don't even know if it compiles.

I would comment out the ImportAttributeRules line and run it once to make sure the table names come out correct.

It will look in the current folder for everything so it's up to you to change directories or point it at a database or whatever you need.

JoeGuzi
New Contributor II

Greetings All,

Thank you for this post. I used everything I learned from it and made this Jupyter Notebook to audit the attribute rules in a gdb. I hope this helps; feel free to let me know if you have any questions. Thank you for everything, and have a delightful day.

Sincerely,

Joe Guzi
