
Accessing a PostgreSQL database from a standalone Python script

1595 Views | 2 Replies | 12-14-2020 07:48 AM
DavidMetzler1
Emerging Contributor

Hello,

I have a Python process that I am converting from Oracle to PostgreSQL, and I seem to be having issues accessing the PostgreSQL database multiple times in the same Python run.

For example, I can run a search cursor on a table inside the PostgreSQL database once and get results (the search cursor is opened inside a Python with statement). When I try to run another search cursor on the same table, I get a RuntimeError.

If I skip the second cursor and move along in my process, I get another RuntimeError when I attempt an arcpy.da.Walk on the database itself.

To me (and I am a complete novice) it seems like accessing the database multiple times is causing the error, perhaps because of some sort of lock placed on it by PostgreSQL.

If anyone has any advice, I would appreciate it.

Thanks

 

Example

Here is a search cursor function that works every time I run it in my Python script. This is the first time the database is accessed.

 
def _select_rows_from_table(self, sde_connection_path, table_name, fields, where_clause):
    """Return the matching rows as a list of {field name: value} dicts."""
    row_array = []

    # Open the cursor in a with statement so it is released on exit.
    with arcpy.da.SearchCursor(os.path.join(sde_connection_path, table_name), fields, where_clause) as table_cursor:
        for a_row in table_cursor:
            # Pair each field name with the value at the same position.
            row_array.append(dict(zip(fields, a_row)))

    return row_array
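The row-to-dict step in that function can be checked on its own without arcpy. This is a plain-Python sketch of the same pattern; the field names and row tuples below are invented for illustration, standing in for what a SearchCursor would yield:

```python
# Plain-Python illustration of the cursor-row-to-dict pattern.
# The field names and row tuples are made up for this example.
fields = ["OBJECTID", "NAME", "STATUS"]
rows = [(1, "Main St", "ACTIVE"), (2, "Oak Ave", "RETIRED")]

# zip pairs each field name with the value at the same position,
# replacing a manual index counter over the row tuple.
row_array = [dict(zip(fields, a_row)) for a_row in rows]
print(row_array[0]["NAME"])  # Main St
```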

I have tried to run many different functions after this and I always get a RuntimeError. Here is an example of one that does not work; the workspace passed in is exactly the same as the one passed to the first function.

    def _get_table_mapping(self, workspace):
        """Map lowercase base table names to their fully qualified names."""
        mapping = dict()

        # Collect every feature dataset, plus None for the workspace root.
        feature_datasets = [None]
        dataset_walker = arcpy.da.Walk(workspace, datatype=['FeatureDataset'])
        for unused_dirpath, feat_datasets, unused_filenames in dataset_walker:
            feature_datasets.extend(feat_datasets)

        for a_feat_dataset in feature_datasets:
            wspace = workspace
            if a_feat_dataset is not None:
                wspace = os.path.join(workspace, a_feat_dataset)

            walker = arcpy.da.Walk(wspace, datatype=['FeatureClass', 'Table'])
            for unused_dirpath, unused_dirnames, things in walker:
                for a_thing in things:
                    # Fully qualified names look like database.schema.table;
                    # keep only the last piece as the base name.
                    base_name = a_thing.split('.')[-1].lower()
                    mapping[base_name] = a_thing

        return mapping
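The base-name mapping built in the inner loop can also be exercised without arcpy. Fully qualified enterprise geodatabase names usually follow a database.schema.table pattern, so taking the last dot-separated piece yields the unqualified name; the sample names below are invented for illustration:

```python
# Plain-Python illustration of the base-name mapping above.
# The fully qualified names are invented examples in the usual
# database.schema.table form used by enterprise geodatabases.
things = ["gisdb.sde.Roads", "gisdb.sde.PARCELS"]

mapping = {}
for a_thing in things:
    # The last dot-separated piece is the unqualified table name.
    base_name = a_thing.split('.')[-1].lower()
    mapping[base_name] = a_thing

print(mapping["roads"])  # gisdb.sde.Roads
```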

It may also help to know that I can execute all of these against the PostgreSQL database inside of ArcGIS.

Also, as far as I know, the database is geodatabase-enabled. I can see all of the features in ArcGIS and have published a few services against the data.

2 Replies
VinceAngelo
Esri Esteemed Contributor

It would help if you provide the Python code you are using, and specify whether the PG database is enterprise geodatabase enabled.

- V

DavidMetzler1
Emerging Contributor

I have modified my question based on your suggestions, thanks!

 
