
GP Tool - Schema Locking Error with Geocoding Process

05-25-2014 12:06 PM
IanPeebles
Frequent Contributor
I have a question. I have created a Python tool that has two input parameters: input initials and input date. After these parameters are entered, the tool runs and maps building inspections. The output shows a colored point with a matching colored label based on the input initials. The tool runs fine in Desktop and publishes to ArcGIS Server.

In ArcGIS Server, the tool runs well, but when another user runs the tool simultaneously, both users get schema locks on the geocoding service, causing the tool to fail.


How can I prevent these schema locks? The weird thing is that this tool worked before. I am running ArcGIS Server 10.1 SP1.

This is how my locator is referenced within the tool:

AddressesWithBldgNo = r"e:\locators\AddressesWithBldgNo"

The scratch workspace is set when I run the tool: e:\outputworkspace

I have a share set up. Why would I get a schema lock? How can I use the address locator without getting a schema lock when multiple users are hitting the GP service? Any help will be greatly appreciated.

Here is a screenshot of the error message I am getting in the Silverlight application.

Ian
5 Replies
OwenEarley
Frequent Contributor
When running a GP service on the server, the scratch workspace is created under a job directory. For example:

http://[server]/arcgis/rest/directories/arcgisjobs/lotexport_gpserver/j52e4e3ec1ff741e9aad19b8263615232/scratch/WebMap_12345.pdf
                                                                                       ^
                                                                                   (Job ID)


Your script may be overriding this and attempting to use the same scratch workspace for simultaneous users - leading to the schema lock.

The code that produced the URL result above just uses the scratchFolder as the directory in which to create the PDF file:

fn = 'WebMap_{0}.pdf'.format(str(id))
out_fn = os.path.join(arcpy.env.scratchFolder, fn)


If you create your temporary data in the arcpy.env.scratchFolder then the server should create unique datasets and avoid any schema locks.
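As a rough example (untested, and the table names here are just placeholders), copying rows to a table under the scratch folder would look something like this:

import os
import arcpy

# On the server, arcpy.env.scratchFolder points at the current job's
# scratch directory, so each request writes to its own folder.
temp_dbf = os.path.join(arcpy.env.scratchFolder, "temp_table.dbf")
arcpy.CopyRows_management("input_table", temp_dbf)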

Owen
IanPeebles
Frequent Contributor
I was also thinking: the locator I am using is not in SDE. It is a standalone locator that pulls data from SDE. Could that be part of the problem? Should I point to the locator in SDE rather than a standalone locator on the server? Here is my code. What would need to be changed?

# 1. Import arcpy module
import arcpy, os, sys, datetime
from arcpy import env

# 2. Set geoprocessing environment
scratch = arcpy.env.scratchWorkspace
arcpy.env.scratchWorkspace = r"E:\geoprocessing\Building\Outputworkspace"
arcpy.env.overwriteOutput = True

# 3. Specify Date and Time
now = datetime.datetime.now()

# 4. Input Parameters
Initials = arcpy.GetParameterAsText(0)
InspDate = arcpy.GetParameterAsText(1)
if InspDate == "":
   InspDate = now.strftime("%m-%d-%y") #This will return today's date.
arcpy.AddMessage("Searching for scheduled inspections...")
Xpression = """ "INSPECTOR" = \'%s\' AND "SCHEDULED_DATE" = \'%s\' """ % (Initials, InspDate)

# 5. Local variables:
sde_sdegis_VCOE_CRWINSPECTIONLOG = r"e:\connections\AGSGIS_TO_SDEGIS_COE.sde\\sde.sdegis.VCOE_CRWINSPECTIONLOG"
CRWInspectionLog = "CRWInspectionLog"
CRWInspectionLogCopyRows = os.path.join(scratch, "CRWInspectionLogCopyRows.dbf")
AddressesWithBldgno = r"e:\locators\\AddressesWithBldgNo"
ScheduledInspections = os.path.join(scratch, "ScheduledInspections.shp")

# 6. Make Query Table and Copy Rows if Result is greater than 0
arcpy.AddMessage("Selecting scheduled inspections...")

arcpy.MakeQueryTable_management(sde_sdegis_VCOE_CRWINSPECTIONLOG, CRWInspectionLog, "NO_KEY_FIELD", "", "", Xpression)

result = int(arcpy.GetCount_management(CRWInspectionLog).getOutput(0))
arcpy.AddMessage("Inspections Found: " + str(result))
if result == 0:
   arcpy.AddWarning("Process terminated."); sys.exit()
else:
   arcpy.CopyRows_management(CRWInspectionLog, CRWInspectionLogCopyRows, "")

del CRWInspectionLog

# 7. Geocode Inspections
arcpy.AddMessage("Geocoding inspections, please wait...")
arcpy.GeocodeAddresses_geocoding(CRWInspectionLogCopyRows, AddressesWithBldgno, "Street SITE_ADDR VISIBLE NONE;City <None> VISIBLE NONE", ScheduledInspections, "STATIC")
result1 = int(arcpy.GetCount_management(ScheduledInspections).getOutput(0))
arcpy.AddMessage("Inspections Geocoded: " + str(result1))
if result1 < result:
   arcpy.AddWarning("Not all inspections were mapped.")
else:
   arcpy.AddMessage("All inspections mapped.")

# 8. Scheduled Inspections Output
arcpy.SetParameterAsText(2, ScheduledInspections)


How would I work your code into mine to stop the schema lock?
OwenEarley
Frequent Contributor
I don't think the issue is with the locator; it looks like the schema lock happens when the second process attempts to copy rows to the DBF file:

CRWInspectionLogCopyRows = os.path.join(scratch, "CRWInspectionLogCopyRows.dbf")
...
if result == 0:
   arcpy.AddWarning("Process terminated."); sys.exit()
else:
   # The next line is probably the cause of the schema lock:
   arcpy.CopyRows_management(CRWInspectionLog, CRWInspectionLogCopyRows, "")


The DBF file will be locked by the first process and the second process will not be able to write to the same file while it is open. In your script this file would be the same for all users:

E:\geoprocessing\Building\Outputworkspace\CRWInspectionLogCopyRows.dbf

To avoid this on the server you should remove the line that specifically sets the scratchWorkspace:

# 2. Set geoprocessing environment
scratch = arcpy.env.scratchWorkspace
arcpy.env.scratchWorkspace = r"E:\geoprocessing\Building\Outputworkspace" # - REMOVE this line
arcpy.env.overwriteOutput = True


Change to:

# 2. Set geoprocessing environment
scratch = arcpy.env.scratchFolder
arcpy.env.overwriteOutput = True


Then, when simultaneous users run the script, they each get a system-generated job/scratch directory and the DBF files can be created without any locking issues.
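Applied to your script (an untested sketch), sections 2 and 5 would become:

# 2. Set geoprocessing environment
scratch = arcpy.env.scratchFolder
arcpy.env.overwriteOutput = True

# 5. Local variables - now created in the per-job scratch folder:
CRWInspectionLogCopyRows = os.path.join(scratch, "CRWInspectionLogCopyRows.dbf")
ScheduledInspections = os.path.join(scratch, "ScheduledInspections.shp")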

Alternatively, you could use some random temp file name for each DBF if you want to keep your specific scratchWorkspace. This way each process is writing to a different DBF file.
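For example, something like this (again, just a sketch) using Python's uuid module to build a unique file name:

import os
import uuid
import arcpy

arcpy.env.scratchWorkspace = r"E:\geoprocessing\Building\Outputworkspace"
scratch = arcpy.env.scratchWorkspace

# A unique name per request means simultaneous runs never touch the same DBF
unique_name = "CRWInspectionLogCopyRows_{0}.dbf".format(uuid.uuid4().hex)
CRWInspectionLogCopyRows = os.path.join(scratch, unique_name)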

Hope this helps.

Owen
www.spatialxp.com.au
IanPeebles
Frequent Contributor
Owen, so you think removing the second line will solve the issue? This tool worked before. When I first run the tool in ArcGIS Desktop, do I not need to set a scratch workspace? Does everything else look correct?

The output references a layer file on the E: drive. That should be acceptable, correct?
IanPeebles
Frequent Contributor
Owen,  your support solved my issue.  Thank you very much!