POST
Good point, Marco, about the lock topic. I've pulled the SQL for the LOCK_UTIL package and listed it below. In the object locks section, I can see that it is supposed to add an object lock and subsequently delete one. I may eventually need some help inserting debug code as you suggested. I went ahead and wrote a script to iterate through all of my feature classes to determine whether they were in NORMAL or LOAD_ONLY mode; all of them came back as NORMAL. I will continue to troubleshoot and reply if I identify a fix. Unfortunately, I am not having much success with Esri tech support on this issue at the moment; we have a few more steps to try. Thank you again, Marco. Your input is much appreciated.

SQL code from the package is below...
*****************************************************

CREATE OR REPLACE PACKAGE SDE.lock_util IS

  /* Type definitions. */
  SUBTYPE layer_lock_t  IS SDE.layer_locks%ROWTYPE;
  SUBTYPE layer_id_t    IS SDE.layer_locks.layer_id%TYPE;
  SUBTYPE state_lock_t  IS SDE.state_locks%ROWTYPE;
  SUBTYPE state_id_t    IS SDE.state_locks.state_id%TYPE;
  SUBTYPE table_lock_t  IS SDE.table_locks%ROWTYPE;
  SUBTYPE table_id_t    IS SDE.table_locks.registration_id%TYPE;
  SUBTYPE object_lock_t IS SDE.object_locks%ROWTYPE;

  /* Constants. */
  -- The following constant defines the release of lock_util and is used by
  -- the instance startup code to determine if the most up to date version of
  -- the package has been installed.
  C_package_release    CONSTANT PLS_INTEGER := 1008;

  -- Constant names for autolock parameters.
  C_is_autolock        CONSTANT CHAR(1) := 'Y';
  C_is_not_autolock    CONSTANT CHAR(1) := 'N';

  -- Constant names for lock types.
  C_shared_lock        CONSTANT CHAR(1) := 'S';
  C_exclusive_lock     CONSTANT CHAR(1) := 'E';
  C_marked_lock        CONSTANT CHAR(1) := 'M';
  C_shared_lock_all    CONSTANT CHAR(1) := '-';
  C_exclusive_lock_all CONSTANT CHAR(1) := 'X';

  /* Procedures and Functions. */
  -- The following functions perform operations for layer locks stored in
  -- the SDE.LAYER_LOCKS table. Each operation is an autonomous transaction.
  PROCEDURE add_layer_lock (layer_lock IN layer_lock_t);
  PROCEDURE delete_layer_lock (sde_id   IN pinfo_util.sde_id_t,
                               layer_id IN layer_id_t,
                               autolock IN VARCHAR2);
  PROCEDURE delete_layer_locks_by_sde_id (sde_id IN pinfo_util.sde_id_t);
  PROCEDURE update_layer_lock (layer_lock IN layer_lock_t);

  -- The following functions perform operations for state locks stored in
  -- the SDE.STATE_LOCKS table. Each operation is an autonomous transaction.
  PROCEDURE add_state_lock (state_lock IN state_lock_t);
  PROCEDURE delete_state_lock (sde_id   IN pinfo_util.sde_id_t,
                               state_id IN state_id_t,
                               autolock IN VARCHAR2);
  PROCEDURE delete_state_locks_by_sde_id (sde_id IN pinfo_util.sde_id_t);

  -- The following functions perform operations for table locks stored in
  -- the SDE.TABLE_LOCKS table. Each operation is an autonomous transaction.
  PROCEDURE add_table_lock (table_lock IN table_lock_t);
  PROCEDURE delete_table_lock (sde_id   IN pinfo_util.sde_id_t,
                               table_id IN table_id_t);
  PROCEDURE delete_table_locks_by_sde_id (sde_id IN pinfo_util.sde_id_t);

  -- The following functions perform operations for object locks stored in
  -- the SDE.OBJECT_LOCKS object. Each operation is an autonomous transaction.
  PROCEDURE add_object_lock (object_lock IN object_lock_t);
  PROCEDURE delete_object_lock (object_lock IN object_lock_t);
  PROCEDURE delete_object_locks_by_sde_id (sde_id IN pinfo_util.sde_id_t);

  -- The following procedures delete layer, table, state, object locks
  -- stored in SDE.LAYER_LOCKS, SDE.TABLE_LOCKS, SDE.STATE_LOCKS,
  -- SDE.OBJECT_LOCKS respectively within a single autonomous transaction.
  PROCEDURE delete_all_locks_by_sde_id (sde_id IN pinfo_util.sde_id_t);
  PROCEDURE delete_all_locks_by_pid (pid IN pinfo_util.sde_id_t);
  PROCEDURE delete_all_orphaned_locks;

END lock_util;
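Since the delete is complaining about a lock row that is not there, one sanity check I may try is to look for object lock rows whose owning connection is gone. This is only a sketch: the column names come from the object_lock_t ROWTYPE above, and the join against SDE.PROCESS_INFORMATION is my assumption about how current connections are tracked in this release.

  -- Sketch only: list object locks whose sde_id no longer appears in
  -- SDE.PROCESS_INFORMATION (i.e., possibly orphaned locks). Table and
  -- column names are assumed and may differ by ArcSDE release.
  SELECT ol.sde_id, ol.object_id, ol.object_type, ol.application_id,
         ol.autolock, ol.lock_type
    FROM sde.object_locks ol
   WHERE ol.sde_id NOT IN (SELECT pi.sde_id FROM sde.process_information pi);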
Posted 07-02-2013 01:32 PM

POST
Thank you, Marco. This is helpful in troubleshooting further. I had also performed an Oracle trace at the same time as the SDE intercept. When reviewing the trace file via OraSRP, I was able to better identify what I was seeing with the lock identifier syntax that you commented on earlier. In general, below is how I believe the identifier works:

Lock <sde_id,version_id,object_type,application_id,autolock>

as translated from

Lock <220772,22580,1,999,N>

The types of values described above were used in the following SQL, as shown by my trace file:

DECLARE
  object_lock SDE.lock_util.object_lock_t;
BEGIN
  /* ArcSDE plsql */
  object_lock.sde_id         := :sde_id;
  object_lock.object_id      := :object_id;
  object_lock.object_type    := :object_type;
  object_lock.application_id := :application_id;
  object_lock.autolock       := :autolock;
  object_lock.lock_type      := :lock_type;
  SDE.lock_util.add_object_lock (object_lock);
  :sql_code := SDE.sde_util.SE_SUCCESS;
EXCEPTION
  WHEN OTHERS THEN
    :sql_code     := SQLCODE;
    :error_string := SQLERRM;
END;

I agree that it seems as though an attempt is made to delete an object lock but the lock is not there. What is really interesting to me is that the trace file contains a few sde_id values throughout but never the '220772' found in the SDE intercept files.
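To confirm whether that particular lock row ever exists at synchronization time, the values from the intercept message can be plugged straight into a query against the locks table. A sketch, using only the columns referenced by the PL/SQL above:

  -- Sketch: check for the lock named in "ORA-20048: Lock <220772,22580,1,999,N>
  -- not found, not deleted." Values come from the intercept output; column
  -- names come from the PL/SQL block above.
  SELECT *
    FROM sde.object_locks
   WHERE sde_id         = 220772
     AND object_id      = 22580
     AND object_type    = 1
     AND application_id = 999
     AND autolock       = 'N';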
Posted 07-02-2013 06:03 AM

POST
If you perform the unregister on the replica as the replica owner (user) it works cleanly. If you do this as any other user (even as SDE) it may not get deleted entirely, requiring the use of SDE command line to delete sync send versions. That's been my experience at least...
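If you need to find the leftovers, the replica synchronization versions usually show up in the SDE repository's VERSIONS table. Only a sketch; the SYNC_ naming pattern is my assumption, so compare against what the sdeversion command actually reports on your instance.

  -- Sketch: look for leftover replica synchronization versions in the SDE
  -- repository. The SYNC_ name pattern is an assumption; adjust it to match
  -- the versions you actually see.
  SELECT owner, name, creation_time
    FROM sde.versions
   WHERE UPPER(name) LIKE 'SYNC_%';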
Posted 07-02-2013 03:09 AM

POST
Within the same folder as your geodatabase, copy in an existing file such as a text or MS Word document; then try to edit it and save changes. Do you get similar results when attempting to save edits to that file? It may be that your entire volume is read-only despite what the security settings indicate within Windows.
Posted 07-01-2013 07:10 PM

POST
If you are looking to auto-populate a field based on the value of OBJECTID, in my opinion your best bet is to create a database trigger. When a new record is added and the OBJECTID value is autogenerated by ArcSDE, your trigger will be able to use that value to determine the value of your custom field. However, the trigger will need to fire after the OBJECTID trigger from ArcSDE fires. There is some great online help available for creating basic triggers like this for both Oracle and SQL Server, depending on which RDBMS you are using. If you are not using an RDBMS, your options are much more limited and you'll need to take a different approach... most likely a Python script run on a periodic basis.
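As a rough illustration of the trigger idea (Oracle syntax; every name below is hypothetical, and the FOLLOWS clause that orders it after the SDE-generated OBJECTID trigger requires Oracle 11g or later):

  -- Minimal sketch, assuming a table MY_TABLE whose OBJECTID is populated by
  -- an ArcSDE-created trigger (assumed name SDE_MY_TABLE_OBJECTID_TRG) and a
  -- custom column MY_FIELD derived from it. Adjust all names to your schema.
  CREATE OR REPLACE TRIGGER my_table_set_my_field
    BEFORE INSERT ON my_table
    FOR EACH ROW
    FOLLOWS sde_my_table_objectid_trg
  BEGIN
    :NEW.my_field := 'FEAT-' || :NEW.objectid;
  END;
  /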
Posted 07-01-2013 06:57 PM

POST
From the RDBMS directly, you can use the following queries.

To get a list of all tables registered with the geodatabase:

select table_name from sde.TABLE_REGISTRY;

To get a list of all layers (i.e., all GDB-registered tables which have a spatial column) within SDE:

select table_name from sde.LAYERS;

If you want more spatial-specific information, the following Python script has worked for me (substitute your own info):

import os, subprocess, arcgisscripting

gp = arcgisscripting.create(9.3)
gp.workspace = r"C:\Users\User1\AppData\Roaming\ESRI\ArcCatalog\test.sde"
server = 'server'

def describeFeatures(sdeWorkspace, logWorkspace, logName, instance, user, psswd):
    logfile = open(os.path.join(logWorkspace, logName), 'w')
    datasets = gp.listdatasets("", "")
    for dataset in datasets:
        gp.workspace = sdeWorkspace + os.sep + dataset
        for fc in gp.ListFeatureClasses():
            # run 'sdelayer -o describe_long' for each feature class and capture its output
            args = ['sdelayer', '-o', 'describe_long', '-l', str(fc) + ',SHAPE',
                    '-i', instance, '-u', user, '-p', psswd, '-s', server]
            p = subprocess.Popen(args, stdout=subprocess.PIPE)
            output = p.stdout.read()
            logfile.write(fc)
            logfile.write(output)
            logfile.write("\n")
            print fc, output
    logfile.close()

if __name__ == "__main__":
    # connection to database
    sdeWorkspace = gp.workspace
    logName = "FeatureInfo.txt"
    logWorkspace = r"C:\Users\user1\Documents\python\DescribeFeatureClasses"
    instance = 'esri_metrorep'
    user = 'sde'
    psswd = 'sde'
    describeFeatures(sdeWorkspace, logWorkspace, logName, instance, user, psswd)
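If you'd rather stay in SQL for the spatial details, the repository's LAYERS table carries some of the same information that sdelayer reports. A sketch only; the exact column list varies by release, so describe sde.layers first:

  -- Sketch: basic per-layer spatial metadata straight from the SDE repository.
  -- The spatial_column and srid column names are assumptions; verify them
  -- against your release before relying on this.
  SELECT owner, table_name, spatial_column, srid
    FROM sde.layers
   ORDER BY owner, table_name;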
Posted 07-01-2013 06:26 PM

POST
I have a one-way FGDB replica that is not applying changes after initially being registered and created successfully. Once edits are made in the master GDB (an Oracle 11g R2, ArcSDE 9.3.1 SP2 geodatabase) to any of the participating object classes, the file timestamp of the FGDB replica never updates. In reviewing the replication log for the replica via ArcCatalog, I see a GENERAL SYSTEM FAILURE error. I then performed an SDE intercept while attempting synchronization, and below is what I found in the intercept output:

NString: "Error executing stored procedure sde.lock_util.delete_object_lock"
NString: "ORA-20048: Lock <220772,22580,1,999,N> not found, not deleted."

There is not much that can be found on this issue within the Esri forums, and Oracle does not seem to have much of anything on this particular error number. I have confirmed that all GDB permissions are correct, all object classes participating in replication are registered as versioned, and everything exists as high precision. What do the above errors actually mean? We have 17 other one-way FGDB replicas that update just fine, one of which is based on different feature classes of the same Oracle database mentioned above. Thanks for any help you can provide!
Posted 07-01-2013 06:08 PM

POST
I am using Data Reviewer 10.1 and I have created an RBJ file to perform checks against a wealth of versioned feature classes within our 10.1 Oracle geodatabase. After the process finishes (or at least after it runs for quite a while) with the various checks that I have configured, I receive the attached error (see screenshot) which states "Unable to start editing on batch run table". Looking back at some outdated forum posts, I believe someone had encountered the same error and their solution was to register one or more tables as versioned. That being said, all of my spatial data is already registered as versioned so I assume that the previous post may have been made with respect to the batch run table itself (REVBATCHRUNTABLE). So, my question is: does the REVBATCHRUNTABLE need to be deliberately registered as versioned before the table can be written to during a Data Reviewer session? If not, then what could be causing this error to appear?
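For what it's worth, one way I might check how REVBATCHRUNTABLE is registered is to query the SDE repository directly. This is only a sketch; treating the presence of the A/D delta tables as a sign of versioning is an assumption on my part, and table/column names may differ by release:

  -- Sketch: is REVBATCHRUNTABLE registered with the geodatabase, and do the
  -- A<registration_id>/D<registration_id> delta tables (created when a table
  -- is registered as versioned) exist? Names are assumptions; verify locally.
  SELECT r.owner, r.table_name, r.registration_id,
         (SELECT COUNT(*)
            FROM all_tables t
           WHERE t.owner = r.owner
             AND t.table_name IN ('A' || r.registration_id,
                                  'D' || r.registration_id)) AS delta_table_count
    FROM sde.table_registry r
   WHERE UPPER(r.table_name) = 'REVBATCHRUNTABLE';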
Posted 06-11-2013 12:40 PM

POST
Looking at this error and thinking back on my experience with Geoportal 1.1.1, it sounds to me like something is misconfigured in the Synchronizer Parameters section of the GPT.XML file. Another possibility is that your Geoportal host machine has ports blocked due to firewall issues. I would try taking down the Windows Firewall (and any others, like McAfee or Norton) first; if that doesn't resolve the issue, I would review the Synchronizer Parameters in the GPT.XML file. Is it possible that your ArcGIS Server services are secured? Start there and let us know what you find out.
Posted 11-02-2011 07:48 AM

POST
I've been trying to change the initial default extent of the search map in Geoportal so that it starts at the contiguous US rather than the entire world. I'm using one of Esri's online map services, and I've read elsewhere that adding the following attribute to the InteractiveMap tag within the gpt.XML file allows this:

initialExtent="<xmin;xmax;ymin;ymax>"

I have tried it with decimal degrees (lat/long) and also with coordinates in the basemap's WKID. The Geoportal application seems to ignore this entry, though.

Reference: http://sourceforge.net/tracker/?func=detail&aid=3142986&group_id=306452&atid=1291154

Any thoughts on how to change the initial extent?
Posted 10-31-2011 06:05 AM