POST
SDE Enterprise 10.2 (Oracle 11gR2) and Desktop 10.2. I will use FME to edit the primary SDE. We don't use replication since it has so much overhead: it requires versioning, and the GlobalIDs and schemas have to match between databases (yuck!). Just give me a tool that reads a UNIQUE_ID column and a date field holding the last time each feature was updated from both databases and figures out the inserts/updates/deletes/unchanged records. Plus, FME has much better scheduling abilities. On the publishing SDE I want the geometric network available so our engineers can use it for tracing. It needs to have network flow set to the digitized direction, and this is the one step that forces me to version the data, no matter how hard I try to avoid it. Here's the workflow I was originally going for. I originally planned for the dataset on the primary viewing SDE to be unversioned for simplicity; our GIS administrators are very leery about having a versioned dataset on such a heavily used database.
1) Editors on the editing SDE make edits in versions. Custom tools populate a UNIQUE_ID field based on an Oracle sequence and update a MODIFIED_DATE field.
2) Editor versions are posted to DEFAULT when they're complete.
3) Each night, an FME workspace runs, compares IDs/dates, and pushes the inserts/updates/deletes. The FME SDE30 writer edits the feature classes directly, bypassing edit-session and versioning requirements.
4) Once the FME script is done, all feature classes have the latest data.
5) Recreate the geometric network using the ArcToolbox tool, since FME doesn't update the background tables that drive the geometric network.
6) Version the dataset.
7) Run the Set Flow toolbox tool to get digitized-direction flow.
8) Unversion the dataset and choose to compress the edits in the DEFAULT version to base.
This workflow would keep the database versioned for as little time as possible; however, steps 5, 6, and 8 will definitely require that no schema locks exist.
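The compare step in (3) can be sketched in plain Python. This is only a minimal illustration of the classification logic, not the actual FME workspace; the dicts stand in for {UNIQUE_ID: MODIFIED_DATE} snapshots read from the two databases:

```python
# Sketch of the nightly compare: classify features as inserts, updates,
# deletes, or unchanged, given {UNIQUE_ID: MODIFIED_DATE} snapshots from
# the editing (source) and publishing (target) databases.

def diff_snapshots(source, target):
    """Return (inserts, updates, deletes, unchanged) lists of UNIQUE_IDs."""
    inserts = [k for k in source if k not in target]
    deletes = [k for k in target if k not in source]
    updates = [k for k in source if k in target and source[k] != target[k]]
    unchanged = [k for k in source if k in target and source[k] == target[k]]
    return inserts, updates, deletes, unchanged

# Hypothetical snapshots: feature 3 was added, 2 was edited, 9 was removed.
src = {1: "2014-02-01", 2: "2014-02-04", 3: "2014-02-05"}
tgt = {1: "2014-02-01", 2: "2014-01-15", 9: "2013-12-31"}
print(diff_snapshots(src, tgt))
# -> ([3], [2], [9], [1])
```

The same pass then drives three writers: inserts and updates go to the publishing feature class, deletes are applied by UNIQUE_ID, and unchanged records are skipped.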
I saw that at 10.2.1 the 'Rebuild' and 'Repair' network tools are now in the toolbox, so I guess I don't need to delete and recreate the network after all. To avoid making the schema changes each night, I was instead thinking of leaving the dataset on the primary SDE permanently versioned. Then I'd use the slower FME writer that writes to versioned SDEs using proper Esri edit sessions; it would write directly to the DEFAULT version. Then I'd probably need to run the Repair and 'Set Flow' tools on the network, since I don't think FME is great at handling these while writing features. The last big step is the compress, to make sure the edits make it into the base tables so the database is nice and clean. If there are users and web maps looking at the DEFAULT version when the compress happens at night, will it still compress the edits I just made to base?
02-05-2014 10:45 AM
POST
I am trying to come up with a compress workflow. I was hoping someone could clarify what happens to states when users or web map services are looking at the data while a compress runs. Here is the scenario: we have a primary SDE that all general users and all web services access for GIS data, and an editing SDE where GIS editors modify the data. That data is then synced to the primary SDE nightly. Currently the primary SDE is entirely unversioned; however, we now want to host a dataset with a geometric network. To update this dataset, the primary SDE will need versions (we'd probably just edit DEFAULT directly), and once the sync is complete we'd need to run a compress. However, it's my understanding that if there is a lock on states in an SDE, then the compress will skip the lineage for the state that's being viewed. So I'm not sure we'd get an effective compress if lots of users and web maps are looking at DEFAULT. I'd like to avoid kicking everyone off the server if I can, even at night, since the web services are accessed from outside our organization. There really isn't any other editing on the server; only nightly Python/FME data pushes through non-versioned operations. Thanks for any help you can provide.
02-05-2014 09:23 AM
POST
I'm having trouble getting a SELECT query in PL/SQL to actually return records from a version in my Oracle SDE. I tried with a query layer too, but I just can't seem to get the EXEC sde.version_util.set_current_version('') command to work. I'm using a 10.1 SP1 client; my test SDE database is 10.2. Here's how to recreate my little test environment:
1. In ArcCatalog, create a new point feature class on the SDE named 'MY_FC'.
2. Version the feature class.
3. Right-click the feature class, click Manage, and then Create Versioned View. This created a view named MY_FC_VW.
4. Create a geodatabase version on the SDE named 'MY_VERSION'. It will be a child of the DEFAULT version.
5. Open ArcMap, add the feature class, point it to the version, add one point to it, then save edits and stop editing.
So now the goal is to run a query from PL/SQL that returns that new point record while it is in that child version, and this is where I get lost. I'm using the Esri help as a guide (http://resources.arcgis.com/en/help/main/10.1/index.html#/in_Oracle/006z0000000v000000/).
6. Open PL/SQL and connect to the database.
7. Open a SQL window and run the command EXEC sde.version_util.set_current_version('MY_VERSION'). This is when I get the error ORA-00900: invalid SQL statement. A coworker recommended running the command in the PL/SQL command window as BEGIN EXEC sde.version_util.set_current_version('MY_VERSION') END; but then I couldn't figure out how to run the SELECT statement and see a returned record.
Anyone have any ideas? I ultimately just need to run a query external to ArcGIS that returns all the records in a feature class for a particular version in the geodatabase, and then extract those records to a CSV file.
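For what it's worth, ORA-00900 here is usually a syntax issue rather than an SDE problem: EXEC is SQL*Plus shorthand, not SQL, so a generic SQL window rejects it, and EXEC is also not valid inside a BEGIN ... END block. A sketch of the anonymous-block form that a plain SQL window should accept, using the MY_VERSION and MY_FC_VW names from the steps above:

```sql
-- EXEC is SQL*Plus shorthand; in a plain SQL window, wrap the call in an
-- anonymous PL/SQL block instead (note the semicolons):
BEGIN
  sde.version_util.set_current_version('MY_VERSION');
END;
/

-- Run in the SAME session, the versioned view should now resolve rows
-- for MY_VERSION:
SELECT * FROM MY_FC_VW;
```

The version setting is per-session, so the SELECT has to run on the same connection as the block.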
09-12-2013 08:39 AM
POST
I have just upgraded to Workflow Manager 10.1. I am trying to test the new "Export Job to File" function as a way to archive old jobs that no longer need to be in our system every day. However, when I select a job from a query and then right-click it, the "Export Job to File" option is greyed out. Is there something I have to do after upgrading to 10.1 to enable this option? I have every security privilege. Thank you.
06-11-2013 06:30 AM
POST
Is anyone running into compatibility problems with ModelBuilder models between 10.0 and 10.1? We are in the process of upgrading our power users to 10.1 for testing, and I noticed the problem below while working on a new model. I created a model in an SDE toolbox at 10.1. When users with 10.0 clients try to browse for it, the toolbox is present, but the model is not visible. I looked for a way to downgrade my model, but there is no option for that. In addition, if I create a model in an SDE using a 10.0 client and then modify that model with a 10.1 client, the contents of the model are no longer visible from the 10.0 PC. So I'm not sure if this is a fluke or by design.
08-22-2012 08:42 AM
POST
What's the latest version of ArcGIS Diagrammer for 10.0, and where is it housed now? I have 10.0.1. The reason I ask is that I've run into a problem publishing the schema for a feature class that contains subtypes. I've narrowed the problem down to either the import of the XML into Diagrammer or the publishing from Diagrammer back to XML, based on the test below:
1. Start with an existing, working schema on an SDE 10.0 SP3 (Oracle 11g) database which contains subtypes.
2. Export it to an XML schema with the ArcGIS tool. (I've attached the XML created for my problem layer at this point.)
3. Just to prove this schema is valid, reimport it into SDE; it works.
4. Now open the XML from step 2 in ArcGIS Diagrammer and immediately publish it to another XML.
5. When I import this second XML into SDE, I get an error about reading a string at column 80,232 of line 1, or something like that.
Feature classes without subtypes assigned do not create errors when put through the same test above. Thanks for any help you can provide. -Andrew
07-12-2012 12:36 PM
POST
Forgot to say: I'm at ArcGIS Desktop (and Data Reviewer) 10, SP3. SDE is Oracle 11g.
06-13-2012 07:26 AM
POST
When I try to do a table-to-table check and set it up to compare a date field on either table, all date fields are removed as options in the drop-down list. Is there a reason date fields can't be used with this check the way numeric fields can? I'm using the 'Compare Attributes' option on the check so that I can select just the two or three fields I want to use for the comparison. Ultimately, what I'm trying to do is find differences in dates between two copies of the same feature class on different SDEs. I have a detention pond polygon feature class on one server and a published copy on another. Both have a UNIQUE_ID field and a DATE_UPDATED field. For the check, I want to match up the records on both sides using the UNIQUE_ID field, and then see which records have a greater DATE_UPDATED value on one server.
06-13-2012 07:25 AM