POST
Hi folks, I am trying to determine whether this is just a dead-end approach. I have a fairly convoluted deployment workflow whereby various actors test a given MXD and deploy it to varied AGS servers. As the trustee of the MXD I have little direct control over these resources, though I can make suggestions.

One approach that has worked okay with registered folders is to have a common AGS data store name that everyone agrees represents a project's data source. Each AGS server can set this up as needed under the rubric of that name. I then provide an ArcPy deployment script that (via an administrator connection) reads the data store's publisher folder location and swaps the correct folder path into the MXD before deploying to AGS. This works okay.

Ideally it would be grand to do the same with SDE connections. However, as one might expect, management of the passwords is the issue. As the AGS administrator, one can run arcpy.ListDataStoreItems and get the SDE credentials for a given data store (I am only interested here in the publisher). But the password is returned as "ENCRYPTED_PASSWORD". Under the current security model for 10.2.2, is there any arcpy method whereby I can take that encrypted password and create a new SDE connection with it to then install into an MXD? arcpy.CreateArcSDEConnectionFile_management wants the clear-text version of the password, not the encrypted string. Is there a way to give this method the encrypted string, or am I just heading off into dodgy security territory?

I have no need of the actual password itself, but I suppose I am moving the credential out of AGS and into the MXD (just for the deployment), and I can see that might be a tad insecure. However, as this runs through an AGS administrator connection, I would think it is within my scope. I am not quite sure. Any feedback would be appreciated. Thanks, Paul
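For what it's worth, the folder-swapping half of the workflow can be sketched in plain Python. The exact shape of what arcpy.ListDataStoreItems(connection, "FOLDER") returns at 10.2.2 is an assumption here (entries of name plus publisher/server paths); the helper itself is just list handling:

```python
def publisher_folder(datastore_items, store_name):
    """Find the publisher-side folder path for an agreed-upon data store name.

    `datastore_items` is assumed to look like the output of
    arcpy.ListDataStoreItems(admin_connection, "FOLDER") -- a list of
    [name, [publisher_path, server_path]] entries (verify the shape on
    your release).  Returns None if the named store is not registered.
    """
    for entry in datastore_items:
        name, paths = entry[0], entry[1]
        if name.lower() == store_name.lower():
            return paths[0]  # publisher path assumed to come first
    return None
```

The deployment script would then feed the returned path into whatever swaps the workspace inside the MXD before publishing.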
12-04-2014 05:00 AM
POST
Hi Laureano, Try the following, noting the equals sign:

SDO_INDEX_SHAPE "TABLESPACE = SDE_INDEX"

See http://webhelp.esri.com/arcgisserver/9.3.1/java/index.htm#geodatabases/about_o-659090405.htm Cheers, Paul
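If it helps, the usual way to get that keyword in place is to round-trip the DBTUNE table with sdedbtune; the server and credential values below are placeholders:

```
sdedbtune -o export -f dbtune.out -u sde -p sde_password -i esri_sde -s myserver
# edit dbtune.out: under your configuration keyword, set
#   SDO_INDEX_SHAPE    "TABLESPACE = SDE_INDEX"
sdedbtune -o import -f dbtune.out -u sde -p sde_password -i esri_sde -s myserver
```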
07-29-2011 06:36 AM
POST
Hi Nick, Sorry, if you are only working with SDE.ST_GEOMETRY I don't think there is a way to do it on the SQL side of things. Converting the data to SDO_GEOMETRY is the only way I know of to expose the innards of a geometry. What happens when you run SDE.ST_AsText on an ST_GEOMETRY with curves? I've never tried. Cheers, Paul
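For anyone who wants to try it, the call is just a scalar function on the shape column; the table and column names here are hypothetical:

```sql
-- Hypothetical layer MYLAYER with an SDE.ST_GEOMETRY column SHAPE;
-- see what ST_AsText emits for a single curve-bearing row.
SELECT sde.st_astext(t.shape)
  FROM mylayer t
 WHERE ROWNUM = 1;
```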
06-21-2011 12:38 PM
POST
Hello, I do this all the time. I take it you've discovered that Oracle's R-Tree spatial index does not support curves in geodetic coordinate systems and you need to get rid of them, correct?

While ArcCatalog won't load the layer, you can load it with sdeimport, though the indexing step will then fail. However, you will have a layer: the SDO_GEOMETRY will contain the curve features and the SE_ANNO_CAD_DATA field will contain the mysterious ESRI blob data (i.e., it will not be NULL). At this point you can inspect the SDO_GEOMETRY for the curve features, but you can't do much about them, as the Oracle Spatial utility SDO_GEOM.SDO_ARC_DENSIFY does not work in a geodetic context and Oracle will not allow you to transform them into a projected system. But you could identify them here if you just want to delete them, or complain to the source about them.

So to densify them you need to back up a step and import your curves into a projected coordinate system that works for you in Oracle. Maybe a nice Albers Equal Area or such, your call. Once you load them in this manner you will again have the curves in both SDO_GEOMETRY and SE_ANNO_CAD_DATA. Now you can densify the SDO_GEOMETRY, AND remember after you do so to set SE_ANNO_CAD_DATA to NULL; if you don't, ArcSDE will keep reading the curves out of the blob. I believe the SDO_GEOMETRY is ignored when the blob field is not null. Hope that helps. Cheers, Paul
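In a projected system, the densify-then-null-the-blob step could look roughly like this. The table name and tolerance are hypothetical, the non-NULL SE_ANNO_CAD_DATA test is used as the curve marker described above, and you should verify column names on your release:

```sql
-- Densify arc features in hypothetical layer MYLAYER (projected CS),
-- using the layer's own DIMINFO; arc_tolerance is in layer linear units.
UPDATE mylayer t
   SET t.shape = SDO_GEOM.SDO_ARC_DENSIFY(
                   t.shape,
                   (SELECT diminfo
                      FROM user_sdo_geom_metadata
                     WHERE table_name = 'MYLAYER'),
                   'arc_tolerance=0.05')
 WHERE t.se_anno_cad_data IS NOT NULL;

-- Then clear the ESRI blob so ArcSDE reads the densified SDO_GEOMETRY
-- instead of the curves cached in SE_ANNO_CAD_DATA:
UPDATE mylayer t
   SET t.se_anno_cad_data = NULL
 WHERE t.se_anno_cad_data IS NOT NULL;
```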
06-21-2011 04:53 AM
POST
Hi Stefano, Funny how this topic has only come up in the last couple of weeks, as 11gR2 has been out a while. This past weekend I put out a post on the matter over on the Oracle Spatial OTN: http://forums.oracle.com/forums/thread.jspa?threadID=2212186&tstart=0

I am just in the early stages of testing, and as far as I know no one other than this fellow http://forums.oracle.com/forums/thread.jspa?threadID=2211540&tstart=0 has ever made a peep on the matter on a public forum. My initial testing suggests that ArcSDE is oblivious to the change in the data type definition. A big issue is being able to move data between standard and uber SDO_GEOMETRY instances. I found that sdeimport and sdeexport seemed to work just fine between the two (kind of obvious, but you never know). With datapump out of the question and only the unsupported exp/imp tools available for moving stuff around, the sde tools and ArcCatalog might be a very good choice for such tasks. It is another question whether the folks at ESRI will officially support uber SDO, yet to some degree it really shouldn't matter.

So I can't answer your question as to whether it is "safe"; I am not at the point yet to make a recommendation. On one hand, just what are we supposed to do with these big polygons? Sure, I know what ESRI recommends, of course, but I have lots of Oracle Spatial applications in the pipeline and cannot utilize SDE.ST_GEOMETRY for them. On the other hand, once you "go large" you suddenly lose all the easy interoperability that makes SDO so much nicer than SDE.ST_GEOMETRY. Datapump is out, database links are out, transportable tablespaces (I think) are out. I am still sitting on the fence and very much interested in other folks' comments.

I might ask, why do you want to do this? I work in environmental science, so I am using datasets that mainly model hydrology. I wonder sometimes if it's just me and my datasets that are the problem. Everyone else modeling tax parcels and such probably thinks this is all nuts. Ideally we should try to keep this thread tied to the ArcSDE questions; do post the Oracle-specific questions over on OTN if you have them. Cheers, Paul
04-25-2011 04:31 AM
POST
Hi folks, I am stymied trying to find out how to recalculate a feature class's extent in a database using the Python geoprocessor. Is there an online example somewhere? Thanks, Paul
11-23-2010 05:23 AM
POST
Thanks! I can wrap that up into an exists function. Cheers, Paul
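The wrapper is trivial once you have a list of domain names from whatever mechanism your release offers (later releases grew arcpy.da.ListDomains; at 9.3.1 you may be stuck with the try/except route). A sketch of the exists function itself, independent of how the names are obtained:

```python
def domain_exists(domain_names, name):
    """Case-insensitive membership test over a list of domain names.

    `domain_names` might come from, e.g.,
    [d.name for d in arcpy.da.ListDomains(gdb)] on releases that have it.
    """
    wanted = name.lower()
    return any(n.lower() == wanted for n in domain_names)
```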
10-27-2010 02:51 AM
POST
Hi folks, Simple question: I cannot figure out how to gracefully check whether a domain exists in a file geodatabase using Python with ArcGIS 9.3.1 SP2. I can try to create the domain afresh and then catch the error, but it seems like something I should be able to test for. Thanks, Paul
10-26-2010 08:46 AM
POST
Hi Nicholas, No one is jumping in to answer your question. I've never tried this myself, but I think the answer is "probably no".

First, you need to check whether both instances involved in the database link have the same type OIDs for all the types and subcomponents of SDE.ST_GEOMETRY. If you installed ArcSDE on your Oracle instance before 9.3, the values for the type OIDs were arbitrary, and subsequent upgrades would not change them. See http://resources.arcgis.com/content/kbase?fa=articleShow&d=34928 None of my production servers have the same OIDs, as they date back to 9.1 days. Attempting to move SDE.ST_GEOMETRY from one to the other is futile, as the servers will not recognize the other server's types as even being SDE.ST_GEOMETRY. It might work if both servers were fresh installs of ArcSDE. I looked at my 11gR2 test machine, on which I installed 9.3.1 fresh, and those types match the IDs in the above-mentioned article. My production machines are slated for an 11gR2 upgrade, but even then I think the type OIDs will stay the same. Until someone rebuilds my production servers from scratch, I am stuck.

Secondly, your ST_SRID coordinate system values from SDE.ST_SPATIAL_REFERENCES must match between servers. Otherwise one server will have no idea what coordinate system the geometry from the other server has. We've talked in the forum about this issue and how difficult it would be to conquer. See http://forums.arcgis.com/threads/3464-How-to-ensure-consistent-ST_Geometry-SRIDs-when-importing-feature-classes So the geometry on server A might have ST_SRID 66, meaning NAD83, but server B probably has ST_SRID 66 assigned to some other coordinate system. How do you bring this into harmony? I have no idea.

Thirdly, what are you really after in terms of using this database link? If all you want to do is pull geometries from one server to another, then you might get things to work with some (a lot of?) effort. But if you expect to "work" across the database link with queries and such, you can forget it. Domain indexes such as the SDE.ST_SPATIAL_INDEX don't work across database links. This is the same for Oracle SDO_GEOMETRY; you might want to look at http://forums.oracle.com/forums/thread.jspa?threadID=375036

Anyhow, that's my two cents, and feel free to correct anything I misstated. To my mind it's just not worth the effort to try this. If you are using Oracle and ArcSDE, why not overcome the first two issues by using MDSYS.SDO_GEOMETRY storage rather than SDE.ST_GEOMETRY? Cheers, Paul
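The first check (matching type OIDs) can be done with a dictionary query on each instance; if the two result sets differ, the link is a non-starter:

```sql
-- Run on both instances and diff the output: the SDE type OIDs must match
-- for one server to recognize the other's SDE.ST_GEOMETRY.
SELECT type_name, type_oid
  FROM dba_types
 WHERE owner = 'SDE'
 ORDER BY type_name;
```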
08-05-2010 04:29 AM
POST
"Paul, are you sure the Spatial Index is getting updated correctly when using a TRUNCATE?"

Hi dhuhkosi, Your question is most interesting, but I admit I have always just trusted Oracle and ESRI to do the truncation correctly. Basically, every domain index has "events" that watch for changes to the host table: http://download.oracle.com/docs/cd/B14117_01/appdev.101/b10800/dcidmnidx.htm The one we are talking about here is ODCIIndexTruncate(). The ESRI folks store the procedure in the SDE schema, in the body of the SDE.ST_DOMAIN_METHODS type, so we can look right at the PL/SQL code that fires; it's just 47 lines of code. To summarize: it first checks whether two specific conditions are true, in which case it does nothing; otherwise it determines the name of the domain index table and truncates it.

I just tried your experiment. I loaded 443,154 points into a layer and checked that the domain index had 443,154 records. Then I truncated the host table and found the domain index table now had zero records. So I am not seeing what you are seeing. However, as I said, there are two conditions in the code whereby the truncation does not take place. Perhaps your situation is encountering those conditions? I'd say your next step is ESRI support, unless someone else can succinctly explain those conditions. Please tell us what you find out, as now I am curious. Cheers, Paul
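The experiment is easy to repeat. The domain index table name below is made up; look up the real one for your layer first:

```sql
-- Count host table vs. its spatial (domain) index table, truncate, recount.
-- S57_IDX$ is a hypothetical index table name; find yours before running.
SELECT COUNT(*) FROM my_points;
SELECT COUNT(*) FROM s57_idx$;

TRUNCATE TABLE my_points;

SELECT COUNT(*) FROM my_points;   -- expect 0
SELECT COUNT(*) FROM s57_idx$;    -- expect 0 if ODCIIndexTruncate fired
```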
06-16-2010 03:25 AM
POST
Hi dhuhkosi, The SDE.ST_SPATIAL_INDEX index type is a database domain index, just like MDSYS.SPATIAL_INDEX or other domain index types such as Oracle Text. So Oracle does the backend work of noting that you've truncated the table and truncating the spatial index to match. Nothing special that you need to do (well, do bear in mind sometimes you get orphans - http://resources.arcgis.com/content/kbase?fa=articleShow&d=34324 - but that's some kind of hiccup in ArcSDE where the index id value increments without getting rid of the indexes for the old value).

One thing you might well WANT to do is drop the spatial index entirely for the duration of the load and recreate it after the load is complete. This will speed up the load for your larger tables. You can either drop the index via SQL or just set the layer to load_only_io via sdelayer or Python, then set the layer back to normal_io when the load is complete. I don't believe ArcSDE tracks the status of the domain index apart from checking it in realtime when it needs to, so you can drop and recreate the spatial index either via SQL or via ESRI without any problems, in my experience.

Locking users out is a good idea. How does that reflect through ArcMap? Does the user get a nice, informative message as to why they cannot connect? And how do the middleware servers handle the locked condition? For example, just recently we found AGS 9.3.1 creating dozens and dozens of inactive sessions, to the point where eventually the database hit its upper limit of sessions. It turned out the db account being used had gone into that Oracle "password warning mode", saying the password was about to expire. Once the warning was removed, it all went back to normal. The middleware login just did not like that warning message and connected over and over. And what about users that are already connected at the time of the lockout? Are you okay with the effort to manually track down and evict users at the start of the load process? You can walk over to these folks, or phone them, and nicely tell them to get out? But you just don't want anyone new to come on board, right? I'd very much like to hear how this goes for you. Please send us the results of your testing. Cheers, Paul

Pdziemiela: 1) "The python route will work or easier still is to do a TRUNCATE on the table via the database and then sdeimport -o append into the empty table (don't delete the rows with DML - slow)." Regarding "to do a TRUNCATE on the table via the database": do you have to TRUNCATE both the "business" table and the "spatial index table" for this to work?

2) "The best and nicest thing to do is to keep everyone out during this time of data instability." Yeah, this is what I believe too. Hence my thought about generating an Oracle SQL script that LOCKS all users except the one which is loading the data.

3) "Again not sure what part of things you want to improve." Sorry if I was not too clear. I think it is the idea of stopping and starting ArcIMS and ArcGIS Server. However, after reading both your posts and from some testing here, I don't really want to start very slow-running DELETE statements on the data.

Finally, one point, if you could clarify whether you see similar results to me: if I try to delete 50,000 records from a feature class using the geoprocessing objects, it takes approximately 10+ minutes to run. If I try to delete 50,000 records from a feature class using a simple SQL statement from Oracle SQL Developer, it takes less than a minute to run.
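The load_only_io dance mentioned above, with placeholder connection values:

```
# Put the layer in load-only mode (spatial index dropped for the load):
sdelayer -o load_only_io -l roads,shape -u loader -p xxxx -i esri_sde -s myserver
# ... bulk load here (sdeimport -o append, etc.) ...
# Back to normal mode; the spatial index is rebuilt:
sdelayer -o normal_io -l roads,shape -u loader -p xxxx -i esri_sde -s myserver
```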
06-14-2010 03:08 AM
POST
Hi dhuhkosi, I am not exactly clear on what the problem is with your current procedure. Is it the time expended to do the update, or is it the exclusion of the users and shutdown of the middleware servers? The way you are doing things seems very "safe" to my mind. I assume you also shut down all the editing users during the sdeexport step. An hour seems like nothing to me; I oftentimes measure loads or processing steps in terms of days.

As kreuzrsk mentioned, if you don't mind the possibility that AGS and IMS users may not find their data, or find just part of their data, during the load, you can load the data while the servers are active. The python route will work, or easier still is to do a TRUNCATE on the table via the database and then sdeimport -o append into the empty table (don't delete the rows with DML - slow). Nothing in ArcSDE can stop or hinder the truncation, no matter who is logged in or how. But your connected users could end up looking at no data or partial data, or simply crash. The best and nicest thing to do is to keep everyone out during this time of data instability.

It makes sense to me to try the switcheroo idea: keep an exact copy of the table. You need to make 100% sure the copy has exactly the same coordinate system as the master and that the ST_SRIDs match. Load the copy and then rename it as the master via the database, "behind the back" of ArcSDE. But your objectids will most likely change, and you will hose anyone currently attached to that table. If time is of the essence, you could load all the copies at your leisure and then force everyone out over lunch and swap the table names at that time? Again, I'm not sure what part of things you want to improve. Cheers, Paul
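The TRUNCATE-plus-append route sketched above, with hypothetical table, file, and connection names:

```
# 1) Empty the table instantly (DDL, not slow row-by-row DML):
#    in SQL*Plus:  TRUNCATE TABLE roads;
# 2) Append the fresh export into the now-empty table:
sdeimport -o append -t roads -f roads.exp -u loader -p xxxx -i esri_sde -s myserver
```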
06-11-2010 09:53 AM
POST
Hi nwingfield, You don't mention the database you are using, and it's hard to guess now that everything is lumped together into this single "geodatabase" mish-mash of a forum. My experience is on Oracle, so I hope it applies to your situation.

The SDE.ST_GEOMETRY SRID is just a sequence number that starts at 1 and increments each and every time a new coordinate system with new attributes is encountered by a given installation of SDE. This goes far beyond the projection basics such as NAD83 or WGS84; it also covers the offsets, the grid size, and the extents (check out the SDE.ST_SPATIAL_REFERENCES table for an idea of the possible things SDE tracks). So taking a brand new, empty installation of ArcSDE, let's say you run sdeimport and load some NAD83 data as the SDE.ST_GEOMETRY datatype. ArcSDE will store the import's coordinate system details and assign the ST_GEOMETRY SRID to be 1. Then you come along with some WGS84 data; okay, that becomes SRID 2. You might then come along with more NAD83 data, and you'd imagine this layer would be loaded and assigned the previously mentioned SRID 1. But that happens only if EVERY gritty detail matches. As mentioned, any difference in offsets, scale, or extent will cause ArcSDE to say the coordinate system is new and assign it SRID 3. I just looked at one of my ST_SPATIAL_REFERENCES tables and I have 56 versions of GCS_North_American_1983. 🙂

Now, how do you keep SRID 1 on server A equal to SRID 1 on server B? Other than cloning the entire SDE schema, there is no way I know of to do this. As far as I know, you need to use ESRI tools (ArcCatalog, sdeimport, sdeexport, shp2sde, etc.) to properly move things back and forth amongst servers. They do the work of figuring out what SRID 1 on server A is equal to on server B, possibly creating a new SRID on server B if there is no match.

I've often thought that you could possibly define a universal coordinate system: cs, offset, scale, extents; whatever you chose would need to work for ALL your data universally. Then you could reassign that SRID from 1 to, say, 4269. You would then need to carefully police all your SDE data to fit those parameters (easier said than done), and do this on all your servers. Again, you'd need to be vigilant to always make sure data came into your system with the exact parameters preset to match up with your stable 4269. One degree of difference in the extent, or using an offset of -180 instead of -200, and you will instead get 4 or 37 or whatever is next in the sequence. Honestly, this seems like more trouble than it's worth. Anyone else have any thoughts?

As mentioned, I've always followed the party line on this one. I have five servers that I commonly move the same data back and forth on. There is nothing in common between any of them in terms of ST_GEOMETRY SRID values. In fact, recently we rebuilt the layers to use the ArcCatalog-ish -200 offsets and a smaller XY scale, and that created a whole new set of SRIDs. I don't think you can do much about it. Cheers, Paul
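To see how many near-duplicate coordinate systems an instance has minted, just read the table. The column list here is from memory, so DESCRIBE the table on your release first:

```sql
-- One row per coordinate-system "version" this ArcSDE instance has seen;
-- the 56 flavors of GCS_North_American_1983 would all show up here.
SELECT srid, sr_name, x_offset, y_offset, xyunits
  FROM sde.st_spatial_references
 ORDER BY srid;
```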
05-06-2010 02:37 PM
POST
Hi Humphriesg, I just tried it quickly on both 10gR2 and 11gR2 with 9.3.1, and the registration worked fine for me. What's the deal with the "-C ArcID"? Is that a legacy way to say "-C ID,SDE"? Interesting. The abstract data type error usually means that you have an unsupported field, or two geometry fields, or that ArcSDE is just confused; say, it could be searching for ST_GEOMETRY and finding SDO_GEOMETRY. But your "-t" parameter should make that clear. Sorry I can't help more, but there is definitely something weird on your end.

As an aside, you might want to consider standardizing your geometry column names to "shape" and your id column names to "objectid". You don't have to, but it seems to avoid some bugs (take NIM042583, for example). Cheers, Paul

p.s. Does anyone else really dislike this new forum? Any compelling reason why all the ArcSDE posts are lumped together? I only use Oracle and PostgreSQL with ArcSDE, so I never monitored the SQL Server posts. It was nice when things were broken up into separate groups. Now it's like a big jumble, and my motivation to sift through it is lessened.
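For comparison, a registration command of the shape I would expect to work; the layer, row-id column, and connection values are placeholders:

```
sdelayer -o register -l parcels,shape -e a -C objectid,SDE -t ST_GEOMETRY \
         -u loader -p xxxx -i esri_sde -s myserver
```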
04-26-2010 03:20 AM
Online Status: Offline
Date Last Visited: 11-11-2020 02:23 AM