POST | 06-26-2014 05:33 AM

"Direct Connect is certainly your easiest option, but you can always try reading the documentation on how to connect using an application server. In the future, please start a new thread when you want to ask a question unrelated to the previous topic. - V"

Thanks for that. Suggestion also noted.

POST | 06-24-2014 01:59 AM

Hi all, I am a bit lost. I have an SDE 10.0 database on MS SQL Server 2008 with the application service enabled. Can I connect to it via the service from ArcGIS 10.2.1? If so, how? I am not interested in a direct connection at this time.

POST | 11-21-2013 06:02 AM

"Christopher: How about opening the connection to SQL Server and then seeing if the version manager appears? I had a similar issue with Oracle SDE connections that I copied from v10.0 to v10.2. I was unable to see some of the connection menus in v10.2 until I opened the SDE connection."

What do you mean by 'opening the connection to SQL Server and then seeing if the version manager appears'? How do I do that?

POST | 11-21-2013 05:40 AM

I have another issue with connecting Desktop 10.2 to SDE 10 on MS SQL Server: [ATTACH=CONFIG]29268[/ATTACH] Can I connect to SDE 10 from 10.2, or do I need to upgrade SDE first?

POST | 05-15-2013 08:55 AM

Hi all, I am not sure what will happen if I do something in a replicated SDE database. This is what I have now:
- SDE1 contains FC1 and FC2.
- SDE1 has a replica (SDE1 is the parent DB) that is synched one way to SDE2 (SDE1 > SDE2).
- FC1 and FC2 get edits in SDE1, and after editing, SDE1 is compressed.
- After that, right-clicking SDE1 and selecting 'Synchronize...' synchs SDE1 to SDE2, and the edits in FC1 and FC2 are now in the corresponding FCs in SDE2.

My question is: what will happen if I completely delete FC1 in SDE1, re-import another FC1 with the same schema, and register it as versioned? If I synch using the above routine, will the changes from the (new) FC1 propagate to FC1 in SDE2?

POST | 04-10-2013 01:14 AM

"Is your networking 128 kiloBITS/sec or 128 kiloBYTES/sec? Frequency of update is the other big factor. It's certainly possible to do change detection as an application solution, but doing ... How much time do you have to dedicate to getting this solution working? - V"

Hi, the connection speed is in kiloBYTES, with a Y. Actually I have seen it even faster (~2 MBytes/s, but I assume that is just a peak/burst), so let's consider it slow. I have also seen very fast networks where a data import into SDE still takes far more time than simply copying the same amount of data over in Windows; that is understood, though. I don't think I will need to update (and replicate) the data more than once a week, and if I run the copying process from SDEDB1 to SDEDB2 overnight, I expect it to be finished by morning. There is another consideration here: there are some relationship classes in SDEDB1, and I can't seem to copy relationship classes between two databases. Is that right? I am not quite clear on your idea about change detection; could you elaborate? How do you implement it (Python)? All the tables are fairly small, so if I understand what you are saying I might give it a try, regardless of timing.

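Since the post above asks how change detection might be implemented in Python, here is a minimal sketch of one possible approach (not necessarily what V had in mind): hash each row's attributes and geometry in both databases and report the rows that differ. The connection file paths, feature class name, and field list (including the GLOBALID key) are hypothetical.

    import hashlib
    import arcpy

    # Hypothetical connection files and feature class name
    SRC = r"C:\connections\SDEDB1.sde\FC1"
    DST = r"C:\connections\SDEDB2.sde\FC1"
    FIELDS = ["GLOBALID", "SHAPE@WKT", "NAME", "CODE"]  # key field first

    def row_hashes(fc):
        """Map each feature's key to an MD5 of its attributes and geometry."""
        hashes = {}
        with arcpy.da.SearchCursor(fc, FIELDS) as cursor:
            for row in cursor:
                digest = hashlib.md5("|".join(str(v) for v in row[1:]).encode("utf-8")).hexdigest()
                hashes[row[0]] = digest
        return hashes

    src, dst = row_hashes(SRC), row_hashes(DST)
    changed = [key for key, h in src.items() if dst.get(key) != h]
    print("{} of {} rows differ and would need to be copied".format(len(changed), len(src)))

Only the differing keys would then be copied or updated on the target side; for tables as small as these, the full comparison should run quickly even over a slow link.
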
POST | 04-09-2013 03:15 AM

"ArcGIS replication uses versioning. If your layer gets completely replaced with each update, you should consider using a different update methodology. Then again, 100 features is a trivial number, so the bandwidth and processing cost is negligible. I would recommend that if you want to use replication, you use it per design (edit the existing table), rather than breaking the replication by dropping and re-adding the feature class. - V"

Thanks, I know replication uses versioning; I didn't mention it in the example to keep it simple. For the same reason, the example is 1 FC with 100 features. In reality it will be around 100 FCs totaling some 500 MB of data, so bandwidth might be an issue (the link speed is ~128 KBytes/s). Just out of interest, is there something that will merge the new FC1 into the old FC1, passing the changes from the new FC1 to the old one so the old FC1 can keep being replicated? The only other option I see is scripted copying of FC1 from SDEDB1 to SDEDB2. Am I missing some options?

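For the "scripted copying of FC1 from SDEDB1 to SDEDB2" option mentioned above, one minimal sketch (not necessarily the best methodology) is to empty the target and reload it from the source, so the target feature class is never dropped. The connection paths are hypothetical, and the target is assumed to be unversioned, since Truncate Table does not work on versioned data.

    import arcpy

    # Hypothetical connection files; the target FC is assumed to be unversioned
    src_fc = r"C:\connections\SDEDB1.sde\FC1"
    dst_fc = r"C:\connections\SDEDB2.sde\FC1"

    # Empty the target, then reload it with the current contents of the source
    arcpy.TruncateTable_management(dst_fc)
    arcpy.Append_management(src_fc, dst_fc, schema_type="NO_TEST")
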
POST | 04-09-2013 01:30 AM

Hi all, we have a practical problem here with replication/synchronization of some SDE databases. Here is what we have: Office1 has SDEDB1 with FC1 (100 features); this is the parent DB, running on Oracle. Office2 has SDEDB2 with no FCs to start with. Office 1 and Office 2 are in different countries but have a (slow) LAN connection between them. FC1 in SDEDB1 does not get edited, i.e. we do not change the number of features in it or edit feature attributes; from time to time we receive a completely new FC1. Here are some questions:
1. If FC1 participates in a replica, then after the initial synch between SDEDB1 and SDEDB2, SDEDB2 should have the same FC1 in it. Is this the case?
2. If we delete FC1 from SDEDB1, import another, new FC1, and resynchronize, what will be sent to SDEDB2?
3. What is the best way to keep SDEDB1 and SDEDB2 in synch when we substitute FCs rather than edit them?
Thanks.

POST | 03-27-2013 02:31 PM

"Hi Vasil, This is currently not supported through arcpy.mapping. I would suggest voting this idea up, as it addresses what you are looking for. Best, Melanie S."

Thanks for that, Melanie. I wasn't specifically asking about arcpy.mapping; my question was more general. Can this be done using ArcObjects and C#?

POST | 03-27-2013 08:38 AM

Hi, I have a style file with a few hundred items in it (points, lines and polygons). All my data have a code field, and each feature has a code that links to an item in the style file. Normally I apply the styles by clicking Properties > Symbology > Categories > Match to symbols in a style, selecting the style file, selecting the code field, and pressing Match Symbols. It all works great for a small number of TOC items. My question: is it possible to create a tool (a toolbar button) that does the above automatically for all TOC layers, or, even better, does it automatically when new data is added to the TOC? Any suggestions welcome.

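As a hedged alternative (the reply above notes the match-to-style operation itself is not exposed through arcpy.mapping), one sketch is to save the matched symbology once as a .lyr template and have a script copy that symbology onto every feature layer in the TOC with arcpy.mapping.UpdateLayer. The mxd path and template .lyr path are hypothetical, and this assumes the template's geometry type and category values fit the layers it is applied to.

    import arcpy

    # Hypothetical paths: a map document and a layer file saved with the
    # categorized-by-code symbology already matched to the style items
    mxd = arcpy.mapping.MapDocument(r"C:\maps\project.mxd")
    template = arcpy.mapping.Layer(r"C:\styles\code_symbology.lyr")

    df = arcpy.mapping.ListDataFrames(mxd)[0]
    for lyr in arcpy.mapping.ListLayers(mxd, "", df):
        if lyr.isFeatureLayer:
            # symbology_only=True keeps the layer's data source and only copies the symbols
            arcpy.mapping.UpdateLayer(df, lyr, template, symbology_only=True)

    arcpy.RefreshTOC()
    arcpy.RefreshActiveView()
    mxd.save()
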
POST | 01-18-2013 01:06 AM

Hi all, I have a script that adds a layer file to an MXD, turns labels on, and should label features with the first 10 characters of an attribute. The labeling part is:

    for i in ll.labelClasses:
        i.expression = "[FC_NAME].substring(0, 10)"

This code sets the expression OK; I can see it in the layer's label properties in the TOC. The problem is that the parser is set to VBScript, while the above expression is JScript. How do I set the parser to JScript in my code?

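arcpy.mapping does not appear to expose the label parser setting, so one hedged workaround sketch is to leave the default VBScript parser in place and write the equivalent VBScript expression (Left instead of substring). The mxd path and layer name below are hypothetical.

    import arcpy

    mxd = arcpy.mapping.MapDocument(r"C:\maps\project.mxd")   # hypothetical path
    layer = arcpy.mapping.ListLayers(mxd, "FC_layer")[0]      # hypothetical layer name

    for lc in layer.labelClasses:
        # VBScript equivalent of the JScript [FC_NAME].substring(0, 10)
        lc.expression = "Left([FC_NAME], 10)"

    layer.showLabels = True
    arcpy.RefreshActiveView()
    mxd.save()
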
POST | 11-19-2012 01:54 AM

Hi all, I have a similar issue to this one. I will need to upgrade both SDE and Oracle: from SDE 10 to SDE 10.1, and from Oracle 10g (10.0.2...) to Oracle 11g R2. Is there a step-by-step instruction on which gets upgraded first, or another topic covering that scenario?

POST | 09-03-2012 07:42 AM

Hi all, I have a folder with shapefiles and GeoTIFFs. What I want is to create a new shapefile and insert all the other files' extent envelopes into it as features. Using Python I can get the extent of each file, and I can create a shapefile and add fields to it. The question is: how do I create and add the geometry of each feature to the new shapefile? Ideas appreciated.

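A minimal sketch of the missing step, assuming ArcGIS 10.1+ (for arcpy.da), a hypothetical folder path, and that all datasets share one coordinate system: build a polygon from the four corners of each dataset's extent and write it with an InsertCursor.

    import os
    import arcpy

    folder = r"C:\data\incoming"          # hypothetical folder
    sr = arcpy.SpatialReference(4326)     # assumed common coordinate system
    arcpy.CreateFeatureclass_management(folder, "extents.shp", "POLYGON",
                                        spatial_reference=sr)
    out_fc = os.path.join(folder, "extents.shp")
    arcpy.AddField_management(out_fc, "SRC_NAME", "TEXT", field_length=100)

    with arcpy.da.InsertCursor(out_fc, ["SHAPE@", "SRC_NAME"]) as cursor:
        for name in os.listdir(folder):
            if name.lower().endswith((".shp", ".tif")):
                ext = arcpy.Describe(os.path.join(folder, name)).extent
                # Close the ring by repeating the first corner
                corners = arcpy.Array([ext.lowerLeft, ext.lowerRight, ext.upperRight,
                                       ext.upperLeft, ext.lowerLeft])
                cursor.insertRow([arcpy.Polygon(corners, sr), name])
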
POST | 07-04-2011 06:41 AM

"How large is the shapefile (storage of .shp/.shx/.dbf combined)? How large is the table (including the related LOB table for SDO_GEOMETRY storage)? Databases have a *lot* more overhead to implement ACID (atomicity, consistency, isolation, durability) on each query, while a flat file has none. This is why flat files are very nearly always faster on a full table scan. Note that this is not an 'ArcSDE performance' issue -- it's a *database* performance issue. If you want to improve database performance, you can follow this simple rule: never draw all objects in large tables. There are many ways to implement this: you can make the table thinner by generalizing the geometries of objects with many vertices, you can make the table shorter by unioning rows by attribute, you can avoid querying the table by setting a scale dependency in the client application, or some combination of two or three. Since the data is basemap information, you have additional options, including storing the data in a different storage format and using a map cache to avoid repeated rendering. - V"

Thanks a lot. The combined size of the shapefile is 190 MB; the size of the table in SDE is 130 MB. I can see only one table for the FC, since the SHAPE field is SDO_GEOMETRY. Is there supposed to be another LOB table for SDO_GEOMETRY? If so, how can I find it? This data represents a very detailed world coastline, so I guess all your suggestions for making the table lighter will work. In fact I set scale dependencies last Friday 🙂

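A hedged sketch of the "thinner and shorter" suggestions quoted above, using standard geoprocessing tools (Dissolve to union rows by attribute, Simplify Polygon to generalize the coastline geometries): the paths, dissolve field, and tolerance are hypothetical, polygon geometry is assumed (Simplify Line would be the polyline counterpart), and the right tolerance depends on the display scales and license level available.

    import arcpy

    src = r"C:\connections\SDE.sde\COASTLINE"               # hypothetical source FC
    dissolved = r"in_memory\coast_dissolved"
    generalized = r"C:\connections\SDE.sde\COASTLINE_GEN"   # hypothetical output FC

    # "Shorter": union rows that share the same attribute code
    arcpy.Dissolve_management(src, dissolved, dissolve_field="CODE")

    # "Thinner": generalize the geometries with a tolerance chosen for basemap scales
    # (Simplify Polygon may require a higher license level)
    arcpy.SimplifyPolygon_cartography(dissolved, generalized,
                                      algorithm="POINT_REMOVE",
                                      tolerance="100 Meters")
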
POST | 07-04-2011 02:27 AM

"It would be very rare for a full table scan query in a database to outperform a local flat file. I've only seen that kind of performance from a DB2 database (which somehow returned 3.7M point features [with ~1K of attributes] in under four seconds). It is impossible to control order of presentation out of databases without providing an explicit ORDER BY clause. Generally, they will present the data in the order of the driving table or index, but the optimizer and cache have free will without an ORDER BY (which usually slows down performance, since the data is copied to TEMP and sorted before return). Generally speaking, SDO_GEOMETRY will be slower than ST_GEOMETRY or SDELOB/SDEBINARY on a full table scan query, simply because the Esri types use a compression algorithm that reduces storage significantly -- fewer data pages == faster transfer. Keep in mind that ArcSDE only returns the rows the database provides (in the order it provides them, subject to omission for failure to meet spatial filter criteria), so nothing in ArcSDE tuning can change SDO_GEOMETRY return order. You can only change the return order of Esri storage types by specifying an SM_ENVP_BY_GRID search filter (and even that has not been reliable the last few releases). ArcSDE also has an optimizer that determines whether to query the table directly or use the spatial index (the threshold is based on comparison of the envelope of the search filter to the envelope of the layer, so changing the layer envelope can impact whether an explicit spatial constraint is applied [ArcSDE will filter geometries in the result stream either way]). I usually go out of my way to load data in an order which will permit the fastest possible spatial search performance, which involves exporting all rows in spatial index order and reloading them so that spatial fragmentation is kept to a minimum. - V"

Thanks Vince. I just checked, and there are even fewer than 300,000 records in the table; it has 219,000. I am really puzzled that SDE cannot return that many rows as quickly as a shapefile does. By the way, the shapefile was on the network, so it wasn't local, but it still outperformed SDE by a lot. Since this is a global layer used mainly as a background, waiting 40 seconds every time you refresh the map is annoying. Is there any way at all I can remedy this? And what is the deal with spatial query views?

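To illustrate the "explicit ORDER BY" point quoted above, here is a minimal sketch of asking the database for a deterministic return order through arcpy (10.1+); the feature class path and field names are hypothetical, and, as the quote notes, the sort itself usually costs extra time rather than saving it.

    import arcpy

    fc = r"C:\connections\SDE.sde\COASTLINE"   # hypothetical feature class

    # sql_clause=(prefix, postfix): the postfix is appended after the WHERE clause,
    # so the database, not the client, performs the ordering
    with arcpy.da.SearchCursor(fc, ["OBJECTID", "SHAPE@"],
                               sql_clause=(None, "ORDER BY OBJECTID")) as cursor:
        for oid, shape in cursor:
            pass  # rows now arrive in OBJECTID order
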