POST
If you could run:

select count(distinct transtech) from tmp_union_driver;
select count(distinct maxaddown) from tmp_union_driver;
select count(distinct maxadup) from tmp_union_driver;
select count(distinct hoconum) from tmp_union_driver;
select count(distinct hoconame) from tmp_union_driver;

I could simulate a rough approximation of your dataset (uniform distribution across all columns). If you can zip up an ASCII dump of tmp_union_driver and have it added to the incident, I can generate a random spatial component for the actual data distribution. - V
02-24-2011 12:01 PM

POST
OK, so 300k vertices is quite large (as an average) and 16 million vertices (434k *parts*) is likely to break things (including the 500k 2-D vertex limit for SDO_GEOMETRY). It would probably be worthwhile to break out how many attribute combinations exceed num_rows of 14169, since you'll need to handle them delicately. - V
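For context, the 16 million figure presumably comes from the 37-vertex estimate (434,000 parts x 37 is roughly 16.1 million vertices), and 14169 appears to be the largest num_rows that keeps 37-vertex circles under the 2-D ordinate ceiling (524,288 / 37 is about 14,170). A minimal sketch of a query to flag the heavy combinations, assuming the tmp_union_driver table built elsewhere in the thread; the connection string is a placeholder:

```python
# Sketch (not from the original thread): list the attribute combinations whose
# unioned buffers would exceed the 2-D vertex ceiling at 37 vertices per circle.
import cx_Oracle

conn = cx_Oracle.connect("user/password@orcl")  # placeholder credentials/TNS alias
cur = conn.cursor()
cur.execute("""
    SELECT transtech, maxaddown, maxadup, hoconum, hoconame,
           num_rows, num_rows * 37 AS est_vertices
    FROM tmp_union_driver
    WHERE num_rows > 14169
    ORDER BY num_rows DESC""")
for row in cur:
    print(row)
cur.close()
conn.close()
```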
02-24-2011 11:48 AM

POST
How many distinct values are in each of your aggregation columns? How many rows are in the tmp_union_driver table? I could generate a random dataset to reproduce this task. If you file an incident on the failure of dissolve, I can give your TS analyst a couple of tables to use. - V
02-24-2011 10:59 AM

POST
You'd probably want to try it in both environments. I started writing some accumulator capabilities into command-line tools, but got distracted by something for a client and never got back to it. It would probably be under 150 lines of code to just write some Python that loops over the driver table, queries the big table by each combination of attributes, collects the single-part points in a nested loop, and then exports the attributes with a buffer of the point array. - V
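A minimal sketch of that loop, assuming cx_Oracle plus shapely for the buffering. The source point table (address_extract_fc_ri), its SHAPE column, the buffer distance, the output file, and the connection string are all placeholders, not details from the thread:

```python
# Sketch only: table/column names, DSN, and buffer distance are placeholders.
import csv
import cx_Oracle
from shapely.geometry import MultiPoint

KEY_COLS = ["transtech", "maxaddown", "maxadup", "hoconum", "hoconame"]
BUFFER_DIST = 200.0  # placeholder buffer distance, in the layer's linear unit

conn = cx_Oracle.connect("user/password@orcl")  # placeholder credentials/TNS alias
cur = conn.cursor()

with open("dissolved_buffers.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(KEY_COLS + ["wkt"])

    # Outer loop: one row per attribute combination in the driver table
    cur.execute("SELECT transtech, maxaddown, maxadup, hoconum, hoconame "
                "FROM tmp_union_driver")
    for key in cur.fetchall():
        # Inner query: collect the single-part points for this combination
        # (assumes the point geometries are stored in the SDO_POINT attribute)
        inner = conn.cursor()
        inner.execute(
            "SELECT a.shape.sdo_point.x, a.shape.sdo_point.y "
            "FROM address_extract_fc_ri a "
            "WHERE a.transtech = :1 AND a.maxaddown = :2 AND a.maxadup = :3 "
            "AND a.hoconum = :4 AND a.hoconame = :5",
            key)
        pts = inner.fetchall()
        inner.close()
        if not pts:
            continue
        # Buffer the whole point array at once and export the attributes + WKT
        geom = MultiPoint(pts).buffer(BUFFER_DIST)
        writer.writerow(list(key) + [geom.wkt])

cur.close()
conn.close()
```

Because the dissolve happens client-side, the unioned shape never has to exist as a single SDO_GEOMETRY, so the ordinate ceiling isn't a factor.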
02-24-2011 10:38 AM

POST
120k vertices is a lot, but it's smaller than Russia in most 1:1m COUNTRIES tables. I wonder if you could try processing in the other direction, and union the points into multi-part shapes, then buffer the points. - V
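If you'd rather keep that work in the database, here is a hedged sketch of the union-then-buffer direction using SDO_AGGR_UNION and SDO_GEOM.SDO_BUFFER; the point table name, geometry column, tolerance, buffer distance, and connection string below are placeholders:

```python
# Sketch of the "other direction": have Oracle Spatial union each group's points
# into a multi-part point and buffer that, instead of buffering every address first.
import cx_Oracle

conn = cx_Oracle.connect("user/password@orcl")  # placeholder credentials/TNS alias
cur = conn.cursor()
cur.execute("""
    CREATE TABLE tmp_dissolved_buffers AS
    SELECT transtech, maxaddown, maxadup, hoconum, hoconame,
           SDO_GEOM.SDO_BUFFER(
               SDO_AGGR_UNION(SDOAGGRTYPE(a.shape, 0.005)),  -- union points per group
               200,                                          -- buffer distance (placeholder)
               0.005) AS shape                                -- tolerance (placeholder)
    FROM address_extract_fc_ri a
    GROUP BY transtech, maxaddown, maxadup, hoconum, hoconame""")
cur.close()
conn.close()
```

Note that this keeps everything server-side, so the aggregated geometries are still subject to the SDO_GEOMETRY vertex limits discussed elsewhere in the thread.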
02-24-2011 10:11 AM

POST
I was afraid there would be 361 vertices per circle. Should the circles with similar attributes overlap, or are you making multi-part features? You can estimate the worst-case vertex count by doing a:

CREATE TABLE tmp_union_driver AS
SELECT transtech, maxaddown, maxadup, hoconum, hoconame, count(*) num_rows
FROM address_extract_fc_buffer_ri
GROUP BY transtech, maxaddown, maxadup, hoconum, hoconame
ORDER BY transtech, maxaddown, maxadup, hoconum, hoconame;

then:

SELECT min(num_rows)*37 min_verts,
       avg(num_rows)*37 avg_verts,
       max(num_rows)*37 max_verts
FROM tmp_union_driver;

- V
02-24-2011 09:49 AM

POST
How many vertices were generated for each polygon? How many features (min/max/mean) are unioned by the compound key (and how many vertices exist in the unioned shapes)? Is this simply a too-many-vertices issue? - V
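One way to put numbers on those questions is SDO_UTIL.GETNUMVERTICES against the buffered feature class named elsewhere in the thread; a minimal sketch, where the SHAPE column name and the connection string are placeholders:

```python
# Sketch: min/avg/max vertex counts of the buffered polygons.
import cx_Oracle

conn = cx_Oracle.connect("user/password@orcl")  # placeholder credentials/TNS alias
cur = conn.cursor()
cur.execute("""
    SELECT MIN(SDO_UTIL.GETNUMVERTICES(a.shape)) min_verts,
           AVG(SDO_UTIL.GETNUMVERTICES(a.shape)) avg_verts,
           MAX(SDO_UTIL.GETNUMVERTICES(a.shape)) max_verts
    FROM address_extract_fc_buffer_ri a""")
print(cur.fetchone())
cur.close()
conn.close()
```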
02-24-2011 09:05 AM

POST
Oracle 10.2.0.1 is not supported by any ArcSDE release. The minimum Oracle 10g release for ArcSDE 10 is Oracle 10.2.0.3. Oracle XE is not officially supported, but others appear to have used it as recently as ArcSDE 9.2. - V
02-23-2011 02:29 AM

POST
I generally spend my time tuning data, not ArcSDE. The only parameter I've ever tuned that improved performance was to increase the transmit buffer size when working with a satellite networking solution (and it gave the appearance of slower response to the local network folks). If you want to see where ArcSDE is spending its time, you can enable the SDETRACE environment variables and run your connection test. Be warned: I did this with a 7500-table instance (1250 feature datasets) at 9.2sp4, and it took an hour to generate a 7Gb trace file. Parsing that trace was a major PITA. Be sure to disable the trace variables as soon as the test completes -- I managed to generate a 64Gb trace once (it would have been larger, but the C: drive ran out of space). - V
02-16-2011 03:48 PM

POST
Visibility is a property of the table registration, but it's bitmasked into object_flags in the registry, so 'sdetable -o describe_reg' is the easiest way to check for "Visibility: Hidden". - V
02-16-2011 03:24 PM

POST
It's possible that the tables contain datatypes which ArcGIS can't represent (like 64-bit integers) but which ArcSDE can. It's possible that the layer was deleted and then restored in isolation (without restoring the SDE user component references, which corrupts the geodatabase). It's possible that the GEOMETRY/GEOGRAPHY table was simply never registered with ArcSDE (but this doesn't sound like your issue). It's also possible that the table is reflected in the geodatabase dictionary and the ArcSDE table registry, but that the table is registered as HIDDEN (though the last time I tried to do this, it didn't work). The easiest way to hide a table is to not grant SELECT access to the user from whom you want it hidden. - V
02-16-2011 12:07 PM

POST
"Draw all tables at maximum extent" isn't the fairest of benchmarks available -- it generates the maximum possible load on the database server, which biases the test toward flat files. How fast is the network between the client workstation and the Linux server? If it's slower than gigabit speed (or is burdened with load), then this is another bias toward flat files. I wouldn't ever expect local flat files to have a longer draw time than an ArcSDE server in a single-user test, but if you had patched your server and client to at least 9.3sp1, you'd probably see better ArcSDE performance. 9.3.1sp2 is closer to current, and would likely give optimal 9.3 performance. How many feature datasets do you have in the server? ArcGIS 9.x connect time is closely bound to the number of feature datasets. Has any tuning ever been performed on the ArcSDE server? Was any tuning ever done during layer creation to optimize the coordinate references? - V
02-16-2011 04:14 AM

POST
That '-i' syntax is for a Direct Connect connection; if Direct Connect succeeds, it indicates that your application server (the giomgr daemon, started by 'sdemon -o start') is configured incorrectly. There isn't a whole lot of documentation on connections because there's not a lot of variation possible. All commands (and the two APIs) take five parameters (server, instance, database, user, password). The database parameter is mostly unused, since modern ArcSDE instances are either linked to a single database or the RDBMS doesn't support multiple databases (Oracle). The only real variation is for Direct Connect, where the server is ignored, the instance changes to an RDBMS-specific colon-delimited list (starting with "sde"), and the password may require extra parameters (on Oracle). At this point I'd urge you to contact Tech Support; they can help you establish the correct parameters in $SDEHOME/etc/dbinit.sde so that this error is not generated. - V
02-12-2011 01:19 PM

POST
I prefer to use the service SDEHOME for the actual upgrade (so the logs stay with the instance), but other than that, you've pretty much nailed it. Different sites have different situations, but if I'm retaining the custom SDEHOME path (vice tweaking the name to include the SPn), then I'll:

+ stop the service
+ rename the custom SDEHOME
+ copy the installation to the service SDEHOME path
+ copy the etc directory contents from the old SDEHOME
+ manually set SDEHOME to the new directory (under the old name)
+ prepend %SDEHOME%\bin to the PATH
+ set the ORACLE_SID/LOCAL variable
+ grant the necessary privs (or DBA) to SDE
+ execute 'sdesetup -o upgrade' (or, at 10.x, the Catalog/Python equivalent)
+ revoke the unnecessary privs from SDE
+ restart the service
+ if all has gone well, either zip and burn the old directory(ies) to CD-ROM, or just delete the old directory

If I choose to rename the service directory, I delete the service after stop and recreate it before start (renaming the directory is the only reason to drop the service). - V
02-10-2011 03:45 PM

POST
The "maximum" pyramid level occurs when the pixels used in the level 0 image are contained by one tile in the pyramid. Depending on the origin and dimensions of the image, there may actually be several tiles, and the AUTO pyramid option doesn't always generate a true "maximum". You could go higher than the maximum, but there's little computational benefit to doing so.

You can't control the scale of ArcSDE pyramid levels (it's based on a series which doubles the pixel size). The renderer is likely to do further interpolation on the image data to convert raster pixels to screen units anyway. You might choose a lower level if the imagery was configured to be scale dependent and you knew the tiles would never be used, but the storage savings this represents is usually trivial -- stopping at the fourth level (vice letting it go to 7 levels) only saves (1/1024 + 1/4096 + 1/16384) of the base level storage (roughly - compression efficiency changes by level depending on the interpolation algorithm).

Level selection by scale is dependent on the extent of the image, the tile size, and the pixel dimensions. I'd only choose a level explicitly if I had a custom 'C' app which was processing the data, or if AUTO chose a level which was impacting performance by being too small. The documentation does a good job of showing the details of how the pyramid construction process works. - V
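To put a number on "usually trivial", a quick back-of-the-envelope check (pixel counts only, ignoring the per-level compression differences noted above):

```python
# Each pyramid level holds 1/4**level of the level-0 pixel count, so skipping
# levels 5-7 saves 1/1024 + 1/4096 + 1/16384 of the base storage.
saved = sum(1.0 / 4 ** level for level in (5, 6, 7))
print(f"{saved:.6f} of level 0 storage (~{saved * 100:.2f}%)")  # 0.001282 (~0.13%)
```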
02-10-2011 05:45 AM