It would be very rare for a full table scan query in a database to outperform a local flat file.
I've only seen that kind of performance from a DB2 database (which somehow returned 3.7M
point features [with ~1K of attributes] in under four seconds).
It is impossible to control the order in which a database returns rows without providing an
explicit ORDER BY clause. Generally, rows are presented in the order of the driving table or
index, but the optimizer and cache have free rein without an ORDER BY (which usually hurts
performance, since the data is copied to TEMP and sorted before being returned).
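A minimal sketch of this point, using Python's built-in sqlite3 (table and column names are illustrative, not from this thread): only an explicit ORDER BY guarantees presentation order; any order you see without one is an accident of the current plan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parcels (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO parcels VALUES (?, ?)",
                 [(3, "c"), (1, "a"), (2, "b")])

# No ORDER BY: the order happens to follow insertion here, but SQL makes
# no such promise -- an index, cache, or optimizer change can reorder it.
unordered = [row[0] for row in conn.execute("SELECT id FROM parcels")]

# Explicit ORDER BY: the only portable way to control return order, at the
# cost of a sort (often spilled to temp space on large tables).
ordered = [row[0] for row in conn.execute("SELECT id FROM parcels ORDER BY id")]
print(ordered)  # [1, 2, 3]
```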
Generally speaking, SDO_GEOMETRY will be slower than ST_GEOMETRY or SDELOB/SDEBINARY
on a full table scan query, simply because the Esri types use a compression algorithm that
reduces storage significantly -- fewer data pages == faster transfer.
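To illustrate the principle (fewer data pages means faster transfer) and not Esri's actual, proprietary algorithm: quantizing coordinates to integers and storing deltas makes geometry highly compressible compared with raw doubles.

```python
import struct
import zlib

# Toy coordinate stream; values are illustrative only.
coords = [(100.0 + i * 0.001, 50.0 + i * 0.001) for i in range(1000)]

# Raw storage: two 8-byte doubles per vertex.
raw = b"".join(struct.pack("<dd", x, y) for x, y in coords)

# Quantize to integer "millimeters", then delta-encode: the deltas are
# tiny and repetitive, so a generic compressor shrinks them sharply.
ints = [(round(x * 1000), round(y * 1000)) for x, y in coords]
deltas = [(x - px, y - py) for (x, y), (px, py) in zip(ints[1:], ints)]
packed = struct.pack("<ii", *ints[0]) + b"".join(
    struct.pack("<ii", dx, dy) for dx, dy in deltas)
compressed = zlib.compress(packed)
print(len(raw), len(compressed))  # compressed is a small fraction of raw
```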
Keep in mind that ArcSDE only returns the rows the database provides (in the order it provides
them, subject to omission for failure to meet spatial filter criteria), so nothing in ArcSDE tuning
can change SDO_GEOMETRY return order. You can only change the return order of Esri storage
types by specifying an SM_ENVP_BY_GRID search filter (and even that has not been reliable
in the last few releases). ArcSDE also has an optimizer that determines whether to query the
table directly or use the spatial index (the threshold is based on comparison of the envelope
of the search filter to the envelope of the layer, so changing the layer envelope can impact
whether an explicit spatial constraint is applied [ArcSDE will filter geometries in the result
stream either way]).
I usually go out of my way to load data in an order which will permit the fastest possible spatial
search performance, which involves exporting all rows in spatial index order and reloading them
so that spatial fragmentation is kept to a minimum.
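The reload-in-spatial-order idea can be sketched with a space-filling (Morton/Z-order) curve, which is one common way to keep spatially adjacent rows close together on disk; the exact order a spatial-index export yields differs in detail, and the feature tuples below are hypothetical.

```python
def morton_key(x: int, y: int) -> int:
    """Interleave the bits of two 16-bit grid coordinates."""
    key = 0
    for bit in range(16):
        key |= ((x >> bit) & 1) << (2 * bit)      # x bits -> even positions
        key |= ((y >> bit) & 1) << (2 * bit + 1)  # y bits -> odd positions
    return key

# Toy features: (id, grid_x, grid_y). Reload them sorted by Morton key so
# that nearby geometries land on nearby pages (minimal fragmentation).
features = [(1, 5, 9), (2, 5, 8), (3, 100, 3), (4, 6, 9)]
load_order = sorted(features, key=lambda f: morton_key(f[1], f[2]))
print([f[0] for f in load_order])  # → [2, 1, 4, 3]
```

Note how the three neighboring features (ids 1, 2, 4) cluster together in the load order, while the distant feature (id 3) sorts last.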
- V
How large is the shapefile (storage of .shp/.shx/.dbf combined)?
How large is the table (including the related LOB table for SDO_GEOMETRY storage)?
Databases have a *lot* more overhead to implement ACID (atomicity, consistency, isolation,
durability) on each query, while a flat file has none. This is why flat files are very nearly
always faster on a full table scan. Note that this is not an "ArcSDE performance" issue --
it's a *database* performance issue.
If you want to improve database performance, you can follow this simple rule:
Never draw all objects in large tables
There are many ways to implement this: You can make the table thinner by generalizing
the geometries of objects with many vertices. You can make the table shorter by unioning
rows by attribute. You can avoid querying the table by setting a scale dependency in the
client application. Or some combination of two or three.
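The "make the table thinner" option can be sketched with Douglas-Peucker line simplification, a generic generalization algorithm (not a specific tool from this thread): vertices within a tolerance of the simplified line are dropped, shrinking each geometry.

```python
import math

def _perp_dist(pt, a, b):
    """Distance from pt to the infinite line through a and b."""
    (px, py), (ax, ay), (bx, by) = pt, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def simplify(points, tol):
    """Douglas-Peucker: keep only vertices farther than tol from the chord."""
    if len(points) < 3:
        return points
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= tol:  # every vertex close enough: keep only the endpoints
        return [points[0], points[-1]]
    left = simplify(points[: idx + 1], tol)
    return left[:-1] + simplify(points[idx:], tol)

# A nearly straight 5-vertex line collapses to 2 vertices at tolerance 0.1.
line = [(0, 0), (1, 0.05), (2, -0.04), (3, 0.03), (4, 0)]
print(simplify(line, 0.1))  # → [(0, 0), (4, 0)]
```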
Since the data is basemap information you have additional options, including storing the
data in a different storage format, and using a map cache to avoid repeated rendering.
- V
Same problem. I tested the same table in PostGIS as well; here are my results:
ArcSDE: 25 seconds
Shapefile: 5 seconds
PostGIS: 5 seconds
File Geodatabase: 25 seconds
Seems like a "problem" with the Geodatabase.
I would create a new question in the https://community.esri.com/groups/geodatabase?sr=search&searchId=051cc68d-7216-4718-a424-42a740fe3d2... space for this, since this original thread is about 6 years old.
I would provide all the relevant information: client, geodatabase, and RDBMS versions, how large the feature class you are testing is, what geometry type is being used, whether the data is versioned, etc.