"Draw all tables at maximum extent" isn't the fairest of benchmarks available -- It generates
the maximum possible load on the database server, which biases the test toward flat files.
How fast is the network between the client workstation and the Linux server? If it's slower
than gigabit speed (or is burdened with load), then this is another bias toward flat files.
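A dedicated tool like iperf is the usual way to measure link throughput, but as a minimal self-contained sketch, the test below times a bulk TCP transfer in Python. It runs against loopback here; pointing the client at the ArcSDE host (a stand-in address, not anything from this thread) would measure the actual client-server link.

```python
import socket
import threading
import time

PAYLOAD = b"x" * (1 << 20)  # 1 MiB per send
TOTAL_MB = 32               # total data to transfer

def serve(listener):
    # Accept one connection and stream TOTAL_MB mebibytes to it.
    conn, _ = listener.accept()
    with conn:
        for _ in range(TOTAL_MB):
            conn.sendall(PAYLOAD)

# Listen on an ephemeral loopback port; replace 127.0.0.1 with the
# database server's address to test the real network path.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
host, port = server.getsockname()
threading.Thread(target=serve, args=(server,), daemon=True).start()

client = socket.socket()
client.connect((host, port))
received = 0
start = time.perf_counter()
while received < TOTAL_MB * (1 << 20):
    chunk = client.recv(1 << 16)
    if not chunk:
        break
    received += len(chunk)
elapsed = time.perf_counter() - start
client.close()

print(f"{received / (1 << 20) / elapsed:.1f} MB/s")
```

Gigabit Ethernet tops out around 119 MB/s on the wire; if the measured number between client and server is well below that, the network is part of the draw-time gap.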
I wouldn't ever expect local flat files to have a longer draw time than an ArcSDE server
in a single-user test, but if you had patched your server and client to at least 9.3sp1,
you'd probably see better ArcSDE performance. 9.3.1sp2 is closer to current, and
would likely give optimal 9.3 performance.
How many feature datasets do you have in the server? ArcGIS 9.x connect time is closely
bound to the number of feature datasets.
Has any tuning ever been performed on the ArcSDE server?
Was any tuning ever done during layer creation to optimize the coordinate references?
- V