We have a number of MS SQL Server instances containing enterprise geodatabases. Our organization is spread across the country, while the servers are hosted at a single site. We are experiencing very slow initial connections to our databases through ArcGIS, and the time it takes increases with distance from the servers. Users connect through direct connect.
However, once a connection to a database is established, everything works fast. It seems it is the initial "handshake" that takes the time.
If I use a plain SQL Server database with feature classes (not an enterprise geodatabase), the initial connection is very fast for everyone.
Is there something we can do about this, or is this just how enterprise geodatabases work?
This is a big topic, and you haven't really provided much information. For example: which OS, which version of SQL Server, which version of ArcGIS Desktop, and which version of the enterprise geodatabase?
There is more I could ask, but let's start with the high-level stuff first.
I will try to fill in:
- OS: Windows Server 2012
- SQL Server: 2014 (12.2.5000)
- ArcGIS Desktop: 10.5.0 (same result with 10.3 through 10.5.1)
- Enterprise geodatabase: 10.5.0
- Everything runs on VMware virtual servers.
- Example large database: 162 feature classes.
It takes about 16 seconds for me to connect to that database, and I'm sitting about 150 km from the server. Users in our most distant offices are about 1000 km from the servers; it takes them about 60 seconds to connect to the same database.
In the test we open a new MXD and import a single .lyr file that points to a feature class in that database.
I have read about that, but I don't think that's the problem. We see the same connection time on brand-new databases. Also, the connection time depends heavily on how far you are from the server.
It isn't the distance, per se, that is the issue, but the fact that increases in distance come with increases in latency, even at the lowest levels of the network. Connections to enterprise geodatabases (SDE/ArcSDE), especially for browsing data, are quite chatty. Run a database connection trace on a simple connection while browsing one of your enterprise geodatabases and you will see how many commands are executed. Chatty connections are more sensitive to network latency, and even increases of tens of milliseconds can add to the connection time for browsing enterprise geodatabases.
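To see why chattiness amplifies latency, here is a rough back-of-the-envelope model in plain Python. All numbers are hypothetical (the round-trip count and per-command server cost are illustrative assumptions, not measurements from any real geodatabase): if a connection issues a few hundred sequential commands, every extra millisecond of latency is paid once per round trip.

```python
def estimated_connect_time(round_trips, latency_ms, per_command_server_ms=1.0):
    """Crude model: total time = sequential round trips x (network latency + server work)."""
    return round_trips * (latency_ms + per_command_server_ms) / 1000.0

# Hypothetical: ~400 sequential commands during a connect/browse.
for latency in (2, 10, 30):
    print("{} ms latency -> {:.1f} s".format(latency, estimated_connect_time(400, latency)))
```

Under these assumed numbers, going from a 2 ms to a 30 ms round trip turns a ~1 second connection into a ~12 second one, which matches the pattern of connection times growing with distance even though bandwidth is ample.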
Beyond network latency, the size of the geodatabase can impact connection times for browsing data. In the organization I work for, a 60-second connection time for browsing enterprise geodatabases is quite common, even considered good for some geodatabases, and that is from a virtualized client in the same data center as the geodatabases.
My organization has addressed this in a few different ways. Some groups moved to web services and now consume and edit enterprise geodatabase data through feature and geodata services. Other groups have users load data through layer files (sourced to the enterprise geodatabase data) instead of browsing the enterprise geodatabases directly.
Did you ever get this fixed? We have the same issue with version 10.7.1.
We enabled logging to see what happens when connecting from ArcMap, which produced the file sde_client_clnt.
What I seem to see is that it makes the same calls multiple times even though we only have one layer in the MXD. For example, the commands below occur 15 times with identical results just to connect to one layer. I don't know whether that is normal, or whether there is any way to fix it. Any help would be appreciated.
[W 11:46:02.867] Command: TableGetColumnList
[W 11:46:02.867] NString: "MYDB.dbo.GDB_Items"
[R 11:46:02.867] Long: 0
[R 11:46:02.867] Short: 17
[R 11:46:02.868] Regist_List: 
As suggested above, removing the geoprocessing (GP) history from the database metadata worked wonders for me.
We also have this issue. We're currently running 10.7.1. The initial connection to the MSSQLSERVER DB (LGIM schema) takes 12 seconds. After that, opening a feature dataset takes an additional 7 seconds. It isn't a show stopper, but the problem didn't exist in the 10.3 era. I've used UNC and DNS in my connections and there was no difference. I also tried compressing and rebuilding indexes. That was interesting because the initial connection actually took 2 seconds longer but opening a feature dataset took 2 seconds less, so it was a wash.
QGIS connects relatively quickly and opens each dataset from the get-go in about the same amount of time Arc takes to make the initial connection. SSMS opens the database in under a second. I suspect it isn't a database issue; it is something that was introduced in later Arc versions (maybe at 10.5). A look at the release notes between 10.3 and 10.5 might provide some insight. IDK.
Further testing showed that if you recursively list all feature classes in the database (regardless of whether or not they reside in a feature dataset) using arcpy it takes 41 seconds. So apples to apples, QGIS is connecting to these twice as quickly and SSMS is exponentially faster. I'd be kind of interested in seeing a comparison between 10.3 and 10.7. I might be curious enough to try it. I'll post results here if I do. The takeaway for me is that it's looking more and more like Arc is responsible for the delay.
Here's a script to test for yourself. The first time you run it, it will take a while. Run it again right after that; it will run about twice as fast because the initial handshake has already been made. At the very least, you'll be able to quantify the initial connection lag for your specific DB so you'll have a baseline.
import os, time
import arcpy

# Path to your .sde connection file; expand %AppData% so arcpy gets a real path
gdb = os.path.expandvars(r'%AppData%\ESRI\Desktop10.7\ArcCatalog\yourconnection.sde')

start = time.time()
arcpy.env.workspace = gdb

# Search the geodatabase root ('') plus every feature dataset
datasets = arcpy.ListDatasets(feature_type='feature')
datasets = [''] + datasets if datasets is not None else ['']

for ds in datasets:
    for fc in arcpy.ListFeatureClasses(feature_dataset=ds):
        path = os.path.join(arcpy.env.workspace, ds, fc)

end = time.time()
elapsed = end - start
print('Elapsed: {:.1f} seconds'.format(elapsed))