
File Geodatabase IOPS requirements on network storage server

02-29-2012 03:10 AM
JeffVan_Etten
Frequent Contributor
Was wondering if anyone has tested or benchmarked network storage solutions for file geodatabases used with ArcMap?  If so, do you know what the IOPS (I/O operations per second) requirements are? :confused:

I am looking at specifying a new server for storing file geodatabases, and the IT chap is asking about IOPS requirements so that he can scale the disk spindles, disk type (SSD, SATA, etc.) and RAID configuration for optimal usage.  There are no plans to use ArcGIS Server, so this is purely a file-based read/write setup.

If it helps, we use Spatial Analyst all the time (Viewshed, Contour and Map Algebra in particular).  I have read a few posts suggesting that file read/write is not usually the bottleneck, but I want to ensure that the server performs as well as it can to avoid any future problems!

Thanks for your help or experience.  If anyone has any setups that they are using and enjoying that would be great to hear too! 🙂

Regards
Jeff
5 Replies
AlexeyTereshenkov
Deactivated User
Hi Jeff,

Concerning the storage infrastructure, I would suggest you check the Enterprise GIS resources available at http://resources.arcgis.com/content/enterprisegis/10.0/infrastructure_storage, which give a brief summary of the different storage mechanisms relevant to the performance of your ArcGIS implementation.

On some server machines I've been using SCSI for quite some time, specifically the HP Smart Array P410i controller, which has worked really well so far.
http://h18004.www1.hp.com/products/servers/proliantstorage/arraycontrollers/smartarrayp410/index.htm...

I can't offer any specific IOPS figures myself, but Dell has done an internal RAID controller performance comparison which you may want to take a look at:
http://www.dell.com/downloads/global/products/pvaul/en/Dell6Gbps-vs-HP6Gbps.pdf
JeffVan_Etten
Frequent Contributor
Alex

Thanks for the information and some very useful links!  Why do ESRI make some of the most useful information so hard to find!

I'd still be interested in hearing from others about server setups that mainly use file geodatabases!

If I get enough responses, I'll compile a summary of what I've found at the end to help others.

Thanks again
Jeff
KennethEasterling
Deactivated User
Jeff, I am going through the exact same thing with our little file server here (looking to upgrade).  I'm one user, using an old Server 2003 machine (5 yrs old!) to host all my shapefiles, photos and geodatabases.

The specs on the server are a dual-core Xeon (1.8GHz) with 4GB of RAM.  Storage is a 4x500GB SATA RAID 5 array on a 256MB 3ware 9650SE card (with BBU).
Raw IOPS on this setup is about 300.  But in reality, with a 60/40 read/write split, I'm getting about 75 IOPS.  The other part of the equation is the network bottleneck: we are only going to get about 80-90MB/sec transfer rates between the server and the workstation over a 1Gb network link (lots of overhead in transfers).  (If you can talk your IT guy into a 10Gb NIC, you'd be off to the races.)
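For anyone wanting to sanity-check numbers like these, here's a rough sketch of the usual effective-IOPS arithmetic (the write penalties are textbook rule-of-thumb values, nothing I measured; the formula actually comes out more optimistic than the ~75 IOPS I see in practice, which is why watching the real counters matters):

```python
# Textbook effective-IOPS estimate for a RAID array under a mixed workload.
# Write penalties are rule-of-thumb values (RAID 0 = 1, RAID 1/10 = 2,
# RAID 5 = 4, RAID 6 = 6); controller cache shifts the real numbers.

RAID_WRITE_PENALTY = {"raid0": 1, "raid1": 2, "raid10": 2, "raid5": 4, "raid6": 6}

def effective_iops(raw_iops, read_fraction, raid_level):
    """IOPS the host actually sees once the RAID write penalty is applied."""
    write_fraction = 1.0 - read_fraction
    penalty = RAID_WRITE_PENALTY[raid_level]
    return raw_iops / (read_fraction + write_fraction * penalty)

# 4 x 500GB SATA drives at roughly 75 IOPS each = ~300 raw IOPS,
# with a 60/40 read/write split on RAID 5:
print(round(effective_iops(4 * 75, 0.60, "raid5")))  # ~136 host-visible IOPS
```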

My workstation is a new Xeon E3 3.2GHz with 8GB of RAM and an SSD OS drive, so it's always waiting on the server.

If I am reading the performance monitor on the server correctly, IOPS reach 780 when I open a map file with aerials, contours (shapefiles), parcel data and various other pieces of watershed data.  Max transfer hits about 55MB/sec with a project open.  So ask your IT guy to watch the performance monitor on the server while you open up a project and run various tools.  The PhysicalDisk\Disk Transfers/sec counter and the Avg. Disk Bytes/Transfer counter should give him an idea of what you need.  I think you can run the same monitor with the files on your workstation, so you can get an idea there too.
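If your IT guy would rather log the counters than watch the GUI, a quick sketch like this one (shelling out to Windows' built-in typeperf tool; the counter paths assume English-locale names) samples the same two counters once a second for a minute:

```python
# Sample the two perfmon counters above once a second for a minute,
# using Windows' built-in typeperf.exe.  Run it while opening a project
# in ArcMap; counter paths assume an English-locale Windows install.
import subprocess

COUNTERS = [
    r"\PhysicalDisk(_Total)\Disk Transfers/sec",      # = IOPS
    r"\PhysicalDisk(_Total)\Avg. Disk Bytes/Transfer",
]

subprocess.run(
    ["typeperf", *COUNTERS, "-si", "1", "-sc", "60"],  # 1s interval, 60 samples
    check=True,
)
```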

Ken
JeffVan_Etten
Frequent Contributor
Ken

Great input, and thanks for sharing!  When you say your workstation is always waiting on the server, do you know where the bottleneck is?  Disk access or network?

What tools are you using to monitor the server/workstation?  Is it just the Microsoft Resource Monitor or something else?

Thanks again!

Jeff
KennethEasterling
Deactivated User
I'm thinking the bottleneck is the disks on my server.  The server can hold a steady 200-300 IOPS when querying shapefiles.  It bursts to 700-800 on file opens (I'm thinking it's the cache on the RAID card that's helping me there).  Max file transfer speeds are about 80-90MB/sec.

When I ran the performance monitor, "perfmon.exe" (it's available on the server and on Windows Professional), on my workstation, I copied some files locally (onto my SSD drive, SATA III) and did a lot of the same opens and queries.  What I saw was that when I opened ArcMap I was hitting about 1700 IOPS on the drive, and it never got higher than that.  My thinking is that Arc, being a 32-bit, single-threaded app, can only generate about that many requests at one time.  I did see that once the shapefiles were opened I would hit about 350 IOPS, but that was just the shapefiles, with no aerials.

So if everything is located on your server (shapefiles, MDBs, imagery, etc.) I would shoot for at least 1000 IOPS per user with at least a gigabit connection (at least that's what I'm planning on spec'ing out).  If you and your IT guy can budget for it, try to get a pair of 10Gb cards between your server and workstation.  Intel makes some pretty neat 10G cards that run on RJ-45 Cat6 cables.
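To put rough numbers on that per-user target, here's a minimal sizing sketch, assuming textbook per-spindle IOPS figures rather than measured ones:

```python
import math

# Rule-of-thumb IOPS per spindle (assumed, not measured):
DISK_IOPS = {"7.2k_sata": 75, "10k_sas": 125, "15k_sas": 175}

def spindles_needed(users, iops_per_user, read_fraction, write_penalty, disk):
    """Drives needed for a RAID array to deliver the target host-visible IOPS."""
    target = users * iops_per_user
    # Back-end IOPS the spindles must absorb once the write penalty applies:
    backend = target * (read_fraction + (1.0 - read_fraction) * write_penalty)
    return math.ceil(backend / DISK_IOPS[disk])

# e.g. 3 users at 1000 IOPS each, 60/40 read/write on RAID 10 (penalty 2):
print(spindles_needed(3, 1000, 0.60, 2, "15k_sas"))  # 24 drives
```

Spindle counts like that add up quickly, which is part of why SSD caching starts to look attractive.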

I tested my setup with a pair of 10Gb InfiniBand cards running IPoIB and the latency was great, but it really maxed out my old server (it puts a huge strain on the CPU, sometimes 100% on a file transfer).  I was also limited in that I had the card in a 4x PCIe slot on my workstation, so max real bandwidth was probably 4 gigabit (IP running on InfiniBand drops down to 8Gb, and then with my 4x slot instead of 8x I dropped even more).  But I did see transfer speeds as high as 140MB/sec on my RAID system, so the real limitation is my RAID/SATA drives.
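For what it's worth, here's the arithmetic behind that 4-gigabit guess (all assumptions, not measurements):

```python
# Back-of-the-envelope for the IPoIB setup above (assumptions, not measurements).
link_gbps = 10.0                  # 10Gb InfiniBand link
usable_gbps = link_gbps * 8 / 10  # 8b/10b encoding leaves ~8Gb/s of payload
slot_factor = 0.5                 # 4x PCIe slot instead of 8x => half the lanes
ceiling_gbps = usable_gbps * slot_factor      # ~4 Gb/s
print(f"{ceiling_gbps / 8 * 1000:.0f} MB/s")  # ~500 MB/s ceiling, so the
                                              # ~140MB/sec I saw is the RAID,
                                              # not the link
```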

Sorry for being long-winded, but this IT stuff is way outside of water resources for me.

I hope this helps.
Tell your IT guy I'm looking at the LSI 9260 RAID cards with CacheCade installed (it uses SSDs to cache the HDD arrays).

Ken