Microsoft SQL Server 2016 (SP1) (KB3182545) - 13.0.4001.0 (X64) Oct 28 2016 18:17:30 Copyright (c) Microsoft Corporation Standard Edition (64-bit) on Windows Server 2012 R2 Standard 6.3 <X64> (Build 9600: ) (Hypervisor)
I have separate drives from C through K for system, data files, log files, tempdb, etc. Our database server is a VM, and the storage presented to it comes from the SAN underlying the VM. The drives were formatted with the Windows default NTFS allocation unit size of 4K, but it has been suggested to me that the volumes holding our data files, log files, and tempdb should use a 64K allocation unit size. I don't see many references to this, and nothing in the ArcGIS documentation.
I suspect the answer is 'it depends' on data size and use.
Does anyone have any insights, experience, rules of thumb for this?
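For anyone wanting to check what they currently have: on Windows Server you can read a volume's allocation unit size and, if needed, reformat a non-system data volume with a 64K unit. A rough sketch (drive letter `E` is just an example; reformatting destroys the data on the volume, so this only makes sense on an empty volume before the database files are laid down):

```shell
# Show the current allocation unit size for volume E (PowerShell)
Get-Volume -DriveLetter E | Select-Object DriveLetter, AllocationUnitSize

# Alternatively, look for "Bytes Per Cluster" in the classic tool:
fsutil fsinfo ntfsinfo E:

# Reformat the volume with a 64K allocation unit size.
# WARNING: this erases everything on E:.
Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 65536
```

Note that the allocation unit size is fixed at format time; you can't change it in place, which is why people raise this before a new server goes into service.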
I used the default size personally and didn't have a problem. I would not go as high as 64K: if you only need a small amount of data, you must read the full 64K cluster, compared to only 4K (or 8K) with smaller clusters.
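The trade-off described above can be put in rough numbers. SQL Server does I/O in 8K pages and 64K extents, so what a given cluster size costs (or saves) depends on the request size. A back-of-the-envelope sketch, not a benchmark (the helper function and the printed comparison are illustrative only):

```python
import math

PAGE = 8 * 1024      # SQL Server page size: 8 KB
EXTENT = 64 * 1024   # SQL Server extent: 8 pages = 64 KB

def bytes_fetched(request_bytes, cluster_bytes):
    """Bytes the filesystem must touch to satisfy a request,
    rounded up to whole clusters."""
    return math.ceil(request_bytes / cluster_bytes) * cluster_bytes

# Reading a single 8 KB page under different allocation unit sizes:
for cluster in (4 * 1024, 8 * 1024, 64 * 1024):
    print(f"{cluster // 1024:>2}K cluster -> "
          f"{bytes_fetched(PAGE, cluster) // 1024}K fetched for one page")

# The flip side: a full 64K extent read is one cluster at 64K,
# but sixteen clusters at 4K.
print(f"extent read at 4K: {EXTENT // (4 * 1024)} clusters; at 64K: 1 cluster")
```

So a single-page read touches 64K on a 64K-formatted volume versus 8K on a 4K or 8K one, while extent-sized scans line up with one cluster per extent at 64K. Which pattern dominates depends on the workload, which is probably why the honest answer really is "it depends."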
I know that you are using SQL Server but check out https://community.esri.com/thread/115952
Hope that helps.
We'll still have an Oracle presence for our spatial data as well for the next five years or so. Over the years, we've had different iterations with 8K, 16K, and even 32K (for rasters) db_block_size settings. For the last migration, to Oracle 12.1, we went with 8K straight across the board; we haven't noticed any performance impact and have gained some space back. It's simpler as well.
I think allocation unit sizes for Windows volumes might work the same way, where larger data would go in a filegroup on a volume with a larger allocation unit size, etc. But in the end we have good hardware and a good SAN, and the added complexity may not be worth it. If we reach a point where we need to start tuning for performance, we might consider looking at it. I came away from my conversation with an associate feeling like this was something I had missed the boat on and really needed to change before we even got started, but I think we'll wait on it.