Storage consumed in Oracle and SQL Server for geodatabase

10-17-2017 01:07 PM
SherrieKubis
Occasional Contributor

Oracle 12.1 with SDE 10.4.2, and MS SQL Server 2016 SP1 with SDE 10.5.0.

As we migrate from Oracle to SQL Server I see that SQL Server takes more storage, but I don't see a pattern.  In one geodatabase Oracle consumed 10 GB and SQL Server 15 GB.  In our largest geodatabase, Oracle consumes about 100 GB, while SQL Server is already at almost 200 GB with less than half of the data migrated, which works out to roughly five times the Oracle size.  What am I missing?  I'm not as well versed in SQL Server, but the storage model seems pretty straightforward.

In both Oracle and SQL Server we leave DBTUNE alone and take the defaults.

Any insights or suggestions are appreciated. 

Sherrie 

16 Replies
KevinDunlop
Occasional Contributor III

Our Oracle footprint is about 60 GB total.  The outside support contract we have was in place before I took my job as the SDE admin.  That contract covers patches, backups, and other software-related work for all of the Oracle installs in our organization (more than just the SDE Oracle).  So while I could manage the Oracle software myself, the policy is for the Oracle support contractor to do it, since the other areas' DBAs are not interested in doing it themselves.  The decision to move all databases to SQL Server affects more than GIS, especially since other third-party (non-GIS) applications are dropping Oracle integration support in favor of SQL Server.  But there is also a big cost savings, which is the main reason.

VinceAngelo
Esri Esteemed Contributor

It's really difficult to predict the difference in storage between RDBMS implementations. That said, I haven't noticed huge differences (except with respect to the RDBMS binary catalog overhead -- Oracle is a *huge* install).

Note that porting an SDO_GEOMETRY-storage geodatabase to SQL Server Geometry storage would produce different characteristics than porting an SDE.ST_GEOMETRY source (due to compression in the SDE.ST_GEOMETRY implementation).

- V

SherrieKubis
Occasional Contributor

Hello Vince! Oracle's a beast, but I'm focusing on the geodatabases and the spatial implementation.  We use ST_GEOMETRY, nothing SDO.  In MS SQL we are using the native Geometry spatial type.  In neither Oracle nor MS SQL have we modified DBTUNE.  On the Oracle side the default is PCTFREE 0, and we are using Oracle ASM and ASSM tablespaces.  In MS SQL the default DBTUNE uses FILLFACTOR 75.  I'm going to look in that direction; that seems a likely place.
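
A quick way to confirm what actually got applied is to read the fill factor back from the catalog.  This is just a minimal sketch against the standard SQL Server views (nothing geodatabase-specific), run inside the geodatabase:

-- List the fill factor each index was built with (0 means the server default).
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name AS index_name,
       i.fill_factor
FROM sys.indexes AS i
WHERE i.object_id IN (SELECT object_id FROM sys.tables)
ORDER BY table_name, index_name;

Keep in mind that a fill factor of 75 only leaves a quarter of each index page empty, so on its own it accounts for roughly a third more index space, not a several-fold difference.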

TinaMorgan1
Occasional Contributor II

Take a look at these Microsoft resources discussing files and filegroups, as well as growing log files...

Using Files and Filegroups to Manage Database Growth 

Help! My SQL Server Log File is too big!!! - TechRepublic 

See the part about the database recovery model in the TechRepublic article above.
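
For what it's worth, the recovery model can be checked straight from the catalog; a minimal sketch:

-- Show the recovery model of every database on the instance.
SELECT name, recovery_model_desc
FROM sys.databases;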

SherrieKubis
Occasional Contributor

Thanks Tina, but it's really about the ArcGIS objects in the data files; log files aren't the issue.  I'm looking for other experiences migrating from Oracle to MS SQL with ArcGIS, and whether anyone has seen a size difference, good, bad, big or small.  There are things to do in MS SQL like FILLFACTOR for indexes, compression, revising data types, and shrinking the data file to get to a target size, but the only thing I'm looking at is FILLFACTOR; the others are not something I want to mess with.  To me, they are different RDBMS systems with different storage models, and what is required is, well, required.  But I can't help wondering why others haven't seen this, or at least why I haven't found them.
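
To see where the extra space actually sits on the SQL Server side, a per-table breakdown from the standard DMVs can be compared against the Oracle segment sizes.  A minimal sketch (reserved pages are 8 KB each, so the result is in MB):

-- Rough per-table space usage in the current database, largest first.
SELECT s.name + '.' + t.name AS table_name,
       SUM(ps.reserved_page_count) * 8 / 1024.0 AS reserved_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t ON t.object_id = ps.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
GROUP BY s.name, t.name
ORDER BY reserved_mb DESC;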

SherrieKubis
Occasional Contributor

As it turns out, this issue is not caused by an object's storage characteristics, but by the data file itself.  I created the data file at a reasonable size and allowed autogrowth just in case, but as a percentage instead of a fixed size.  Each growth event therefore added an ever-larger chunk, yet there was already unused space in the file, and each time there was more unused space.  Everything I've read says MS SQL uses Global Allocation Map pages to find unused extents, but that didn't seem to be happening; there was just a bunch of white space everywhere, with bad performance.
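
The gap shows up clearly when comparing allocated to used pages per file.  A minimal sketch (sys.database_files reports size in 8 KB pages, so dividing by 128 gives MB):

-- Allocated vs. actually used space per file in the current database.
SELECT name,
       size / 128.0 AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS free_mb
FROM sys.database_files;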

We started over: I made the data file bigger with a fixed autogrowth of 1 MB, and the behavior went away.

This is staging; once we have a handle on how much storage each of our databases requires, development through production will be adequately sized from the get-go.

The moral is: don't use a percentage autogrowth size.
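
For anyone else hitting this, the growth setting can be switched from a percentage to a fixed increment with ALTER DATABASE.  A minimal sketch only; the database name and logical file name below are placeholders, and the increment is just an example value:

-- Change a data file's autogrowth to a fixed increment.
ALTER DATABASE MyGeodatabase
MODIFY FILE (NAME = MyGeodatabase_data, FILEGROWTH = 100MB);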

Sherrie 

VinceAngelo
Esri Esteemed Contributor

1 MB is probably too small, and will likely result in fragmentation if there is any other activity on that device.
Try using fixed 100 MB or 200 MB growth increments (this will also likely improve insert performance).

- V