File geodatabase is growing extremely large

02-08-2017 03:58 AM
AhmadSALEH1
Occasional Contributor III

Hi,

My file geodatabase is growing extremely large.

I have a file geodatabase named “U.gdb”, and I noticed that it has a size of 10 GB even though my data is not that big. So I decided to copy the feature classes inside U.gdb to a newly created geodatabase named “U2.gdb”, and the size of the new geodatabase dropped immediately to 148 MB.

The same data is in both geodatabases, yet the sizes are extremely different. Both the Compact tool and the Compress File Geodatabase Data tool fail to reduce the size of the geodatabase.


The source of this issue might be that I have an enterprise geodatabase (U.mdf), and I regularly delete all feature classes from U.gdb and copy the feature classes from the .mdf into U.gdb in order to update my data.


What should I do to keep the gdb at a stable size and avoid this excessive growth?

Thanks

Ahmad

32 Replies
JoshuaBixby
MVP Esteemed Contributor

asrujit_pb, your comment about truncating raises an interesting question, at least for me. With enterprise database systems (Oracle, SQL Server, Postgres, etc.), there are notable differences between truncate and delete. Truncate can remove large numbers of records much faster than delete, but it does so by bypassing some of the integrity mechanisms. With the simplified database engine in file geodatabases, I don't really have any idea what the differences are between using delete and truncate. I assume there are differences, but I have never seen them documented or discussed. I might have to experiment sometime....
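For anyone who wants to try that experiment, here is a minimal timing sketch, assuming a hypothetical file geodatabase at C:\temp\test.gdb with a feature class named "parcels" (both are placeholders, not anything from this thread). It copies the data twice so each tool starts from identical rows:

```python
import time
import arcpy

# Hypothetical paths; substitute your own gdb and feature class.
gdb = r"C:\temp\test.gdb"
fc = gdb + r"\parcels"

# Make two identical copies so each tool starts from the same data.
copy_a = arcpy.management.CopyFeatures(fc, gdb + r"\parcels_truncate")[0]
copy_b = arcpy.management.CopyFeatures(fc, gdb + r"\parcels_delete")[0]

t0 = time.time()
arcpy.management.TruncateTable(copy_a)   # drops all rows in one operation
print("Truncate: %.2f s" % (time.time() - t0))

t0 = time.time()
arcpy.management.DeleteRows(copy_b)      # removes rows via the delete path
print("Delete:   %.2f s" % (time.time() - t0))
```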

Asrujit_SenGupta
MVP Regular Contributor

Personally, I have not observed any difference as such, other than the faster performance of the Truncate Table tool.

MichaelVolz
Esteemed Contributor

Ahmad:

I would go with the truncate-and-append method to update your file gdb, as that method bypasses locks from AGS map services. I have been using this methodology for years with success.

I would rename your existing excessively large file gdb (as a backup) and create a new, clean file gdb that you could then use in the new truncate-and-append script tied to a scheduled task. This would clean up your production file gdb and bypass AGS locks on the data at the same time. The scripting is quite minimal; a rough sketch is below.
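As a rough sketch of what that script could look like (the connection file, gdb path, and feature class names below are hypothetical placeholders):

```python
import arcpy

source = r"C:\connections\prod.sde"   # enterprise geodatabase (source)
target = r"C:\data\U_clean.gdb"       # new, clean file geodatabase

# Hypothetical feature class names; substitute your own.
feature_classes = ["Parcels", "Roads", "Buildings"]

for name in feature_classes:
    target_fc = target + "\\" + name
    # Truncate empties the table but keeps the schema in place, so
    # nothing has to be deleted and recreated.
    arcpy.management.TruncateTable(target_fc)
    # Append reloads the rows from the enterprise source; "TEST"
    # requires the schemas to match.
    arcpy.management.Append(source + "\\" + name, target_fc, "TEST")
```

Run nightly from a Windows scheduled task, this keeps the production file gdb clean without deleting and recreating feature classes.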

AhmadSALEH1
Occasional Contributor III

Hi Michael,

Truncate and Append seem to be a good choice. I have tried them, and Truncate is very fast. I will shift my work procedure to use Truncate rather than Delete, and Append rather than Copy.

But I am also still curious about the size of the geodatabase U.gdb: why is it growing so large? Is the normal delete procedure not enough to clear the geodatabase?

Thanks

Asrujit_SenGupta
MVP Regular Contributor

I would suggest contacting Esri Tech Support, so that they can take a look at the file gdb in question and analyze it. We can only guess at what is wrong with it without examining the geodatabase itself!

NicholasGraf
New Contributor II

I am noticing the same thing. We have a file geodatabase that is used as part of data sharing between agencies and is updated by a script nightly. What I have found is that even though I delete the previous feature class, the "record" stays in the file structure. Looking at the files within the gdb folder, I see "a00000009.gdbtable", "a00000012.gdbtable", and "a00000014.gdbtable" are all about the same size but have a one-day difference in the date modified. So every night my database basically gets new data added, but the old data lives in the shadows.
I have tried both Compress and Compact, and they make little to no difference.
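A file geodatabase is just a folder on disk, so you can size those internal tables with a few lines of plain Python; a quick sketch, assuming a hypothetical gdb path:

```python
import os

gdb_folder = r"C:\data\U.gdb"  # hypothetical path to the file gdb folder

# Collect (size, name) for every internal table file in the gdb.
sizes = []
for fname in os.listdir(gdb_folder):
    if fname.endswith(".gdbtable"):
        full_path = os.path.join(gdb_folder, fname)
        sizes.append((os.path.getsize(full_path), fname))

# Print the largest internal tables first, in MB.
for size, fname in sorted(sizes, reverse=True):
    print("%10.1f MB  %s" % (size / 1024.0 / 1024.0, fname))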

Has this been reported to Esri?

NicholasGraf
New Contributor II

I have an update: I am using a script to update features in the FGDB that are used in services. I was not stopping the services, as they were set to not create a schema lock. Once I stopped all the services using the data, the FGDB remained small. There does NOT seem to be a way to reduce the size once the "inflation" has occurred, other than creating a new FGDB.
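For reference, here is a hedged sketch of stopping and restarting a service around the update via the ArcGIS Server Admin REST API. The server URL, credentials, and service name are hypothetical placeholders, and verify=False is only for self-signed certificates:

```python
import requests

ADMIN = "https://myserver.example.com:6443/arcgis/admin"  # placeholder URL

# Get an admin token (credentials are placeholders).
token = requests.post(
    ADMIN + "/generateToken",
    data={"username": "siteadmin", "password": "secret",
          "client": "requestip", "f": "json"},
    verify=False,
).json()["token"]

def set_service_state(service, action):
    """action is 'stop' or 'start'; service like 'Folder/Name.MapServer'."""
    r = requests.post("%s/services/%s/%s" % (ADMIN, service, action),
                      data={"token": token, "f": "json"}, verify=False)
    print(service, action, r.json().get("status"))

set_service_state("SharedData/Parcels.MapServer", "stop")
# ... run the truncate-and-append update against the file gdb here ...
set_service_state("SharedData/Parcels.MapServer", "start")
```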

AhmadSALEH1
Occasional Contributor III

Nicholas,

I think this is the correct answer. I used to do the same as you, deleting the feature classes inside the geodatabase without stopping the services, to avoid downtime for the services. It seems that ArcMap allows you to delete the feature classes, but they stay alive in the background to feed the services.

MajdoleenO_A__Awadallah
Occasional Contributor III

Thank you Nicholas, and Ahmad,

Do you have any idea about the MDF itself? The issue is posted here by Jamal Numan: https://community.esri.com/message/678613-the-size-of-mdf-file-is-much-bigger-than-its-gdb-file

I will try your suggestion and get back to you.

Thank you

Best,

Majdoleen

JamalNUMAN
Legendary Contributor


Hi All,


Do you mean that we need to stop the service in question before transferring the data from the mdf to the gdb? I think that this is a disadvantage which will cause inconvenience for users and reduce the chance of being online 24 hours with no downtime.


What do you think?

----------------------------------------
Jamal Numan
Geomolg Geoportal for Spatial Information
Ramallah, West Bank, Palestine