File geodatabase is growing extremely large

02-08-2017 03:58 AM
AhmadSALEH1
Occasional Contributor III

Hi,

My file geodatabase is growing extremely large.

I have a FGDB named “U.gdb”, and I noticed that this database has a size of 10 GB, despite the fact that my data is not that big. So I copied the feature classes (FCs) inside U.gdb to a newly created geodatabase named “U2.gdb”, and the size of the new geodatabase immediately dropped to 148 MB.

The same data is in both GDBs, yet the sizes are extremely different. Both Compact and Compress File Geodatabase Data fail to reduce the size of the geodatabase.


The source of this issue might be that I have an enterprise geodatabase (U.mdf), and I regularly delete all the FCs from U.gdb and then copy the FCs from the enterprise geodatabase into U.gdb in order to update my data.
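For reference, the daily routine looks roughly like this (a minimal sketch only; the paths below are placeholders, not my real ones):

```python
import arcpy

# Placeholder paths -- my real connection file and FGDB live elsewhere
sde_conn = r"C:\connections\U.sde"
fgdb = r"C:\data\U.gdb"

# Delete yesterday's copies from the file geodatabase
arcpy.env.workspace = fgdb
for fc in arcpy.ListFeatureClasses():
    arcpy.management.Delete(fc)

# Copy the current feature classes over from the enterprise geodatabase
arcpy.env.workspace = sde_conn
for fc in arcpy.ListFeatureClasses():
    arcpy.conversion.FeatureClassToGeodatabase(fc, fgdb)
```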


What should I do to maintain the same GDB size and avoid the ever-growing size of the GDB?

Thanks

Ahmad

1 Solution

Accepted Solutions
NicholasGraf
New Contributor II

I have an update: I am using a script to update features in the FGDB that are used in services. I was not stopping the services, as they were set not to create a schema lock. Once I stopped all the services using the data, the FGDB remained small. There does NOT seem to be a way to reduce the size once the "inflation" has occurred, other than creating a new FGDB.
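For anyone scripting this: a rough sketch of how the stop/start can be wrapped around the update using the ArcGIS Server Admin REST API (the server URL, credentials, and service names below are placeholders, not my real ones):

```python
import requests

# Placeholders -- substitute your own server, credentials, and services
admin = "https://myserver:6443/arcgis/admin"
services = ["MyFolder/MyMap.MapServer"]

# Get an admin token from the site
token = requests.post(
    admin + "/generateToken",
    data={"username": "siteadmin", "password": "secret",
          "client": "requestip", "f": "json"},
    verify=False,
).json()["token"]

def set_state(action):
    """POST stop/start to each service so it releases its locks."""
    for svc in services:
        requests.post(
            "{0}/services/{1}/{2}".format(admin, svc, action),
            data={"token": token, "f": "json"},
            verify=False,
        )

set_state("stop")
# ... run the FGDB update (delete/copy, or truncate/append) here ...
set_state("start")
```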


32 Replies
JoshuaBixby
MVP Esteemed Contributor

Compact doesn't do anything?  That seems odd.  What is the size after running compact?

Not to ask the obvious, but have you observed the Tip in the Compact file and personal geodatabases documentation:

If data from the geodatabase is open for editing, it cannot be compacted. To compact the database, remove it from the map.

Also, what about running compact after you delete the feature classes and before repopulating the geodatabase, does that make a difference versus running compact after the geodatabase has been repopulated?
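If it helps to test, a minimal sketch of the delete-then-compact ordering (the path is a placeholder, and the size check simply sums the files inside the .gdb folder):

```python
import os
import arcpy

fgdb = r"C:\data\U.gdb"  # placeholder path

def gdb_size_mb(path):
    # A file geodatabase is a folder of files, so just add up their sizes
    return sum(os.path.getsize(os.path.join(path, f))
               for f in os.listdir(path)) / 1024.0 / 1024.0

# Delete the stale feature classes, then compact BEFORE repopulating
arcpy.env.workspace = fgdb
for fc in arcpy.ListFeatureClasses():
    arcpy.management.Delete(fc)

arcpy.management.Compact(fgdb)
print("Size after compact: {0:.1f} MB".format(gdb_size_mb(fgdb)))
```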

Asrujit_SenGupta
MVP Regular Contributor

From the images, it does not seem that both the File Gdbs have the same amount of data in them.

U seems to have 2435 files as per your image, whereas U2 has only 116 files!

Analyze it more carefully and check what these files are pointing to.

JayantaPoddar
MVP Esteemed Contributor

Looking at the number of files as mentioned by Asrujit SenGupta, I think there could be a lot of orphaned files in the FGDB. Not sure what's causing these files to "stay alive" even after deletion of the associated feature classes/tables from the FGDB.



Think Location
AhmadSALEH1
Occasional Contributor III

Thanks guys,

Joshua, Compact fails to reduce the size even after deleting the FCs, then compressing and copying again! The reduction in size is only about 100 MB after applying Compact.

Asrujit, you're right: despite the fact that they have the same number of FCs, U.gdb has a lot more files inside it when you look at the folder properties in Windows!

Jayanta, you're right, orphaned files stay alive after deleting the FCs! In my daily scenario, 3 editors are connected to an enterprise geodatabase; at the end of the day I delete the old FCs in U.gdb and copy the new FCs from the enterprise geodatabase to U.gdb. After a year my geodatabase became this big!

It seems that deleting FCs inside the geodatabase leaves some orphaned files alive rather than deleted. How can this be solved?


Thanks

Ahmad

Asrujit_SenGupta
MVP Regular Contributor

You should think about Replication... it will save you the time spent deleting and copying the FCs every day.

If Replication is not what you want, then why not simply delete the old file gdb and create a new file gdb with the same name every day, then copy the FCs? Hardly a minute's work.

MichaelVolz
Esteemed Contributor

Going with Asrujit's second suggestion (not replication), you could script this out and run it every day as a scheduled task. One thing to note is whether the data is being used by an ArcGIS Server map service, as that could put locks on the data.
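A rough sketch of what that scheduled script could look like, assuming nothing (such as a map service) is holding locks on the FGDB while it runs; the paths and names are placeholders:

```python
import os
import shutil
import arcpy

# Placeholder locations
folder = r"C:\data"
gdb_name = "U.gdb"
fgdb = os.path.join(folder, gdb_name)
sde_conn = r"C:\connections\U.sde"

# Throw away yesterday's file geodatabase and start fresh
if arcpy.Exists(fgdb):
    shutil.rmtree(fgdb)  # a file gdb is just a folder on disk
arcpy.management.CreateFileGDB(folder, gdb_name)

# Copy every feature class from the enterprise geodatabase
arcpy.env.workspace = sde_conn
for fc in arcpy.ListFeatureClasses():
    arcpy.conversion.FeatureClassToGeodatabase(fc, fgdb)
```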

AhmadSALEH1
Occasional Contributor III

Thanks again guys,

Asrujit, Replication is not something I want to do in my case; it needs a lot of effort and time, and it isn't useful for me. Imagine that it fails to reflect the edits if there are more than 1,000 of them! That is one of the many limitations it has.

Michael, you got the point: I run ArcGIS Server, so the data is locked. You can't delete the geodatabase without stopping ArcGIS Server, and then my entire system (web mapping application, ArcGIS Server) would be down. What else should I consider?

Thanks

Ahmad

Asrujit_SenGupta
MVP Regular Contributor

The bug that you are talking about was fixed a long time back. Replication would be the best solution for your scenario; however, as I mentioned, if that is not what you are comfortable with, then go for other solutions.

How many records do the FCs have? If they are not too big, then you can automate truncating the FCs (basically deleting all records from the FCs in the file gdb) and then loading the updated data from the SDE FCs.
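A rough sketch of that truncate-and-reload approach, using the standard Truncate Table and Append tools (the paths and feature class names below are placeholders):

```python
import os
import arcpy

# Placeholder paths and names
fgdb = r"C:\data\U.gdb"
sde_conn = r"C:\connections\U.sde"
fc_names = ["Parcels", "Roads"]  # hypothetical feature classes

for name in fc_names:
    target = os.path.join(fgdb, name)
    source = os.path.join(sde_conn, name)

    # Delete all rows but keep the schema (no feature classes are dropped)
    arcpy.management.TruncateTable(target)

    # Reload the current rows from the enterprise geodatabase
    arcpy.management.Append(source, target, schema_type="TEST")
```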

AhmadSALEH1
Occasional Contributor III

Hi Asrujit,

Thanks for the prompt reply. Glad to hear that this bug has been fixed; as I remember, there were another two major issues with it.

My FCs have almost 100K records. I will try truncating the FCs and see how it works, but I am still curious about the size issue. I hope to find an answer; let's wait, maybe one of Esri's developers can give a clear answer for this issue.

Thanks

Ahmad