How to reset HWM in Geodatabase

01-14-2014 05:52 PM
AmitGupta
New Contributor
Dear All,
I have a problem resetting the high water mark (HWM) in a geodatabase. I am currently using ArcGIS 10.1, and the ArcSDE (enterprise) geodatabase is on Oracle 11g Release 2 (11.2). After deleting many features (rows) from a feature class, the datafile is still the same size as before: the used space in the datafile is about 3.5 GB, but the actual file size is 7 GB. I tried to resize the datafile from SQL*Plus after deleting the rows, but it was not possible.
- I purged the recyclebin and dba_recyclebin, but it didn't help (I have more than 400 feature classes in one dataset).

- I used ENABLE ROW MOVEMENT and SHRINK SPACE CASCADE, but the HWM is still the same.
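
For reference, this is roughly what I tried from SQL*Plus (the table name and datafile path below are placeholders, not my real objects):

-- shrink attempt on one of the feature class tables
ALTER TABLE gis.my_featureclass ENABLE ROW MOVEMENT;
ALTER TABLE gis.my_featureclass SHRINK SPACE CASCADE;

-- emptied both recycle bins
PURGE RECYCLEBIN;
PURGE DBA_RECYCLEBIN;

-- tried to shrink the datafile itself, but it would not go below its current size
ALTER DATABASE DATAFILE '/u01/oradata/gisdb/gis_data01.dbf' RESIZE 4G;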

If anybody knows how to shrink datafiles after deleting millions of rows, I would appreciate the help.

Thanks,
Amit
4 Replies
VinceAngelo
Esri Esteemed Contributor
You really ought to include the text of acronyms before using them repeatedly.

The only way I know of to reset the high water mark of a table is to TRUNCATE the table. DROPping the table and recreating it would accomplish this as well. From there you're into exotic solutions, like partitioning, where you can truncate a portion of a table.
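
To illustrate (the object names below are just examples):

-- truncating deallocates the table's space and resets its HWM
TRUNCATE TABLE gis.some_table DROP STORAGE;

-- with a partitioned table you can truncate just one slice
ALTER TABLE gis.some_partitioned_table TRUNCATE PARTITION p_old DROP STORAGE;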

You'd probably be better off asking this in an Oracle forum.

- V
AmitGupta
New Contributor
Sir,
Thanks for responding. I used the TRUNCATE command (via a Python script from ArcGIS) to empty all the tables, and then ran Compress and Analyze from the ArcGIS Database Administration tools. But the result is the same: the datafile is still at 7 GB.
Is it possible to resize the datafiles that belong to the GIS schema? An Oracle blog says LOB segments cannot be rebuilt.
I have checked every approach I could think of, but nothing has helped so far.
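
For the LOB part, these are the kinds of statements the blog was discussing (table, column, and tablespace names are placeholders, and I am not sure they are supported for SDE-managed columns):

-- move a LOB segment into another (or the same) tablespace
ALTER TABLE gis.my_table MOVE LOB (my_blob_col) STORE AS (TABLESPACE gis_data);

-- or try to shrink the LOB segment in place
ALTER TABLE gis.my_table MODIFY LOB (my_blob_col) (SHRINK SPACE);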

Thanks,
Amit
VinceAngelo
Esri Esteemed Contributor
How do you know this is a high water mark issue, just from the datafile size? Any object could prevent shrinking a datafile. I would create a new tablespace and transfer the objects there at the database level (this is not a task for ArcPy; a rough SQL sketch follows below), or just ignore the drop-in-the-bucket difference of 4 GB on modern disks.

- V
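
A rough sketch of that tablespace move (every name below is a placeholder; indexes and any LOB segments on registered tables need the same treatment):

-- create a right-sized tablespace and relocate segments into it
CREATE TABLESPACE gis_data2 DATAFILE '/u01/oradata/gisdb/gis_data2_01.dbf' SIZE 4G;

ALTER TABLE gis.some_table MOVE TABLESPACE gis_data2;
ALTER INDEX gis.some_table_idx REBUILD TABLESPACE gis_data2;

-- once the old tablespace is empty, it can be dropped (or its datafile resized down)
DROP TABLESPACE gis_data INCLUDING CONTENTS AND DATAFILES;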
AmitGupta
New Contributor
The data in this particular tablespace is around 4 GB (used space), the allocated space is 7 GB (total), and the free space is 3 GB. I used an Oracle script to get these figures. If I run into the same problem in the production environment, how could I solve it there? The production data is very large (several TB), and a lot of demo data was created to test the environment, so I can't use expdp/impdp because of the existing size. That's why I prepared this small demo in the test environment.
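
The check I ran was roughly like this (a sketch, not my exact script; the tablespace name is a placeholder):

-- allocated vs. free space for one tablespace
SELECT df.tablespace_name,
       ROUND(SUM(df.bytes) / 1073741824, 1)                           AS allocated_gb,
       ROUND((SUM(df.bytes) - NVL(fs.free_bytes, 0)) / 1073741824, 1) AS used_gb,
       ROUND(NVL(fs.free_bytes, 0) / 1073741824, 1)                   AS free_gb
FROM   dba_data_files df
       LEFT JOIN (SELECT tablespace_name, SUM(bytes) AS free_bytes
                  FROM   dba_free_space
                  GROUP BY tablespace_name) fs
         ON fs.tablespace_name = df.tablespace_name
WHERE  df.tablespace_name = 'GIS_DATA'
GROUP  BY df.tablespace_name, fs.free_bytes;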
- How can I check which object is blocking the datafile from shrinking? (see the sketch below)
- Does expdp/impdp work with a GIS schema?
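
For the first question, would something like this be the right way to see which segment sits at the end of the datafile (just a sketch; the file path is a placeholder)?

-- segments ordered by their highest block in the datafile
SELECT owner, segment_name, segment_type,
       MAX(block_id + blocks - 1) AS last_block
FROM   dba_extents
WHERE  file_id = (SELECT file_id
                  FROM   dba_data_files
                  WHERE  file_name = '/u01/oradata/gisdb/gis_data01.dbf')
GROUP  BY owner, segment_name, segment_type
ORDER  BY last_block DESC;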

Thanks,