How large do archive feature classes become?

10-20-2014 05:38 PM
BrandonKeinath1
Occasional Contributor III

Can anyone help me understand how large an archive feature class could become over time?  We've been exploring Collector and archiving feature classes, and some are very large even prior to archiving.  Before I get a nasty-gram from our DBAs I thought I would do some research.  I would also appreciate any workflows related to archiving data and versioning.  Our production environment is a versioned SDE, but our Collector data will live in another SDE instance that isn't versioned.  If my question needs clarification, please let me know.

Best,

Brandon

VinceAngelo
Esri Esteemed Contributor

How many features are you adding?  Archiving storage is purely based on data volume.  If you're really headed into "very large" space, then using partitioning to lop off "oldest" data might need to be part of your maintenance design.
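
A minimal sketch of the partition approach, assuming a hypothetical history table ROADS_H range-partitioned by month (partitions like P2014_01); geodatabase archive classes aren't partitioned out of the box, so setting this up is DBA work outside the geodatabase tools:

-- Drop the oldest month of history (hypothetical table and partition names)
ALTER TABLE roads_h DROP PARTITION p2014_01 UPDATE GLOBAL INDEXES
/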

- V

PS: If you post questions "In a Place" rather than in your own discussion, you'll get a broader set of viewers.  You can move questions using the Move choice to the right of the discussion page.

BrandonKeinath1
Occasional Contributor III

Hi Vince,


Thanks for the reply.  Some of the feature classes we would be interested in archiving currently have in excess of 150,000 records.  Is the archive feature class something that our DBAs or Josh Lehn would be able to view in the database and monitor the size of while we test?

Do you have a place suggestion for the discussion?

Brandon

VinceAngelo
Esri Esteemed Contributor (Accepted Solution)

I generally don't start to worry about tables until they have more than 20-25 million rows in them.  150K is only barely large enough to manifest signs of spatial fragmentation.

GIS > Managing Data seems like a good fit for data management Qs.

- V

BrandonKeinath1
Occasional Contributor III

Funny.  I'll focus on other more pressing fears then.  Thanks for your help.

Brandon

JoshLehn
New Contributor II

Hi Vince,

What is your favorite way to monitor the table sizes?  Is there a way that Brandon Keinath and I could keep an eye on that without having Oracle sys access?

VinceAngelo
Esri Esteemed Contributor

That really depends on the definition of "table size", but none of the queries require more than nominal owner permissions.

For the row count of a single non-versioned, unarchived table, the SQL would be:

SELECT count(*) as NumRows 
FROM   tablename
/
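
The same idea works for an archive-enabled class.  A hedged sketch, assuming the default <table>_H naming and a hypothetical ROADS feature class: in a geodatabase archive class, rows whose GDB_TO_DATE still holds the open-ended 9999-12-31 timestamp are the current rows; everything else is retired history.

-- Total vs. retired rows in a hypothetical archive class ROADS_H
SELECT count(*) AS NumRows,
       count(CASE WHEN gdb_to_date < TIMESTAMP '9999-12-31 23:59:59'
             THEN 1 END) AS RetiredRows
FROM   roads_h
/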

For the segment storage of a specific table, the SQL would be:

SELECT  bytes/1024/1024 as Mb 
FROM    user_segments 
WHERE   segment_name = 'TABLENAME'
/

The process becomes trickier when you want to include the spatial index organized table (IOT), or when a feature class has geometries outside the inline storage clause size, is versioned, or is archived (or some combination thereof).  The naming for these objects is straightforward (well, except for the LOB storage naming; that's always been a black box to me), but there are lots of them.  If you place the tables you worry about in their own tablespace, you can cut to the chase with something like:

SELECT   sum(bytes)/1024/1024 as Mb,
         tablespace_name
FROM     user_segments
WHERE    tablespace_name like '%_TBLSP'
GROUP BY tablespace_name
/ 
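
If a dedicated tablespace isn't an option, the pieces can be named explicitly.  A hedged sketch, assuming a hypothetical versioned, archived ROADS table with registration id 123 (which makes the delta tables A123 and D123, and the archive class ROADS_H; the id comes from the SDE repository's TABLE_REGISTRY table):

-- Per-object storage for one versioned, archived feature class
-- (hypothetical names; substitute your own table and registration id)
SELECT   segment_name,
         sum(bytes)/1024/1024 AS Mb
FROM     user_segments
WHERE    segment_name IN ('ROADS', 'A123', 'D123', 'ROADS_H')
GROUP BY segment_name
/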

- V