
ObjectID field - what happens when it gets too high?

05-03-2013 05:43 AM
Jay_Gregory
Frequent Contributor
If I have some recurring scripts that keep appending to a feature class, eventually the ObjectID field will get too high - for example, over 1000 new records every 30 minutes.  Some of the old records are deleted to limit the size of the feature class, but the ObjectID field keeps getting higher and higher.  Theoretically, eventually, the ObjectID field will max out at 2,147,483,648.  What happens then?  Is there any way to reset the field programmatically?
5 Replies
DuncanHornby
MVP Notable Contributor
Compacting a personal geodatabase can reset the ObjectID sequence, as discussed here. Maybe you could export the table, or have some code trigger at an appropriate value and direct the data to a new table?
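
As a rough illustration of that trigger idea, a minimal arcpy sketch (the paths and threshold here are hypothetical, and it assumes the table is non-empty):

```python
import arcpy

# Hypothetical names; substitute your own geodatabase paths.
fc = r"C:\data\scratch.gdb\observations"
rollover_fc = r"C:\data\scratch.gdb\observations_rollover"
OID_THRESHOLD = 2000000000  # trigger value, safely below 2**31 - 1

# Find the current maximum ObjectID.
max_oid = max(row[0] for row in arcpy.da.SearchCursor(fc, ["OID@"]))

if max_oid > OID_THRESHOLD:
    # Copying to a new feature class renumbers the ObjectIDs from 1,
    # because they are assigned fresh on insert.
    arcpy.CopyFeatures_management(fc, rollover_fc)
```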
VinceAngelo (Accepted Solution)
Esri Esteemed Contributor
At 2kF/h, you'd hit the 2^31-1 32-bit integer limit (...7, not ...8) in 122 years (44,739.24 days). If only some of the features are deleted every hour, you'll hit the effective-use limit of 50M rows in a table in just over 3 months. This is probably of greater concern.

Sure, I've populated tables with 690M rows, and I've seen a table with over a billion rows, but populating large tables (the former took 22 hours) isn't the same as making effective use of them (the latter could only be queried for the past 72 hours of data).

The key to making *any* use of very large tables is paying careful attention to how they're populated, the indexing, and the use of partitioning. Since spatial fragmentation is very likely to occur, you'll also need to design a defragmentation protocol. Once that's in place, the rowid numbering problem may become more acute (unless your defragmentation procedure involves no additional inserts). But before that becomes a critical issue, Esri will probably have implemented a 64-bit rowid option.

- V
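
As a quick sanity check of those figures (the rate comes from the original question; the ceiling is the signed 32-bit maximum):

```python
# 2,000 features/hour against the 32-bit ObjectID ceiling.
MAX_OID = 2**31 - 1          # 2,147,483,647
RATE_PER_HOUR = 2000.0       # ~1000 new records every 30 minutes

days = MAX_OID / RATE_PER_HOUR / 24
print("%.2f days (~%.0f years)" % (days, days / 365.25))
# -> 44739.24 days (~122 years)
```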
RobertBorchert
Honored Contributor
Interesting thought. If you had a database that large, I think your processor would burn up before you even got close to that.

But yes, you should have a utility set up to compress your database every night. If the data is in SDE, you may also want to unregister and re-register it as versioned every now and then.
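
For the nightly part, a minimal arcpy sketch of a scheduled compress (the connection-file path is hypothetical):

```python
import arcpy

# Hypothetical connection file; point at your own enterprise geodatabase.
sde = r"C:\connections\gis_server.sde"

# Compress the versioned geodatabase; typically scheduled nightly,
# after outstanding versions have been reconciled and posted.
arcpy.Compress_management(sde)
```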

VinceAngelo
Esri Esteemed Contributor
A compress on a versioned table will not change the rowid sequence in any way.

A table that volatile should not be versioned.

Even if all the rows from previous inserts were deleted, there is no geodatabase
tool to restart the sequence at a value less than the current maximum.  You might
be able to alter the sequence via SQL (though this wouldn't be necessary in this
example, because a truncate would not only be quicker, it would also reset the
rowid to start at 1 before loading the new features).  The best way to resequence
the rowids would be to export/re-import with truncate, or to copy/delete/rename.

- V
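
To make the truncate-and-reload route concrete, here is a minimal arcpy sketch; the paths are hypothetical and identical schemas are assumed:

```python
import arcpy

# Hypothetical paths; adjust for your own workspace.
target_fc = r"C:\data\scratch.gdb\observations"
incoming_fc = r"C:\data\staging.gdb\new_observations"

# Truncate removes every row AND resets the ObjectID sequence to 1
# (a row-by-row delete leaves the sequence where it was). Note that
# Truncate is not supported on versioned data.
arcpy.TruncateTable_management(target_fc)

# Reload the new features; identical schemas are assumed here.
arcpy.Append_management(incoming_fc, target_fc, "TEST")
```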
JoeBorgione
MVP Emeritus
We never worried about getting too high.  Oh wait, that was back in the '70s.... Never mind....
That should just about do it....