The agency I work for maintains a number of enterprise geodatabases (egdb) containing feature datasets and the feature classes within them. Overnight Python processes delete the features in our 'public' egdb and append 'fresh' data from the authoritative sources. I have noticed that the feature datasets have extents that are (excuse the expression) all over the map.
I can't remember when this was phased out, but it used to be that when you created a feature class (fc) or feature dataset (fds), you could import the spatial reference from an existing fc or fds. Currently I see no way to get a consistent extent across all of the feature datasets. This can be problematic, as our overnight processes sometimes encounter feature classes that "don't fit" within the given extent. We've all seen bonkers extents where the xmin, ymin, xmax, and ymax are way out of whack and the visualization of the data itself is goofy: there will be a cluster of features in one corner of your screen and a few others elsewhere. Often this can be corrected simply by recalculating the extent of the problem-child feature class.
My questions are:
1. Is there a method available to change the extent of an existing feature dataset?
2. Does the extent of a feature dataset even matter, as long as the feature classes within all 'fit'?
My approach to resolving extent issues is to examine the spatial extent of the fds and compare it to the extent of the incoming fc. If one or more values conflict, I recalculate the extent of the incoming feature class. Has anyone else deployed such a practice?
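For context, the fit check I have in mind can be sketched as plain Python. This is a minimal sketch, not my production script: the extent tuples here are hypothetical stand-ins for values that would, in practice, come from `arcpy.Describe(dataset).extent`, and the repair step would be `arcpy.management.RecalculateFeatureClassExtent`.

```python
# Sketch of the extent fit check described above. The extents are plain
# (xmin, ymin, xmax, ymax) tuples; in a real overnight process these values
# would come from arcpy.Describe(dataset).extent, and a failing feature
# class would be repaired with arcpy.management.RecalculateFeatureClassExtent.

def extent_fits(fds_extent, fc_extent):
    """Return True if fc_extent lies entirely within fds_extent.

    Both arguments are (xmin, ymin, xmax, ymax) tuples.
    """
    fds_xmin, fds_ymin, fds_xmax, fds_ymax = fds_extent
    fc_xmin, fc_ymin, fc_xmax, fc_ymax = fc_extent
    return (fc_xmin >= fds_xmin and fc_ymin >= fds_ymin
            and fc_xmax <= fds_xmax and fc_ymax <= fds_ymax)

# Hypothetical extents: one feature class fits, one has a bonkers xmax.
fds = (-120.0, 30.0, -100.0, 45.0)
good_fc = (-118.0, 32.0, -105.0, 40.0)
bad_fc = (-118.0, 32.0, 500000.0, 40.0)   # xmax way out of whack

print(extent_fits(fds, good_fc))  # True
print(extent_fits(fds, bad_fc))   # False -> recalculate this fc's extent
```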