Do not use the tablespace created for SDE metadata for any other purpose.
Best practice is to create one or more tablespaces to hold data, one or
more users to own that data, and then additional users to access it. Roles should be
created to manage access to tables, and users granted the necessary roles.
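A minimal sketch of that layout in Oracle SQL (the tablespace, role, user, and datafile names here are hypothetical placeholders; adjust paths, sizes, and quotas to your environment):

    -- Tablespace to hold vector data (name and sizes are placeholders)
    CREATE TABLESPACE gis_data
      DATAFILE '/u02/oradata/orcl/gis_data01.dbf' SIZE 500M
      AUTOEXTEND ON NEXT 100M MAXSIZE 8G
      EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

    -- One user owns the data; a role controls who can read it
    CREATE USER gisowner IDENTIFIED BY "change_me"
      DEFAULT TABLESPACE gis_data QUOTA UNLIMITED ON gis_data;
    GRANT CREATE SESSION, CREATE TABLE, CREATE SEQUENCE TO gisowner;

    CREATE ROLE gis_viewer;
    -- After tables exist:  GRANT SELECT ON gisowner.parcels TO gis_viewer;

    -- Accessing users just get the role
    CREATE USER gisuser IDENTIFIED BY "change_me2";
    GRANT CREATE SESSION, gis_viewer TO gisuser;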
I generally create several different tablespaces with different UNIFORM extent
sizes based on the "shirt size" of the table (small, medium, large, very large).
This usually breaks down as "smaller than 5MB", "5-20MB", "20-200MB", and
"200+MB", but the number of available independent disks also comes into play.
I also create a nominal "home" tablespace for users' scratch tables
(all 'real' data gets routed via DBTUNE keywords to the data tablespaces).
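The "home" tablespace is simply each user's DEFAULT TABLESPACE with a modest quota, along these lines (again, names and sizes are placeholders):

    CREATE TABLESPACE gis_home
      DATAFILE '/u02/oradata/orcl/gis_home01.dbf' SIZE 500M
      EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K;

    -- Scratch tables land here; DBTUNE keywords route 'real' data elsewhere
    ALTER USER gisowner DEFAULT TABLESPACE gis_home QUOTA 50M ON gis_home;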
I used to create a tablespace for non-BLK raster data and one tablespace for
each large raster mosaic or catalog, but nowadays best practice is to keep
rasters as files on disk, not in the database.
Ok, so I can create a tablespace for each user.
Unfortunately, I don't know in advance the size of the datasets my users will load into the geodatabase. I think I can start with a default value of 400MB for each user tablespace. What do you think?
Sorry, but I don't understand what you mean.
You can, but you shouldn't.
That's a common beginner mistake -- it's a great way to maximize fragmentation and to ensure
the worst possible performance.
If you use the system I recommended, then you only need to train your users (who absolutely
should know how large the data is at the time they load it) how to use the DBTUNE keywords
to load the data. It helps if you make it so that loading without a keyword always fails immediately
(by having the DEFAULTS keyword refer to a non-existent tablespace, like USE_A_KEYWORD).
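A fragment of what that looks like in a dbtune file (exported and re-imported with the sdedbtune command; the keyword names and storage strings below are illustrative, and the key trick is that ##DEFAULTS points at the deliberately non-existent USE_A_KEYWORD tablespace so keyword-less loads fail at once):

    ##DEFAULTS
    A_STORAGE        "PCTFREE 0 INITRANS 4 TABLESPACE USE_A_KEYWORD"
    B_STORAGE        "PCTFREE 0 INITRANS 4 TABLESPACE USE_A_KEYWORD"
    B_INDEX_ROWID    "PCTFREE 0 INITRANS 4 TABLESPACE USE_A_KEYWORD"
    END

    ##MEDIUM
    A_STORAGE        "PCTFREE 0 INITRANS 4 TABLESPACE GIS_MEDIUM"
    B_STORAGE        "PCTFREE 0 INITRANS 4 TABLESPACE GIS_MEDIUM"
    B_INDEX_ROWID    "PCTFREE 0 INITRANS 4 TABLESPACE GIS_MEDIUM"
    END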
There's a heap of documentation on DBTUNE; you really need to understand it to be a good
geodatabase administrator.