CREATE DATABASE USER TOOL and tablespace best practice

01-29-2014 07:14 AM
New Contributor II
Hi,
I have set up a geodatabase on Oracle 11gR2.
I created the geodatabase within a tablespace named SDE_TBS (the default name for the sde user).

Now, I am creating new users using "create database user tool" and I can specify a tablespace name.

Are there best practices for which tablespace to use?

Which tablespace should I use for each new user?
- the Oracle default tablespace
- SDE_TBS (the same as the sde user)
- a new tablespace (<newusername>_TBS)

Thanks a lot
Giuseppe P.
5 Replies
Esri Esteemed Contributor
Do not use the tablespace created for SDE metadata for any other purpose.
Best practice is to create one or more tablespaces to hold data, and one or
more users to own data, and then users to access data.  Roles should be
created to manage access to tables, then users granted necessary roles.
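
A minimal sketch of that layout in Oracle SQL (the tablespace, user, role, and file path names below are illustrative placeholders, not from the thread):

```sql
-- A data tablespace, kept separate from SDE_TBS (path and sizes are examples)
CREATE TABLESPACE GIS_DATA
  DATAFILE '/u02/oradata/orcl/gis_data01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- A user that owns data, with quota on the data tablespace only
-- (no quota on SDE_TBS, so nothing lands in the metadata tablespace)
CREATE USER gis_owner IDENTIFIED BY "a_password"
  DEFAULT TABLESPACE GIS_DATA
  QUOTA UNLIMITED ON GIS_DATA;

-- A role to manage access; grant object privileges to the role,
-- then grant the role to the users who only need to read the data
CREATE ROLE gis_viewer;
GRANT gis_viewer TO some_end_user;
```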

I generally create several different tablespaces with different UNIFORM extent
sizes based on the "shirt size" of the table (small, medium, large, very-large).
This usually breaks down as "smaller than 5MB", "5-20MB", "20-200MB", and
"200+MB", but the number of available independent disks comes into play
as well.  I also create a nominal "home tablespace" for users' scratch tables
(all 'real' data gets stored via DBTUNE keywords to the data tablespaces).
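
The "shirt size" scheme described above might look like the following (the names, file paths, sizes, and uniform extent sizes are illustrative assumptions; the separate mount points echo the point about independent disks):

```sql
-- Small tables (< 5MB): small uniform extents to limit wasted space
CREATE TABLESPACE GIS_SMALL
  DATAFILE '/u02/oradata/orcl/gis_small01.dbf' SIZE 200M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K;

-- Medium tables (5-20MB)
CREATE TABLESPACE GIS_MEDIUM
  DATAFILE '/u03/oradata/orcl/gis_medium01.dbf' SIZE 1G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- Large tables (20-200MB)
CREATE TABLESPACE GIS_LARGE
  DATAFILE '/u04/oradata/orcl/gis_large01.dbf' SIZE 4G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 8M;

-- Very large tables (200MB+)
CREATE TABLESPACE GIS_VLARGE
  DATAFILE '/u05/oradata/orcl/gis_vlarge01.dbf' SIZE 16G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64M;
```

With uniform local extents, every extent in a tablespace is the same size, so free space stays usable as tables come and go, which is what keeps fragmentation down.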

I used to create a tablespace for non-BLK raster data and one tablespace for
each large raster mosaic or catalog, but nowadays best practice is to keep
rasters as files on disk, not in the database.

- V
New Contributor II
Hi Vince,
thank you for your reply. I would like some more information about these points.
Do not use the tablespace created for SDE metadata for any other purpose.
Best practice is to create one or more tablespaces to hold data, and one or
more users to own data, and then users to access data.  Roles should be
created to manage access to tables, then users granted necessary roles.

Ok, so I can create a tablespace for each user.


I generally create several different tablespaces with different UNIFORM extent
sizes based on the "shirt size" of the table (small, medium, large, very-large).
This usually breaks down as "smaller than 5MB", "5-20MB", "20-200MB", and
"200+MB", but the number of available independent disks comes into play
as well.

Unfortunately, I don't know in advance the size of the datasets my users will load into the geodatabase. I think I can start with a default size of 400MB for each user tablespace. What do you think?


I also create a nominal "home tablespace" for users' scratch tables
(all 'real' data gets stored via DBTUNE keywords to the data tablespaces).


Sorry, but I don't understand what you mean.


I used to create a tablespace for non-BLK raster data and one tablespace for
each large raster mosaic or catalog, but nowadays best practice is to keep
rasters as files on disk, not in the database.

I don't have rasters to store in the geodatabase.

thanks
Giuseppe
Esri Esteemed Contributor

Ok, so I can create a tablespace for each user.

You can, but you shouldn't.


Unfortunately I know not the size of dataset that my users will load into geodatabase preventively. I think that I can start with a default value of 400MB for each user tablespace. What do you think about?

That's a common beginner mistake -- it's a great way to maximize fragmentation and to ensure
the worst possible performance.

If you use the system I recommended, then you only need to train your users (who absolutely
should know how large the data is at the time they load it) how to use the DBTUNE keywords
to load the data.  It helps if you make it so not using keywords always fails immediately
(by having the DEFAULTS keyword refer to a non-existent tablespace, like USE_A_KEYWORD).
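
As an illustration of that trick, a DBTUNE configuration might contain fragments like the ones below (typically exported with `sdedbtune -o export`, edited, and re-imported with `sdedbtune -o import`). The keyword name GIS_MEDIUM and its tablespace are assumptions; only the "DEFAULTS points at a non-existent tablespace" idea comes from the post:

```
##DEFAULTS
# Point DEFAULTS at a tablespace that does not exist, so any load
# that omits an explicit configuration keyword fails immediately
B_STORAGE    "TABLESPACE USE_A_KEYWORD"
A_STORAGE    "TABLESPACE USE_A_KEYWORD"
END

##GIS_MEDIUM
# A real keyword users pass when loading medium-sized data,
# steering it into the appropriate data tablespace
B_STORAGE    "TABLESPACE GIS_MEDIUM"
A_STORAGE    "TABLESPACE GIS_MEDIUM"
END
```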


Sorry, but I don't understand what you mean.

There's a heap of documentation on DBTUNE; you really need to understand it to be a good
ArcSDE administrator.

- V
New Contributor II
You can, but you shouldn't.


That's a common beginner mistake -- it's a great way to maximize fragmentation and to ensure
the worst possible performance.

If you use the system I recommended, then you only need to train your users (who absolutely
should know how large the data is at the time they load it) how to use the DBTUNE keywords
to load the data.  It helps if you make it so not using keywords always fails immediately
(by having the DEFAULTS keyword refer to a non-existent tablespace, like USE_A_KEYWORD).


There's a heap of documentation on DBTUNE; you really need to understand it to be a good
ArcSDE administrator.

- V


OK, thanks.
About keywords, I updated only GEOMETRY_STORAGE, setting it to SDO_GEOMETRY, because all my data must be stored as Oracle Spatial geometry.

Now, my users will load their own data using the "Import Feature Class" tool.

I created a user AAA (with a new tablespace TBS_AAA) and loaded a feature class. I checked the size of TBS_AAA (in Oracle Enterprise Manager) and its free space is decreasing. Is ArcSDE working correctly? Do I need to change other keywords?

Giuseppe
Occasional Contributor III
If your tablespace TBS_AAA is running out of free space, then increase its size by a large enough amount, depending on your Oracle data file size and available disk space.

Your analysts should advise how much space their data is going to need, as Vince mentioned.

Remark: you can enable AUTOEXTEND on your tablespace's data files, which automatically increases their size up to a maximum data file size that depends on the block size of the OS where Oracle resides.
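
For reference, enabling autoextend on an existing data file looks roughly like this (the file path, increment, and cap are placeholders):

```sql
-- Let the data file grow in 100M increments up to a hard cap,
-- instead of pre-allocating one large fixed size up front
ALTER DATABASE DATAFILE '/u02/oradata/orcl/tbs_aaa01.dbf'
  AUTOEXTEND ON NEXT 100M MAXSIZE 8G;
```

Setting an explicit MAXSIZE keeps a runaway load from filling the disk.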

Personally, I enable autoextend ONLY for specific tablespaces, where business needs dictate it for applications whose data grows continuously.