BLOG
1. Oracle Tablespace Compression for OLTP is enabled:
ALTER TABLESPACE GIS_BDATA DEFAULT COMPRESS FOR OLTP;

2. Pro: load the feature class into the geodatabase.

3. Oracle SQL Developer: check that the table is compressed.
--check if table is compressed
select table_name, tablespace_name, compression
from user_tables
order by table_name;

4. Pro: add Global IDs and enable Editor Tracking for the feature class.

5. Trying to enable Branch Versioning (aka Register as Versioned) on the feature class is expected to fail because the table is compressed.
Pro error message: "Table is not suitable candidate for versioning [Table GIS.FIREZONES is compressed, and so cannot be registered as branch versioned : ORA-39726: unsupported add/drop column operation on compressed tables]"
We cannot register the data as branch versioned while the table has Oracle compression enabled. Branch versioning needs to create an Oracle identity column, and a column of the identity type cannot be added while the table has Oracle compression enabled (ORA-39726: unsupported add/drop column operation on compressed tables). If the table is decompressed, the identity column can be created, and then the table can have Oracle compression enabled again.

6. The workaround is to use SQL to decompress the table:
alter table gis.FIREZONES move nocompress parallel 8;

7. Decompressing the table leaves its indexes invalid, because the indexes still reference the table segment that was compressed; they must be rebuilt to reference the table that is now decompressed. Find them with:
SELECT * FROM USER_INDEXES d WHERE d.status NOT IN ('VALID','N/A');

8. Rebuild the invalid indexes:
--rebuild invalid indexes
select 'alter index '||ui.table_owner||'.'||ui.index_name||' rebuild online parallel 8;'
from user_indexes ui
WHERE ui.status NOT IN ('VALID','N/A') AND ui.index_name NOT LIKE '%_IX1'
order by ui.table_owner,ui.table_name,ui.index_name;
--spatial indexes '%_IX1' cannot be rebuilt online
select 'alter index '||ui.table_owner||'.'||ui.index_name||' rebuild parallel 8;'
from user_indexes ui
WHERE ui.status NOT IN ('VALID','N/A') AND ui.index_name LIKE '%_IX1'
order by ui.table_owner,ui.table_name,ui.index_name;

Generated statements, for example:
alter index GIS.R6517_SDE_ROWID_UK rebuild online parallel 8;
alter index GIS.UUID_6517 rebuild online parallel 8;
alter index GIS.A865_IX1 rebuild parallel 8;

9. Check again for invalid indexes using the SQL in step 7.

10. Pro: enable Branch Versioning (aka Register as Versioned) for the feature class. It works fine this time because the table is decompressed.

11. Compress the table again:
alter table gis.FIREZONES move compress parallel 8;

12. Compressing the table invalidates the indexes again (see step 7).

13. Rebuild the invalid indexes (see step 8):
alter index GIS.GDB_CT1_6517 rebuild online parallel 8;
alter index GIS.GDB_CT2_6517 rebuild online parallel 8;
alter index GIS.GDB_CT3_6517 rebuild online parallel 8;
alter index GIS.R6517_SDE_ROWID_UK rebuild online parallel 8;
alter index GIS.UUID_6517 rebuild online parallel 8;
alter index GIS.A865_IX1 rebuild parallel 8;

14. Check again for invalid indexes using the SQL in step 7.

Case A: do not decompress the table prior to disabling Branch Versioning (aka Unregister as Versioned)

15. Pro: disable Branch Versioning works fine. Remember, the table was compressed when you disabled Branch Versioning.

16. Follow the previous steps to decompress the table and rebuild the indexes again.

17. Pro: trying to re-enable Branch Versioning (aka Register as Versioned) now fails.
Pro error message: "Unique indexes cannot be specified for multiversion tables [Unique index GDB_CT3_6517 found while enabling multibranch for GIS.FIREZONES]"
The feature class is now in a state where Branch Versioning cannot be re-enabled. Deleting the unique indexes on the feature class only leads to other ORA-xxxx errors; it never fixes the problem manually.
Note: !!! this problem was fixed in Pro 3.0 !!! If you have Pro 2.9, use the workaround below (Case B) !!!

Case B: decompress the table prior to disabling Branch Versioning (aka Unregister as Versioned)

18. Pro: delete the feature class and repeat steps 1-13 above to load the feature class, disable compression, enable Branch Versioning, and re-enable compression.

19. Before disabling Branch Versioning (aka Unregister as Versioned), first decompress the table (steps 6, 7, 8, 9).

20. Pro: disable Branch Versioning works fine. Remember, the table was decompressed when you disabled Branch Versioning.

21. Re-compress the table, rebuild the invalid indexes, check that the table is compressed, check for invalid indexes.

22. Decompress the table, rebuild the invalid indexes, check that the table is decompressed, check for invalid indexes.

23. Pro: enable Branch Versioning (aka Register as Versioned) works fine!

24. Re-compress the table, rebuild the invalid indexes, check that the table is compressed, check for invalid indexes.
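After each MOVE/rebuild cycle above, the table's compression state and the health of its indexes can be verified in one pass. A minimal sketch, assuming the data owner is connected and FIREZONES is the example feature class from the steps above:

```sql
--one-shot check: table compression state plus the status of each of its indexes
select t.table_name, t.compression, i.index_name, i.status
from user_tables t
left join user_indexes i
  on i.table_name = t.table_name
where t.table_name = 'FIREZONES'
order by i.index_name;
```

Any index whose status is not VALID (or N/A) still needs a rebuild before proceeding.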
10-13-2021 09:41 AM

BLOG
Hello Paulina - @Paulakar , The best practice is for the Data Reviewer workspace to be stored in its own separate enterprise geodatabase, without any other schema in that geodatabase. The recommendation is for each product workspace (Data Reviewer, Workflow Manager, Product Library, NIS) to have its own enterprise geodatabase, and if the NIS schema has different scales (e.g. 1:100K, 1:250K), then each scale shall have its own enterprise geodatabase as well. This makes the enterprise geodatabase deployment more scalable, more flexible to manage, and easier to upgrade, migrate, and configure; it also enhances security and makes permissions easy to manage. The list of benefits goes on, but these are the main ones. The physical design of the enterprise geodatabases also depends on the size of the data, the number of users, the type of infrastructure, the hardware, the GIS workflows, etc. The best practices above can be applied to small, medium, large, and very large enterprise geodatabase deployments and will bring the many benefits I have described. I hope this clarifies. Thanks, | Marcelo Marques | Esri PS Products | Principal Product Engineer | | Cloud & Database Administrator | OCP – Oracle Certified Professional | | Esri | 380 New York St | Redlands, CA 92373 | USA | THE SCIENCE OF WHERE ®
05-19-2021 09:04 AM

BLOG
Hello Paulina - @Anonymous User , I appreciate you reaching out.

Question: Could you elaborate on this statement? What is the reasoning behind this recommendation?

[Marcelo]: Performance and maintenance are the key words here! Each product workspace must have its own separate enterprise geodatabase. Performance improves because each geodatabase has its own datafiles on disk, so I/O improves as a consequence. Maintenance tasks also become easier and faster: gathering new statistics, rebuilding indexes, and running sde compress. Each geodatabase has its own versioning workflow that does not interfere with other GIS data and product workspaces. It also facilitates future geodatabase repository upgrades, product workspace upgrades (Workflow Manager, Data Reviewer, Product Library), and future RDBMS migrations and upgrades. More on maintenance: over time, after intense OLTP workloads (inserts, updates, deletes), the statistics get stale and the indexes fragmented, so the RDBMS administrator must gather new statistics, run sde compress, gather statistics again, rebuild indexes, and gather statistics once more; this needs to be done often to maintain geodatabase performance. Having separate geodatabases for each product workspace, and for GIS data by cartographic scale, significantly improves these tasks, and has an even larger positive impact on performance for very large enterprise geodatabases.

Question: I'm particularly interested in the Data Reviewer workspace. What are the negative consequences of having the Reviewer workspace in a geodatabase together with other schemas?

[Marcelo]: Poor performance; RDBMS and geodatabase maintenance tasks that are difficult to manage and perform; more difficult upgrades of the geodatabase repository and the Reviewer workspace repository; more RDBMS datafile fragmentation, because all product workspaces and GIS data live in the same geodatabase and the same datafiles, which leads to poor performance and poor disk I/O; the need to perform maintenance tasks more often than normal for some schemas than others; and difficulty implementing different geodatabase versioning workflows for each schema, etc. I hope this clarifies. Thanks, | Marcelo Marques | Esri PS Products | Principal Product Engineer | | Cloud & Database Administrator | OCP – Oracle Certified Professional | | Esri | 380 New York St | Redlands, CA 92373 | USA | THE SCIENCE OF WHERE ®
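The recurring maintenance cycle described above (statistics, sde compress, index rebuilds) can be sketched in Oracle terms. This is a sketch only: the GIS schema name is an example, and the sde compress step runs from ArcGIS as the sde user, not from SQL:

```sql
--routine geodatabase maintenance cycle (sketch; GIS is an example schema)
--1. gather fresh optimizer statistics for the data owner schema
exec dbms_stats.gather_schema_stats(ownname => 'GIS', cascade => TRUE);
--2. run sde compress from ArcGIS Pro/ArcCatalog connected as the sde user
--3. gather statistics again, then rebuild fragmented indexes, e.g.:
--   alter index GIS.<index_name> rebuild online;
--4. gather statistics once more
exec dbms_stats.gather_schema_stats(ownname => 'GIS', cascade => TRUE);
```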
05-18-2021 09:12 AM

BLOG
---------------------------------------------------------------------------------------------------
How to move and upgrade a WMX repo eGDB?
---------------------------------------------------------------------------------------------------
1. Use a database backup (recommended). It is the easiest way to move the geodatabase, especially if the geodatabase is quite large!
or
2. Copy/paste the WMX schema. Create a new eGDB, then use ArcCatalog or Pro to copy/paste the WMX schema.
or
3. WMX Administrator export/import of the WMX configuration. Create a new eGDB, then use the WMX Administrator to export/import the WMX configuration. If you want to keep the jobs data, instead use the "Export Job Data" GP tool.
ArcMap WMX Classic: https://desktop.arcgis.com/en/arcmap/latest/tools/workflow-manager-toolbox/export-job-data.htm
Pro WMX: https://pro.arcgis.com/en/pro-app/latest/tool-reference/workflow-manager/export-job-data.htm
----------------------------------------------------------------------------------------------------------------------------------
Example: Migrate and Upgrade the WMX eGDB to a new SQL Server Instance
----------------------------------------------------------------------------------------------------------------------------------
OLD DB Server: Windows Server 2012 R2, SQL Server 2012 SP3, Geodatabase Repo 10.5.1, WMX Repo 10.5.1
NEW DB Server: Windows Server 2019, SQL Server 2019, Geodatabase Repo 10.7.1, WMX Repo 10.7.1
System Requirements: https://desktop.arcgis.com/en/system-requirements/latest/database-requirements-sqlserver.htm
Install the latest SQL Server Cumulative Update: https://docs.microsoft.com/en-us/sql/database-engine/install-windows/latest-updates-for-microsoft-sql-server?view=sql-server-ver15
Best Practices: https://community.esri.com/t5/mapping-and-charting-enterprise/mapping-and-charting-solutions-mcs-enterprise-databases-best/ba-p/893351
- The sde repo and WMX repo versions need to match!
- One gdb for the WMX repo and another gdb for the GIS data!
----------------------------------------------------------------------------------------------------------------------------------
Steps:
----------------------------------------------------------------------------------------------------------------------------------
Note: use ArcCatalog 10.7.1
1. Restore the SQL Server database backup in the new SQL Server instance.
- Note: do not change the SQL Server database name, or the sde repo breaks.
- Remap the database users to the external logins.
2. Change the SQL Server database compatibility level if it differs from the old server.
3. Create new sde connection files.
4. Upgrade the sde repo.
- Grant sde db_owner.
- ArcCatalog / Pro: upgrade the sde repo. Note: use a connection file to connect as the "sde" user.
- Revoke sde db_owner.
5. Upgrade the WMX repo.
- ArcCatalog / Pro: "Upgrade Workflow Database" GP tool. Note: use a connection file to connect as the data owner user of the WMX repo.
6. Refresh permissions for editor/viewer users, if necessary.
7. Rebuild indexes.
8. Gather new statistics.
9. If the geodatabase has GIS data, run sde compress from ArcCatalog connected as the sde user, and repeat steps 7 and 8.
10. ArcCatalog WMX repo connection (jtc).
- Use the WMX editor user, not the WMX data owner user, to connect to the WMX repo with the WMX Administrator and the WMX app: https://desktop.arcgis.com/en/arcmap/latest/extensions/workflow-manager/granting-privileges-to-the-workflow-manager-workspace-components-in-sql-server.htm
- ArcCatalog: set the new WMX connection (jtc): https://desktop.arcgis.com/en/arcmap/latest/extensions/workflow-manager/adding-wmx-database-arccatalog.htm
- WMX Administrator: try to connect to the new WMX repo.
- WMX Administrator > Security > Users: open the "Post Install" user. Your Windows domain login shall be mapped to the Post Install user after the "Upgrade Workflow Database" GP tool has executed. Note: you can add other domain login users and grant the appropriate roles.
11. WMX Administrator: reconfigure to use the new WMX repo sde gdb.
- Geodatabase > Data Workspaces: https://desktop.arcgis.com/en/arcmap/latest/extensions/workflow-manager/adding-data-workspace-connections-using-the-workflow-manager-administrator.htm
- Maps: https://desktop.arcgis.com/en/arcmap/latest/extensions/workflow-manager/maps-and-versioning.htm Note: you might need to export the map, open it in ArcMap, re-source the layers to point to the new gdb, save the mxd, and then update the map in the WMX repo.
- External properties tables: these shall remain the same, see the note below.
- Queries / reports: these shall remain the same, see the note below.
- Job types: make sure each job type is using the correct data workspace and the configuration is fine.
- Workflows: check the job workflows and make the necessary adjustments.
Note: the extended properties configuration does not change as long as the SQL Server data owner user of the WMX repo remains the same, meaning the fully qualified object names do not change, e.g. db.owner.table.
12. Rebuild indexes, gather new statistics, take a full backup.
13. Open the WMX application, try to create a new job, and test as needed.
---------------------------------------------------------------------------------------------------
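Steps 1-2 and 4 above can be sketched in T-SQL. This is a sketch only: the database name, file names, and paths are examples, not the actual environment:

```sql
--step 1 sketch: restore the backup on the new instance, keeping the database name
--(wmx_repo and the file paths below are example names only)
RESTORE DATABASE wmx_repo
  FROM DISK = N'D:\backup\wmx_repo.bak'
  WITH MOVE N'wmx_repo'     TO N'D:\data\wmx_repo.mdf',
       MOVE N'wmx_repo_log' TO N'E:\log\wmx_repo_log.ldf',
       RECOVERY;
--step 2 sketch: match the compatibility level to SQL Server 2019
ALTER DATABASE wmx_repo SET COMPATIBILITY_LEVEL = 150;
--step 4 sketch: temporarily grant db_owner to sde for the repo upgrade, then revoke
ALTER ROLE db_owner ADD MEMBER sde;
-- ... upgrade the geodatabase from ArcCatalog/Pro connected as sde ...
ALTER ROLE db_owner DROP MEMBER sde;
```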
04-01-2021 10:46 AM

BLOG
What is Egdbhealth? Using Egdbhealth to Evaluate a Geodatabase
03-25-2021 03:17 PM

BLOG
How to Load a Large Featureclass in a SQL Server Geodatabase. Read more here: How to Load a Large Featureclass in a SQL Server Geodatabase For more best practices see: Mapping and Charting Solutions (MCS) Enterprise Databases Best Practices
12-03-2020 10:46 AM

BLOG
How to Load a Large Featureclass in an Oracle Geodatabase. Read more here: How to Load Large Featureclass Oracle Geodatabase For more best practices see: Mapping and Charting Solutions (MCS) Enterprise Databases Best Practices
12-03-2020 10:41 AM

BLOG
MCS Enterprise Geodatabases Best Practices Intended for database administrators as recommendations for establishing the product workspaces* in an Enterprise Geodatabase (Oracle®, SQL Server®, PostgreSQL®). *Production Mapping, Data Reviewer, Workflow Manager
11-16-2020 10:50 AM

POST
Yes, it is expected and known that data takes more disk space when it is loaded into SDE. You can improve performance using SQL Server page compression: create the empty feature class schema, enable page compression on the tables and indexes, then load the data using the ArcCatalog Simple Data Loader. You can also use a custom SDE DBTUNE to separate tables and indexes into different filegroups; large feature classes like "contours" can have their own SDE DBTUNE keyword with their own filegroups/datafiles, completely separate from the rest of the data. You still need to tune, and maybe rebuild, the spatial index after you load the data to make sure you get the best drawing performance possible. Another thing to consider is table and index partitioning: you can implement SQL Server partitioning to improve performance. Also, when loading data into large feature classes, datafiles left in autogrow cause a lot of fragmentation, so consider running DBCC SHRINKDATABASE, or resizing the datafiles before you load the data, to avoid heavy fragmentation. All these best practices will help you improve disk I/O, but at the end of the day your database storage plays a crucial role in I/O. You might find that even after implementing the best practices the storage does not scale up, and you need to consider faster storage for your database datafiles. This can be determined by running SQL Profiler traces while the data is drawing in ArcGIS to find the slow queries; by analyzing the SQL query plan you can get more information about the storage performance. The version of SQL Server also plays an important role in performance: if you are using SQL Express, you cannot expect it to scale up with very large feature classes, and advanced features like SQL Server page compression require SQL Server Enterprise Edition. I hope this helps.
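The page compression advice above can be sketched in T-SQL. This is a sketch only: gis.CONTOURS is a hypothetical table name, and it assumes an edition of SQL Server that supports data compression:

```sql
--estimate the savings first (gis.CONTOURS is an example table)
EXEC sp_estimate_data_compression_savings
     @schema_name = 'gis', @object_name = 'CONTOURS',
     @index_id = NULL, @partition_number = NULL,
     @data_compression = 'PAGE';
--then enable page compression on the empty table and all of its indexes
--before loading the data
ALTER TABLE gis.CONTOURS REBUILD WITH (DATA_COMPRESSION = PAGE);
ALTER INDEX ALL ON gis.CONTOURS REBUILD WITH (DATA_COMPRESSION = PAGE);
```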
05-19-2020 11:50 AM

IDEA
Yes, in a file gdb you have no option; you have to use the ArcCatalog tools.
05-06-2020 02:05 PM

IDEA
You can make changes to the sequence in the RDBMS using SQL, for example increasing the sequence's cache value.
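For example, in Oracle the cache size of a sequence can be raised with a single statement. The sequence name below is hypothetical, used only to illustrate the syntax:

```sql
--increase the number of sequence values cached in memory (name is an example)
alter sequence GIS.R6517 cache 1000;
```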
05-06-2020 01:52 PM

IDEA
The best practice of separating tables and indexes applies to virtual and SAN storage as well. I talk more about this in my best practices guide books; check out Mapping & Charting Solutions (MCS) Enterprise Databases Best Practices. In particular, the section on DBTUNE configuration keywords and disk configuration might be of interest.
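As a rough illustration of what such a configuration looks like, a DBTUNE keyword can direct a feature class's business table and indexes into separate tablespaces. This fragment is a sketch only: the keyword, parameter selection, and tablespace names are examples, not a complete dbtune entry:

```
##CONTOURS
B_STORAGE      "TABLESPACE CONTOURS_BDATA"
B_INDEX_ROWID  "TABLESPACE CONTOURS_BINDEX"
B_INDEX_USER   "TABLESPACE CONTOURS_BINDEX"
END
```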
05-05-2020 12:14 PM

IDEA
I am a very experienced geodatabase administrator who has been using DBTUNE for over 20 years. This idea is to bring that forgotten knowledge back into the current documentation and to continue providing a dbtune template file, which is so useful for geodatabase administrators. I talk more about this here: "Mapping & Charting Solutions (MCS) Enterprise Databases Best Practices".
05-05-2020 12:08 PM