I'm getting charged per GB for SQL Server storage, so I have an incentive to use just as much storage as I need and no more. There are enough GB involved to create an attractive benefit to using less storage. I immediately think, "shrink the database." Then I think, "uh oh."
I notice that the Esri help is silent on the subject:
While the Microsoft help is not:
So... have you ever done this successfully with an enterprise geodatabase in SQL Server? Any gotchas, hints, suggestions, soothing sounds, pointing and laughing?
Thanks,
tim
@John_Spence, are you saying there's a geoprocessing tool to release unused space from within the Esri ecosystem, or are you saying to compress the DB and immediately shrink using the tools in SQL Server administrator?
Do not shrink the DB using the SQL Server admin tools or commands. You will cause yourself more headache than it is worth. Performance problems with your DB are usually due to bad indexes and stats, garbage in the metadata, configuration issues, or not following best practices in some form or shape. Obviously, all things we all (generally) deal with on a day-to-day basis.
If you are running into issues, I would...
Hope this helps.
Thank you for clarifying.
It sounds like I might have to contact Esri tech support to find a path forward. The DB gets compressed and reindexed every morning, I don't record GP history on any of my regular scripts (and I delete it periodically just to be sure), and the log file isn't too scary. My big culprit is an archive class that I don't need (but which exists anyway because the table is related to a FC that does need archiving). That archive class had ballooned to 140 GB. I deleted it the other day, but the DB is still tying up all that space. The Shrink workflow you link to seems to only apply to SQL Server Express, as I can't find equivalent tools for my SQL Server data connections.
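(For anyone wanting to confirm how much allocated-but-unused space is sitting in the data files before deciding whether a shrink is worthwhile, a query along these lines works; this is a sketch to run while connected to the geodatabase in question, since FILEPROPERTY only sees the current database's files:)

```sql
-- Compare allocated size vs. actual used space per database file.
-- size is reported in 8 KB pages, so divide by 128 to get MB.
SELECT
    name AS file_name,
    size / 128 AS allocated_mb,
    CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128 AS used_mb,
    (size - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)) / 128 AS free_mb
FROM sys.database_files;
```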
Oh geez! You're right on the nose there. My bad.
I have a similar situation on my end with regards to archiving. I just dealt with it by disabling the relationship class (during maintenance), dropping the archiving, and then turning it back on, followed by a rebuild of the relationship class. Cleans it up and keeps it smaller long term.
No worries! If I'm understanding correctly you do the following:
and this gets rid of the unused space?
Hi @John_Spence & @jsarthur,
Good discussion above. Under my standing "I really don't know anything" principle, I'd like to know more about the hugeness of the shrinking eGDB no-no. If you can point me at some help documentation, technical articles, blogs, etc. that might be easiest. Just chiming in with a few items:
Cheers,
tim
Good morning Tim,
Caught me at a good time lamenting my choice of which Crypto asset to get into. LOL.
Give me a call when you get a sec (noticed you are local). Details are in your LinkedIn messaging.
Well, the scale and scope of what you are dealing with does change the discussion a bit. From the sounds of it, you are working in a cloud situation too, which makes things all that much more fun.
600M record changeouts? YES, please by all means shrink that database. You really cannot afford to do otherwise given the volume of data movement. Take care to rebuild all indexes when you are complete, though, as that volume of movement and shrinkage will have an impact on the indexes. The query below will rebuild the indexes for the entire database afterwards, without having to go one by one or use the standard Esri tools (Python or toolbox).
DECLARE @Database NVARCHAR(255)
DECLARE @Table NVARCHAR(255)
DECLARE @cmd NVARCHAR(1000)

DECLARE DatabaseCursor CURSOR READ_ONLY FOR
SELECT name FROM master.sys.databases
--WHERE name NOT IN ('master','msdb','tempdb','model','distribution') -- databases to exclude
WHERE name IN ('Carta') -- use this to select specific databases and comment out line above
AND state = 0 -- database is online
AND is_in_standby = 0 -- database is not read only for log shipping
ORDER BY 1

OPEN DatabaseCursor
FETCH NEXT FROM DatabaseCursor INTO @Database
WHILE @@FETCH_STATUS = 0
BEGIN
    -- create table cursor
    SET @cmd = 'DECLARE TableCursor CURSOR READ_ONLY FOR SELECT ''['' + table_catalog + ''].['' + table_schema + ''].['' + table_name + '']'' as tableName FROM [' + @Database + '].INFORMATION_SCHEMA.TABLES WHERE table_type = ''BASE TABLE'''
    EXEC (@cmd)

    OPEN TableCursor
    FETCH NEXT FROM TableCursor INTO @Table
    WHILE @@FETCH_STATUS = 0
    BEGIN
        BEGIN TRY
            SET @cmd = 'ALTER INDEX ALL ON ' + @Table + ' REBUILD'
            --PRINT @cmd -- uncomment if you want to see commands
            EXEC (@cmd)
        END TRY
        BEGIN CATCH
            PRINT '---'
            PRINT @cmd
            PRINT ERROR_MESSAGE()
            PRINT '---'
        END CATCH
        FETCH NEXT FROM TableCursor INTO @Table
    END

    CLOSE TableCursor
    DEALLOCATE TableCursor
    FETCH NEXT FROM DatabaseCursor INTO @Database
END
CLOSE DatabaseCursor
DEALLOCATE DatabaseCursor
As for shrinking that database, here is the right spot to read about it; though it only speaks to SQL Server Express, the same rules apply.
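(The SQL Server side of that shrink comes down to DBCC SHRINKFILE / DBCC SHRINKDATABASE. A sketch, where the logical file name 'Carta_Data' is a placeholder and the 10 percent free-space target is an assumption; substitute your own names and targets:)

```sql
-- Release unused space at the end of a file back to the OS without
-- moving any pages (fast, and does not fragment indexes):
DBCC SHRINKFILE (N'Carta_Data', TRUNCATEONLY);

-- Or shrink the whole database, leaving 10 percent free space.
-- This moves pages and fragments indexes:
DBCC SHRINKDATABASE (N'Carta', 10);
```

Either way, rebuild the indexes afterwards (the cursor script above does that for the whole database), since a page-moving shrink leaves them heavily fragmented.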
Since your databases are typically used for OLAP, have you considered data warehousing options like Snowflake, etc.? Pro has some options for that now, and you would get a significant performance improvement from making that switchover.
We have had no problems using the native SQL Server shrink (when needed).