Why you should not stick to SQL Server’s default Initial file size and autogrowth
We hear statements like these a lot:
- My SQL Server is running slow
- My production DB was fine when we started, but it is staggeringly slow now
- My business end users are frustrated at having to wait so long
Well, there are lots of reasons why your SQL Server might be slow. Leaving the autogrowth option at its default is definitely one of the important ones, and one we seem to ignore most of the time.
SQL Server ships with default autogrowth settings when you install it for the first time: an increment of a fixed number of megabytes (1 MB on older versions, 64 MB on newer ones) or a percentage (10%).
You need to change these to suit your own needs. For a small application the defaults might work, but as soon as your system grows, you will feel the impact more and more often.
What happens when a SQL Server file needs more space
- SQL Server requests more space from the operating system
- The operating system takes the request and makes the SQL request wait until it finishes the job
- The operating system allocates the space for the grown SQL Server file
- Your SQL request can then resume its work
So you can see very easily that the more frequently you need space, the more overhead you incur.
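If you want to see how often autogrowth is actually firing on your instance, one option is to read the autogrowth events recorded by the default trace (which is enabled by default). This is only a rough sketch; event class 92 is Data File Auto Grow and 93 is Log File Auto Grow:
-- Recent autogrowth events captured by the default trace
-- (event class 92 = Data File Auto Grow, 93 = Log File Auto Grow)
DECLARE @tracefile NVARCHAR(260);
SELECT @tracefile = path FROM sys.traces WHERE is_default = 1;

SELECT t.DatabaseName,
       t.FileName,
       t.StartTime,
       t.Duration / 1000 AS duration_ms   -- Duration is reported in microseconds
FROM sys.fn_trace_gettable(@tracefile, DEFAULT) AS t
WHERE t.EventClass IN (92, 93)
ORDER BY t.StartTime DESC;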
How can you tell whether your database is facing this autogrowth issue?
By analyzing the wait types in SQL Server. If you find that your database is facing more and more preemptive-type waits (preemptive simply means the work is handed off to the OS itself), you need to investigate why your SQL Server is yielding to the server OS. The following query can help you find the current wait types your SQL Server is facing; look for the ones with preemptive waits.
SELECT dm_ws.wait_duration_ms,
dm_ws.wait_type,
dm_es.status,
dm_t.TEXT,
dm_qp.query_plan,
dm_ws.session_ID,
dm_es.cpu_time,
dm_es.memory_usage,
dm_es.logical_reads,
dm_es.total_elapsed_time,
dm_es.program_name,
DB_NAME(dm_r.database_id) DatabaseName,
-- Optional columns
dm_ws.blocking_session_id,
dm_r.wait_resource,
dm_es.login_name,
dm_r.command,
dm_r.last_wait_type
FROM sys.dm_os_waiting_tasks dm_ws
INNER JOIN sys.dm_exec_requests dm_r ON dm_ws.session_id = dm_r.session_id
INNER JOIN sys.dm_exec_sessions dm_es ON dm_es.session_id = dm_r.session_id
CROSS APPLY sys.dm_exec_sql_text (dm_r.sql_handle) dm_t
CROSS APPLY sys.dm_exec_query_plan (dm_r.plan_handle) dm_qp
WHERE dm_es.is_user_process = 1
GO
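The query above shows what is waiting right now. If you also want to see how much preemptive waiting has accumulated since the last restart, a complementary sketch against sys.dm_os_wait_stats can help; filtering on PREEMPTIVE_OS_% is just one convenient way to narrow the list:
-- Cumulative preemptive (OS-level) waits since the last service restart
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PREEMPTIVE_OS_%'
ORDER BY wait_time_ms DESC;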
If you have only a few databases, you can simply go to the database in SSMS -> right-click -> Properties -> Files, and check the initial file size and autogrowth values.
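If you prefer T-SQL over the GUI, a quick sketch against sys.database_files (run in the context of the database you care about) shows the same information; size and growth are stored in 8 KB pages unless the growth is a percentage:
-- Current size and autogrowth settings for the files of the current database
-- (size and growth are in 8 KB pages; growth is a percentage when is_percent_growth = 1)
SELECT name             AS logical_name,
       physical_name,
       type_desc,
       size * 8 / 1024  AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS VARCHAR(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS VARCHAR(10)) + ' MB'
       END              AS growth_setting,
       max_size
FROM sys.database_files;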
If you have lots of databases, you can use the following script to find them all at once.
-- Drop temporary table if it exists
IF OBJECT_ID('tempdb..#info') IS NOT NULL
    DROP TABLE #info;

-- Create table to house database file information
CREATE TABLE #info (
    databasename VARCHAR(128)
    ,name        VARCHAR(128)
    ,fileid      INT
    ,filename    VARCHAR(1000)
    ,filegroup   VARCHAR(128)
    ,size        VARCHAR(25)
    ,maxsize     VARCHAR(25)
    ,growth      VARCHAR(25)
    ,usage       VARCHAR(25));

-- Get database file information for each database
SET NOCOUNT ON;
INSERT INTO #info
EXEC sp_MSforeachdb 'use ?
select ''?'', name,
    fileid, filename,
    filegroup = filegroup_name(groupid),
    ''size'' = convert(nvarchar(15), convert(bigint, size) * 8) + N'' KB'',
    ''maxsize'' = (case maxsize when -1 then N''Unlimited''
        else convert(nvarchar(15), convert(bigint, maxsize) * 8) + N'' KB'' end),
    ''growth'' = (case status & 0x100000 when 0x100000 then convert(nvarchar(15), growth) + N''%''
        else convert(nvarchar(15), convert(bigint, growth) * 8) + N'' KB'' end),
    ''usage'' = (case status & 0x40 when 0x40 then ''log only'' else ''data only'' end)
from sysfiles
';

-- Identify database files that use the default auto-grow properties
SELECT databasename AS [Database Name]
    ,name     AS [Logical Name]
    ,filename AS [Physical File Name]
    ,growth   AS [Auto-grow Setting]
FROM #info
WHERE (usage = 'data only' AND growth = '1024 KB')
   OR (usage = 'log only' AND growth = '10%')
ORDER BY databasename;

-- Get rid of temp table
DROP TABLE #info;
What to Do to avoid this?
Every environment has a maintenance window; it may come once a week or once every two weeks. Calculate how much file growth (mdf, ldf, tempdb) occurs during that one or two weeks (the time before you get the maintenance window back) and set the increased initial file size during the maintenance window.
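For example, if you estimate that a data file needs roughly another 20 GB to get through to the next maintenance window, you can grow it up front during the current window. The database name, logical file names, and sizes below are placeholders:
-- Pre-grow the files during the maintenance window so autogrowth does not fire in between
-- (YourDB, YourDB_data, YourDB_log and the sizes are placeholders; SIZE can only be increased)
ALTER DATABASE YourDB
    MODIFY FILE (NAME = YourDB_data, SIZE = 20480MB);

ALTER DATABASE YourDB
    MODIFY FILE (NAME = YourDB_log, SIZE = 4096MB);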
Using Microsoft's VLF-Fix Script
If your application has been running for some time and the log file has grown during that time, you can use Microsoft's GitHub script for fixing Virtual Log Files (VLFs).
The script gets an overview of the current VLF status of all databases on a given instance and, if the number of VLFs is above a predetermined threshold, also suggests how many VLFs that particular database should have and how large they should be.
In the output, you will find the following information:
- The database name;
- The transaction log current size and the size it will be after applying suggested changes. Both in MB;
- The current number of VLFs and the number of VLFs that will remain after applying suggested changes;
- The number of growth iterations necessary to get to the suggested size;
- The transaction log initial size and the autogrowth size that should be set.
In addition, a script is generated with typical example steps to deal with the issue, depending on whether the database is in the simple recovery model or not.
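Before running Microsoft's script, you may want a quick count of VLFs per database. On SQL Server 2016 SP2 and later, a sketch like the following, using sys.dm_db_log_info, does the job (older versions can use DBCC LOGINFO instead):
-- Number of virtual log files (VLFs) per database
-- (sys.dm_db_log_info requires SQL Server 2016 SP2 or later)
SELECT d.name   AS database_name,
       COUNT(*) AS vlf_count
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY d.name
ORDER BY vlf_count DESC;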
Once you know what initial size and autogrowth increment are optimal for you, you can use a script like the following to set them when creating a database:
USE MASTER;
GO
CREATE DATABASE AnotherDB
ON
( NAME = AnotherDB_data,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQL2008R2\MSSQL\DATA\AnotherDB.mdf',
SIZE = 125MB,
MAXSIZE = 500MB,
FILEGROWTH = 25MB )
LOG ON
( NAME = AnotherDB_log,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQL2008R2\MSSQL\DATA\AnotherDB.ldf',
SIZE = 5MB,
FILEGROWTH = 5MB ) ;
GO
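CREATE DATABASE only helps for new databases. For an existing database, the same settings can be changed with ALTER DATABASE ... MODIFY FILE; the database and logical file names below reuse the example above and the values are placeholders (note that only one file property can be changed per statement):
-- Adjust initial size and autogrowth increment on an existing database
-- (names and values are placeholders; SIZE can only be increased, never decreased this way)
ALTER DATABASE AnotherDB
    MODIFY FILE (NAME = AnotherDB_data, SIZE = 125MB);
ALTER DATABASE AnotherDB
    MODIFY FILE (NAME = AnotherDB_data, FILEGROWTH = 25MB);
ALTER DATABASE AnotherDB
    MODIFY FILE (NAME = AnotherDB_log, FILEGROWTH = 25MB);
GO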
Create A Monitoring Job to Alert You for Autogrowth
The following is a simple monitoring script, which you can schedule as a job, that alerts you when log usage hits a predefined maximum before autogrowth kicks in.
SELECT instance_name,
       [Data File(s) Size (KB)],
       [Log File(s) Size (KB)],
       [Log File(s) Used Size (KB)],
       [Percent Log Used]
INTO ##Logsize
FROM
(
    SELECT *
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN
    (
        'Data File(s) Size (KB)',
        'Log File(s) Size (KB)',
        'Log File(s) Used Size (KB)',
        'Percent Log Used'
    )
    AND instance_name = 'database you are interested in'
) AS Src
PIVOT
(
    MAX(cntr_value)
    FOR counter_name IN
    (
        [Data File(s) Size (KB)],
        [Log File(s) Size (KB)],
        [Log File(s) Used Size (KB)],
        [Percent Log Used]
    )
) AS pvt;
GO

DECLARE @logsize INT;
SELECT @logsize = [Percent Log Used] FROM ##Logsize;

-- Compare against the maximum percentage you want the log to fill to, e.g. 90
IF @logsize > 90
BEGIN
    -- Do your thing here: send an alert (see the Database Mail sketch below), pre-grow the file, etc.
    PRINT 'Log usage is above the threshold';
END

DROP TABLE ##Logsize;
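As an example of what the "do your thing" step could be, the following sketch sends an email through Database Mail. It assumes Database Mail is already configured and that a profile named 'DBA_Alerts' exists; both the profile and the recipient address are placeholders:
-- Hypothetical alert action for the IF block above
-- (assumes Database Mail is configured; 'DBA_Alerts' and the recipient are placeholders)
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DBA_Alerts',
    @recipients   = 'dba-team@example.com',
    @subject      = 'Transaction log usage above threshold',
    @body         = 'Log usage has crossed 90%. Consider pre-growing the file before autogrowth fires.';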
You can remove the "AND instance_name = 'database you are interested in'" filter so the query returns all databases. For smooth performance in SQL Server, we need to make sure that the autogrowth function is not called before our maintenance window comes along. Along with default autogrowth, also see some bad practices to avoid in SQL Server. In addition, when the autogrowth function does get called, make sure Instant File Initialization is in use.
Instant File Initialization
For the autogrowth event to be as smooth as possible, we need to set up one more thing: instant file initialization. Instant file initialization allows you to perform the following operations quickly:
- Creating a database
- Adding data or log files to an existing database
- Increasing the size of existing data files (mdf/ndf), including tempdb; log files always have to be zero-initialized, so ldf growth does not benefit
- Restoring a database
How to enable instant file initialization
Run secpol.msc (Local Security Policy) in Windows. Then:
- Expand the Local Policies Folder
- Click on User Rights Assignment
- Go down to the “Perform Volume Maintenance Tasks” option and double click it
- Add your SQL Server service account, click OK out of the dialog, and restart the SQL Server service so the new privilege takes effect
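Once the service has been restarted, you can verify that the setting took effect by checking the startup messages in the SQL Server error log, or on newer versions (SQL Server 2016 SP1 and later) by querying sys.dm_server_services:
-- Check whether instant file initialization is enabled for the database engine service
-- (the instant_file_initialization_enabled column is available on SQL Server 2016 SP1 and later)
SELECT servicename,
       service_account,
       instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';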