PowerShell script to move Failover Cluster Instances to their preferred node

Assume you have multiple SQL Server Instances spread over multiple Failover Cluster Nodes (Active/Active), and you want to move the SQL Instances back to their preferred nodes after maintenance (for example, Windows Updates) on the cluster.

You could choose to configure the built-in Failback option. However, this does not give you exact control over the failback time: it only allows you to specify a daily time window during which failback is allowed; specifying a single day is not possible.

So that was not good enough for me. I ended up writing the PowerShell script listed below. Please use at your own risk and test before deployment. Thanks to @sql_williamd, @retracement and @sqlfinn for pointing me in the right direction.

# PowerShell script to move Failover Cluster Instances to their preferred node
# By Hugo Tap
# This version only uses the first preferred node, feel free to improve this.
# It only does its work on the node which owns the Cluster Group.

Import-Module FailoverClusters
$computer = Get-Content env:computername
$computer = $computer.ToLower()

Get-ClusterGroup "Cluster Group" |
ForEach-Object {
    $owner = "" + $_.OwnerNode
    If ($owner -eq $computer) # Only if this node owns the Cluster Group, run the following.
    {
        Get-ClusterGroup | Get-ClusterOwnerNode |
        ForEach-Object {
            If ($_.OwnerNodes[0] -ne $null) # Only if the group has a preferred node
            {
                Move-ClusterGroup -Name $_.ClusterObject -Node $_.OwnerNodes[0]
            }
        }
    }
}

Having your application behave as replication

What if you want your application to behave just like replication? By "behave just like replication" I mean: ignore foreign key constraints and triggers which have NOT FOR REPLICATION set.
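For context, a minimal sketch of such objects (hypothetical table, constraint and trigger names): a foreign key and a trigger marked NOT FOR REPLICATION, which a connection presenting itself as a replication agent will skip.

CREATE TABLE dbo.Customers (CustomerId int NOT NULL PRIMARY KEY)
CREATE TABLE dbo.Orders (OrderId int NOT NULL PRIMARY KEY, CustomerId int NOT NULL)
GO
-- Skipped by replication agents (and by connections flagged as such)
ALTER TABLE dbo.Orders
ADD CONSTRAINT FK_Orders_Customers
FOREIGN KEY (CustomerId) REFERENCES dbo.Customers (CustomerId)
NOT FOR REPLICATION
GO
CREATE TRIGGER dbo.trOrders ON dbo.Orders
AFTER INSERT
NOT FOR REPLICATION
AS
BEGIN
    SET NOCOUNT ON
    -- trigger logic that a replication-flagged connection bypasses
END
GO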

This behavior can be activated using a connection string property:

OleDB:

Replication server name connect option=_

More information: http://msdn.microsoft.com/en-us/library/ee224228(v=sql.105).aspx

In my experiment the value didn't matter.

.Net:

Replication=True

More information: http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlconnection.connectionstring.aspx

Just as simple as that.

Policy Based Management: Changes are not fully rolled back after a failure.

I’ve started using Policy Based Management to control some of the settings within my SQL instances. One of the policies I’ve created controls the Max Memory setting. I ran into a little problem using the apply function.

The Problem

Consider the following memory configuration:

Min Memory Setting: 2048 MB

Max Memory Setting: 2048 MB

If you then apply a policy which reduces the Max Memory setting to 1023 MB, you receive an error (error 5831: the minimum server memory value must be less than or equal to the maximum value).

This would suggest the change has not been made effective. However, the only part that failed was the RECONFIGURE statement; the setting itself has been changed. The next time the RECONFIGURE statement is run, it will fail. One of those moments is during a Service Pack installation.
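A minimal repro of the behavior outside Policy Based Management, using the values above:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'min server memory (MB)', 2048
EXEC sp_configure 'max server memory (MB)', 2048
RECONFIGURE

-- Lower only the max: sp_configure stores the new value,
-- but RECONFIGURE fails with error 5831 because min (2048) > max (1023)
EXEC sp_configure 'max server memory (MB)', 1023
RECONFIGURE

-- The stored value (config_value) and running value (run_value) now differ
EXEC sp_configure 'max server memory (MB)'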

The Solution

Create a policy which sets the Min Memory setting to be lower than the Max Memory setting.

Script level upgrade for database ‘master’ failed because upgrade step ‘sqlagent100_msdb_upgrade.sql’ encountered error 5831 – During SQL 2008 R2 SP2 installation

The problem

I recently installed SP2 on a SQL 2008 R2 instance, after which it wouldn't start up anymore. Below is a relevant portion of the error log:

 2012-09-15 07:22:23.26 spid7s Database 'master' is upgrading script 'sqlagent100_msdb_upgrade.sql' from level 0 to level 2.
 2012-09-15 07:22:23.26 spid7s ----------------------------------------
 2012-09-15 07:22:23.26 spid7s Starting execution of PREINSTMSDB100.SQL
 2012-09-15 07:22:23.26 spid7s ----------------------------------------
 2012-09-15 07:22:24.19 spid7s Setting database option COMPATIBILITY_LEVEL to 100 for database msdb.
 2012-09-15 07:22:24.47 spid7s Configuration option 'allow updates' changed from 1 to 1. Run the RECONFIGURE statement to install.
 2012-09-15 07:22:24.47 spid7s Configuration option 'allow updates' changed from 1 to 1. Run the RECONFIGURE statement to install.
 2012-09-15 07:22:24.47 spid7s Error: 5831, Severity: 16, State: 1.
 2012-09-15 07:22:24.47 spid7s Minimum server memory value (2048) must be less than or equal to the maximum value (1023).
 2012-09-15 07:22:24.47 spid7s Error: 912, Severity: 21, State: 2.
 2012-09-15 07:22:24.47 spid7s Script level upgrade for database 'master' failed because upgrade step 'sqlagent100_msdb_upgrade.sql' encountered error 5831, state 1, severity 16. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the 'master' database, it will prevent the entire SQL Server instance from starting. Examine the previous errorlog entries for errors, take the appropriate corrective actions and re-start the database so that the script upgrade steps run to completion.
 2012-09-15 07:22:24.49 spid7s Error: 3417, Severity: 21, State: 3.
 2012-09-15 07:22:24.49 spid7s Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.
 2012-09-15 07:22:24.49 spid7s SQL Trace was stopped due to server shutdown. Trace ID = '1'. This is an informational message only; no user action is required.

The Solution

I resolved this issue by following the steps below:

  1. Uninstall the entire SQL instance from the machine.
  2. Remove the remaining system MDFs and LDFs, but leave the user database files where they are.
  3. Install a fresh SQL instance, using the same options as used during the original installation.
  4. Upgrade the installation to the build version it had prior to the installation of the SP.
  5. Restore the master database from backup (see my previous blog post for instructions).
  6. Restore the msdb database from backup (don't forget to enable the Service Broker and Trustworthy options).

At this point, the SQL instance is up again with all the user databases and the original build version.
Now it's time to fix the issue which caused the SP2 installation to break the instance. In my case, as you can read in the error log, this was caused by the Min Memory setting being higher than the Max Memory setting, which causes the RECONFIGURE statement to fail during the upgrade script. After changing the configuration, the installation of SP2 succeeded.
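Once the instance is back up, a sketch of the configuration fix (values taken from the error log above; adjust to your own settings):

-- Bring min server memory back below max before re-running the SP2 setup
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'min server memory (MB)', 0   -- anything <= the 1023 MB max
RECONFIGURE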

Before doing the above, you could give trace flag 902 a try: it starts the instance while skipping the script upgrade, so you can correct the configuration first.

How I gather the Wait Stats every hour

The sys.dm_os_wait_stats DMV can be of great value when troubleshooting, or just when looking for places to improve. However, the DMV displays cumulative values since it was last reset.

Below is the code I use to gather the wait stats values since the last time I gathered them, without resetting the DMV values.
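The procedure relies on a persisted snapshot table, sta.OSwaitstats, whose definition was not part of the original post. A minimal sketch of what it could look like (my assumption: the same columns as sys.dm_os_wait_stats, prefixed with a capture timestamp):

-- Assumed supporting objects, not shown in the original post
CREATE SCHEMA sta
GO
CREATE TABLE sta.OSwaitstats
(
    snapshot_time        datetime     NOT NULL,
    wait_type            nvarchar(60) NOT NULL,
    waiting_tasks_count  bigint       NOT NULL,
    wait_time_ms         bigint       NOT NULL,
    max_wait_time_ms     bigint       NOT NULL,
    signal_wait_time_ms  bigint       NOT NULL
)
GO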

CREATE PROCEDURE [sta].[spOSWaitstatsCollect]
AS
DECLARE @oswaitstats TABLE
(
    [snapshot_time] [datetime] NOT NULL,
    [wait_type] [nvarchar](60) NOT NULL,
    [waiting_tasks_count] [bigint] NOT NULL,
    [wait_time_ms] [bigint] NOT NULL,
    [max_wait_time_ms] [bigint] NOT NULL,
    [signal_wait_time_ms] [bigint] NOT NULL
)

--Keep the previous snapshot so the deltas can be calculated
INSERT INTO @oswaitstats
SELECT *
FROM   sta.OSwaitstats

TRUNCATE TABLE sta.OSwaitstats

--Store the current snapshot
INSERT INTO sta.OSwaitstats
SELECT GETUTCDATE(), *
FROM   sys.dm_os_wait_stats
ORDER BY wait_type

--Build an XML document with the deltas between the two snapshots
DECLARE @xml AS XML
SELECT @xml = (SELECT @@servername AS [@servername],
                      @@servicename AS [@servicename],
                      [@wait_type] = n.wait_type,
                      [@snapshot_from] = o.snapshot_time,
                      [@snapshot_to] = n.snapshot_time,
                      [@waiting_tasks_count] = n.waiting_tasks_count - o.waiting_tasks_count,
                      [@wait_time_ms] = n.wait_time_ms - o.wait_time_ms,
                      [@signal_wait_time_ms] = n.signal_wait_time_ms - o.signal_wait_time_ms
               FROM   sta.OSwaitstats n
                      INNER JOIN @oswaitstats o ON o.wait_type = n.wait_type
               FOR XML PATH('WaitType'), ROOT('WaitTypes'), TYPE
              )

IF NOT @xml IS NULL
BEGIN
    --Send @xml using Service Broker here; PRINT is just a placeholder so the block compiles
    PRINT CAST(@xml AS nvarchar(max))
END
GO

Transaction Logs: Virtual Log File (VLF) Management

There are already a lot of articles about VLF management. I'll only try to enrich and correct them here.

What are VLFs and why do they matter?

I don't want to get too deep into this; there is already so much on the web. Simply said: each transaction log file is divided into Virtual Log Files (VLFs). Too few VLFs are bad, but too many VLFs are also very bad and can result in poor performance (http://sqlblog.com/blogs/linchi_shea/archive/2009/02/09/performance-impact-a-large-number-of-virtual-log-files-part-i.aspx).
The number of VLFs can be checked using the following statement; one row is returned per VLF.
dbcc loginfo('Databasenamehere')

Log file growth and the number of VLFs

Each time a transaction log grows (either automatically or manually), the part that is added is divided into VLFs. The number of VLFs depends on the file growth.
Across the internet you'll find the following information regarding how many VLFs are created in your transaction log file:
  • File growth < 64MB the new part will contain 4 VLFs
  • File growth >= 64MB and < 1GB the new part will contain 8 VLFs
  • File growth > 1GB the new part will contain 16 VLFs
However, this is not entirely correct. To start with, a file growth of exactly 64MB will also result in 4 VLFs, not 8 as read across the web (http://www.realworlddba.com/post/2011/06/07/Virtual-Log-Files-VLF-And-SQL-Server-Performance.aspx and http://adventuresinsql.com/2009/12/a-busyaccidental-dbas-guide-to-managing-vlfs/).
Also, as pointed out in many articles: keep an eye on the 4GB bug (http://sqlskills.com/BLOGS/PAUL/post/Bug-log-file-growth-broken-for-multiples-of-4GB.aspx).
But there is more! I have a few transaction log files which only have 2 or 3 VLFs, so how did that happen? The answer: there is a minimum VLF size of 248KB. Rule number two: transaction log files can't be smaller than 512KB. A 512KB transaction log file will consist of two VLFs:
FileId  FileSize  StartOffset  FSeqNo  Status  Parity  CreateLSN
2       253952    8192         99      2       64      0
2       262144    262144       0       0       0       0
The first VLF is 248KB, as the first 8KB of the transaction log file can't be used by VLFs. The second VLF is 256KB. If you were to grow the file to 760KB (or anything between 512KB and 760KB), it will grow the file to 760KB, creating an extra VLF of 248KB.
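A quick way to observe this yourself (hypothetical database name and file paths):

-- Create a database with the minimum 512KB log and inspect its VLFs
CREATE DATABASE VLFTest
ON (NAME = VLFTest_data, FILENAME = 'C:\Temp\VLFTest.mdf')
LOG ON (NAME = VLFTest_log, FILENAME = 'C:\Temp\VLFTest.ldf', SIZE = 512KB)
GO
-- Expect two rows: a 248KB VLF (FileSize 253952) and a 256KB VLF (FileSize 262144)
dbcc loginfo('VLFTest')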
So the correct growth versus VLFs is:
  • File growth <= 248KB, the new part will be 248KB and have 1 VLF
  • File growth > 248KB and < 496KB, the new part will be the size you specify and have 1 VLF
  • File growth >= 496KB and < 1MB, the new part will be the size you specify; the amount of VLFs will depend on the growth size
  • File growth >= 1MB and <= 64MB the new part will contain 4 VLFs
  • File growth > 64MB and <= 1GB the new part will contain 8 VLFs
  • File growth > 1GB the new part will contain 16 VLFs

Fixing

After reading about VLFs (in this article, but also in the ones I link to) you know how to check the number of VLFs and why to keep it low. The only thing lacking is a script to fix and prevent the problem. I saw some scripts "out there", but those were not ideal in my situation (or did not do what was promised).
Warning: this version of the SP breaks your backup chain when you're using the full recovery model.
So I've created an SP which resizes the transaction log to have between 52 and 64 VLFs. It looks up the closest log size in the table below. If the @newSizeMB parameter is used, that value is looked up; otherwise the current log size is used.
LogSize_MB  Num_VLFs  Growth_MB
13          52        1
26          52        2
52          52        4
104         52        8
208         52        16
416         52        32
780         52        60
896         56        128
1792        56        256
3584        56        512
4900        56        700
7000        56        1000
8000        64        2000
12000       64        3000
16000       64        4000
20000       64        5000
24000       64        6000
28000       64        7000
32000       64        8000
You might want to adjust this table for bigger transaction logs.
Some of the usage:
exec dbo.spResizeTlog @modus='view'
exec dbo.spResizeTlog @modus='view', @database='databasehere', @newSizeMB=2048
exec dbo.spResizeTlog @modus='generate', @database='databasehere'
exec dbo.spResizeTlog @modus='execute', @database='databasehere'

Of course you’ll have to test this SP before using it in production.

Below is the code (code updated since 2011/09/14 9:45 CET):

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF
GO
CREATE PROCEDURE [dbo].[spResizeTlog]
( @modus varchar(50) = 'view'
, @database nvarchar(128) = NULL
, @newsizeMB int = NULL
, @maxautomaticgrowthMB int = NULL
)
AS
SET NOCOUNT ON
--Create a lookup table, to look up the required growth size
declare @maxgrowth int
declare @maxVLFs int
declare @lookup table (size int, VLFs int, fullsize int, fullVLFs int, numofgrowths int)
insert @lookup (size, VLFs) values (1, 4)
insert @lookup (size, VLFs) values (2, 4)
insert @lookup (size, VLFs) values (4, 4)
insert @lookup (size, VLFs) values (8, 4)
insert @lookup (size, VLFs) values (16, 4)
insert @lookup (size, VLFs) values (32, 4)
insert @lookup (size, VLFs) values (60, 4)
insert @lookup (size, VLFs) values (128, 8)
insert @lookup (size, VLFs) values (256, 8)
insert @lookup (size, VLFs) values (512, 8)
insert @lookup (size, VLFs) values (700, 8)
insert @lookup (size, VLFs) values (1000, 8)
insert @lookup (size, VLFs) values (2000, 16)
insert @lookup (size, VLFs) values (3000, 16)
insert @lookup (size, VLFs) values (4000, 16)
insert @lookup (size, VLFs) values (5000, 16)
insert @lookup (size, VLFs) values (6000, 16)
insert @lookup (size, VLFs) values (7000, 16)
insert @lookup (size, VLFs) values (8000, 16)
update @lookup set fullsize=((50/VLFs)+1)*size, fullVLFs=((50/VLFs)+1)*VLFs, numofgrowths=((50/VLFs)+1)
select top 1 @maxgrowth=size, @maxVLFs=VLFs from @lookup order by size desc
--Create tmp table to hold the info of the log files of all databases
create table #dblogfiles (
  dbname nvarchar(128)
, RecoveryModel nvarchar(128)
, logfilename nvarchar(128)
, physical_name nvarchar(2000)
, logfileid int
, logsizeMB int
, CurrentVLFs int
, CurrentGrowth int
, CurrentGrowthType smallint)
--Declare and fill table for database loop
declare @databases table (dbname nvarchar(128), recoverymodel nvarchar(128))
insert into @databases
select name, recovery_model_desc from sys.databases
--Declare table for DBCC loginfo output
CREATE TABLE #dbcc_loginfo
    (
       fileid       TINYINT,
       file_size    BIGINT,
       start_offset BIGINT,
       FSeqNo       INT,
       [status]     TINYINT,
       parity       TINYINT,
       create_lsn   NUMERIC(25, 0)
    )
--Loop to gather all current data
declare @dbname nvarchar(128)
declare @recoverymodel nvarchar(128)
declare @cmd nvarchar(max)
declare @cmd2 nvarchar(max)
WHILE EXISTS (select * from @databases)
BEGIN
    select top 1 @dbname=dbname, @recoverymodel=recoverymodel from @databases
    set @cmd='
    insert into #dblogfiles
    SELECT DB_NAME()
    , null
    , name
    , physical_name
    , file_id
    , ( size / 128 )
    , NULL
    , growth
    , is_percent_growth
    FROM   sys.database_files
    WHERE  type_desc = ''LOG''
    '
    SET @CMD2='declare @cmd nvarchar(max)' + char(13) + char(10)
    SET @CMD2=@CMD2 + 'SET @cmd =''' + REPLACE(@cmd,'''','''''') + '''' + char(13) + char(10)
    SET @CMD2=@CMD2 + 'Execute [' + @dbname + '].dbo.sp_executesql @cmd'
    exec (@cmd2)
    truncate table #dbcc_loginfo
    SET @cmd = 'DBCC loginfo (''' + @dbname + ''') WITH NO_INFOMSGS'
    INSERT INTO #dbcc_loginfo
    EXEC (@cmd)
    UPDATE l set CurrentVLFs=x.cnt, RecoveryModel=@recoverymodel
    FROM #dblogfiles l inner join
    (SELECT fileid, cnt=COUNT(1) FROM #dbcc_loginfo GROUP BY fileid) x
    ON x.fileid=l.logfileid
    WHERE l.dbname=@dbname

    delete @databases where dbname=@dbname
END
select
dbname, logfilename, RecoveryModel, CurrentSizeMB=logsizeMB, CurrentNumOfVLFs=CurrentVLFs
, CurrentGrowth=CASE WHEN CurrentGrowthType=1 THEN CurrentGrowth ELSE CurrentGrowth/128 END
, CurrentGrowthType=CASE WHEN CurrentGrowthType=1 THEN '%' ELSE 'MB' END
, RecommendedGrowthMB=ISNULL((SELECT TOP 1 size FROM @lookup WHERE fullsize >= (ISNULL(@newsizeMB,logsizeMB)) ORDER BY fullsize), @maxgrowth)
, RecommendedNewSize=ISNULL((SELECT TOP 1 fullsize FROM @lookup WHERE fullsize >= (ISNULL(@newsizeMB,logsizeMB)) ORDER BY fullsize), ((CEILING(CAST((ISNULL(@newsizeMB,logsizeMB)) AS FLOAT)/CAST(@maxgrowth AS FLOAT))))*@maxgrowth)
, RecommendedNumOfVLFs=ISNULL((SELECT TOP 1 fullVLFs FROM @lookup WHERE fullsize >= (ISNULL(@newsizeMB,logsizeMB)) ORDER BY fullsize), ((CEILING(CAST((ISNULL(@newsizeMB,logsizeMB)) AS FLOAT)/CAST(@maxgrowth AS FLOAT))))*@maxVLFs)
, RecommendedNumGrowths=ISNULL((SELECT TOP 1 numofgrowths FROM @lookup WHERE fullsize >= (ISNULL(@newsizeMB,logsizeMB)) ORDER BY fullsize), ((CEILING(CAST((ISNULL(@newsizeMB,logsizeMB)) AS FLOAT)/CAST(@maxgrowth AS FLOAT)))))
, AutomaticGrowthSet=CASE WHEN ISNULL((SELECT TOP 1 size FROM @lookup WHERE fullsize >= (ISNULL(@newsizeMB,logsizeMB)) ORDER BY fullsize), @maxgrowth) > @maxautomaticgrowthMB THEN @maxautomaticgrowthMB ELSE ISNULL((SELECT TOP 1 size FROM @lookup WHERE fullsize >= (ISNULL(@newsizeMB,logsizeMB)) ORDER BY fullsize), @maxgrowth) END
into #result
from #dblogfiles order by dbname
select *
, IsDiff=CASE
      WHEN CurrentSizeMB <> RecommendedNewSize THEN 1
      WHEN CurrentGrowthType = '%' THEN 1
      WHEN CurrentGrowth <> AutomaticGrowthSet THEN 1
      WHEN CurrentNumOfVLFs > (RecommendedNumOfVLFs + 30) THEN 1
      ELSE 0
  END
into #result_generate
from #result
drop table #dblogfiles
drop table #dbcc_loginfo
IF @modus='view'
BEGIN
    select * from #result_generate where dbname=isnull(@database, dbname) order by dbname
END
ELSE
BEGIN
    --modus must be generate or execute, so we need to generate the code
    DECLARE @logfilename nvarchar(128)
    DECLARE @GrowthMB int
    DECLARE @Growths int
    DECLARE @AutomaticGrowthSet int
    declare @shrink_command nvarchar(max)
    declare @alter_command nvarchar(max)
    WHILE EXISTS (select * from #result_generate where dbname=isnull(@database, dbname) and IsDiff=1)
    BEGIN
        SELECT TOP 1 @dbname=dbname, @logfilename=logfilename, @recoverymodel=RecoveryModel, @GrowthMB=RecommendedGrowthMB, @Growths=RecommendedNumGrowths, @AutomaticGrowthSet=AutomaticGrowthSet
        FROM #result_generate where dbname=isnull(@database, dbname) and IsDiff=1 ORDER BY dbname
        SET @alter_command=''
        IF @recoverymodel<>'SIMPLE'
        BEGIN
            SET @alter_command=@alter_command + CHAR(13)+CHAR(10) + 'ALTER DATABASE [' + @dbname + '] SET RECOVERY SIMPLE;'
        END
        SET @alter_command=@alter_command + CHAR(13)+CHAR(10) + 'DBCC SHRINKFILE (N''' + @logfilename + ''' , 0, TRUNCATEONLY);'
        SET @alter_command=@alter_command + CHAR(13)+CHAR(10) + 'DBCC SHRINKFILE (N''' + @logfilename + ''' , 0);'
        SET @alter_command=@alter_command + CHAR(13)+CHAR(10) + 'ALTER DATABASE [' + @dbname + '] MODIFY FILE ( NAME = N''' + @logfilename + ''', MAXSIZE = UNLIMITED, FILEGROWTH = ' + CAST(@AutomaticGrowthSet AS NVARCHAR) + ' MB );'
        IF @recoverymodel<>'SIMPLE'
        BEGIN
            SET @alter_command=@alter_command + CHAR(13)+CHAR(10) + 'ALTER DATABASE [' + @dbname + '] SET RECOVERY ' + @recoverymodel + ';'
        END

        --Do this twice, to get the file as small as possible
        IF @recoverymodel<>'SIMPLE'
        BEGIN
            SET @alter_command=@alter_command + CHAR(13)+CHAR(10) + 'ALTER DATABASE [' + @dbname + '] SET RECOVERY SIMPLE;'
        END
        SET @alter_command=@alter_command + CHAR(13)+CHAR(10) + 'DBCC SHRINKFILE (N''' + @logfilename + ''' , 0, TRUNCATEONLY);'
        SET @alter_command=@alter_command + CHAR(13)+CHAR(10) + 'DBCC SHRINKFILE (N''' + @logfilename + ''' , 0);'
        SET @alter_command=@alter_command + CHAR(13)+CHAR(10) + 'ALTER DATABASE [' + @dbname + '] MODIFY FILE ( NAME = N''' + @logfilename + ''', MAXSIZE = UNLIMITED, FILEGROWTH = ' + CAST(@AutomaticGrowthSet AS NVARCHAR) + ' MB );'
        IF @recoverymodel<>'SIMPLE'
        BEGIN
            SET @alter_command=@alter_command + CHAR(13)+CHAR(10) + 'ALTER DATABASE [' + @dbname + '] SET RECOVERY ' + @recoverymodel + ';'
        END

        --Grow the file in steps, so each growth gets the intended number of VLFs
        DECLARE @Cntr int
        SET @Cntr = 0
        WHILE @Cntr < @Growths
        BEGIN
            SET @alter_command=@alter_command + CHAR(13)+CHAR(10) +
'IF EXISTS (select * from sys.database_files where type_desc=''LOG'' and name=''' + @logfilename + ''' and size/128 < ' + CAST(((@GrowthMB)*(@Cntr + 1)) AS NVARCHAR) + ')
BEGIN
    ALTER DATABASE [' + @dbname + '] MODIFY FILE (NAME = N''' + @logfilename + ''', SIZE = ' + CAST(((@GrowthMB)*(@Cntr + 1)) AS NVARCHAR) + 'MB)
END;'
            SET @Cntr = @Cntr + 1
        END
        IF @modus='generate'
        BEGIN
            SET @alter_command='USE [' + @dbname + '];' + CHAR(13)+CHAR(10) + @alter_command
            SET @alter_command=@alter_command + CHAR(13)+CHAR(10) + 'GO'
            PRINT @alter_command
        END

        IF @modus='execute'
        BEGIN
            SET @CMD2='declare @cmd nvarchar(max)' + char(13) + char(10)
            SET @CMD2=@CMD2 + 'SET @cmd =''' + REPLACE(@alter_command,'''','''''') + '''' + char(13) + char(10)
            SET @CMD2=@CMD2 + 'Execute [' + @dbname + '].dbo.sp_executesql @cmd'
            exec (@cmd2)
        END
        DELETE #result_generate where dbname=@dbname and logfilename=@logfilename
    END
END
drop table #result
drop table #result_generate
GO

SQL warm up script

In a perfect world your SQL Server has enough memory to store all data in the buffer pool. If that is the case, your server will respond fast and your users will expect it to respond fast all the time. But then you need to reboot the server (or restart the service), resulting in an empty buffer pool. All initial page reads will have to come from disk, which is a lot slower than what the users expect.
The solution is to use a warm-up script, to load as many pages into memory as possible. There are several possibilities for this.
The first I used was to run a SELECT * FROM every table in the database. That did a large part of the trick, but did not include any indexes. It also was not that generic.
The second, and current, solution involves sys.dm_db_index_physical_stats. A little experiment showed that running this DMV with 'DETAILED' as the mode parameter will load all involved data and index pages into memory. Although this was not the case in all tested scenarios, it's good enough for this purpose. To make life easy I wrapped it in a stored procedure, making it easy to load a single table or an entire database into memory. It will also show how much of each table is currently in the buffer pool (code originally from http://www.sqldev.org/sql-server-database-engine/help-understanding-what-is-using-all-my-memory–64bit-ee-85617.shtml).
This is provided as is with no guarantee in any way. Run your own tests before using this on production servers.

Create Procedure spWarmUp
( @mode varchar(10) = 'read'
, @dbname nvarchar(128)
, @tname nvarchar(128) = NULL)

AS
DECLARE @obid bigint
DECLARE @cmd nvarchar(max)
DECLARE @cmd2 nvarchar(max)
DECLARE @cmd3 nvarchar(max)
--Query showing, per table, how much of it is currently in the buffer pool
SET @cmd='SELECT
st.name TableName,
SUM(a.page_id)*8 AS MemorySpaceKB,
SUM(au.used_pages)*8 AS StorageSpaceKB,
CASE WHEN SUM(au.used_pages) <> 0 THEN CAST((SUM(a.page_id)/CAST(SUM(au.used_pages) AS FLOAT)) * 100 AS NUMERIC(20,1)) END AS [Percent In Memory]
FROM
(SELECT database_id, allocation_unit_id, COUNT(page_id) page_id
FROM sys.dm_os_buffer_descriptors GROUP BY database_id, allocation_unit_id) a
LEFT JOIN sys.allocation_units au ON a.allocation_unit_id = au.allocation_unit_id
JOIN sys.partitions sp ON (au.type IN (1,3) AND au.container_id = sp.hobt_id)
OR (au.type = 2 AND au.container_id = sp.partition_id)
JOIN sys.tables st ON sp.object_id = st.object_id
AND st.is_ms_shipped = 0
WHERE a.database_id = DB_ID() and ISNULL(' + ISNULL('''' + @tname + '''', 'NULL') + ', st.name)=st.name
GROUP BY st.object_id, st.name
ORDER BY st.name'

--Show what is in the buffer pool before the (optional) load
SET @CMD2='declare @cmd nvarchar(max)' + char(13) + char(10)
SET @CMD2=@CMD2 + 'SET @cmd =''' + REPLACE(@CMD,'''','''''') + '''' + char(13) + char(10)
SET @CMD2=@CMD2 + 'Execute ' + @dbname + '.dbo.sp_executesql @cmd'
exec (@cmd2)

IF @mode='load' AND @tname IS NULL
BEGIN
--DETAILED mode touches every data and index page, pulling them into the buffer pool
select * FROM sys.dm_db_index_physical_stats(db_id(@dbname), NULL, NULL, NULL, 'DETAILED')
END
IF @mode='load' AND NOT @tname IS NULL
BEGIN
set @cmd3='Declare @dbid int, @obid bigint
select @obid=object_id(''' + @tname + '''), @dbid=db_id(''' + @dbname + ''')
select * FROM sys.dm_db_index_physical_stats(@dbid, @obid, NULL, NULL, ''DETAILED'')'
SET @CMD2='declare @cmd nvarchar(max)' + char(13) + char(10)
SET @CMD2=@CMD2 + 'SET @cmd =''' + REPLACE(@CMD3,'''','''''') + '''' + char(13) + char(10)
SET @CMD2=@CMD2 + 'Execute ' + @dbname + '.dbo.sp_executesql @cmd'
print @cmd2
exec (@cmd2)
END
IF @mode='load'
BEGIN
--Show what is in the buffer pool again, after the load
SET @CMD2='declare @cmd nvarchar(max)' + char(13) + char(10)
SET @CMD2=@CMD2 + 'SET @cmd =''' + REPLACE(@CMD,'''','''''') + '''' + char(13) + char(10)
SET @CMD2=@CMD2 + 'Execute ' + @dbname + '.dbo.sp_executesql @cmd'
exec (@cmd2)
END
GO
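Some usage examples (hypothetical database and table names):

exec spWarmUp @mode='read', @dbname='MyDatabase'                     -- only report what is in the buffer pool
exec spWarmUp @mode='load', @dbname='MyDatabase'                     -- warm up the entire database
exec spWarmUp @mode='load', @dbname='MyDatabase', @tname='MyTable'   -- warm up a single table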

How to move all databases to another server, preserving the server name

What if you need to replace a server and have to preserve the server name? Then follow these steps.

Before you start

  • Have a new server available, with a temporary name.
  • Have the SQL installation files available, together with all Service Packs & Hotfixes installed on the original server.
  • On the original server run the following query and save the output on the new server (at least NOT on the old server):

select name, recovery_model_desc, is_trustworthy_on, is_broker_enabled from sys.databases order by name

On the original server

  • Freeze all activity. You can use my "Block Remote Users" trigger for that.
  • Stop the SQL Agent service (to prevent jobs from starting).
  • Create backups of all databases excluding tempdb & model; save them somewhere, but NOT on the original server.
  • Stop & Disable the SQL Services.
  • Shut down the server.

On the new server

  • Rename the machine so it has the name of the original server.
  • Install SQL Server, Service Packs and Hotfixes.
  • Verify the IP gets resolved correctly in your domain.
  • Stop the SQL Service.
  • Open CMD and start SQL Server in single-user mode (in the SQL Server Binn folder: sqlservr -c -m -f).
  • Open another CMD, connect to SQL (SQLCMD -E), and restore the master database as below:

Restore the master database
restore database master from disk = 'r:\master.bak' with replace, stats=1

go

  • Open CMD and start SQL Server in single-user mode (in the SQL Server Binn folder: sqlservr -c -m -f -T3608).
  • Open another CMD and connect to SQL (SQLCMD -E).
  • Change the locations of the system databases if they differ from the original server, then shut down the service:

alter database mssqlsystemresource modify file (name='data', filename = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\mssqlsystemresource.mdf')
go
alter database mssqlsystemresource modify file (name='log', filename = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\mssqlsystemresource.ldf')
go
alter database model modify file (name='modeldev', filename = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\model.mdf')
go
alter database model modify file (name='modellog', filename = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\modellog.ldf')
go
alter database msdb modify file (name='msdbdata', filename = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\msdbdata.mdf')
go
alter database msdb modify file (name='msdblog', filename = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\msdblog.ldf')
go
alter database tempdb modify file (name='tempdev', filename = 'S:\SQLDATA\tempdb.mdf')
go
alter database tempdb modify file (name='templog', filename = 'T:\SQLLOG\templog.ldf')
go
shutdown
go

  • Start the SQL Server Service (leave the SQL Agent Service stopped and disabled)
  • Restore the msdb database
  • Tweak the memory settings
  • Restore all other databases. For replicated databases use the KEEP_REPLICATION option (see the example after this list).
  • On databases using Service Broker (see previous query output), activate Service Broker:
ALTER DATABASE MyDB SET ENABLE_BROKER
  • On databases with TRUSTWORTHY ON (see previous query output), set it ON:
ALTER DATABASE MyDB SET TRUSTWORTHY ON
  • Enable & Start the SQL Agent Service
  • Reactivate activity (if you used my "Block Remote Users" trigger, drop it)
  • Verify your backup & maintenance jobs!
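A sketch of such a restore (hypothetical database name and backup path):

-- KEEP_REPLICATION preserves the replication settings of the restored database
RESTORE DATABASE MyDB
FROM DISK = 'r:\MyDB.bak'
WITH KEEP_REPLICATION, STATS = 1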

Block Remote Users

From time to time I am in a situation where I want to freeze all databases. To be more precise: I need to seriously lock down all access.
For this I use a logon trigger. The logon trigger issues a rollback if the logon is issued from a remote machine and the login is not in the exclude list. This gives me the flexibility to kick all services & users accessing the databases, while keeping all my options open, as I can still access the server locally and do anything with it. Below is the trigger I use. Please test on a demo machine first. Remember: you'll still be able to log on to the server locally and remove the trigger. If nothing else works, you can log on with the DAC to remove the trigger.

USE master
GO
--Create the trigger
CREATE TRIGGER BlockRemoteUsers ON ALL SERVER
WITH EXECUTE AS 'sa'
FOR LOGON
AS
BEGIN
    IF '<local machine>' <> EVENTDATA().value('(/EVENT_INSTANCE/ClientHost)[1]', 'nvarchar(512)')
       AND EVENTDATA().value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(512)') NOT IN
       ('mydomain\user1', 'mydomain\user2', 'mydomain\user3')
    BEGIN
        ROLLBACK;
    END
END
GO

--Show the sessions that are still connected
SELECT  *
FROM    sys.dm_exec_sessions
WHERE   session_id > 50
        AND session_id <> @@spid
        AND host_name <> @@servername
        AND login_name NOT IN ('mydomain\user1', 'mydomain\user2', 'mydomain\user3')

--Kill the sessions that are still connected
DECLARE @cmd NVARCHAR(MAX)
SET @cmd = ''
SELECT  @cmd = @cmd + 'KILL ' + CAST(session_id AS VARCHAR(50)) + CHAR(13) + CHAR(10)
FROM    sys.dm_exec_sessions
WHERE   session_id > 50
        AND session_id <> @@spid
        AND host_name <> @@servername
        AND login_name NOT IN ('mydomain\user1', 'mydomain\user2', 'mydomain\user3')
EXEC ( @cmd )
GO

--Drop the trigger
DROP TRIGGER BlockRemoteUsers ON ALL SERVER

Percent Complete in sys.dm_exec_requests

In the Dynamic Management View sys.dm_exec_requests, a lot of nice information can be found about the requests currently being executed.
A nice column I recently "discovered" is percent_complete. It shows what its name suggests: the percentage of work completed. However, it only works for a selected set of commands:
  • ALTER INDEX REORGANIZE
  • AUTO_SHRINK option with ALTER DATABASE
  • BACKUP DATABASE
  • CREATE INDEX
  • DBCC CHECKDB
  • DBCC CHECKFILEGROUP
  • DBCC CHECKTABLE
  • DBCC INDEXDEFRAG
  • DBCC SHRINKDATABASE
  • DBCC SHRINKFILE
  • KILL (Transact-SQL)
  • RESTORE DATABASE
  • UPDATE STATISTICS
This is all straight from Books Online. The nice thing is that the resulting value is a REAL, so it shows any progress, even the smallest. It's also a nice way to view the progress of the actions listed; for most of them it's the only way to view the progress.

A nice example to view the progress of a backup being executed:

select percent_complete, * from sys.dm_exec_requests where command='BACKUP DATABASE'

Update:
Using the following query you can easily see the progress your backup is making:

select percent_complete, start_time, command, estimated_completion_time_min=(estimated_completion_time/60000),
time_already_running=datediff(minute,start_time,getdate()) from sys.dm_exec_requests
where command='BACKUP DATABASE'
