Should I support this database?


As a DBA, one of your jobs is making sure all the databases and database servers are working properly and that you have a controlled environment. But once in a while a skeleton falls out of the closet: a database server suddenly comes around and you're scratching your head wondering where it came from.

Your manager (and maybe your colleagues) expect you to support it because you're the DBA. You don't know the server, your colleague doesn't (really) know the server. So now what? Are you going to take responsibility for everything on the database server, or the database itself, when you don't know anything about it?

I have run into this so many times that I decided not to support a database (or entire database server) until I have done a thorough scan of the environment to see if the database server is up-to-date, whether the settings on the server are configured properly, and so on.

I use this checklist to make sure I cover all the aspects of the server:

Database server

  • Is the server a virtual or physical server?
  • What version of Windows (hopefully Server edition) is being used?
  • What version and edition of SQL Server is being used?
  • When was the last time the Windows OS was updated?
  • When was the last time the SQL Server instance was updated?
  • How much memory is in the system?
  • How is the storage configured?
    • What disks are being used?
    • What kind of disks are being used?
  • How many cores are present?
  • What SQL Server services are running?
  • What accounts are being used for the services?
  • What logins are present in SQL Server and what are their permissions?
  • And maybe the most important one: is there an SLA on this server?

Database level

  • What’s the size of the database?
  • When was the last backup taken?
  • When was the last time an integrity check was executed?
  • How are the files set up?
  • What compatibility setting is being used?
  • What are the options in the database, i.e. auto close, updates of statistics, page verification etc?
  • What users are present in the database and what are their permissions?
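A quick way to collect a first batch of answers to both lists is to query the instance directly. The sketch below is a minimal example of my own (not an exhaustive audit script): it only covers version, edition, memory and the last full backup, and it assumes you can query msdb.

```sql
-- Minimal discovery sketch for an unknown instance (assumes access to msdb)

-- Instance version and edition
SELECT SERVERPROPERTY('ProductVersion') AS [version],
       SERVERPROPERTY('ProductLevel') AS [service_pack],
       SERVERPROPERTY('Edition') AS [edition];

-- Physical memory visible to the OS (column name applies to SQL Server 2012 and up)
SELECT physical_memory_kb / 1024 AS [memory_mb]
FROM sys.dm_os_sys_info;

-- Last full backup per database
SELECT d.name,
       MAX(b.backup_finish_date) AS [last_full_backup]
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b
    ON b.database_name = d.name
    AND b.type = 'D' -- D = full database backup
GROUP BY d.name;
```

A database that never shows up in msdb.dbo.backupset is an immediate red flag for the SLA discussion.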

This is just a small list of the things I'd like to know about a database or database server. Based on that information I can decide either to support the current state of the database or server, or not to fully support it (support it best effort) until I've gotten the chance to change the settings I don't approve of.

Normally you'll get a discussion about why you're not fully supporting the database or database server. People know you're the DBA and that it's your job to support the database servers.
Unfortunately we're not living in a perfect world. As soon as the shit hits the fan, and you've taken full responsibility for a database that's not up to your standards, you have to explain yourself, and you can't say the database wasn't up to your standards, because you should have said that from the beginning.
In these situations I put all the findings in writing and inform the IT manager about the "new" database or database server and what needs to be done to support it. This may sound political, but you've got to protect yourself. I've seen people get lower performance reviews because they supported items that were not up to standards and had gone haywire.

One of the situations I encountered was a database server installed by a system administrator working at the software vendor. There was no documentation about the installation, configuration, databases, logins etc. It was all installed during a project, and now that the project was finished, maintenance was the responsibility of the DBAs. Logins used as service accounts for the application were running with sysadmin privileges, no min or max memory was set, the SQL Server service was running under the local system account, and so on.

I wasn't going to fully support this server and mentioned this to the IT manager, both in writing and in person. It wasn't that the server was buggy, but it wasn't up to standards. After a while we had an issue with the server. It turned out that the service account of the application had destroyed the system databases: a consultant forgot to include the "USE [database]" clause in a script, and because the login had sysadmin privileges and its default database was "master", it dropped and truncated a lot of tables. Aside from the fact that the consultant did all that DDL with a dynamic query, if the privileges had been set correctly the login would never have been able to corrupt the system databases.

It took us many hours to get everything back up, and I was called into an office with some managers who wanted to know why the server went down the way it did. The managers were angry because this incident cost the business a lot of money: a full department couldn't work and, foremost, customers couldn't be helped.
All the findings had been put into change requests, and those changes had been scheduled at a low priority by the business because the chance of things going down was considered pretty slim.
I pointed to the findings and the correspondence about the low-priority changes. If I hadn't documented all that, I would probably have been fired because of the impact the incident had on the company.

It's not about playing the blame game, but if it's not in your power to change things that should be changed, then you have to protect yourself. At the end of the day you want to do the best job you can and have a good night's sleep, without the thought in the back of your head that all hell could break loose at any moment.

If you have any stories like this, please share them in the comments. I'd like to hear if anybody has had the same experience, and maybe it helps someone else avoid such a situation.

Make your SSMS tabs more efficient


Like many of you, I normally have loads of tabs open in SSMS to get all my work done.
Based on this article I want to show how to reduce the size of your tabs by disabling most of the information that is already present somewhere else.

By default SSMS shows all kinds of information in your tab that you probably don't need, because the information is present somewhere else. Making the tab names smaller makes it possible to have more tabs open at the same time without the hassle of scrolling to find the right one.

Your tab will probably look something like this before the change:


Who cares about the database server, the database name and the login that is connected, if all that information is already in the status bar at the bottom of your screen?

Go to the options screen to adjust the settings for the tabs:

Tools -> Options


You'll see the following screen:


Go to the section “Tab Text” and set the following options to “False” to disable the information shown in the tab:

  • Include database name
  • Include login name
  • Include server name

Click the OK button and open a new tab. The result will look similar to this:




No more information than needed, and you'll be able to fit a lot more tabs on your screen than before.

I recently started using SQL Prompt from Red Gate, which I should've done a long time ago. The speed at which I write queries has at least doubled. I really love the coloring of the tabs to let me know if I'm connected to production, staging, test or development. Cathrine Wilhelmsen has a nice article on how to use these features if you're interested.

Hope this helps you out!



Don’t reboot SQL Server


Over the years I've lost count of the times I've seen people reboot a SQL Server instance because there was some kind of problem. I can't emphasize this enough: "In normal circumstances there is no valid reason to reboot SQL Server to solve an issue."

I've had the task of investigating performance problems, login issues and other problems in SQL Server that suddenly appeared. The only problem was that the first action people took in those cases was to reboot the database server.

If you reboot the SQL Server service (or the entire server) you create the following scenario in SQL Server:

  • Force a cold cache for data
  • Force a cold cache for execution plans
  • Collected statistics from DMVs are no longer available
  • Default trace is empty

Rebooting a server can, in some cases, solve an issue temporarily, but in almost all cases this is just treating the symptoms instead of the root cause.

The situations where a reboot is needed are:

  • you install an update and it needs a reboot to take effect;
  • you change a setting in SQL Server that only takes effect after a restart;
  • the server is completely frozen and nobody can get in.

So much for why you shouldn't reboot, but what should you do instead? Start with the basics: read the SQL Server error log, look at the currently running requests and their wait types, and check the default trace for recent events.

These are just some of the actions you can take before rebooting a server. In most cases they will point you to the actual problem so you can work towards a real solution.
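As an illustration of such a first look (a sketch of my own, not an exhaustive troubleshooting script), the following shows what the instance is doing right now and what the error log says:

```sql
-- What is running right now, and what is it waiting on?
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,
       r.wait_time,
       r.blocking_session_id
FROM sys.dm_exec_requests r
WHERE r.session_id > 50; -- skip most system sessions

-- Read the current SQL Server error log (0 = current log, 1 = SQL Server log)
EXEC sp_readerrorlog 0, 1;
```

If the wait types or the error log point at storage, memory pressure or blocking, you have something concrete to fix instead of a reboot that throws away exactly this evidence.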

Next time you want to reboot the SQL Server instance without doing the proper investigation, imagine someone behind you holding a baseball bat, ready to strike.



But we do have licenses for all our servers…

At almost every company I've worked for there was this indifference about licensing for SQL Server. Every time a company appointed by Microsoft came by to check the licenses of all the Microsoft products, at least one server would not be licensed correctly.

At a company I worked for in the past I asked about the contract for all the SQL Server instances. The reason I asked was that, after a quick scan, I found out every SQL Server instance had Enterprise Edition installed.

It turned out that the manager had told everybody they could install as many SQL Server Enterprise instances as they wanted because some sort of enterprise licensing was arranged. That wasn't true, and anybody with some knowledge of licensing would have questioned it. The company was simply too small for that kind of Enterprise licensing.

Projects can also put pressure on you to install new SQL Server instances to keep the deadline of the project. It normally goes like this:

– Project manager: We need a SQL Server instance for project Y
– DBA: We don’t have a license for the instance
– Project manager: I don’t care just install the instance we’ll sort it out later

After the project is done, everything is delivered to the IT department and nothing has happened to license the SQL Server instance. I know the DBA should have taken responsibility, but as you know, projects tend to put pressure on you.

I've seen companies that, during an audit, would shut down instances to make sure the servers weren't picked up by the license audit tool (mostly the MAP Toolkit). That's ridiculous! If you don't have the license, don't install the instance!

About 6 months into the job we got such a licensing audit, and it turned out we had to buy about 30 SQL Server Enterprise server licenses, at a cost of about 140,000. I'm not kidding, that was the proposal.
Fortunately I was able to arrange something else: I would consolidate the servers which didn't have a license, and then run the MAP Toolkit again to show we had done everything we promised to do.

It took me 6 months, a lot of planning and a lot of headaches to get all the servers consolidated down to just 3 SQL Server instances. Consolidating database servers is not fun. You have to know exactly which databases can be put together, and which databases have special needs like security, compatibility requirements from vendors, performance requirements and so on. You have to move all the objects to the other instances and test everything over and over to make sure the databases don't slow each other down.
Brent Ozar has a nice article to get you started with consolidation.

From that moment on I was really keen on keeping up with every SQL Server instance being installed.

Do we have a license?! No! Fine, then we don't install the instance until we have the licensing sorted out. But the project will slow down because of that! I don't care; we have to make sure we have the license before we install the instance.

Page Verification None… Now what?



I had the privilege to go on the SQL Cruise this summer, and I never learned that much in one week ever! One of the things I knew about, but hadn't yet thought to check, was the page verification option of the databases.

Argenis Fernandez gave a session on the SQL Cruise about the internals of SQL Server where this option was discussed. Back in the office I wrote a little script to check all my servers and boom! Almost all our databases, except the ones created after the installation of SQL Server 2005, had the page verification option set to either NONE or TORN_PAGE_DETECTION.

This shocked me, because the CHECKSUM option has been available since SQL Server 2005 and every database was already running on SQL Server 2005 or higher. It turned out the former administrators either didn't think it was needed or didn't know anything about it, and didn't want to change anything because everything worked as it was.
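The check itself is a simple query against sys.databases; a sketch like this lists every database that is not using CHECKSUM:

```sql
-- Databases whose page verification option is not CHECKSUM
SELECT name,
       page_verify_option_desc
FROM sys.databases
WHERE page_verify_option_desc <> 'CHECKSUM';
```

Run it on every instance and you'll quickly see which databases still carry NONE or TORN_PAGE_DETECTION from older installations.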

Because CHECKSUM lets SQL Server verify that the data read from disk is the same data that was written to disk earlier, it is a big win when it comes to detecting data corruption before it can spread. You can read more about the checksums on Paul Randal's blog.

When you change the page verification option to CHECKSUM, SQL Server will not do anything with the existing data until it has to write the data to disk. Then, and only then, will it create a checksum for the data being written. It will not touch any data that hasn't changed since the moment the page verification option was changed.

So when you want to make sure that all the data present from that moment on is verified with a checksum, you have to force SQL Server to read all the data and write it back to disk.
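Changing the option itself is a single statement (the database name below is just an example):

```sql
-- Switch page verification to CHECKSUM for one database
-- [MyDatabase] is a placeholder name
ALTER DATABASE [MyDatabase] SET PAGE_VERIFY CHECKSUM;
```

Remember: this only affects pages written from this point on, which is exactly why the rewrite tricks below are needed for the existing data.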

You can do this by various methods but the ones that came to my mind were:

  1. Create a new table and copy all the data from the old to the new. Delete the old table and rename the new to the old table.
  2. Rebuild the clustered index if present. If no clustered index is present, create one and drop it afterwards.

Method one sounds easy but is really time consuming, besides the fact that you'll need twice the amount of disk space for the tables being copied. With small tables this is fairly easy, but in my case some of the tables were hundreds of gigabytes in size, so this wasn't an option.

Method two is a lot easier. You rebuild all the clustered indexes and the data will have a checksum after the action completes. Unfortunately I was dealing with a database where 95% of the tables were heaps (oh, how I hate heaps). To make this work I would have to create a clustered index on every heap and drop it afterwards.

The first thing I want to check is which tables are heaps.


CREATE TABLE #HEAPS
(
	object_id INT
	, table_schema VARCHAR(200) COLLATE SQL_Latin1_General_CP1_CI_AS
	, table_name VARCHAR(200) COLLATE SQL_Latin1_General_CP1_CI_AS
)

-- List all heap tables
INSERT INTO #HEAPS
SELECT
	t.object_id
	, s.name AS [table_schema]
	, t.name AS [table_name]
FROM sys.tables AS t
INNER JOIN sys.schemas AS s
	ON t.schema_id = s.schema_id
INNER JOIN sys.indexes AS i
	ON t.object_id = i.object_id
	AND i.type = 0 -- 0 = Heap
The result by selecting the data from the table #HEAPS:

Overview Heaps

This collects all the heaps present in the current database. That's a start, but how do I efficiently create all the clustered indexes without using a lot of disk space, and without them taking too long to create?

The next script uses the details from the temporary heaps table, checks all the columns and gets the column with the smallest length. Based on that, a T-SQL statement is generated that creates the clustered index on each table.

-- Generate the CREATE CLUSTERED INDEX statements for all heaps
SELECT
	('CREATE CLUSTERED INDEX ' + QUOTENAME('tempci_' + t.table_name) + ' ON ' + QUOTENAME(t.table_schema) + '.' + QUOTENAME(t.table_name)
	+ '(' + QUOTENAME(t.column_name) + ' ASC' + ')') COLLATE SQL_Latin1_General_CP1_CI_AS AS create_statement
FROM
(
	SELECT
		h.table_schema
		, h.table_name
		, c.object_id
		, c.name AS column_name
		, c.column_id
		, ty.name AS type_name
		, c.max_length
		, ROW_NUMBER() OVER (
			PARTITION BY c.object_id ORDER BY c.max_length ASC
		) AS rownr
	FROM #HEAPS h
	INNER JOIN sys.columns c
		ON h.object_id = c.object_id
	INNER JOIN sys.types ty
		ON ty.user_type_id = c.user_type_id
) t
WHERE
	t.rownr = 1

I figured that nobody in their right mind would create a clustered index named "tempci_[table name]", so that's the naming I used for the temporary clustered indexes.

Of course you could execute all the statements automatically, but I like to keep control, so I let the script generate the statements and execute them separately myself.

Now that all the clustered indexes have been created, we have to clean up the temporary ones. The following script will do that for you:

-- Generate the DROP statements for the temporary clustered indexes
SELECT
	'DROP INDEX ' + QUOTENAME(i.name) + ' ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) AS drop_statement
FROM
	sys.indexes i
INNER JOIN sys.tables t
	ON i.object_id = t.object_id
INNER JOIN sys.schemas s
	ON s.schema_id = t.schema_id
WHERE i.name LIKE 'tempci_%'


The script creates the necessary statements which you can copy and execute in a query window.

I hope this script is useful for you and that it makes your life easier. Any comments are appreciated.

Restore a database using Powershell



Recently I had a situation where I had to restore full backups of multiple databases on a SQL Server instance daily. The databases would already be there, and a simple restore with the replace option would suffice. As I'm a fan of PowerShell, I wanted to restore the databases using PowerShell.

Lazy as I am, I didn't want to make a script for each of the databases; I just wanted to supply a few parameters to a single script and have the script do the rest for me.

I wanted the script to do the following:

  • Get the latest backup file from a directory
  • Restore the database with replace option
  • Be able to use the script in a SQL Server Agent job

The script would have to have the following parameters to do everything:

  • Directory: the directory to search through to get the latest file
  • SQL Server instance: the instance to restore to
  • Database: the database to restore to
  • Filter: the extension to search for, like "BAK"

Eventually the script looks like this:

<#
  File:     RestoreDatabase.ps1
  Summary:  Restores databases in SQL Server
  SQL Server Versions: 2005 onwards
  Written by Sander Stad,
  (c) 2015, All rights reserved.
  For more scripts and sample code, check out
  You may alter this code for your own *non-commercial* purposes.
  You may republish altered code as long as you include this copyright and
  give due credit, but you must obtain prior permission before blogging
  this code.
#>
param(
    [string]$d
    , [string]$s
    , [string]$db
    , [string]$f
)

$directory = $d
$serverInstance = $s
$database = $db
$filter = $f

if(Test-Path $directory)
{
    # Get latest file from the directory
    $latestBackup = Get-ChildItem -Path $directory -File -Recurse -Filter $filter | Sort-Object LastAccessTime -Descending | Select-Object -First 1

    try
    {
        # Restore the database over the existing one
        Restore-SqlDatabase -ServerInstance $serverInstance -Database $database -BackupFile $latestBackup.FullName -ReplaceDatabase
    }
    catch
    {
        #Write-Error ("Restore of database failed." + $_)
    }
}
else
{
    #Write-Host -ForegroundColor Red "Directory couldn't be found!"
}
The catch blocks are commented out because a SQL Server Agent job can't cope with these kinds of error messages. The script assumes that the database is already present. In a later version I'll make it possible to adjust this, but for now the functionality is simply to execute a restore with replace.

To set up the job, go to SQL Server, create a new job with a job step, and fill it in like below:


Make sure the field Type is set on “Operating system (CmdExec)” and that the command is set up like this:

powershell "[folder to script]\RestoreDatabase.ps1 -d '[folder]' -s '[instance]' -db '[database]' -f '*.bak'"

The parameters:

  • -d: Directory to search
  • -s: SQL Server instance
  • -db: Database to replace
  • -f: Filter

I used the "*.bak" filter to get all the bak files, but you could use a different extension if needed, as long as it is for a full backup.

Replace the values between the brackets and give the job a spin.



Nobody is sysadmin but me


I'm going to be very blunt here: nobody is sysadmin but me on a SQL Server instance. With "me" I mean the DBA group that has to be able to work on the SQL Server instances.

Do you recognize this situation:

A company application stores mostly private information about its customers, and everything runs smoothly. Then suddenly you get a security audit and you discover that the application has GOD mode on the instance. The auditor asks you why, and you don't have your story ready. The auditor writes something down and leaves. A few hours later your manager comes along, sweaty, fear in his/her eyes, and asks you why you weren't in control of the server: "Why didn't you screen the database servers for security implications!"

This is what happened to me. I had only been there for about 6 months, but being the only DBA, I was the one to go to. From that moment on I made it my mission to make sure that no account but the DBA group was sysadmin on any instance.

Why was the server set up this way?
What went wrong during the installation of the application was that the software vendor couldn't give a straight answer as to why the application wouldn't install with normal installation rights like dbcreator. In a perfect world the software vendor supplies a set of privileges needed for the installation and a set of privileges needed for the normal functioning of the application.
In this situation that was obviously not the case, and the DBA at that moment just gave sysadmin to the service account.

But why is that a bad thing? I’ll give you a few examples:

  1. Nobody on the instance needs to be sysadmin except the admins... period 😉
  2. If you're sysadmin you can do anything in SQL Server, with no restrictions
  3. All security checks are bypassed
  4. No software is hack-free

Imagine that you come in on a Monday and, due to budget cuts or other priorities, your application is not up-to-date. I've had situations where the vendor went out of business but the application was still needed for several business-critical processes. Suddenly employees come to the administrators telling them that there is no more data in the application. You look at the databases and the database is gone, or all the tables are deleted, or, even worse, the application could connect to the internet and suddenly all the data is available online.

But what if the software vendor doesn't know (or want to know) how to make the accounts work with fewer rights? I assume you have a test server where everything is set up as in production. Just remove all the rights and grant only the minimum read (and possibly write) permissions. Let the users test the application and adjust where needed.

But what if I have a certain group of people who need more rights, and sysadmin is the easiest way? Lucky for you, since SQL Server 2012 you're able to create your own server roles. That means you can set up a role with, for example, the "view server state" permission to let other users see what happens on the server. Mike Walsh created a nice example of how to give junior DBAs the tools to do the work they need to do.
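As a small sketch of that idea (the role and login names here are made up for the example), a custom server role with just enough rights to monitor the instance looks like this:

```sql
-- Custom server role with monitoring rights only (SQL Server 2012 and up)
-- [monitoring] and [DOMAIN\JuniorDBA] are example names
CREATE SERVER ROLE [monitoring];
GRANT VIEW SERVER STATE TO [monitoring];

-- Add a login to the role
ALTER SERVER ROLE [monitoring] ADD MEMBER [DOMAIN\JuniorDBA];
```

Members of this role can query the DMVs to see what's running, but they can't change anything on the instance.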

I've been in situations where project managers were desperate to complete their project even with an ugly security model in place. In the past, I must admit, I sometimes caved due to the pressure to complete the projects. Don't let this happen!
When the project is complete you're responsible for the database server, with or without security in place like it should be, and when push comes to shove you have to explain why you accepted the database server as it is and why you didn't act to get things right.

But I don't know this server; how do I see which users have too many rights?
I've used the script from the following article for a while now, and it helped me a lot to get a quick overview of the database and server permissions set for each user or group.
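As a bare-bones starting point (a sketch of my own, not the script from that article), you can at least list who is a member of the sysadmin fixed server role:

```sql
-- Logins that are members of the sysadmin fixed server role
SELECT p.name,
       p.type_desc
FROM sys.server_role_members rm
INNER JOIN sys.server_principals r
    ON rm.role_principal_id = r.principal_id
INNER JOIN sys.server_principals p
    ON rm.member_principal_id = p.principal_id
WHERE r.name = 'sysadmin';
```

Anything in that list that isn't the DBA group deserves a hard look.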

Hope this helped you out!


SSRS permissions overview


Problem: SSRS Permissions Overview

I had to create an SSRS permissions overview with all the privileges set for each report, folder and other objects in SQL Server Reporting Services.

When you google something like export permissions SSRS, you get tons of results where people have issues with permissions.

I didn't have issues with the permissions; I only wanted to get an SSRS permissions overview.


Just a simple T-SQL script which combines some of the tables that hold the values I needed.

The script below shows all the different objects in SSRS, with their path and name, along with the username and the privilege it has.

USE [ReportServer] 


SELECT u.username, 
       r.rolename, 
       c.path, 
       c.name 
FROM   [dbo].[users] u 
       INNER JOIN [dbo].[policyuserrole] pur 
               ON u.userid = pur.userid 
       INNER JOIN [dbo].[policies] p 
               ON p.policyid = pur.policyid 
       INNER JOIN [dbo].[roles] r 
               ON r.roleid = pur.roleid 
       INNER JOIN [dbo].[catalog] c 
               ON c.policyid = p.policyid 
ORDER  BY u.username, 
          c.path 
More info about the ReportServer database can be found here.

The image below shows the results of the query:

results ssrs permissions

Results from SSRS permissions query

This is a fairly simple T-SQL script, but it can help you out.

I always appreciate comments so if you have one feel free to share it.