Hide or Show Columns in SSRS Reports Using Parameters

Standard

Reading Time: 4 minutes

I regularly have reports with an extensive number of columns.
Because of the number of columns, these reports tend to become inefficient and contain more information than we always need. Users may want to select only certain columns to make the report easier to read.

Hiding and showing columns in SSRS reports using parameters is a solution to make reports more dynamic and easier to use.

At the time of writing this article, SQL Server Reporting Services did not yet have the possibility to use checkbox parameters. Instead we will be using a multi-value text parameter to show or hide our columns.

The project file

For the demo I created a dummy project that uses the AdventureWorks2014 database. I have created a data set that returns the addresses from the Person.Address table.

The report contains a table that shows the data from the address data set.

If you want to play around with the demo, the project file can be downloaded from here. Make sure you use Visual Studio 2017 or higher.

Create the parameter

For us to have the ability to hide or show columns we have to create a parameter. We want to be able to filter on multiple columns so we want to create a multi-value parameter.

To do that, either right-click on “Parameters” in your “Report Data” pane or right-click in the parameter pane at the top of the report designer, and click on “Add Parameter”.

Add Parameter

When you click “Add Parameter” you get a window that looks like this:

Add Parameter Window

Give the parameter a name. In this case I named it “ColumnFilter”, but you can name it whatever you want.
Select “Text” as the data type (you can choose another data type, but I like text) and make sure you select “Allow multiple values”.

For this example we’re going to hide or show the address fields. The data set has two address columns: AddressLine1 and AddressLine2.

Go to the tab “Available Values” and add the following fields:

Add Parameter Available Values

In this example we have two values in our parameter. The label and the value are pretty self-explanatory.

We’re going to add the values to the “Default Values” tab to make sure our report shows the columns by default.

Add Parameter Default Values

Click “OK” and we have our parameter.

Show and hide the columns

Now that we have our parameter we can change the table to hide or show the address columns.

Right click on the column “Address Line 1” and click on “Column Visibility”.

Column Visibility

The following window will appear:

Column Visibility Show Or Hide

Click on the “Show or hide based …..”. Click on the “fx” button and the following window appears:

Show or Hide Expression Window

This window is really powerful. We can do a lot with expressions throughout our SSRS reports.

In this case we’re only going to enter a small line of code:

Expressions are a little different from what you might be used to in normal programming. We are not able to search through our multi-valued parameters the way we do with arrays in other programming languages (at least I have not found a way yet).
Instead we will join all the values from the ColumnFilter parameter and check if the result contains one of the values we want to search for.

What the expression does is combine all the values, separated by a comma (“,”). If all the values are selected we’ll have a string like this: “address1,address2”.
The next part checks if the string contains one of the values. The “.Contains” method returns true if a value is within that string and false if it’s not.
That entire expression is wrapped in an IIF that evaluates whether the statement is true or not. In this case, when the string contains the value we look for, we want to return False so the column is not hidden.
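Based on that description, the expression looks something like the line below. The parameter name “ColumnFilter” and the values “address1” and “address2” are the ones we set up earlier; adjust the value you search for per column.

    =IIF(Join(Parameters!ColumnFilter.Value, ",").Contains("address1"), False, True)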

Add the code to the Expression window and click on “OK”. Click “OK” again to close the column visibility window.

Do the same thing with the column “Address Line 2” but change the value in the expression to “address2”.

Executing the report

Now that we’ve done all the work, open the report preview.

In this example I deselect the “Address Line 2” value and click on “View Report”. The result is the same table but with the column “Address Line 2” hidden.

The result can be seen below:

Hidden Column Address Line 1

Conclusion

Showing or hiding columns with parameters is a very useful way to make your reports more dynamic.

This is one way of doing it; I bet there are other ways of showing or hiding columns, but this one tends to work really well in my environment. The users are able to adjust their reports and I don’t have to create multiple reports to get to the same result.

I hope you enjoyed this post and comments are always appreciated.

 

 

 

Test Log Shipping With DbaChecks

Standard

Reading Time: 2 minutes

I’ve written quite a lot about log shipping over the last few months and this post is another one of them.
This time I’m talking about how to test log shipping with dbachecks.

DbaChecks?! What is that?

DbaChecks is a brand new module developed by the community to test your environment for all kinds of best practices. There are even possibilities to export the data in such a way that you can create amazing dashboards in Power BI.

The module uses Pester to test a whole lot of best practices. Because of that we can very quickly test what’s wrong with our server(s).

These tests are all included in the module and can be listed with the Get-DbcCheck command.
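If you have the dbachecks module installed you can list them yourself; a quick sketch:

    # Requires the dbachecks module: Install-Module dbachecks
    # List every check the module ships with
    Get-DbcCheck

    # A quick and dirty way to find the log shipping related checks
    Get-DbcCheck | Out-String -Stream | Select-String 'LogShipping'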

Why this blog post

We want everyone to know about this module. Chrissy LeMaire reached out to me and asked if I could write some tests for the log shipping part and I did.

Because I wrote the log shipping commands for dbatools I was excited about creating a test that could be implemented into this module for everyone to use.

Test Log Shipping With DbaChecks

If you want to go all nuts you can run Invoke-DbcCheck with all the checks enabled and watch the tests fly by.
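Something along these lines; the instance names are placeholders, so double-check the exact parameters with Get-Help Invoke-DbcCheck:

    # Run every check in the module against one or more instances
    Invoke-DbcCheck -SqlInstance sql1, sql2 -AllChecks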

To make it easier to read, I only want to know about the things that failed.
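Because dbachecks is built on Pester, the output can be limited to the failures. A sketch, assuming the -Show parameter is passed through to Pester:

    # Only display the checks that failed
    Invoke-DbcCheck -SqlInstance sql1, sql2 -AllChecks -Show Fails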

To test a certain group of checks you can use a tag, or specify a certain test, with Invoke-DbcCheck.
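For this post that means the log shipping checks. A sketch, assuming they are grouped under a LogShipping tag:

    # Run only the log shipping checks
    Invoke-DbcCheck -SqlInstance sql1, sql2 -Check LogShipping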

 

Using this module will really help you check your entire SQL Server infrastructure with ease. So what are you waiting for? Download the module and go test your instances.

 

Log Shipping With dbatools – Part 4: Recover a Log Shipped Database

Standard

Reading Time: 3 minutes

In the third part of the series we discussed the way to get the log shipping errors using the command “Get-DbaLogShippingError”. This blog will be about how to recover a log shipped database.

Out of the box solutions

I can be very short in this paragraph: there is no out-of-the-box solution to recover a log shipped database.

Why this command

Initially log shipping was meant to be used as a warm standby. You have your data on another instance, but you still need some human intervention to get it all back up.

Imagine the following situation. You have setup log shipping using either the GUI or the commands in dbatools. You have about 15 databases and everything is working fine.

Until one day the primary instance goes down and is not recoverable. For the production to continue you have to bring the log shipped databases online fast.

You have to figure out what the last transaction log backup was. You have to check if it was copied to the secondary instance and if it has been restored.

Doing this by running a couple of queries, copying the files if needed and running the log shipping jobs takes time. I’d rather run a command to recover one or more databases and get back to the problem of the primary instance.

Invoke-DbaLogShippingRecovery

The Invoke-DbaLogShippingRecovery command will execute the following steps:

  1. Check the agent status and start the agent if it’s not in a started state.
  2. Retrieve the latest transaction log backup and try to copy it to the secondary instance if it’s not present. It will wait and check the log shipping status to see if the backup file is copied.
  3. Retrieve the last restored transaction log backup. Execute the restore process to get the database up-to-date.
  4. Disable the jobs after the copy and restore process.
  5. After all the actions it restores the database to a normal state.

To execute the command
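A minimal sketch, where sql2 and DB1 are placeholder names for the secondary instance and the log shipped database:

    # Requires the dbatools module: Install-Module dbatools
    # Recover a single log shipped database on the secondary instance
    Invoke-DbaLogShippingRecovery -SqlInstance sql2 -Database DB1 -Verbose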

The result of the command

Recover Log Shipping Result Command

The image below shows the database in a normal state after the command in the SQL Server Management Studio.

Recover Log Shipping Result GUI

The result of the jobs being disabled

Recover Log Shipping Result Jobs

More options

In my example I showed how to recover a single database, but the -Database parameter also accepts multiple databases.

Besides specifying the individual databases, you can also let the command recover all the log shipped databases.

In some cases you want to recover the databases but not execute the recovery to a normal state.
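Sketches of those variations, again with placeholder names; see Get-Help Invoke-DbaLogShippingRecovery for the exact parameters in your dbatools version:

    # Recover multiple log shipped databases in one go
    Invoke-DbaLogShippingRecovery -SqlInstance sql2 -Database DB1, DB2, DB3

    # Recover the database but leave it in a restoring state
    # instead of bringing it back to a normal state
    Invoke-DbaLogShippingRecovery -SqlInstance sql2 -Database DB1 -NoRecovery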

 

This concludes the command “Invoke-DbaLogShippingRecovery”. This was the final post in this series. If you want to look back at the other commands, follow the links below:

  1. Setup Log Shipping
  2. Test Log Shipping Status
  3. Get Log Shipping Errors

 

Log Shipping With dbatools – Part 3: Get Log Shipping Errors

Standard

Reading Time: 3 minutes

In the second part of the series we discussed the way to test the log shipping status using the command “Test-DbaLogShippingStatus”. This blog will be about how to get log shipping errors to analyze why your log shipping isn’t working.

Out of the box solutions

There are several options to get the log shipping errors right out-of-the-box. One of them is using queries and the other one is the event viewer.

Queries

There are queries that can help to get the log shipping errors. One that is commonly used queries the log_shipping_monitor_error_detail table.
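A minimal example of such a query; the table lives in msdb on both the primary and the secondary instance:

    -- Most recent log shipping errors, newest first
    SELECT *
    FROM msdb.dbo.log_shipping_monitor_error_detail
    ORDER BY log_time DESC;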

Although this is a good way to get the log shipping errors, unless I’m using CMS or a monitoring server I have no way to find errors across multiple instances with log shipped databases.
Furthermore, I want to keep all my tools together and make it easy to solve any errors.

Event Viewer

The other way to see if there are any errors in the log shipping is by using the event viewer.

Get Log Shipping Errors Event Viewer

Personally I don’t like using the event viewer because there could be other errors that I would have to filter through before getting to the root cause. The other reason this doesn’t give me a complete view is that it doesn’t register any errors that occurred during the restore of the backups.

Why this command

When there is something wrong with the log shipping I want to know about it as soon as possible. Using the above options doesn’t give me the flexibility to do that.
For instance I cannot check multiple instances at once, unless I’m using CMS or a monitoring server.

To make the tool set complete this command was needed to make sure a user would be able to get an overview of the log shipping status.

Get-DbaLogShippingError

By default the command will return all the errors that ever occurred. It collects all the information and returns it in a table structure to the user.

To execute the command:
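A minimal sketch, with sql2 as a placeholder instance name:

    # Requires the dbatools module: Install-Module dbatools
    # Return every log shipping error recorded on the instance
    Get-DbaLogShippingError -SqlInstance sql2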

The result is an overview of the errors.

Get Log Shipping Errors No Filter

In my play area I made sure I would have a lot of errors to test the command. It turned out I had more than 3100 errors!

Get Log Shipping Errors No Filter Count

This brings us to one of the reasons I created this command: filtering the results.

I may only want to see the errors that occurred in the last hour.
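A sketch of that filter, assuming -DateTimeFrom takes a regular datetime value:

    # Only the errors from the last hour
    Get-DbaLogShippingError -SqlInstance sql2 -DateTimeFrom (Get-Date).AddHours(-1)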

The result is still quite a big list, but more manageable.

Get Log Shipping Errors Filter Date From

In this example we’re only using the parameter “-DateTimeFrom” but we can also use “-DateTimeTo” to filter between certain periods.

More filtering

Of course there are other filters to make it easier to zoom in into a specific problem.

It’s possible to filter on the databases:
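For example, with placeholder database names:

    # Only the errors for specific databases
    Get-DbaLogShippingError -SqlInstance sql2 -Database DB1, DB2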

Get Log Shipping Errors Filter Database

To filter on the type of instance you can use the “-Primary” and “-Secondary” parameters

To filter on the specific actions use the “-Action” parameter:
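Sketches of those two filters; the parameter names come from the paragraphs above, the action values are an educated guess, so check Get-Help for the exact options:

    # Only the errors recorded for the secondary instance
    Get-DbaLogShippingError -SqlInstance sql2 -Secondary

    # Only the errors for a specific action, for example the restore step
    Get-DbaLogShippingError -SqlInstance sql2 -Action Restore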

 

This concludes the command “Get-DbaLogShippingError”. The next post will be about the command “Invoke-DbaLogShippingRecovery”.

Log Shipping With dbatools – Part 2: Test Log Shipping Status

Standard

Reading Time: 3 minutes

In the first part of the series I described the command “Invoke-DbaLogShipping”. This makes it possible to set up log shipping. This blog post will be about the command to test log shipping status.

Before I describe the command, I want to discuss the options that are available in the SQL Server Management Studio (SSMS) and SQL Server.

Out of the box monitoring

Microsoft has made it possible to check the log shipping using SSMS. This can be done using queries or by using the “Transaction Log Shipping Status” report.

Queries

To return the log shipping status using T-SQL execute the following code in a query window in SSMS:
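One option that returns this kind of information is the log shipping monitor procedure in msdb; a minimal sketch:

    -- Backup/copy/restore status information for the log shipped databases
    EXEC msdb.dbo.sp_help_log_shipping_monitor;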

The result is as follows:

Log Shipping Status Query Result

This is a good method to find any possible issues, but you need to check multiple columns to find out if something is wrong.

Transaction Log Shipping Status Report

By default SSMS has a report that shows the status of the log shipping. To open the report, right-click on the server, go to “Reports”, go to “Standard Reports” and click on “Transaction Log Shipping Status”.

Log Shipping Status Report

The result is an overview of each database with the status of that process:

Log Shipping Status Report Result

The report shows red colored text if something is wrong, which is a great way to find any problems. Still, we need to go into SSMS and click around before we get to this information.

Why this command

Monitoring your log shipping processes is important. You need to know the synchronization status of the log shipped databases.

The log shipping process consists of three steps: backup, copy and restore. Log shipping tracks the status of these processes.
It registers the last transaction log backup, the last file copied and the last file restored. It also keeps track of the time since the last backup, copy and restore.

But that’s not all. Log shipping also checks if the threshold for the backup and restore has been exceeded.

During the log shipping setup, thresholds are set for the backup and restore process.
The default thresholds are 60 minutes for the backup and 45 minutes for the restore. There is no threshold for the copy process.
If, for example, the time since the last backup exceeds the backup threshold, an alert will be triggered.

That’s a lot of information to consider and that’s why this command was developed. It will enable the user to get a complete overview of the log shipping status without having to know all the details.

Test-DbaLogShippingStatus

The command will return a lot of information by default. It collects all the information and, based on that, returns a status.

To execute the command:
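A minimal sketch, with sql2 as a placeholder instance name:

    # Requires the dbatools module: Install-Module dbatools
    # Test the log shipping status for the log shipped databases on the instance
    Test-DbaLogShippingStatus -SqlInstance sql2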

The result is a detailed overview of each database including the status.

Log Shipping Status Command Detailed

This looks a lot like the results you get from the queries or the report we talked about earlier. All this information can be a little overwhelming and it’s not always needed.
If you don’t need all that information, there is an option to format and filter the output.
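A sketch of that option, assuming the -Simple switch gives the condensed output:

    # Condensed output: instance, database, instance type and status
    Test-DbaLogShippingStatus -SqlInstance sql2 -Simple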

It will only show you the core information to know what the status of the log shipping is.

The result of the command:

Log Shipping Status Command Simple

As you can see there is a lot less information. It only shows the instance, database, instance type and the status. In most cases that’s all you need.

More filtering

The command also allows you to filter out the primary and the secondary databases.

There is a way to filter on specific databases using the “-Database” parameter.

Of course there is also an option to exclude certain databases by using the “-ExcludeDatabase” parameter.
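Sketches of those filters, with placeholder names; double-check the exact parameter names with Get-Help Test-DbaLogShippingStatus:

    # Only the primary or only the secondary databases
    Test-DbaLogShippingStatus -SqlInstance sql1 -Primary
    Test-DbaLogShippingStatus -SqlInstance sql2 -Secondary

    # Limit the check to specific databases, or exclude some
    Test-DbaLogShippingStatus -SqlInstance sql2 -Database DB1, DB2
    Test-DbaLogShippingStatus -SqlInstance sql2 -ExcludeDatabase DB3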

 

This concludes the command “Test-DbaLogShippingStatus”. The next post will be about the command “Get-DbaLogShippingError“.

Log Shipping With dbatools – Part 1: Setup Log Shipping

Standard

Reading Time: 7 minutes

This post is the first one of a series of four describing all the different commands. We’ll discuss the commands to set up log shipping with dbatools, how to check the status, how to check for errors and how to recover in case of an emergency.

What is log shipping

Before we go into the why I want you to know what log shipping is.

Log shipping dates back to the early versions of SQL Server. The first article that I could find dates back to the year 2000 and explains how to setup log shipping with SQL Server version 7.

Microsoft has done a lot of work on the log shipping setup process and we no longer have to set up the “poor man’s log shipping”. The setup contains a nice GUI and it makes it easier to set up log shipping for your database(s).

Log shipping is the process of creating a transaction log backup, copying that backup to a secondary server and restoring the log to synchronize the databases. The image below shows the different elements of a log shipping architecture.

Log Shipping

https://docs.microsoft.com/en-us/sql/database-engine/log-shipping/media/ls-typical-configuration.gif

I’m not going to explain in detail what log shipping is or how to set it up using the traditional method. There are lots of websites that have already done a great job describing everything related to log shipping. Here are a couple of websites that could help

Why create a command set

Like I said before, Microsoft has done a great job of automating the log shipping process and making it easier for the user to setup. The GUI describes all the settings needed. Even someone inexperienced could setup log shipping in a short amount of time (given that all the requirements have been met).

Although Microsoft has simplified the process since the earlier versions of SQL Server, setting up log shipping still takes a lot of time. You have to set up each database individually and most of the time you are doing almost the same thing.
There are ways to automate this using dynamic T-SQL. However, I found that, at least in my situation, it was prone to errors and it took me more time to troubleshoot than to set it up manually.

Why we needed to setup log shipping

A couple of months ago my company wanted to create a high availability solution for our production databases.
While researching solutions like clustering and Availability Groups, we looked at architecture decisions, licensing and costs. The outcome was that we weren’t able to use any of the SQL Server Enterprise HA solutions.
Log shipping is not a so-called hot standby that immediately recovers in case a service or database goes down. Instead it is a warm standby, where human intervention is needed to recover.

We have about 20 production databases and, looking at the database sizes and the amount of transactions, in my opinion we are a small shop.
We decided to go with log shipping. It’s well established within SQL Server and not yet deprecated like mirroring.

So on a Monday morning, after all the backups were done, I started to configure our databases for log shipping. For me the process of setting up log shipping was very tedious. I basically had to go through each of the databases and use the same settings over and over. The biggest difference would be the database name in the backup or copy directory.

The reason for the command set

In the end it took me about 3 to 4 days to set it up. This was mostly because I didn’t want to stress the network too much by setting up multiple databases at once. At some point I made some mistakes and had to start over again. I really don’t like to do repetitive tasks and in my daily work I automate most of those processes.

To set up log shipping for a single database in the GUI you need about 40 clicks and have to insert lots of information in different windows.

That’s where I contacted Chrissy LeMaire ( b | t ) from the dbatools project and asked if it would be interesting to create log shipping commands for the PowerShell module. She was really enthusiastic about it and I started to develop the commands.

The command set

Initially it started with the command “Invoke-DbaLogShipping”, but soon a couple of other commands followed. The command set contains the commands Invoke-DbaLogShipping, Test-DbaLogShippingStatus, Get-DbaLogShippingError and Invoke-DbaLogShippingRecovery.

This blog will be the first of four posts where we dive into “Invoke-DbaLogShipping”, because we first need to have log shipping set up before we can use the other commands.

The Command Invoke-DbaLogShipping

During the development of the command I chose to make the process for setting up the log shipping as easy as possible for the user. My main requirement for the command was that as long as you have the minimal requirements for log shipping you would be able to set it up.

The main command is “Invoke-DbaLogShipping”.

Within log shipping there are a lot of decisions that can be made. All those decisions ended up with lots of parameters. The command has over 90 different parameters ranging from setting the backup folder to setting the name of schedules.

I will explain a couple of the most important ones that I think will be used the most:

General

  • Force: When this parameter is used, lots of assumptions will be made and you only need a minimum of settings to set up log shipping.

Server

  • SourceSqlInstance: The primary instance
  • DestinationSqlInstance: The secondary instance

Database

  • Database: Database (or databases) that need to be log shipped

Backup

  • BackupNetworkPath: Network path where the backups will be saved. This is the parent folder, a child folder for each database will be created in this folder.
  • BackupLocalPath: The local path for the backups. Only used by the primary instance and not mandatory
  • BackupRetention: How long to keep the backups. The default is 3 days.
  • BackupScheduleStartDate: Start date when the backup schedule needs to start
  • BackupScheduleStartTime: Start time of the backup schedule. Maybe the backups should only take place during certain time periods of the day.
  • BackupScheduleEndTime: End time of the backup schedule. Maybe the backups should only take place during a certain time period of the day.
  • CompressBackup: Do you want the backup to be compressed. By default it looks at the SQL Server edition and server settings.
  • FullBackupPath: Path to the full backup. Is only used when log shipping is being setup.
  • GenerateFullBackup: Instead of using an existing full backup you can use this parameter to create a full backup on the fly during setup.
  • UseExistingFullBackup: Besides FullBackupPath and GenerateFullBackup you can also set the option to let the command retrieve the last full backup made for the database.

Copy

  • CopyDestinationFolder: Where do the backups need to be copied to

Restore

  • DisconnectUsers: Important setting if your secondary database is in read-only mode. Users will be disconnected when the restore process starts and this parameter is set.
  • Standby: Do you want your databases to be in stand-by mode. By default the database will be in no-recovery mode.
  • StandbyDirectory: Directory where the TUF (transaction undo file) file needs to go for a database. This is needed when using the -Standby parameter.

Setup log shipping using Invoke-DbaLogShipping

Before we look into the command we need to go through the prerequisites of log shipping. The items below are the bare minimum to setup log shipping.

Prerequisites

  • At least SQL Server 2000 standard edition
  • The primary database needs to be in Full recovery mode.
  • A network share/drive that’s accessible for the primary and secondary server

A more sophisticated setup would be:

  • A primary server containing the primary databases
  • A secondary server containing the secondary databases
  • A monitoring server logging all the information
  • At least SQL Server 2000 standard edition
  • The primary database needs to be in Full recovery mode.
  • A separate network share for the backup process
  • A separate network share for the copy process

Technically you don’t need multiple servers to setup log shipping. You can set it up with just one single SQL Server instance. In an HA solution this wouldn’t make sense but technically it’s possible.

Having a separate server acting as the monitoring server ensures that when one of the servers goes down, the logging of the actions still takes place.

Having a separate network share for both the backup and copy makes it easier to setup security and decide which accounts can access the backups. The backup share needs to be readable and writable by the primary instance and readable by the secondary instance.
The copy share needs to be accessible and writable for only the secondary instance.

Additional information

  • Backup Compression
  • Database Status
  • Schedules
  • Cleanup

Backup Compression

Backup compression was introduced in SQL Server 2008 Enterprise. Beginning in SQL Server 2008 R2, backup compression is supported by SQL Server 2008 R2 Standard and all higher editions. Every edition of SQL Server 2008 and later can restore a compressed backup.

Backup Compression in SQL Server

Database Status

You can decide whether you want the database to be in a recovery or read-only state. Read-only gives you the ability to let other processes, like SSRS or ETL, read production data without interfering with the actual production instance.

Schedules

When to run the backup of the transaction log. Does it need to run every 15 minutes, which is the default, every hour, once a day, etc.? The same goes for the copy and restore schedules.

Cleanup

How long do you need to hold on to the backups? The default is three days, which in some cases can be quite long.

Setup

This is where it’s going to be fun. As an example I’ll be using the command with the minimum of parameters. To do this a couple of the requirements need to be met. Let’s go through these prerequisites:

  1. Setup the shares for the backup and copy destination
  2. Privileges for primary and secondary instance to the backup and copy shares are set
  3. Database(s) that need to be log shipped
  4. Set the database in read-only or no-recovery mode
  5. Compress or not to compress the backup
  6. Generate or use an existing full backup

The command below will log ship a database from instance SQL1 to SQL2. The backups will be compressed, and to initialize the secondary database a new full backup is generated.
No other settings are made. To make sure all the settings are met, the -Force parameter is used to enable all the assumptions.
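Roughly something like this, where the instance names, database name and network share are placeholders for your own environment:

    # Requires the dbatools module: Install-Module dbatools
    # Log ship DB1 from SQL1 to SQL2: generate and compress a new full backup
    # and let -Force fill in sensible defaults for everything else
    Invoke-DbaLogShipping -SourceSqlInstance sql1 -DestinationSqlInstance sql2 `
        -Database DB1 `
        -BackupNetworkPath '\\fileserver\logshipping' `
        -CompressBackup `
        -GenerateFullBackup `
        -Force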

The result:

logshipping example force

This is only a fraction of the parameters available. The remaining parameters make it possible to configure your log shipping exactly the way you want it to be.

Execute the following command to get more information:
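That is simply the built-in PowerShell help:

    # Full parameter documentation and examples for the command
    Get-Help Invoke-DbaLogShipping -Detailed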

The next blog will be about the command “Test-DbaLogShippingStatus“. It’s responsible for testing if all of your log shipped databases are in a synchronized state.

T-SQL Tuesday #98 – Your Technical Challenges Conquered

Standard

Reading Time: 6 minutes

T-SQL Tuesday is back around again and this time it’s the first one of 2018!

This wonderful idea by Adam Machanic ( Blog | Twitter ), is this time hosted by Arun Sirpal ( Blog | Twitter ).

 

 

The topic Arun has chosen is:

Write about and share with the world a time when you faced a technical challenge that you overcame and you can go really technical with both the issue and solution if you like.

Over the years I have faced lots of technical challenges, and so have lots of other DBAs. But the one that still sticks with me the most is the story of the failing cluster setup.

It was back in 2008/2009 and I was a consultant at a hospital for a job to set up a fail-over cluster with SQL Server 2005. If you’re wondering why SQL Server 2005: SQL Server 2008 had just been released, and the medical industry is not that keen on being an early adopter when it comes to technology.

My task was to set up a fail-over cluster on a couple of HP servers. They were already prepared by the system administrators and installed with Windows Server. It seemed simple enough; I had done that many times before and it shouldn’t give me any trouble. Oh boy, was I wrong about that!

Having set up all the different parts of Windows Server, I started the installation of SQL Server 2005 on the first node. Everything went smoothly and the installation was successful.

Now comes the installation of the second node. I start the installation of the node to add it to the cluster and it installs to about 90% and stops. It gives me a general error that something in the installation went wrong and that I should check the installation logs to find out what went wrong.

I checked the log files and didn’t find anything special that could explain why the installation failed. Let’s go through all the settings and try again. Did I enter the wrong credentials or some other setting?
After starting the installer it again failed at the 90% mark. The installation rolled back entirely and I was not sure what had happened.

Let’s check the installation logs again, hopefully this time it shows me what went wrong. Unfortunately it didn’t show me anything.

I wasn’t going to give up. I tried different variations of the installation settings to see if I made some sort of mistake.
I checked the hardware for any defects, replaced memory, swapped the CPUs of the machines.

Nothing there could point out why the setup failed.
At this point it was a good idea to get a higher power involved, so we created a ticket with Microsoft support. It got escalated a few levels higher and even they weren’t able to figure out why the installation failed. Their answer was: “Just use SQL Server 2008, that should work”.

Besides the fact that was not the answer I was looking for, it was not possible for me to install SQL Server 2008 because it was not yet proven technology.

So as the consultant I was kind of in a pickle. How was I going to get the job done? My reputation was on the line. I’m supposed to be the one with the knowledge, and there I was, struggling for over a week to get a cluster set up.

At this point I had to start thinking outside the box, so I was looking at the hardware again. The hardware was all approved and supported by Microsoft. We didn’t need any third party drivers or anything like that.

Looking at the hardware, besides the motherboard, power supply and some other parts, we were only able to control the CPU, memory and the hard disks. The servers were identical with the following configuration:

CPU: 4 x 6 core
Memory: 64 GB
Hard disk:
System disk – 40 GB
Application disk – 30 GB
Data disk: 300 GB
Log disk : 100 GB

I first examined the disks. There was nothing special about them. The system and application disks were the only local disks; the data and log disks were LUNs.
The memory was nothing special either and was set up as 4 x 16 GB modules.
The CPU was something I had not seen before and what caught my attention was the 6 core specification. In total we had 24 cores and for some reason that number seemed odd to me.

Let me explain why the number 24 was odd to me. When you’re used to working with computers, you are used to working with numbers that are powers of 2: 1, 2, 4, 8, 16, 32, 64, etc., but not the number 24. That’s right between 16 and 32.

It’s possible to let Windows believe there is a different number of processors in your server. You can change this setting in the MSCONFIG utility by setting the number of processors on the BOOT.INI tab.

At this point I was open to any new idea, so I changed the number of processors to 2. The number 2 is a power of 2, so let’s see what happens.
I restarted the server, looked into Task Manager and saw two processors. That all worked, so now let’s run the setup of the SQL Server node again.

I would not have believed it if it hadn’t happened to me, but for some reason the setup was successful. At this point it could’ve been a fluke and I wanted to be sure. I removed the node from the cluster, reset the number of processors to the original value and ran the setup again. This time it failed as I had seen before.

This was still not good enough for me, but I was getting closer to a conclusion. Let’s see what happens if we change the number of processors to 8. I changed the number, rebooted the server and started the setup again. This time it was successful again.

My conclusion was that for some reason the setup couldn’t handle a number of processors that was not a power of 2. In my mind this was insane because I had never encountered such a problem before.

Now that everything was installed I could set the number of processors back to 24 and get on with installing the updates, like service pack 3. I changed the number of processors, rebooted the server and as soon as the server booted up, my cluster installation failed. It couldn’t be the number of processors, right? Again there was no clear error, and because I knew what went wrong I changed the number of processors back to 8 and rebooted. This time the server and cluster booted up.

I contacted Microsoft support and told them about my findings. Their reply was that indeed something odd had been fixed in SQL Server and the fix was released in service pack 3.

Back with SQL Server 2005 you weren’t able to slipstream your service packs. You first had to install the initial release and then apply the service pack.

The next step was to install SP 3 with the 8 cores. I started the setup and of course things didn’t go as expected. The installation failed and gave no clear reason why.

Again I was stuck. I had an incomplete cluster install because we couldn’t use the full amount of resources, and I couldn’t install the right update to fix it.

I started to dig into the updates that would be applied in SP 3. I noticed some changes to the system databases, especially the mssqlresource database.

The mssqlresource database is a system database that’s not visible for the user and is only used internally by SQL Server.

I ran the setup again and this time I wanted to see what happened to the system databases during the install. I saw the master database being replaced by a new version, and the msdb database as well, but as soon as the installation wanted to finish the update of the mssqlresource database it disappeared, the original version was put back and the installation failed.

That’s it! That’s the reason the service pack was failing. With a little research and some gut feeling I saw what happened. But why did it happen? Was the installation not able to rename the database?

To be sure, I did something drastic (don’t ever do this in production!). I went into the SQL Server settings and updated the physical name of the mssqlresource database. I shut down the instance and renamed the files for the database on disk.
I started the instance successfully. I checked the name of the database and it showed the renamed files.

I started the installation and watched the system database files being changed. First the master database, then msdb and finally the mssqlresource database. I saw a new mssqlresource database being created next to the original one, the old one being removed, and the installation finished!

That was the problem with SP3: it was not able to do an in-place upgrade of the mssqlresource database.

We’re almost done. The only thing I had to change back was the number of processors and reboot the system. I changed the setting, rebooted the server and everything booted up successfully.

This was one hell of a journey that gave me the grey hairs I have now. It took about 2 weeks of my life for what could’ve been an afternoon.

This was an experience that changed my way of thinking as a DBA and I learned a lot along the way.
The moral of the story is that there is always a solution. Go and dig into the material, think outside the box and do not let any challenge get the best of you.

Reflect on 2017 And My Learning Goals for 2018

Standard

Reading Time: 4 minutes

Unfortunately I missed the last T-SQL Tuesday of 2017 but I’ll share my learning goals for 2018. I also want to reflect on last year because I think it’s good to see if I was able to improve.

Reflect on last year

At the end of 2016 I set myself the following goals:

  • Do more presentations
  • Get better in PowerShell development
  • Play a bigger part in the SQL Server community

I got really excited in 2016 by getting drawn into the SQL Server community by presenting and helping people out. That gave me a lot of energy and I wanted that to continue in 2017. And oh boy did it continue!

I got really involved with the dbatools project. I developed a bunch of functions to support the SQL Server Agent, log shipping and a bunch of other things. With that I hit two birds with one stone: play a bigger part in the SQL Server community and get better at PowerShell development.

I also wanted to do more presentations. I submitted sessions to lots of events and got to present a couple. Not the smallest ones either, because I got a session at SQL Saturday Oregon and a lightning talk at the PASS Summit.

One thing I really wanted to get better at was unit testing. As a DBA with development experience I had never come into contact with this concept. But I got excited about it thanks to Rob Sewell (b | t), who showed me some interesting uses of PowerShell unit testing with Pester.
The unit testing concept was a real eye-opener and a big learning curve. At the end of 2017 I was able to create sophisticated unit tests for SQL Server (tSQLt) and PowerShell (Pester), which helped me a lot with my development.

Goals for next year

So the T-SQL Tuesday post had the following questions:

  • What do you want to learn?
  • How and when do you want to learn?
  • How do you plan to improve on what you learned?

What do you want to learn?

I always want to learn a lot of things, but if I had to make a shortlist it would contain the following:

  • Query Store
  • SQL Server 2017
  • Continuous Integration
  • More consistent blogging

I’m a database administrator with knowledge of database development and other processes. I’ve done small projects with business intelligence but that subject does not really suit me. I understand the technology and the thought process, and that’s good, but I could not switch to it full time.
I get excited when I see an inefficient process or code and when I’m able to improve that process by automation or by redeveloping the procedures.
That’s where my strength lies and that’s where I want to continue.

Query Store

I went to PASS Summit 2017 and there were a lot of talks about Query Store. I have not yet had the chance to take a deep dive into the material but I will very soon. I think this is one of the things that will impact my work the most.

SQL Server 2017

That’s a big topic and something I’ve been looking forward to. I did some tests with SQL Server 2017 when it was first released, but this year we’re going to put it into production.

Continuous Integration

You already read about me being active with unit testing with tSQLt and Pester. Now I want to take the next step and implement CI for the database development.

It costs me a lot of time to create new release scripts and to make sure everything works as expected. I just want to automate that and make my life and that of the other developers easier.

More consistent blogging

Last year was a really busy year for me, both in work and in my personal life. Because of all the pressure I was not able to blog as much as I wanted and the only thing I was able to do was create some quick drafts. I have about 50 drafts lying around that I could finish, and I probably should.

My goal for this year is to create a new blog post at least every two weeks.

How and when do you want to learn?

As with many other people in the tech industry, we have a full time job and finding time to study can be tough. We never have a 40 hour work week and sometimes we work 10 or more hours a day.

I have the privilege that my employer enables me to do self-study. That means that I can spend about 10% of my time learning new things or improve my current skills.

But that’s mostly not enough. When I want to take a deep dive I go and get into the details of a subject. I’m going to ask questions that in most cases would not make sense or I’ll try to break things to see what would happen in a production environment.

That answers the question of when I learn things, but not yet how. The “how” largely depends on the topic at hand. In most cases I search for already published articles and google for any information that I can find.

In most cases that will give me enough information to get started. Another place that has a lot of content is the Microsoft Academy. If all of that still does not satisfy me, which does not happen very often, I will turn to the SQL Server community and ask questions. One place that has lots of IT professionals present is the SQL Server Community Slack channel.

How do you plan to improve on what you learned?

The subjects I explained earlier are important to me because they can make my life easier. They will improve my skills as a DBA and will make sure fewer mistakes are made.

I already mentioned that I want to spend more time on my blog. I think blogging is an important part of my career and it has enabled me to connect to the rest of the world.
If you blog about something you need to know all the ins and outs of the subject. You cannot just write about something and not be confident about the information you’re presenting. Making sure I have all the information by diving into the subject will make me better at that.

Someone once told me that if you can explain a particular subject in children’s language, you know enough about it to present it to others. That’s true for both blogging and presenting a session.

Talking about presenting: I’m also going to spend more time developing presentations and submitting them to events. I love to present about the stuff I do, and if I can help just one person, that’s already good enough for me.
All the new stuff I’ll be learning could also end up in a session, so expect me to submit lots of sessions this year.

This is going to be a busy year but I like that. Lots of things to get better at.