Working hard pays off


Reading Time: 2 minutes

On Monday, October 1st, 2018 I was awarded the Cloud and Datacenter Management MVP award. I did not know about it until a few days later because I was busy renovating my office. During the renovation I did not regularly check my e-mail and the notification slipped through the cracks.

Two days later, at 6 AM, I checked my e-mail and jumped up. I saw an email from the MVP Global Administrator that started with "Congratulations…". I got the MVP award.

How did I get here

I remember when I first started contributing to the community. It was after SQL Cruise, now Tech Outbound, that I got introduced to a couple of well-known people like Grant Fritchey (t), Argenis Fernandez (t) and Tim Ford (t), the organizer of the event.

During the cruise I met up with Cathrine Wilhelmsen (t), who asked me why I was not presenting about all the things I knew. I did not think I was that knowledgeable compared to the people on the cruise, but she thought otherwise. She, and the others on the cruise, suggested that I start presenting, and I got into contact with Aaron Nelson, who gave me an opportunity to present for a virtual chapter of PASS.

I got into contact with Chrissy LeMaire (t) who is the godmother of dbatools. My first task was a new command and that got me started developing for dbatools and contributing more to the community.

I started doing more presentations and spent countless hours developing PowerShell commands to help me and others simplify DBA work. It was a lot of work, but the reward I got from the community was worth it.

Over the years I’ve met some wonderful people and that certainly was a reason to continue.

I want to thank my boss, Joe Sheffield (t) from Waypoint Analytical LLC, for supporting my contributions to the community. This would not have been possible without him.

I'm not going to stop, and this only lets me be more active in the community.

Thank you to everyone who supported me, and keep working hard, because it pays off in the end.

Troubleshooting Dynamic SSRS Queries


Reading Time: 3 minutes

The Problem

In my daily work I deal with a lot of SSRS (SQL Server Reporting Services) reports that use either stored procedures, plain queries or dynamic queries to get their results.
Troubleshooting dynamic SSRS queries can sometimes be difficult. That's especially the case when you're dealing with hundreds of lines of code, all generated dynamically.

An example of such an error is:

Error Dynamic Query

Because the query is generated during execution using all kinds of parameters, you don't know exactly what the query looks like. Debugging this query would be pretty difficult and tedious, but there is an easy way to get the compiled query.

What does a dynamic query look like?

If you have never used dynamic queries in SSRS, you might not know what they look like or when they are used.

A dynamic query in SSRS is actually an expression in a data set. The expression is built at execution time and can include T-SQL code, SSRS fields, etc.
The reason I use this in my reports is that I sometimes need to implement complicated filtering using the report parameters.

You can filter the data using the "Filters" tab in the data set, but that would cause me to retrieve an excessive amount of data during execution, which I'd like to avoid.

When you open a data set with a dynamic query it looks like this:

Dataset Window Dynamic Query

You don't see much of the query because you need to click the "fx" button on the right to get the following screen:

Expression Window Dynamic Query

The "=" sign indicates the start of the expression. For readability I join every new line with the "&" sign. I could have used one big line of code, but this makes it easier to read.

During execution I use several parameters to filter out values. That can be seen in the last two lines of the expression in the image above. Using this method of filtering cuts down the amount of data I retrieve during execution.
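
To give you an idea, a dynamic query expression might look something like the sketch below. The table, columns and parameters (StartDate, City) are made up for illustration; the real report uses its own fields and parameters.

    ="SELECT AddressLine1, City, PostalCode " &
    "FROM Person.Address " &
    "WHERE ModifiedDate >= '" & Parameters!StartDate.Value & "' " &
    IIF(Parameters!City.Value = "All", "", "AND City = '" & Parameters!City.Value & "'")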

The solution

How can I look at the query after it’s executed? The solution is to create a text box and enter the following expression:

Solution Dynamic Query

This expression will display the query of the data set in the text box during execution.
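
A sketch of such an expression, assuming the data set is called "DataSet1": the built-in DataSets collection exposes the query text of a data set through its CommandText property.

    =DataSets!DataSet1.CommandText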

This doesn't solve the problem yet, because the query still fails during execution.

How do we display the query without executing it? The solution is really simple, but it requires some understanding of how SSRS compiles the expression during execution.
The fact is that the query expression is compiled as a single line of code.

To display the code we can use "--" to create a comment.
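
Continuing the illustrative expression from earlier, the only change needed is the comment prefix at the very start of the query string:

    ="-- SELECT AddressLine1, City, PostalCode " &
    "FROM Person.Address " &
    "WHERE ModifiedDate >= '" & Parameters!StartDate.Value & "' " &
    IIF(Parameters!City.Value = "All", "", "AND City = '" & Parameters!City.Value & "'")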

Commented Expression Dynamic Query

This will render the query, but it will not execute our query:

Compiled Query Dynamic Query

There you have it, your compiled query.

The next step would be to copy this code and paste it into SSMS or SQL Operations Studio to further debug the query.


TSQL-Tuesday #104: Code You Would Hate To Live Without


Reading Time: 3 minutes

This month's T-SQL Tuesday is hosted by Bert Wagner (w | t) and he asks us to write about code we've written that we would hate to live without.

Over the years I've used other people's code regularly. It could have come from GitHub, Stack Overflow, blogs, etc.

I have always appreciated the way the SQL Server community (and other communities) share their code to help others.

I have worked alongside lots of people who were not very keen on sharing. People have actually told me that their code was theirs and only theirs, and that nobody was to see it.

Although I don't agree, I do understand people not wanting to share their code. They may have worked hard on it, and it could give them an advantage over other people who don't have that "special" piece of code.

There are a couple of caveats to that:

  1. Your code could have bugs that you never noticed
  2. Your code could be improved by other people with different views

I am completely the opposite. If my code can help one person make their life easier, I have already won. The last few years have proven that, because I have been contributing to dbatools. By contributing I have learned a lot about open source projects and about SQL Server, and I have met new people from all over the world who in turn shared their code to make my life easier.

Code that I couldn’t live without

Back to the task at hand: telling you about code I've written that I cannot live without.

Anything I create that is usable for other DBAs tends to end up in the dbatools module. The reason I do that is that it's an amazing platform and it makes it easy for me to reach a large audience.

Testing Backups

One command that I cannot live without is the dbatools command called Test-DbaLastBackup. Testing your backups is one of the most important things you can do as a DBA. I did not write that command, but I did use it in one of my own.
It's basically a wrapper around that command that sends me an email with all the tested backups and the results in a CSV file. The code is freely available on my blog.
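
A minimal sketch of what such a wrapper could look like; the instance names, addresses and paths are placeholders, and my actual version does a bit more:

    # Test the last backups and mail the results as a CSV attachment
    $csvPath = "C:\Temp\BackupTestResults.csv"

    $results = Test-DbaLastBackup -SqlInstance "SQL01", "SQL02" -Destination "SQLTEST01"
    $results | Export-Csv -Path $csvPath -NoTypeInformation

    Send-MailMessage -From "dba@domain.local" -To "dba-team@domain.local" `
        -Subject "Backup test results" -Body "The tested backups are in the attached CSV file." `
        -Attachments $csvPath -SmtpServer "smtp.domain.local"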

Log Shipping

Something else I've written is the set of log shipping commands in dbatools. At my current workplace we have log shipping enabled. Everybody who has set up log shipping knows it's a PITA if you have lots of databases.

I recently presented a session about the log shipping commands on the PASS DBA Virtual Chapter. The session was recorded and if you’re interested you can watch it through this link.

One log shipping command in particular that I use regularly is Test-DbaLogShippingStatus. This command makes it easy for me to check up on my log shipping servers and see the status of each database.
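
A quick sketch of how that looks (the instance name is a placeholder):

    # Check the log shipping status of all databases on the secondary instance
    Test-DbaLogShippingStatus -SqlInstance "SQLSECONDARY01"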

SQL Server Agent

I don’t like to click around and if I can do something in a console I choose to do that. In most cases it is faster than the alternative.

The same goes for the SQL Server Agent. You have your jobs, schedules, job steps etc. For all these objects I have created commands in dbatools. The commands enable you to create, edit or remove any of those objects with ease.
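
A sketch of what that looks like; the instance, job, schedule and command text below are placeholders:

    # Create a job, add a T-SQL job step and attach a daily schedule
    New-DbaAgentJob -SqlInstance "SQL01" -Job "NightlyMaintenance" -Description "Nightly maintenance job"

    New-DbaAgentJobStep -SqlInstance "SQL01" -Job "NightlyMaintenance" -StepName "Run maintenance" `
        -Subsystem TransactSql -Database "master" -Command "EXEC dbo.RunMaintenance;"

    New-DbaAgentSchedule -SqlInstance "SQL01" -Job "NightlyMaintenance" -Schedule "Daily at 1 AM" `
        -FrequencyType Daily -StartTime "010000"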

So these are the scripts I cannot live without.

Maybe these scripts can help you in your journeys through SQL Server.


Decrypting SQL Server Objects with dbatools


Reading Time: 4 minutes

There are lots of great articles out there that describe how to decrypt SQL Server objects when they're saved with encryption, like:

Encrypting and Decrypting SQL Server Stored Procedures, Views and User-Defined Functions

Simple Way to Decrypt SQL Server Stored Procedure

Most of them rely on T-SQL and SSMS, but I wanted to do the same thing with PowerShell. This way I can execute it for several objects at a time, and maybe even in multiple databases and instances.

Let me first say that I’m not an encryption guru, but I do understand the technology and how I could possibly develop some code to decrypt an object.

This article describes the inner workings of the command “Invoke-DbaDbDecryptObject”.

Why would you encrypt an object?

Encrypting objects was first introduced in SQL Server 2000 to protect objects. I'd rather not use this method because it has a couple of disadvantages.

The first disadvantage is that the security is not very good. It's very simple to get the original code when you have the right privileges. The code is also not protected when, for instance, SQL Server Profiler is run, or when you catch executed procedures using Extended Events.

The second disadvantage, though you should probably have this covered anyway, is that you need to make sure you have the original code in source control. Once the object is created with encryption there is no normal way to retrieve the code.

The third disadvantage, and this is becoming more and more relevant, is that there is no easy way to check differences between objects, which makes encrypted objects harder to use with CI/CD (Continuous Integration / Continuous Delivery).

I would only use this method if the "VIEW DEFINITION" privilege were not sufficient and there were no other DBAs who could google the solution to retrieve the original code.

How is an object encrypted?

Objects are encrypted using “WITH ENCRYPTION” when creating or altering an object. This encrypts the object and it’s no longer available using the general methods.
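
As an illustration, this is how a procedure would be created with encryption (the name and body are made up):

    CREATE PROCEDURE dbo.MySecretProcedure
    WITH ENCRYPTION
    AS
    BEGIN
        SELECT 'You should not be able to read this definition' AS [Message];
    END;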

If you want to view the code you’ll get an error message like this:

error encrypted object

During the encryption process, SQL Server stores the encrypted definition in the "imageval" column of the table "sys.sysobjvalues". This table can only be queried when you're connected using the DAC (Dedicated Administrator Connection). The imageval value is a VARBINARY(MAX) value.

The encryption uses an XOR cipher. The XOR cipher works by applying an XOR with a key (B) to a value (A), generating a result (C). This gives the following formula: A ^ B = C.
This operation is also called modulo 2 addition. If we know the key and the encrypted value, we can decrypt the value again: C ^ B = A.

Decrypting an object

To decrypt the database object we need to get the secret (A), which is the encrypted value of the original object, and apply the XOR cipher to it using a known plain text (B) and the matching known encrypted text (C).

Getting the needed values

(A) Get the secret

The secret is the imageval value in the sys.sysobjvalues table for the object. This can be retrieved like this:
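
A sketch of that query, using the illustrative procedure from earlier; remember it only works over the DAC:

    -- Retrieve the encrypted value (the secret) of the object
    SELECT imageval
    FROM sys.sysobjvalues
    WHERE objid = OBJECT_ID('dbo.MySecretProcedure')
        AND valclass = 1; -- valclass 1 holds the object definition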


The known object is an ALTER statement for that particular type of object, which we can use to calculate the key. This known object needs to be a valid statement for the object type, like:
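
For a stored procedure, an illustrative known object could look like the statement below. The real command pads the statement (for example with a comment block) so that it is at least as long as the secret.

    ALTER PROCEDURE dbo.MySecretProcedure
    WITH ENCRYPTION
    AS
    BEGIN
        SELECT 'known' AS [Message];
    END;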

(B) Get the binary known object

Because we're dealing with binary data we need to convert the known object to binary (the known plain). This can be done by using the System.Text.Encoding class with the "GetBytes" method.
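
A sketch of that conversion in PowerShell; object definitions are stored as Unicode (UTF-16), so that encoding is used here, and $knownObject is assumed to hold the ALTER statement from the previous step:

    # Convert the known object statement to its binary representation (known plain)
    $knownPlain = [System.Text.Encoding]::Unicode.GetBytes($knownObject)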

(C) Get the encrypted known object

To get the encrypted known object (known secret) we need to alter our object in the database.

We don't want the original object to be replaced with our known object. To achieve that, a transaction is used that is rolled back right after the imageval value has been retrieved.
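
In T-SQL, that step would look something like this sketch (again over the DAC, with the illustrative object from earlier):

    BEGIN TRANSACTION;

    -- Temporarily replace the object with the known object
    EXEC ('ALTER PROCEDURE dbo.MySecretProcedure WITH ENCRYPTION AS BEGIN SELECT ''known'' AS [Message]; END;');

    -- Retrieve the encrypted version of the known object (the known secret)
    SELECT imageval
    FROM sys.sysobjvalues
    WHERE objid = OBJECT_ID('dbo.MySecretProcedure')
        AND valclass = 1;

    -- Undo the change so the original object stays in place
    ROLLBACK TRANSACTION;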

Start decrypting

To get our decrypted text we have to loop through the secret and apply the known plain and the known secret to it.

The function below demonstrates how the decryption is put in place:
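
The snippet below is a simplified sketch of that loop, not the exact dbatools implementation. $secret, $knownPlain and $knownSecret are the byte arrays from the previous steps, and the known arrays are assumed to be at least as long as the secret.

    $result = New-Object byte[] ($secret.Length)

    # Step through the bytes two at a time; every character is stored in 2 bytes (UTF-16)
    for ($i = 0; $i -lt $secret.Length; $i += 2) {
        # secret XOR known plain XOR known secret = original plain text
        $result[$i] = $secret[$i] -bxor $knownPlain[$i] -bxor $knownSecret[$i]
        $result[$i + 1] = $secret[$i + 1] -bxor $knownPlain[$i + 1] -bxor $knownSecret[$i + 1]
    }

    # The result is still binary; convert it back to text
    $decryptedText = [System.Text.Encoding]::Unicode.GetString($result)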

The loop increases the integer by two because each character in the secret has a size of 2 bytes.

In the end the decrypted data is still in binary form. To get the text we need to use the "GetString" method of the Encoding object.

Output Invoke-DbaDbDecryptObject

The object is decrypted. Now what?

First of all, you should save the code somewhere, preferably in a source control system.

To make things easier I implemented a feature in the command to export all the results to a folder. It will separate objects based on the instance and the object type and create a separate file for each object.

To execute the script with an export directory you have to use the -ExportDestination parameter.
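
A sketch of what that call could look like (instance, database and path are placeholders):

    Invoke-DbaDbDecryptObject -SqlInstance "SQL01" -Database "DB1" -ExportDestination "C:\Temp\DecryptedObjects"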


That’s the command to decrypt your encrypted objects.

As I said, there are several solutions for this problem using T-SQL, and they are very good.
I always like to use PowerShell for these kinds of problems because it makes it easier to go through multiple servers, databases and objects in one script.

If you want to try it out, make sure you have the latest version of dbatools.

Start to see with tSQLt


Reading Time: 6 minutes

What is tSQLt, why use it, and how can I make it a little easier by automating lots of tests with PowerShell?

This is going to be my first post on tSQLt. I will tell you why I love this framework and how using it changed the way I develop databases.

tSQLt is a unit testing framework for SQL Server databases. There are lots of articles out there that did a very good job describing what the framework is and what it does. The links below helped me out quite a bit:

How did I get started with tSQLt

A couple of months ago I got introduced to PowerShell unit testing with Pester. That has nothing to do with SQL Server databases, but I learned why it's important to make these kinds of tests.

The tSQLt framework has existed for a while, but I never spent much time on it because it seemed like just another framework to learn and I did not have the time to dive into it. That changed when I got introduced to Pester and saw how well it worked for my projects.

tSQLt is completely free and can be downloaded from here. You only need to change a couple of things on your development instance, install the framework into your development database, and you're done.

As with many new things there is a big learning curve. You spend hours reading documentation and you try out several ways you could use the framework. It took me a while to understand the framework but once I got used to it I was able to create lots of different kinds of tests.

I work in a smaller environment where we do lots of development. We have not yet implemented continuous integration or continuous delivery. As we grow in the number of projects and their complexity, we need to implement these practices to improve the quality of our releases.

Why I love tSQLt

The thing with unit testing is, you don't see the benefit of it until things go wrong. I remember creating hundreds of tests for the stored procedures in our database and wondering why I was actually doing it.
Developing the extensive tests, with mocked tables and integration with other procedures, was a tedious task.

But I persevered and finished my tests, and it was not long after that that a change had to be made to that specific database: a certain column in a table had to be renamed. I made the change, changed the procedures I knew relied on that column and ran my unit tests.
The result was that over 10 stored procedures did not return the expected results. Apparently they relied on that column, and I had totally overlooked those procedures.

If you have over 400 stored procedures in your database, you don't remember all the code you've written and at some point you just forget. That is exactly why I love tSQLt.
If I had made the change and applied it to our testing environment, I would have broken a lot of code, and our testers would have come back to me saying things didn't work.

With the test results returning the failed procedures I was able to create a quality release.

Creating the tests

The integration tests were the first unit tests I created and I was really satisfied with the result. The integration tests would get results from a stored procedure to test the workings. These kinds of tests can be quite complicated but give you a very good view if your code does what it’s supposed to do.

But I wanted to take it further than that. With CI/CD (Continuous Integration/Continuous Delivery) in mind I wanted to create extra tests to assure a proper release.

I wanted to make the following tests:

  • Are all the tables present
  • Are all the stored procedures present
  • Check object permissions

This database does not use any views or user-defined functions, and I did not want to go overboard testing for the existence of objects. You can, however, create a test for any type of object.

Are all the tables present

Imagine a situation where a developer deletes a table by accident and doesn't notice the change. He or she commits it to your source control (you have your database in source control, right?) and it goes to your build server.

I want to be sure all my tables are present before any other test gets executed. It may seem redundant, because the integration tests would fail too, but it gives me a clear error that an object is missing.

Example of the code:
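
A sketch with placeholder names; the test class is created once with tSQLt.NewTestClass:

    -- Create the test class once
    EXEC tSQLt.NewTestClass 'TestBasic';
    GO

    -- A test that fails when the table does not exist
    CREATE PROCEDURE [TestBasic].[test table dbo.Customer exists]
    AS
    BEGIN
        EXEC tSQLt.AssertObjectExists @ObjectName = N'dbo.Customer';
    END;
    GO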

Are all the stored procedures present

The same situation as with the tables applies to my stored procedures. If a stored procedure were deleted, I want to know right away.

The difference with the tables is that deleting a stored procedure does not have to break any other part of the database. It would only be noticed when an application relies on it.

Example of the code:
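
A sketch along the same lines as the table test, again with placeholder names:

    -- A test that fails when the stored procedure does not exist
    CREATE PROCEDURE [TestBasic].[test procedure dbo.GetCustomer exists]
    AS
    BEGIN
        EXEC tSQLt.AssertObjectExists @ObjectName = N'dbo.GetCustomer';
    END;
    GO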

Check object permissions

I cannot count the times I created a new stored procedure and forgot to grant permissions on it. I would implement the change and it would be unusable because the application had no privileges on it.

If you know the permission needed for an object, it’s easy to get those permissions and test if they are what you expect them to be.

This one is a little different from the object existence tests. I first create the tables for the expected and actual values. I then insert the values I expect to be in the result.

In the act-part I get the permissions from the object and insert them into the table for the actual values.
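
A simplified sketch of such a test; the test class, object, principal and permission names are placeholders, and my real tests contain a bit more detail:

    CREATE PROCEDURE [TestPermissions].[test permissions on dbo.GetCustomer]
    AS
    BEGIN
        -- Assemble: the permissions we expect on the object
        CREATE TABLE #Expected (PermissionName sysname, PrincipalName sysname);
        CREATE TABLE #Actual (PermissionName sysname, PrincipalName sysname);

        INSERT INTO #Expected (PermissionName, PrincipalName)
        VALUES ('EXECUTE', 'AppUser');

        -- Act: retrieve the actual permissions on the object
        INSERT INTO #Actual (PermissionName, PrincipalName)
        SELECT dp.permission_name, pr.name
        FROM sys.database_permissions AS dp
        INNER JOIN sys.database_principals AS pr
            ON pr.principal_id = dp.grantee_principal_id
        WHERE dp.major_id = OBJECT_ID('dbo.GetCustomer');

        -- Assert: both tables should contain the same rows
        EXEC tSQLt.AssertEqualsTable @Expected = '#Expected', @Actual = '#Actual';
    END;
    GO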

Did you create all those tests manually?

OH HELL NO!

If you’ve read any automation posts from my blog before, you know that when it comes to repetitive tasks, I am as lazy as it comes. I do not do two things manually if I can automate it once.

The only tests that could not be generated were the integration tests. Besides those, I generated all of them.

The code to assert the existence of the objects is basically the same. The only thing that changes is the object name and maybe the schema.
The same goes for the tests that check the permissions. In most cases the permissions are the same (at least within the same schema). With some tinkering I was able to generate all the tests for that purpose too.

How did I generate all the tests?

Simple! I use dbatools and a loop to go through the objects.

  1. Generate the test you need but just for one object
  2. Go to PowerShell and let it retrieve all the objects needed
  3. Export the test you created in step one and copy it to a query variable in your PowerShell script
  4. Make a loop around the pasted script to replace the object name
  5. Run the PowerShell script

The result is the script below. The script tests for the existence of the stored procedures. You can of course change the types of objects.
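
A sketch of such a script; the instance, database, test class and output path are placeholders, and my actual script is a bit more elaborate:

    $instance = "SQL01"
    $database = "DB1"
    $outputFile = "C:\Temp\ProcedureExistenceTests.sql"

    # Retrieve the database with dbatools and loop through the user stored procedures
    $db = Get-DbaDatabase -SqlInstance $instance -Database $database

    $tests = foreach ($proc in ($db.StoredProcedures | Where-Object { -not $_.IsSystemObject })) {
        $name = "$($proc.Schema).$($proc.Name)"

        # Emit one existence test per stored procedure
        "CREATE PROCEDURE [TestBasic].[test procedure $name exists]"
        "AS"
        "BEGIN"
        "    EXEC tSQLt.AssertObjectExists @ObjectName = N'$name';"
        "END;"
        "GO"
        ""
    }

    $tests | Set-Content -Path $outputFile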


I hope you enjoyed this article on creating some handy tests using tSQLt. This framework has really helped me with my day-to-day development and has saved me numerous times.

Comments are always appreciated. If you want to stay informed about new blog posts, please subscribe.

Hide or Show Columns in SSRS Reports Using Parameters


Reading Time: 4 minutes

I regularly have reports with an extensive number of columns.
Because of the number of columns, reports tend to become inefficient and contain more information than we always need. The users may want to select certain columns themselves to make the report easier to read.

Hiding and showing columns in SSRS reports using parameters is a solution to make reports more dynamic and easier to use.

At the time of writing this article, SQL Server Reporting Services did not yet have the possibility to use checkbox parameters. Instead we will be using a multi-value text parameter to show or hide our columns.

The project file

For the demo I created a dummy project that points to the AdventureWorks2014 database. I created a data set that returns the addresses from the Person.Address table.

The report contains a table that shows the data from the address data set.

If you want to play around with the demo, the project file can be downloaded from here. Make sure you use Visual Studio 2017 or higher.

Create the parameter

For us to have the ability to hide or show columns we have to create a parameter. We want to be able to filter on multiple columns so we want to create a multi-value parameter.

To do that, either right-click on “Parameters” in your “Report Data” pane, or right-click in the top pane for the parameter and click on “Add Parameter”.

Add Parameter

When you click “Add Parameter” you get a window that looks like this:

Add Parameter Window

Give the parameter a name. In this case I named it "ColumnFilter", but you can name it whatever you want.
Select "Text" as the data type (you can choose any other data type, but I like text) and make sure you select "Allow multiple values".

For this example we're going to hide or show the address fields. The data set has two address columns: AddressLine1 and AddressLine2.

Go to the tab “Available Values” and add the following fields:

Add Parameter Available Values

In this example we have two values in our parameter. The label and the value are pretty self-explanatory.

We’re going to add the values to the “Default Values” tab to make sure our report shows the columns by default.

Add Parameter Default Values

Click “OK” and we have our parameter.

Show and hide the columns

Now that we have our parameter we can change the table to hide or show the address columns.

Right click on the column “Address Line 1” and click on “Column Visibility”.

Column Visibility

The following window will appear:

Column Visibility Show Or Hide

Click on the “Show or hide based …..”. Click on the “fx” button and the following window appears:

Show or Hide Expression Window

This window is really powerful. We can do a lot with expressions throughout our SSRS reports.

In this case we’re only going to enter a small line of code:
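
Based on the parameter values we set up earlier (address1 and address2), the expression for the "Address Line 1" column looks something like this sketch:

    =IIF(Join(Parameters!ColumnFilter.Value, ",").Contains("address1"), False, True)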

Expressions are a little different from what you might be used to in normal programming. We are not able to search through our multi-valued parameters like we do with arrays in other programming languages (at least I have not found a way yet).
Instead, we join all the values from the ColumnFilter parameter and check if the result contains one of the values we are looking for.

What the expression does is combine all the values, separated with a comma (","). If all the values are selected we'll have a string like this: "address1,address2".
The next part looks at whether the string contains one of the values. The ".Contains" method returns true if a value is within that string and false if it's not.
That entire expression is wrapped in an IIF that assesses whether the statement is true or not. In this case, when the string contains the value we look for, we want to return false so the column is not hidden.

Add the code to the Expression window and click on “OK”. Click “OK” again to close the column visibility window.

Do the same thing with the column “Address Line 2” but change the value in the expression to “address2”.

Executing the report

Now that we've done all the work, open the report preview.

In this example I deselect the "Address Line 2" value and click on "View Report". The result is the same table, but with the column "Address Line 2" hidden.

The result can be seen below:

Hidden Column Address Line 1

Conclusion

Showing or hiding columns with parameters is a very useful way to make your reports more dynamic.

This is one way of doing it; I bet there are other ways of showing or hiding columns, but this approach tends to work really well in my environment. The users are able to adjust their reports and I don't have to create multiple reports to get to the same result.

I hope you enjoyed this post and comments are always appreciated.


Test Log Shipping With DbaChecks


Reading Time: 2 minutes

I've written quite a lot about log shipping over the last few months, and this is another one of those posts.
This time I'm talking about how to test log shipping with dbachecks.

DbaChecks?! What is that?

dbachecks is a brand new module developed by the community to test your environment for all kinds of best practices. There are even possibilities to export the data in such a way that you can create amazing dashboards in Power BI.

The module uses Pester to test a whole lot of best practices. Because of that, we can very quickly test what's wrong with our server(s).

These tests are all included in the module:

get-dbacheck
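
If you want to browse them yourself, this is roughly how you would get the list (assuming the module is installed from the PowerShell Gallery):

    # Install dbachecks and list all the available checks and their tags
    Install-Module -Name dbachecks -Scope CurrentUser
    Get-DbcCheck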

Why this blog post

We want everyone to know about this module. Chrissy LeMaire reached out to me and asked if I could write some tests for the log shipping part and I did.

Because I wrote the log shipping commands for dbatools I was excited about creating a test that could be implemented into this module for everyone to use.

Test Log Shipping With DbaChecks

If you want to go all nuts, you can use the following command and see all the tests fly by:

invoke-dbccheck all checks
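
A sketch of that call; the instance names are placeholders:

    # Run every check against one or more instances
    Invoke-DbcCheck -SqlInstance "SQL01", "SQL02"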

To make it easier to read, I only want to know about the things that failed:
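
That can be done by limiting the output to the failures, something like this:

    # Only show the checks that failed, plus the summary
    Invoke-DbcCheck -SqlInstance "SQL01" -Show Fails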

To test for a certain group of tests you can use a tag or specify a certain test:

invoke-dbccheck logshipping
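
For the log shipping checks, that looks something like the sketch below; use Get-DbcCheck to confirm the exact tag names in your version of the module:

    # Run only the log shipping related checks
    Invoke-DbcCheck -SqlInstance "SQL01" -Check LogShippingPrimary, LogShippingSecondary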


Using this module will really help you check your entire SQL Server infrastructure with ease. So what are you waiting for? Download the module and go test your instances.


Log Shipping With dbatools – Part 4: Recover a Log Shipped Database


Reading Time: 3 minutes

In the third part of the series we discussed the way to get the log shipping errors using the command “Get-DbaLogShippingError”. This blog will be about how to recover a log shipped database.

Out of the box solutions

I can be very short in this paragraph: there is no out-of-the-box solution to recover a log shipped database.

Why this command

Initially log shipping was meant to be used as a warm standby. You have your data on another instance, but you still need some human intervention to get it all back up.

Imagine the following situation. You have setup log shipping using either the GUI or the commands in dbatools. You have about 15 databases and everything is working fine.

Until one day the primary instance goes down and is not recoverable. For production to continue, you have to bring the log shipped databases online fast.

You have to figure out what the last transaction log backup was. You have to check whether it was copied to the secondary instance and whether it has been restored.

Doing this by running a couple of queries, copying the files if needed and running the log shipping jobs takes time. I'd rather run a single command, recover one or more databases, and get back to the problem of the primary instance.

Invoke-DbaLogShippingRecovery

The Invoke-DbaLogShippingRecovery command will execute the following steps:

  1. Check the SQL Server Agent status and start the agent if it's not in a started state.
  2. Retrieve the latest transaction log backup and try to copy it to the secondary instance if it’s not present. It will wait and check the log shipping status to see if the backup file is copied.
  3. Retrieve the last restored transaction log backup. Execute the restore process to get the database up-to-date.
  4. Disable the jobs after the copy and restore process.
  5. After all the actions it restores the database to a normal state.

To execute the command:
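
A minimal sketch, assuming a secondary instance named SQLSECONDARY01 and a database named DB1:

    Invoke-DbaLogShippingRecovery -SqlInstance "SQLSECONDARY01" -Database "DB1"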

The result of the command

Recover Log Shipping Result Command

The image below shows the database in a normal state in SQL Server Management Studio after running the command.

Recover Log Shipping Result GUI

The result of the jobs being disabled

Recover Log Shipping Result Jobs

More options

In my example I showed how to recover a single database, but the -Database parameter does accept multiple databases.

Besides naming the individual databases, you can also let the command recover all the log shipped databases on the instance.

In some cases you want to recover the databases but not bring them back to a normal state right away.
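
Sketches of these variations, with placeholder names; check Get-Help Invoke-DbaLogShippingRecovery for the exact parameters in your version of dbatools:

    # Recover multiple databases in one go
    Invoke-DbaLogShippingRecovery -SqlInstance "SQLSECONDARY01" -Database "DB1", "DB2"

    # Recover all the log shipped databases on the instance
    Invoke-DbaLogShippingRecovery -SqlInstance "SQLSECONDARY01"

    # Recover the databases but don't bring them back to a normal state yet
    Invoke-DbaLogShippingRecovery -SqlInstance "SQLSECONDARY01" -Database "DB1" -NoRecovery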


This concludes the command "Invoke-DbaLogShippingRecovery". This was the final post in this series. If you want to look back at the other commands, follow the links below:

  1. Setup Log Shipping
  2. Test Log Shipping Status
  3. Get Log Shipping Errors