TSQL-Tuesday #108: A Journey Into CI/CD

This month's TSQL Tuesday is hosted by Malathi Mahadevan (w | t), and she asks us to write about one thing we want to learn that is not SQL Server.

As a SQL Server database administrator it's easy to think we only deal with that one technology. But the role of the database administrator has been changing for a while now, and we need to know about other technologies besides SQL Server.

In my daily job I'm not just the database administrator, I'm also a database developer. As a developer I had to make the shift from developing databases in SSMS to Visual Studio. I still have to get used to it, but I'm getting there.

The reason I had to make that shift was that it was time: our IT team was switching from SVN to Git, and we needed more collaboration, standards and processes to deploy new releases.

For me that was a BIG learning curve. As a DBA I never had to deal with source control for our databases. At previous jobs I never did a lot of development, mostly maintenance, and the responsibility for the databases lay mostly with third parties and development teams.

So now our projects are developed using the right tools and the source code is saved in a way that lets others collaborate, but the deployment process is still done manually.

Why use CI/CD?

CI/CD, or Continuous Integration/Continuous Delivery, is the cornerstone of DevOps. We've been hearing about DevOps for a while now, but it never stuck with me until now.

If you know me, I'm a real advocate of automation. I like to automate just about everything that's repetitive, to make sure things run as smoothly as they can without the obvious PEBCAK (Problem Exists Between Chair And Keyboard) situations.

Handling all the releases on my own makes the development process prone to mistakes and delays. I want a process where I can wear a single hat, either the developer's or the DBA's. I do not want to be responsible for both the development and pushing the releases through the pipeline.

CI/CD can help me with that, but just saying some fancy buzzword is not going to help me.

How to learn CI/CD?

First of all, this was another big learning curve for me. Going from database development in SSMS to a CI/CD pipeline felt like climbing Mount Everest. But as cliché as this might sound, just take one step, and then another, and in the end you'll get there.

The first thing I had to do was learn what CI/CD actually was. What does it entail? How does it integrate with my tools? What tools do I need to implement to get it to work?

I found that the best way to learn CI/CD is to just start and learn from my mistakes.

The following steps made me familiar with CI/CD:

  1. Read a lot
  2. Migrate database development to Visual Studio
  3. Build the architecture

Step 1: Read a lot

The first thing I did was spend a lot of time reading articles on the internet about what CI/CD actually is. If you google the term, you'll find more than enough content to learn what it is.

I always want to know the ins and outs of a technology, and I want to know enough to get me started. I find that diving into new technologies lets me make mistakes, which makes me learn at a faster pace and makes things easier in the end.

Step 2: Migrate database development to Visual Studio

I had already started developing tSQLt unit tests for all my objects, and I separated the data model and the data from the tests into separate projects within the solution.

This article helped me set up the solution to make sure everything was nicely separated and organized.

Step 3: Build the architecture

After investigating what I needed, I decided to install Jenkins as the build server. Jenkins is free and it's easy to install.

I set up a virtual machine so I could create snapshots at every step and revert if something went wrong. And I can assure you that I made a lot of mistakes.

This article helped me understand how to set up projects in Jenkins for SSDT.

I actually made some other changes to the project in Jenkins to be able to push the results to a server in conjunction with dbatools and a remote Git server. That's going to be another article that will be released soon.
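As a sketch of that dbatools step, deploying a build artifact could look something like this; the instance name and dacpac path are placeholders for whatever your Jenkins workspace produces:

    # Deploy the dacpac produced by the build to a test instance
    Import-Module dbatools

    $dacpac = 'C:\Jenkins\workspace\MyDatabase\bin\Release\MyDatabase.dacpac'
    Publish-DbaDacPackage -SqlInstance 'SQLTEST01' -Database 'MyDatabase' -Path $dacpac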

What’s the next step?

The next step is to implement a service that will be responsible for the delivery of the artifacts to different destinations like a test, acceptance and finally production server.

I’m still trying out a couple of solutions but I think I’ll probably go with Octopus Deploy.

After that it's time to put all the knowledge into production: getting the projects loaded and the process implemented within the company.
The latter is going to be the most difficult part because it's a huge change, but I'm confident that in the end it will make things go smoother.

DevOps is not going to go away and CI/CD will make your life easier in the long run.

To Fail Is To Succeed

There must be a hundred articles about this subject, but I always like to talk about things that happened to me or that I have an opinion about.

I was reading a blog post by Ben Kubicek about allowing failure, and I agree with a lot of it.

I'm a parent of two adorable kids. The oldest is 4 years old and the youngest is almost 2. The youngest one is walking around, but the oldest will not stop running, climbing on stuff and seeing how far she can go.

I'm not one of those parents that constantly warns their kid that something can happen. If she wants to climb to the highest point of the jungle gym, go ahead. I will of course make sure she doesn't take a fatal fall if she slips, but I do tend to let my kids fail.
If they don't fail, they will not learn. It's hard, though, because you want to protect them as much as you can, but protecting them all the time does not help them either.

If Thomas Edison had believed in failure… we would still be living in darkness. If Henry Ford had given up, we would still be riding on horseback…if Alexander Graham Bell had given in to the clutches of failure, we would be spending less time staring at those small plastic things we call phones that rule our lives (which might not be a bad thing!).

Did I ever fail?

Oh yes I did!

In my career I had my share of fails and from each and every one of them I learned something. Sometimes it was a hard lesson and sometimes it was something small, but every time I tried to learn something from it.

I did things like updating rows in a production database without a WHERE clause. I remember executing a change script in a hurry because the boss was watching over my shoulder the whole time, and I didn't pay attention to the code.
This was in my early years; nowadays I don't get flustered when there's a lot of pressure.

That's all well and good, but you just can't afford to fail when you're dealing with production issues in environments with high-volume transactions, or when lives are at stake.

If you're scared to fail in those environments, then there is another problem. You probably don't have a good disaster recovery plan (or haven't tested it), you're not confident about your backups, or you're just not confident enough in yourself.

There is going to be a time when something goes wrong and there is failure. Be prepared for it, so that when disaster strikes you know what to do.

There are lots of solutions that will test your backups, articles on how to set up a disaster recovery plan, and processes to prevent unwanted changes (like Continuous Integration/Continuous Delivery).

So don’t be scared to fail. Learn from your mistakes and prepare yourself so it doesn’t happen again.

Working hard pays off

On Monday, October 1st, 2018 I was awarded the Cloud and Datacenter Management MVP award. I did not know about it until a few days later because I was busy renovating my office. During the renovation I did not regularly check my e-mail and the notification slipped through the cracks.

Two days later, at 6 AM, I checked my e-mail and jumped up. I saw an email from the MVP Global Administrator that started with "Congratulations…". I got the MVP award.

How did I get here?

I remember when I first started contributing to the community. It was after SQL Cruise, now Tech Outbound, that I got introduced to a couple of famous people like Grant Fritchey (t), Argenis Fernandez (t) and Tim Ford (t) (organizer of the event).

During the cruise I met Cathrine Wilhelmsen (t), who asked me why I was not presenting about all the things I knew. I did not think I was that knowledgeable compared to the people on the cruise, but she thought otherwise. She, and the others on the cruise, suggested I start presenting, and I got into contact with Aaron Nelson, who gave me an opportunity to present at a PASS virtual chapter.

I got into contact with Chrissy LeMaire (t) who is the godmother of dbatools. My first task was a new command and that got me started developing for dbatools and contributing more to the community.

I started doing more presentations and spent countless hours developing PowerShell commands to help me and others simplify DBA work. It was a lot of work, but the reward I got from the community was worth it.

Over the years I’ve met some wonderful people and that certainly was a reason to continue.

I want to thank my boss, Joe Sheffield (t), from Waypoint Analytical LLC for supporting my contributions to the community. This would not have been possible without him.

I'm not going to stop, and this only lets me be more active in the community.

Thank you to everyone who supported me, and keep working hard, because it pays off in the end.

Troubleshooting Dynamic SSRS Queries

The Problem

In my daily work I deal with a lot of SSRS (SQL Server Reporting Services) reports that use stored procedures, queries or dynamic queries to get their results.
Troubleshooting dynamic SSRS queries can sometimes be difficult. That is especially the case when you're dealing with many hundreds of lines of code, all generated dynamically.

An example of such an error is:

[Image: error thrown by the dynamic query]

Because the query is generated during execution using all kinds of parameters, you don't know exactly what the query looks like. Debugging this query would be pretty difficult and tedious, but there is an easy way to get the compiled query.

What does a dynamic query look like?

If you've never used dynamic queries in SSRS, you might not know what one looks like or when it's used.

A dynamic query in SSRS is actually an expression in a data set. The expression is built at execution time and can include T-SQL code, SSRS fields etc.
The reason I use this in my reports is that I sometimes need to implement complicated filtering using the report parameters.

You can filter the data using the "Filters" tab in the data set, but that would cause me to generate an extensive amount of data during execution, which I'd like to avoid.

When you open a data set with a dynamic query, it looks like this:

[Image: the data set window with a dynamic query]

You don't see much of the query, because you need to click the "fx" button on the right to get the following screen:

[Image: the expression window with the dynamic query]

The "=" sign indicates the start of the expression. For readability I join every new line with the "&" sign. I could have used one big line of code, but this makes it easier to read.

During execution I use several parameters to filter out values. That can be seen in the last two lines of the expression in the image above. Using this method of filtering cuts down the amount of data I retrieve during execution.

The solution

How can I look at the query after it’s executed? The solution is to create a text box and enter the following expression:

[Image: the expression that shows the data set's query]
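The expression refers to the CommandText property of the data set, something like this (the data set name will be whatever yours is called):

    =DataSets!DataSet1.CommandText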

This expression will display the query of the data set in the text box during execution.

This doesn't solve the problem yet, because the query still fails during execution.

How do we display the query without executing it? The solution is really simple, but it requires some understanding of how SSRS compiles the expression during execution.
The key fact is that the query expression is compiled as a single line of code.

To display the code without executing it, we can use "--" to create a comment.

[Image: the expression with the comment marker added]
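The change is simply to prefix the expression with the comment marker; the query below is an arbitrary example:

    ="-- " & "SELECT a.AddressLine1, a.City " & "FROM Person.Address AS a " & "WHERE a.City = '" & Parameters!City.Value & "'"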

This will render the query, but it will not execute our query:

[Image: the compiled query rendered in the text box]

There you have it, your compiled query.

The next step would be to copy this code and paste it into SSMS or SQL Operations Studio to debug the query further.

 

TSQL-Tuesday #104: Code You Would Hate To Live Without

This month's TSQL Tuesday is hosted by Bert Wagner (w | t), and he asks us to write about code we've written that we would hate to live without.

Over the years I've used other people's code regularly. It could have come from GitHub, Stack Overflow, blogs etc.

I have always appreciated the way the SQL Server community (and other communities) share their code to help others.

I have worked alongside lots of people who were not very keen on sharing. People actually told me that their code was theirs and only theirs, and that nobody was to see it.

Although I don't agree, I do understand people not wanting to share their code. They may have worked hard on it, and it could give them an advantage over people who don't have that "special" piece of code.

There are a couple of caveats to that:

  1. Your code could have bugs that you never noticed
  2. Your code could be improved by other people with different views

I am completely the opposite. If my code can help one person make their life easier, I have already won. The last few years have proven that, because I have been contributing to dbatools. Through contributing I have learned a lot about open source projects and about SQL Server, and I met new people from all over the world who in turn shared their code to make my life easier.

Code that I couldn’t live without

Back to the task at hand: telling you about code I've written that I cannot live without.

Anything I create that is usable for other DBAs tends to end up in the dbatools module. The reason I do that is that it's an amazing platform and it makes it easy for me to reach a large audience.

Testing Backups

One command that I cannot live without is the dbatools command Test-DbaLastBackup. Testing your backups is one of the most important things you can do as a DBA. I did not write that command, but I did use it in one of mine:
basically a wrapper that sends me an email with all the tested backups and the results in a CSV file. The code is freely available on my blog.
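The full version is on my blog, but the core of the wrapper looks roughly like this; the instance name, path and addresses are placeholders:

    # Test the last backups, save the results to CSV and mail the file
    Import-Module dbatools

    $csv = 'C:\Temp\backuptest.csv'
    Test-DbaLastBackup -SqlInstance 'SQL01' | Export-Csv -Path $csv -NoTypeInformation

    Send-MailMessage -From 'dba@example.com' -To 'dba@example.com' `
        -Subject 'Backup test results' -Body 'The results are attached.' `
        -SmtpServer 'smtp.example.com' -Attachments $csv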

Log Shipping

Some other things I've written are the log shipping commands in dbatools. At my current workplace we have log shipping enabled, and everybody who has set up log shipping knows it's a PITA if you have lots of databases.

I recently presented a session about the log shipping commands on the PASS DBA Virtual Chapter. The session was recorded and if you’re interested you can watch it through this link.

One log shipping command in particular that I use regularly is the Test-DbaLogShippingStatus. This command makes it easy for me to check up on my log shipping servers and see the status of each database.
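Checking up on the servers is a one-liner; the instance names are examples:

    # Get the log shipping status for the primary and secondary instances
    Test-DbaLogShippingStatus -SqlInstance 'SQL01', 'SQL02'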

SQL Server Agent

I don’t like to click around and if I can do something in a console I choose to do that. In most cases it is faster than the alternative.

The same goes for the SQL Server Agent. You have your jobs, schedules, job steps etc. For all these objects I have created commands in dbatools. The commands enable you to create, edit or remove any of those objects with ease.
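For example, creating a job with a step and a daily schedule from the console could look like this; all the names and the schedule are made up:

    # Create a job, add a T-SQL step and attach a daily schedule
    New-DbaAgentJob -SqlInstance 'SQL01' -Job 'NightlyMaintenance'
    New-DbaAgentJobStep -SqlInstance 'SQL01' -Job 'NightlyMaintenance' -StepName 'CheckDb' -Command 'DBCC CHECKDB;'
    New-DbaAgentSchedule -SqlInstance 'SQL01' -Job 'NightlyMaintenance' -Schedule 'Daily3AM' -FrequencyType Daily -StartTime '030000' -Force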

So these are the scripts I cannot live without.

Maybe these scripts can help you in your journeys through SQL Server.

Decrypting SQL Server Objects with dbatools

There are lots of great articles that describe how to decrypt SQL Server objects when they're saved with encryption, like:

Encrypting and Decrypting SQL Server Stored Procedures, Views and User-Defined Functions

Simple Way to Decrypt SQL Server Stored Procedure

Most of them rely on T-SQL and SSMS, but I wanted to do the same thing with PowerShell. This way I can execute it for several objects at a time, and maybe even across multiple databases and instances.

Let me first say that I’m not an encryption guru, but I do understand the technology and how I could possibly develop some code to decrypt an object.

This article describes the inner workings of the command “Invoke-DbaDbDecryptObject”.

Why would you encrypt an object?

Encrypting objects was first introduced in SQL Server 2000 to protect objects. I'm not a fan of this method, because there are a couple of disadvantages to it.

The first disadvantage is that the security is not very good. It's very simple to get the original code when you have the right privileges. The code is not protected when, for instance, SQL Server Profiler is run, or when you capture executed procedures using extended events.

The second disadvantage, although you should have this anyway, is that you need to make sure you have the original code in source control. Once the object is created with encryption there is no normal way to retrieve it.

The third disadvantage is, and this is becoming more and more relevant, that there is no easy way to check differences between objects, which makes encrypted objects harder to use with CI/CD (Continuous Integration/Continuous Delivery).

I would only use this method if the "VIEW DEFINITION" privilege were not sufficient and there were no other DBAs who could google the solution to retrieve the original code.

How is an object encrypted?

Objects are encrypted by specifying "WITH ENCRYPTION" when creating or altering an object. This encrypts the object so it's no longer readable using the general methods.

If you want to view the code you’ll get an error message like this:

[Image: the error shown when viewing an encrypted object]

During the encryption process, SQL Server stores the encrypted object in the column "imageval" in the table "sys.sysobjvalues". This table can only be queried when you're connected using the DAC (Dedicated Administrator Connection). The value is a VARBINARY(MAX).

The encryption uses an XOR cipher. The XOR cipher works by applying an XOR with a key (B) to a value (A), generating a result (C). This gives the formula A ^ B = C.
XOR is also known as modulo-2 addition. If we know the key and the encrypted value, we can decrypt it: C ^ B = A.
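A quick PowerShell illustration of that round trip:

    # XOR a value with a key, then apply the key again to get the value back
    $A = 0x53   # original value
    $B = 0x2F   # key
    $C = $A -bxor $B
    ($C -bxor $B) -eq $A   # True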

Decrypting an object

To decrypt the database object we need to calculate the secret (A), apply the XOR cipher to it with known plain text (B) and the known encrypted text (C).

Getting the needed values

(A) Get the secret

The secret is the imageval value in the sys.sysobjvalues table for the object. This can be retrieved like this:
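The query boils down to something like this (run it over the DAC; the object name is an example):

    -- Requires the Dedicated Administrator Connection (admin:<servername>)
    SELECT o.imageval
    FROM sys.sysobjvalues AS o
    WHERE o.objid = OBJECT_ID('dbo.MyEncryptedProc');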


The known object will be an ALTER statement for that particular type of object, which we can use to calculate the key. This known object needs to be a valid statement, like:
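For a stored procedure, a known object could look like this; in practice it is padded out so it's at least as long as the original object:

    -- The known plain text: a minimal valid ALTER statement for the same object
    ALTER PROCEDURE dbo.MyEncryptedProc WITH ENCRYPTION AS
    BEGIN
        SELECT 1;
    END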

(B) Get the binary known object

Because we’re dealing with binary data we need to convert the known object to binary (known plain). This can be done by using the System.Text.Encoding class with the function “GetBytes”.
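In PowerShell that conversion is a single call. SQL Server stores object definitions as UTF-16, so the Unicode encoding is the one to use:

    # Convert the known ALTER statement to its binary (UTF-16) representation
    $knownObject = "ALTER PROCEDURE dbo.MyEncryptedProc WITH ENCRYPTION AS BEGIN SELECT 1; END"
    $knownPlain  = [System.Text.Encoding]::Unicode.GetBytes($knownObject)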

(C) Get the encrypted known object

To get the encrypted known object (the known secret) we need to alter our object in the database.

We don't want the original object to be replaced with our known object, so we use a transaction that's rolled back right after the imageval value has been retrieved.
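The pattern looks roughly like this, again over the DAC and with example names:

    BEGIN TRANSACTION;

        -- Temporarily replace the object with the known plain text
        EXEC ('ALTER PROCEDURE dbo.MyEncryptedProc WITH ENCRYPTION AS BEGIN SELECT 1; END');

        -- Grab the encrypted version of the known object
        SELECT o.imageval
        FROM sys.sysobjvalues AS o
        WHERE o.objid = OBJECT_ID('dbo.MyEncryptedProc');

    -- Undo the ALTER so the original object stays in place
    ROLLBACK TRANSACTION;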

Start decrypting

To get our decrypted text we have to loop through the secret and apply the known plain and the known secret.

The function below demonstrates how the decryption is put in place:
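This is a simplified sketch of that loop; the real command does more bookkeeping around it:

    function Get-DecryptedObject {
        param (
            [byte[]]$Secret,       # the encrypted original object (imageval)
            [byte[]]$KnownPlain,   # the binary version of the known ALTER statement
            [byte[]]$KnownSecret   # the encrypted version of the known ALTER statement
        )

        $result = New-Object byte[] $Secret.Length

        # Walk through the secret 2 bytes at a time: each character is UTF-16
        for ($i = 0; $i -lt $Secret.Length; $i += 2) {
            $result[$i]     = $Secret[$i]     -bxor $KnownPlain[$i]     -bxor $KnownSecret[$i]
            $result[$i + 1] = $Secret[$i + 1] -bxor $KnownPlain[$i + 1] -bxor $KnownSecret[$i + 1]
        }

        # The decrypted bytes are still UTF-16; turn them back into text
        [System.Text.Encoding]::Unicode.GetString($result)
    }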

The loop increases the counter by two because each character in the secret is 2 bytes in size.

In the end the decrypted data is still binary. To get the text we need to use the "GetString" method of the Encoding object.

[Image: output of Invoke-DbaDbDecryptObject]

The object is decrypted. Now what?

First of all, you should save the code somewhere, preferably in a source control system.

To make things easier I implemented a feature in the command to export all the results to a folder. It will separate objects based on the instance and the object type and create a separate file for each object.

To execute the script with an export directory, use the -ExportDestination parameter.
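For example, with placeholder names:

    # Decrypt an encrypted object and write the result to a folder
    Invoke-DbaDbDecryptObject -SqlInstance 'SQL01' -Database 'DB1' -ObjectName 'MyEncryptedProc' -ExportDestination 'C:\Temp\Decrypted'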

 

That’s the command to decrypt your encrypted objects.

As I said, there are several solutions for this problem using T-SQL, and they are very good.
I always like to use PowerShell for these kinds of problems because it makes it easier to go through multiple servers, databases and objects in one script.

If you want to try it out, make sure you have the latest version of dbatools.

Start to see with tSQLt

What is tSQLt, why use it, and how can I make it a little easier by automating lots of tests with PowerShell?

This is going to be my first post on tSQLt. I will tell you why I love this framework and how using it changed the way I develop databases.

tSQLt is a unit testing framework for SQL Server databases. There are lots of articles out there that did a very good job describing what the framework is and what it does. The links below helped me out quite a bit:

How did I get started with tSQLt?

A couple of months ago I was introduced to PowerShell unit testing with Pester. That has nothing to do with SQL Server databases, but I learned why it's important to make these kinds of tests.

The tSQLt framework had already existed for a while, but I never spent much time on it because it seemed like yet another framework to learn and I did not have the time to dive into it. Until I got introduced to Pester and saw how well it worked for my projects.

tSQLt is completely free and can be downloaded from here. You only need to change a couple of settings on your development instance, install the framework into your development database, and you're done.

As with many new things there is a big learning curve. You spend hours reading documentation and you try out several ways you could use the framework. It took me a while to understand the framework but once I got used to it I was able to create lots of different kinds of tests.

I work in a smaller environment with lots of development. We have not yet implemented continuous integration or continuous delivery, but as we grow in the number and complexity of our projects, we need to implement these practices to improve the quality of our releases.

Why I love tSQLt

The thing with unit testing is, you don't see the benefit of it until things go wrong. I remember creating hundreds of tests for the stored procedures in our database and wondering why I was actually doing it.
Developing the extensive tests, with mocked tables and integration with other procedures, was a tedious task.

But I persevered and finished my tests, and it was not long after that a change had to be made to that specific database: renaming a certain column in a table. I made the change, changed the procedures I knew relied on that column, and ran my unit tests.
The result was that over 10 stored procedures did not return the expected results. Apparently they relied on that column and I had totally overlooked them.

If you have over 400 stored procedures in your database, you don't remember all the code you've written, and at some point you just forget. It's exactly that reason why I love tSQLt.
If I had applied the change to our testing environment, I would've broken a lot of code, and our testers would have come back to me saying things didn't work.

With the test results returning the failed procedures I was able to create a quality release.

Creating the tests

The integration tests were the first unit tests I created, and I was really satisfied with the result. The integration tests get results from a stored procedure to test that it works. These kinds of tests can be quite complicated, but they give you a very good view of whether your code does what it's supposed to do.

But I wanted to take it further than that. With CI/CD (Continuous Integration/Continuous Delivery) in mind I wanted to create extra tests to assure a proper release.

I wanted to make the following tests:

  • Are all the tables present
  • Are all the stored procedures present
  • Check object permissions

This database does not use any views or user-defined functions, and I did not want to go overboard testing for the existence of objects, but you can create a test for any type of object.

Are all the tables present

Imagine a situation where a developer deletes a table by accident and doesn't notice the change. He/she commits it to your source control (you have your database in source control, right?) and it goes to your build server.

I want to be sure all my tables are present before any other test gets executed. It may seem redundant, because the integration tests would fail too, but it gives me a clear error that an object is missing.

Example of the code:
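A sketch of such a test; the test class and table names are examples:

    EXEC tSQLt.NewTestClass 'TestTableExistence';
    GO

    -- One test like this is generated for every table in the database
    CREATE PROCEDURE TestTableExistence.[test if table dbo.Address exists]
    AS
    BEGIN
        EXEC tSQLt.AssertObjectExists @ObjectName = N'dbo.Address';
    END;
    GO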

Are all the stored procedures present

The same situation as with the tables applies to my stored procedures. If a stored procedure were deleted, I wanted to know right away.

The difference from the tables is that deleting a stored procedure does not necessarily break any other part of the database. It would only be noticed when an application relies on it.

Example of the code:
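Same pattern, different object type; again the names are examples:

    EXEC tSQLt.NewTestClass 'TestProcedureExistence';
    GO

    CREATE PROCEDURE TestProcedureExistence.[test if procedure dbo.GetAddress exists]
    AS
    BEGIN
        EXEC tSQLt.AssertObjectExists @ObjectName = N'dbo.GetAddress';
    END;
    GO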

Check object permissions

I cannot count the times I created a new stored procedure and forgot to grant permissions on it. I would implement the change and it would be unusable, because the application had no privileges on it.

If you know the permissions needed for an object, it's easy to get them and test whether they are what you expect.

This one is a little different from the object-existence tests. I first create the tables for the expected and actual values. I then insert the values I expect to be in the result.

In the act part I get the permissions from the object and insert them into the table with the actual values.
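Put together, such a test can look like this; the object, permission and user names are examples:

    EXEC tSQLt.NewTestClass 'TestPermissions';
    GO

    CREATE PROCEDURE TestPermissions.[test permissions on dbo.GetAddress]
    AS
    BEGIN
        -- Assemble: the permissions we expect on the object
        CREATE TABLE #Expected (PermissionName NVARCHAR(128), Grantee NVARCHAR(128));
        CREATE TABLE #Actual   (PermissionName NVARCHAR(128), Grantee NVARCHAR(128));

        INSERT INTO #Expected VALUES (N'EXECUTE', N'MyAppUser');

        -- Act: the permissions actually granted on the object
        INSERT INTO #Actual
        SELECT dp.permission_name, USER_NAME(dp.grantee_principal_id)
        FROM sys.database_permissions AS dp
        WHERE dp.major_id = OBJECT_ID(N'dbo.GetAddress');

        -- Assert
        EXEC tSQLt.AssertEqualsTable '#Expected', '#Actual';
    END;
    GO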

Did you create all those tests manually?

OH HELL NO!

If you’ve read any automation posts from my blog before, you know that when it comes to repetitive tasks, I am as lazy as it comes. I do not do two things manually if I can automate it once.

The only tests that could not be generated were the integration tests. Besides those, I generated all of them.

The code to assert the existence of the objects is basically the same. The only things that change are the object name and maybe the schema.
The same goes for the tests that check the permissions. In most cases the permissions will be the same (at least within the same schema). With some tinkering I was able to generate all the tests for that purpose too.

How did I generate all the tests?

Simple! I use dbatools and a loop to go through the objects.

  1. Generate the test you need but just for one object
  2. Go to PowerShell and let it retrieve all the objects needed
  3. Export the test you created in step one and copy it to a query variable in your PowerShell script
  4. Make a loop around the pasted script to replace the object name
  5. Run the PowerShell script

The result is the script below. The script tests for the existence of the stored procedures. You can of course change the types of objects.
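A sketch of that approach; the instance, database and output path are placeholders:

    # Generate one existence test per stored procedure with dbatools
    Import-Module dbatools

    $db = Get-DbaDatabase -SqlInstance 'SQL01' -Database 'DB1'

    # The single test from step 1, with a placeholder for the object name
    $template = 'CREATE PROCEDURE TestProcedureExistence.[test if procedure {0} exists] AS BEGIN EXEC tSQLt.AssertObjectExists @ObjectName = N''{0}''; END;'

    $tests = foreach ($proc in ($db.StoredProcedures | Where-Object { -not $_.IsSystemObject })) {
        $template -f ('{0}.{1}' -f $proc.Schema, $proc.Name)
        'GO'
    }

    $tests | Set-Content -Path 'C:\Temp\TestProcedureExistence.sql'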

 

I hope you enjoyed this article on creating some handy tests using tSQLt. This framework has really helped me with my day-to-day development and has saved me numerous times.

Comments are always appreciated. If you want to stay informed about new blog posts, please subscribe.

Hide or Show Columns in SSRS Reports Using Parameters

Regularly I have reports with an extensive number of columns.
Because of the number of columns, reports tend to become inefficient and contain more information than we always need. Users may want to select certain columns to make the report easier to read.

Hiding and showing columns in SSRS reports using parameters is a solution to make reports more dynamic and easier to use.

At the time of writing, SQL Server Reporting Services does not yet have checkbox parameters. Instead we will be using a multi-value text parameter to show or hide our columns.

The project file

For the demo I created a dummy project that points to the AdventureWorks2014 database. I created a data set that returns the addresses from the Person.Address table.

The report contains a table that shows the data from the address data set.

If you want to play around with the demo, the project file can be downloaded from here. Make sure you use Visual Studio 2017 or higher.

Create the parameter

For us to be able to hide or show columns, we have to create a parameter. We want to be able to filter on multiple columns, so we create a multi-value parameter.

To do that, either right-click on "Parameters" in your "Report Data" pane, or right-click in the parameter pane at the top, and click on "Add Parameter".

[Image: the Add Parameter menu option]

When you click “Add Parameter” you get a window that looks like this:

[Image: the Report Parameter Properties window]

Give the parameter a name. In this case I named it "ColumnFilter", but you can name it whatever you want.
Select "Text" as the data type (you can choose any other data type, but I like text) and make sure you select "Allow multiple values".

For this example we're going to hide or show the address fields. The data set has two address columns: AddressLine1 and AddressLine2.

Go to the tab “Available Values” and add the following fields:

[Image: the available values for the parameter]

In this example we have two values in our parameter. The label and the value are pretty self-explanatory.

We’re going to add the values to the “Default Values” tab to make sure our report shows the columns by default.

[Image: the default values for the parameter]

Click “OK” and we have our parameter.

Show and hide the columns

Now that we have our parameter we can change the table to hide or show the address columns.

Right click on the column “Address Line 1” and click on “Column Visibility”.

[Image: the Column Visibility menu option]

The following window will appear:

[Image: the Column Visibility window]

Click on "Show or hide based …..". Then click on the "fx" button and the following window appears:

[Image: the expression window]

This window is really powerful. We can do a lot with expressions throughout our SSRS reports.

In this case we’re only going to enter a small line of code:
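For the "Address Line 1" column the expression looks like this; "address1" is the parameter value we defined earlier:

    =IIF(Join(Parameters!ColumnFilter.Value, ",").Contains("address1"), False, True)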

Expressions are a little different from what you might be used to in normal programming. We are not able to search through multi-valued parameters the way we do with arrays in other programming languages (at least I have not found a way yet).
Instead we join all the values of the ColumnFilter parameter and check whether the result contains the value we're looking for.

What the expression does is combine all the selected values, separated by a comma (","). If all the values are selected, we get a string like "address1,address2".
The next part checks if the string contains one of the values: the ".Contains" method returns true if a value is within that string and false if it's not.
The whole thing is wrapped in an IIF that evaluates whether the statement is true. In this case, when the string contains the value we look for, we return false so the column is not hidden.

Add the code to the Expression window and click on “OK”. Click “OK” again to close the column visibility window.

Do the same thing with the column “Address Line 2” but change the value in the expression to “address2”.

Executing the report

Now that we've done all the work, open the report preview.

In this example I deselect the "Address Line 2" value and click on "View Report". The result is the same table, but with the column "Address Line 2" hidden.

The result can be seen below:

[Image: the report with the Address Line 2 column hidden]

Conclusion

Showing or hiding columns with parameters is a very useful way to make your reports more dynamic.

This is one way of doing it; I bet there are other ways of showing or hiding columns, but this one works really well in my environment. Users can adjust their reports and I don't have to create multiple reports to get to the same result.

I hope you enjoyed this post and comments are always appreciated.