Export-DMVInformation Module Update

Last Friday I had the chance to show the Export-DMVInformation module to the Dutch PowerShell user group. After the presentation I got a couple of suggestions and wanted to incorporate them into the module.

Changes:

  1. Possibility to execute the module using the pipeline
  2. Get all the databases in one statement by assigning the value “ALL” to the database parameter.
  3. Replaced standard messages with verbose output
Changes 1 and 2 reduce the amount of time needed to get information from multiple instances and multiple databases. Before, you had to execute the module in an external loop for each instance and each database. Now you can get the information from all the databases across multiple instances in a single line!

Change 3 gives you the ability to choose whether or not to show all the messages. Use the "-Verbose" switch to see the various messages.

An example of how the new module works is shown below:

Export DMV result
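In script form, querying multiple instances and all of their databases in one go could look something like this (a sketch; the instance names are examples and the database parameter is the one mentioned in change 2 above):

# Sketch: pipe multiple instance names into the cmdlet and query all databases
# (the instance names are examples)
'SQLSRV1', 'SQLSRV2\INST2' | Export-DMVInformation -database 'ALL' -Verbose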

Please take a look at the module page to see how to install and use the module.
 
Happy exporting!


Pretty up your KeePass


Having a password manager these days is almost a requirement, especially for people working in IT. One password manager that I've been using for years is KeePass. You can secure the database with a master password and, additionally (and recommended), with a key file.
You can create folders to differentiate the entries and it has a ton of features to make password management easy.

To make things even easier several browsers like Firefox and Chrome support using KeePass to handle passwords for websites.

If you don't already have some sort of password manager, and I don't mean the Post-its on your monitor, I would tell you to at least give it a try. There are other solutions out there, but this one is free and has made my life a lot easier.

So back to the original subject, prettying up KeePass. You might have noticed when you use KeePass that you can assign different icons to an entry.

Entry Details

And when you click the icon button you can select one of many standard icons.

Icon selection

KeePass makes it possible to add a custom icon and save it into the database. You don’t have to have the icons available on your computer.

As you can see in the image above, I also have a selection of custom icons that I've added for my entries. For example, I have an entry for LinkedIn with its official icon.

These days almost every bigger company's website shows its logo in the browser's tab or address bar. That's called a "shortcut icon" or favicon. Most websites name it something like "favicon.ico".

If you open the source code of the website and look for the shortcut icon, you will find a URL that points to the icon file. Copy the URL, open the password entry, click the icon button, click the add button for custom icons and paste the URL into the filename textbox.
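If you don't feel like digging through the HTML by hand, a bit of PowerShell can find the icon URL for you. This is just a sketch: it assumes the site declares its icon with a <link rel="...icon..."> tag and falls back to the conventional /favicon.ico location.

# Sketch: find a site's shortcut icon URL (the site URL is an example)
$site = 'https://www.linkedin.com'
$html = Invoke-WebRequest -Uri $site -UseBasicParsing

# Look for a <link> tag whose rel attribute mentions "icon" and grab its href
$match = [regex]::Match($html.Content, '<link[^>]*rel="[^"]*icon[^"]*"[^>]*href="([^"]+)"')

if ($match.Success) {
    $icon = $match.Groups[1].Value
    # Turn a relative href into an absolute URL
    if ($icon -notmatch '^https?://') { $icon = "$site/$($icon.TrimStart('/'))" }
}
else {
    # Fall back to the conventional location
    $icon = "$site/favicon.ico"
}

$icon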

Add new icon

Click the "Open" button and give it a couple of seconds to download the icon file and import it into the database. After the import the icon is selected in the icon picker window. Click "OK" and you'll see that your entry has its own icon.

Icon selection result

Now, what are you waiting for? Go pretty up your KeePass entries.

No One Listens To Me! Why? Part 2 – Colleagues


In part 1 of the "No One Listens To Me! Why?" series I described a situation where I had to convince my manager to act on a number of issues.

This story will again be a true story where this time I had to convince my colleagues.

Colleagues

I went on the SQL Cruise, which is a fantastic way to learn and meet new people. I attended a session by Argenis Fernandez which touched on the subject of page verification.

Coming back to the office I wanted to start implementing my newly acquired knowledge. I first investigated some of our main databases. What I found was that some of them were not configured with the right page verification: they still had the page verification set to "NONE". These databases were not installed by me and I had not expected them to be configured this way.
If you know anything about corruption you'll know that this setting is crucial for finding corruption in your database.
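If you want to check this on your own instances, the query below lists the databases that don't use CHECKSUM page verification. It's a sketch that assumes the SqlServer PowerShell module (for Invoke-Sqlcmd) is installed and uses an example instance name.

# Sketch: list databases whose page verification is not set to CHECKSUM
Import-Module SqlServer

$query = @'
SELECT name, page_verify_option_desc
FROM sys.databases
WHERE page_verify_option_desc <> 'CHECKSUM';
'@

Invoke-Sqlcmd -ServerInstance 'SQLSRV1' -Query $query

# Enabling it for a single database would look like this:
# Invoke-Sqlcmd -ServerInstance 'SQLSRV1' -Query 'ALTER DATABASE [MyDatabase] SET PAGE_VERIFY CHECKSUM;'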

Due to the impact this could have, I went to the application engineers and arranged a meeting with them. There was a reluctance to come to the meeting, but it was important to discuss, so with upside-down smiles they arrived 10 minutes late.

The following conversation took place:

Me: I did some investigating on a couple of databases and found that there are some settings that need changing. We're now on SQL Server 2012 and we have databases with settings dating back to SQL Server 2000. One of the things I found is that our main database is not being checked for corruption.

Application Engineer (AE) 1: What do you mean it doesn't get checked for corruption? We do a weekly DBCC check, don't we?

Me: Yes we do, but that has little use because the pages in the databases are not marked with a checksum. Besides, it only runs once a week, and even then only when the weekly and monthly processes allow it. There is a setting that enables the verification of pages in the database and this helps us find corruption.

AE 2: Ok but do we need it? Why should we enable it?

Me: Just as I told you, it's for finding out if corruption has taken place in the database. Any corruption in the database is a really bad thing and should be avoided at all costs. If we don't enable this we won't know about corruption at an early stage. We would only find the corruption if a page is severely corrupted or if someone accidentally selects the data. Let me ask you a question: do you want to vouch for a database which could potentially give you corrupt information?

AE 1: When was this setting first released? And you still didn't give an answer as to why we should enable it.

Me: You're not listening to what I'm saying. The feature was first released back in SQL Server 2005 and was so important that Microsoft made it a default setting for all new databases. As I've answered many times already, we need this to find corruption at the earliest possible moment.

AE 1: What's the downside to this? I don't know if our processes will be hit if we change this.

Me: There is no downside, it's a setting that helps us. The only thing it's going to do is, from this point on, write a checksum for every page that's written to disk. The next time a page is read, the checksum is recalculated and compared with the one stored on the page to see if it matches.
But I understand your concern; I want this setting to be enabled and tested thoroughly before we put it in production.

AE 1 and 2: I’m still not convinced.

Me: You know what I’ll do? I’ll setup a demonstration to show you how this process works and how easy it is to corrupt some data in a database. Based on that we can decide what to do from that point.

As you can see from the conversation, I had to deal with people who were scared of change and were reluctant to listen to anything I told them. I still had to cooperate with them to get this change done, so I moved on.

Like I said in the previous article, you have to have the facts to build your case, and that was the reason I wanted to show them how corruption could occur and why this setting was so important.

I created a demo and presented it to the application engineers.
I showed them a healthy database, corrupted it while using our current setting and we didn't find the corruption. I did the same thing with a database using the new setting and of course it showed me the corruption.

The outcome was far from satisfying because they were still not convinced.

Up to this point, let’s check what went wrong:

  1. Strong arguments did not work
  2. Nobody cared about the fact that this situation could actually happen.

Strong arguments did not work

Even with the evidence in front of them I couldn't convince my colleagues. The question they asked me in the end was: "But how many times would this actually happen?"

If I had wanted to, I could've just changed the setting without telling anybody. In previous testing I saw no performance loss and no other symptoms either.

But that's not what I wanted to do. If something went wrong for some reason I would get the blame, and that wasn't what I had in mind.

From this point on I should’ve gone higher up the chain of command to my manager.

Nobody cared about the fact that this situation could actually happen

This hit me right after I did the corruption demo: they didn't care, and as long as it all worked they were not keen on changing anything. I understand the last part, because I don't want to change something that works, but this was different because we hadn't done any thorough checks.

You'll never be able to convince these people. If you get to this point and you've done all the work, laying out the facts (and even demos), then take your losses and go higher up.

I did everything but they still wouldn't listen

Ok, so this situation is an extreme one. I was not able to convince my colleagues and this was a scenario that I couldn’t have won.

Instead I went higher up the chain because, in the end, my supervisor would be responsible if something went wrong. My supervisor first reacted the same way as my colleagues. To make sure I got my point across I went to the IT manager.
I explained the situation and showed him the possible downtime and how much it would cost if we didn’t put this setting in place.

In the end the IT manager was not pleased with the whole situation and my colleagues and supervisor were called in to explain their reasons. That last part came back to bite me.
That didn't stop me, because if we hadn't put the setting in place I would have been responsible for fixing everything, even though my colleagues were the ones who decided not to change anything.

Conclusion

Like I said, this situation was an extreme one. In normal circumstances, with people with the right attitude, this would never happen. I've had other situations where it was the opposite and my colleagues would listen to reason.

In the end it was my responsibility to make sure the data is protected. You'll always have people who will not agree with you about something. In most situations you'll be able to convince them with the cold hard facts.

If that still doesn’t work you have to go higher up the chain of command. I’ve always been rigorous when it came to my work and if someone was in the way of me doing my work properly I would go around them.

That doesn't mean you have to do that all the time. Choose your battles wisely and put your effort where it will make the biggest impact, so you can deliver the best work possible.

Work After Hours


It's 8 PM, the kids are in bed and the wife and I finally get some time for ourselves, until I get an e-mail from work. Do I open the e-mail after hours or will I get to it in the morning? But what if it's important and I have to act now? These days we have to deal with blurring boundaries between work and life, which has an impact on the so-called work/life balance.

I recently read an article about France introducing a new law where French workers are no longer obliged to respond to work-related e-mails or phone calls after hours.

What's not really clear is whether this bill limits the communication to e-mail and phone calls only, or whether it also includes messaging apps. These days we also use apps like Slack, HipChat and WhatsApp to communicate with colleagues, which could be a loophole.

The thing that comes to mind when I read this is that in most situations it will not work, because you're removing all the flexibility. The other thing is that most companies evaluate employees based on their availability and their flexibility.

There is also a side note that employers are allowed to make different arrangements with employees.
Employers will probably adjust contracts from this point on so that, if you're in some sort of position where availability is important, you're obliged to answer, which will render the law useless in a lot of situations.

I never had a problem responding to work-related e-mails after hours because I think that, in my profession as a DBA, it's part of the job. Usually you're the lone DBA (or part of a small team) and when shit hits the fan there's nobody else who is able to fix it. That's not only the case in the DBA field but with most colors of the IT rainbow.

In my opinion you're responsible for setting your own boundaries, to make sure your work/life balance doesn't get distorted in such a way that it affects your personal life in a bad way.

One thing I've always done, when I was in a situation where I had to be available after hours, is to have a separate phone. If I'm on vacation and I'm not able to respond, or don't want to respond, I leave that phone at home. I would give my personal phone number to a very limited group of people who could try to contact me only in case of a really bad situation.
This has always worked out for me and fortunately I’ve had employers that respected that arrangement.

Still, France passed this law on January 1 and the thought behind it is admirable, but whether it will ever work is something we'll see in the future.


My SQL Server journey with Linux


Linux, where did it all begin

A couple of years ago there was an April Fools' joke on SQLServerCentral.com about SQL Server coming to Linux. Now we know this isn't a joke and SQL Server on Linux is a fact.

I started to explore Linux around 1998 on my AMD K6 266 MHz PC. I bought Linux because back then I could only download an entire distribution over a dial-up connection going 5 KB a second. If I had downloaded it from the internet, the phone bill would have blown your mind.

My first Linux distro was Red Hat Linux version 5.1 (Manhattan) and back then there was no nice GUI for the install, so it took me a while to get things started. It wasn't like I had loads of documentation on the internet back then, so I did what I could using books and the small pieces of information I found online.

I didn't have the luxury of owning multiple computers like some classmates had, so I decided to do a dual-boot install next to my Windows 98 installation (oh yeah, I was really early to upgrade from Windows 95).

Nowadays I can download a Linux distro in a couple of minutes and within 30 minutes I have a virtual machine installed and ready to go.

Preparation

Where do we start? Let's first download a distribution that's supported by Microsoft. I decided to download Ubuntu Server 16.04 because I don't need all the fancy GUI stuff that's in the Desktop version and I don't want too much software I'm never going to use.

I installed the distribution in a VM and installed the OpenSSH server to be able to connect to the VM using PuTTY. I use PuTTY because I want to easily copy and paste, and I don't like being connected to the console all the time.

Installing SQL Server

I could give you a complete overview of what I did to install SQL Server, but Microsoft already did that for you.

Installing SQL Server using the "Install SQL Server on Linux" article from Microsoft is as boring as can be, which is a good thing. If you go through the setup from that page you'll notice that it all just works.

Conclusion

Microsoft did a really good job providing SQL Server for Linux, because the installation was as boring as can be. Of course I know this is still the CTP and that most of the configuration was done for us, but it's still a really good introduction of SQL Server to a completely different platform.

So if you haven’t already tried to install SQL Server on Linux go try it out because it’s a piece of cake.


Automate Glenn Berry's DMV Scripts With PowerShell


Last week I attended one of the 24 Hours of PASS sessions by Glenn Berry about diagnosing performance problems with DMVs. If you don't know what the 24 Hours of PASS is: it's an initiative by PASS where there are 24 sessions, one session every hour for an entire day.

Glenn Berry is also known as Dr. DMV because he made some genius scripts to analyze your instances and databases using the dynamic management views (DMVs).

I've used Glenn's DMV scripts for years but always found them tedious to execute, because there are about 70 individual scripts that either query the instance or retrieve database information. Glenn did make it easier for you by creating Excel templates to save the information in.
There are separate scripts for each version of SQL Server, updated every month. Glenn only updates the versions for 2012 to 2016 with new features. The scripts are very well documented and even contain hints about how to solve some issues.

Like almost every DBA, I'm very lazy when it comes to repetitive tasks, and when something takes a couple of minutes to do manually I'll make sure I don't have to do it again by automating it.

So that's what I did: I created a PowerShell module that exports all the information from the DMV queries to an Excel sheet automatically.

The hardest part of automating this was parsing the entire script into individual queries. Fortunately Glenn made this easy for me by placing a marker at the end of each query.
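Just to illustrate the idea (this is not the module's actual code), splitting a script file into separate queries on an end-of-query marker could look something like this; the marker string and the file path are assumptions:

# Sketch: split a DMV script file into individual queries on a marker
# (the marker string and the file path are assumptions for illustration)
$marker  = '------'
$content = Get-Content -Path 'C:\Temp\dmv\queries\SQLServer2016_Diagnostic_Queries.sql' -Raw

# Split on the marker and drop the empty fragments
$queries = $content -split [regex]::Escape($marker) | Where-Object { $_.Trim().Length -gt 0 }

"Found $($queries.Count) individual queries"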

Note: Because the scripts are only updated from version 2012 onward, I made some changes to the older scripts to make sure the files could be parsed. If you let the module download the DMV files you'll have no issues executing it.

How it works

The module has one cmdlet called “Export-DMVInformation”.

The module is dependent on the module “ImportExcel” to make it possible to export the data to an Excel file. You can find the ImportExcel module here.

The module can be executed simply by supplying the name of the instance, but there are other options to make the end result more satisfying.

The cmdlet has the following parameters:

  1. instance: the name of the instance. If it's a default instance just use the server name.
  2. database: the name of the database to query. If no name is given, the master database will be used.
  3. username: the username in case SQL authentication is needed.
  4. password: the password that goes with the username if SQL authentication is needed.
  5. dmvlocation: the location of the DMV queries. The default is "My Documents\dmv\queries".
  6. destination: the destination where the Excel files will be written to. The default is "My Documents\dmv\results".
  7. excludeinstance: a flag to exclude the instance queries. The default is 0 or $false.
  8. querytimeout: the timeout in seconds, to prevent a query from taking up too much time. The default is 300 seconds (or 5 minutes).
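Putting the parameters together, a call could look something like this (the instance name, database name and paths are just examples):

# Sketch: export the DMV information for one instance and one database
Export-DMVInformation -instance 'SQLSRV1' `
                      -database 'AdventureWorks' `
                      -dmvlocation 'C:\Temp\dmv\queries' `
                      -destination 'C:\Temp\dmv\results' `
                      -querytimeout 600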

The heart of the cmdlet is the DMV file, because without it the cmdlet doesn't work. You don't have to worry about which version needs to be used, because the cmdlet will figure that out itself.
If there are no DMV files, or you couldn't be bothered to download them, don't worry: the cmdlet will download the DMV files for you to the default location set in the "dmvlocation" parameter.

The result

A partial result of the cmdlet can be seen below:

Export-DMVInformation Result

When the cmdlet is finished there will be an Excel file with data for the instance and a separate file for the database.

How to install

The easiest method to install the module is by copying the code below and entering it in a PowerShell window:

Alternative installation method

Alternatively you can download the module from here.

Unzip the file.

Make a directory (if not already present) named “Export-DMVInformation” in one of the following standard PowerShell Module directories:

  • $Home\Documents\WindowsPowerShell\Modules (%UserProfile%\Documents\WindowsPowerShell\Modules)
  • $Env:ProgramFiles\WindowsPowerShell\Modules (%ProgramFiles%\WindowsPowerShell\Modules)
  • $Systemroot\System32\WindowsPowerShell\v1.0\Modules (%systemroot%\System32\WindowsPowerShell\v1.0\Modules)

Place both the “psd1” and “psm1” files in the module directory created earlier.

Execute the following command in a PowerShell command screen:

Import-Module Export-DMVInformation

To check if the module is imported correctly execute the following command:

Get-Command -Module Export-DMVInformation or Get-Module -Name Export-DMVInformation

If you see a list with the functions then the module has been installed successfully. If nothing shows up, something has gone wrong.

I hope you enjoy the module and that it can help you to get performance information fast.


Enable changes that require table re-creation


The Problem

When you're developing on an instance you might want to change something in a database where the change requires the table to be re-created. By default, SQL Server Management Studio (SSMS) prevents saving changes that require the table to be re-created.

Examples of changes that require table re-creation:

  • Changing a column to no longer allow NULL values
  • Adding a new column before an existing column
  • Moving a column

In earlier editions it was possible to make some of these changes directly using the GUI. What SSMS does in the background is unload the data of the table into a temporary table (not a # temp table), then delete the original table and rename the temporary table to the original table name.
You can imagine that actions like that can have a huge impact on a large (maybe production) table.
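To give you an idea, the script SSMS generates behind the scenes roughly follows the pattern below. This is a simplified sketch with made-up table and column names, wrapped in Invoke-Sqlcmd for illustration; the real generated script also recreates constraints, indexes and permissions.

# Sketch of the kind of script the table designer generates behind the scenes
# (simplified; the table and column names are made up)
$query = @'
BEGIN TRANSACTION;

CREATE TABLE dbo.Tmp_MyTable (
    Id   INT          NOT NULL,
    Name VARCHAR(130) NULL
);

-- Copy the existing data into the new table
INSERT INTO dbo.Tmp_MyTable (Id, Name)
SELECT Id, Name FROM dbo.MyTable;

-- Swap the tables
DROP TABLE dbo.MyTable;
EXEC sp_rename 'dbo.Tmp_MyTable', 'MyTable';

COMMIT;
'@

Invoke-Sqlcmd -ServerInstance 'SQLSRV1' -Database 'MyDatabase' -Query $query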

When you apply such a change you’ll get the following error message:

prevent changes error

The solution

Again, be very careful when enabling this option if you regularly work in production environments. I very much hope that you never make these kinds of changes using the GUI, especially not in production. Just don't!

To enable the changes in SSMS go to the menu “Tools” and click on “Options”.

Go to the "Designers" section and uncheck the checkbox "Prevent saving changes that require table re-creation".

prevent_changes_solution

Click on “OK” and you’ll be able to apply the changes you couldn’t do before.

Remember that this option is set for your SSMS installation and that this will be used for every instance you connect to.

Happy developing!

Does Log Shipping Transfer Schema Changes?


Recently I got a project to implement log shipping in SQL Server for multiple databases. I've configured log shipping before, but that was years ago and I bet stuff has changed since then to make it easier to implement.

Log shipping existed partially in SQL Server 6.5 and 7.0 and was officially released in SQL Server 2000. Log shipping is still used pretty often today as a warm standby or to provide readable secondary databases for reporting.

The Problem

One of the concerns was that a schema change, like adding a table to the database or altering a column in an existing table, would affect the continuity of, or even break, the log shipping.
If you look at replication, for instance, it relies on the schema to work, and if the schema changes you've got to make sure the schema is the same on the secondary server.

I was pretty sure that schema changes would be applied with log shipping, but I didn't want to assume anything, so I tested it to make sure it wouldn't cause any problems.

I wanted the following cases to get tested:

  1. Create a table
  2. Add a user
  3. Add a stored procedure
  4. Change the data type of a column

The Test

I created a database called “LogshippingTest” and set up log shipping to a database called “LogshippingTest_DR” on the same machine. For this test it was fine that both databases resided on the same machine but in the real world you’ll probably want to have different machines.

I created a table called “Test”:

logshipping_schemachanges_database

There are three columns and I want to see what happens when I change the column with the VARCHAR data type from 30 characters to 130.
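In script form the table looked roughly like this; only the VARCHAR(30) column matters for the test, so treat the other column names as illustrative.

# Sketch: the test table (the column names besides the VARCHAR(30) one are illustrative)
$query = @'
CREATE TABLE dbo.Test (
    Id          INT IDENTITY(1,1) NOT NULL,
    SomeValue   VARCHAR(30)       NULL,
    DateCreated DATETIME          NOT NULL
);
'@

Invoke-Sqlcmd -ServerInstance 'SQLSRV1' -Database 'LogshippingTest' -Query $query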

Keep in mind that there are caveats to changing the schema with log shipping. Changing a column from NULL to NOT NULL, for example, will result in a change where you have to re-create the table and import the data.

The DR database looks like this after implementing log shipping:

logshipping_schemachanges_databasesecondary

It all looks good. Now let's execute the log shipping jobs and make the changes.

To change the table I executed the following script on the primary database:
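It boiled down to widening the VARCHAR column from 30 to 130 characters, something along these lines (the column name matches the sketch above and is illustrative):

# Sketch: widen the VARCHAR column from 30 to 130 characters
$query = @'
ALTER TABLE dbo.Test
ALTER COLUMN SomeValue VARCHAR(130) NULL;
'@

Invoke-Sqlcmd -ServerInstance 'SQLSRV1' -Database 'LogshippingTest' -Query $query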

Let’s see if the changes are in the transaction log:
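A query against the undocumented fn_dblog function does the trick; the sketch below is a generic version of such a query:

# Sketch: look at the log records in the active transaction log
# (fn_dblog is undocumented; add a WHERE clause on Operation, AllocUnitName
#  or [Transaction Name] to narrow the output down)
$query = @'
SELECT [Current LSN], Operation, Context, AllocUnitName, [Transaction Name]
FROM fn_dblog(NULL, NULL);
'@

Invoke-Sqlcmd -ServerInstance 'SQLSRV1' -Database 'LogshippingTest' -Query $query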

This gave the following result:

logshipping_schemachanges_tlogdata

That looks pretty good. Let's check the DR database to see what happened after the log shipping ran.

logshipping_schemachanges_resultcolumnchange

As expected, after the log shipping was executed, the change was transferred to the secondary database.


Conclusion

The same results could be seen when creating the other objects like the users and procedures.

We can safely say that schema changes are applied on the secondary database in log shipping.

Happy log shipping!