T-SQL Tuesday #104: Code You Would Hate To Live Without


This month’s T-SQL Tuesday is hosted by Bert Wagner (w | t), and he asks us to write about code you’ve written that you would hate to live without.

Over the years I’ve used other people’s code regularly, whether it came from GitHub, Stack Overflow, blogs, etc.

I have always appreciated the way the SQL Server community (and other communities) share their code to help others.

I have worked alongside lots of people who were not very keen on sharing. People actually told me that their code was theirs and theirs alone, and that nobody was to see it.

Although I don’t agree, I do understand people not wanting to share their code. They may have worked hard on it, and it could give them an advantage over people who don’t have that “special” piece of code.

There are a couple of caveats to that:

  1. Your code could have bugs that you never noticed
  2. Your code could be improved by other people with different views

I am completely the opposite. If my code can help one person make their life easier, I have already won. In the last few years that has been proven by my contributions to dbatools. Through contributing I have learned a lot about open source projects and about SQL Server, and I have met new people from all over the world who in their turn shared their code to make my life easier.

Code that I couldn’t live without

Back to the task at hand: let me tell you about code that I’ve written that I cannot live without.

Anything I create that is usable for other DBAs tends to end up in the dbatools module. The reason I do that is that it’s an amazing platform and it makes it easy for me to reach a large audience.

Testing Backups

One command that I cannot live without is the dbatools command Test-DbaLastBackup. Testing your backups is one of the most important things you can do as a DBA. I did not write that command, but I did use it in one of my own.
My command is basically a wrapper around Test-DbaLastBackup that sends me an email with all the tested backups and the results in a csv file. The code is freely available on my blog.
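The full script is on the blog, but the gist of such a wrapper can be sketched in a few lines of PowerShell. The instance names, paths and addresses below are placeholders, not the ones from my environment:

```powershell
# Restore and verify the most recent backups on a test instance
# (server names are placeholders)
$results = Test-DbaLastBackup -SqlInstance 'sql01' -Destination 'sqltest01'

# Write the results to a csv file
$csv = 'C:\Temp\LastBackupTests.csv'
$results | Export-Csv -Path $csv -NoTypeInformation

# Mail the csv file with the test results
Send-MailMessage -SmtpServer 'smtp.example.com' `
    -From 'dba@example.com' -To 'dba@example.com' `
    -Subject 'Backup test results' -Attachments $csv
```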

Log Shipping

Some other things I’ve written are the log shipping commands in dbatools. At my current workplace we have log shipping enabled, and everybody who has set up log shipping knows it’s a PITA if you have lots of databases.

I recently presented a session about the log shipping commands on the PASS DBA Virtual Chapter. The session was recorded and if you’re interested you can watch it through this link.

One log shipping command in particular that I use regularly is Test-DbaLogShippingStatus. This command makes it easy for me to check up on my log shipping servers and see the status of each database.
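For example, a quick check for databases that are not in a healthy state could look something like this (the instance name is a placeholder, and I’m assuming here that the output exposes a Status property):

```powershell
# Show every log shipped database that is not reporting a healthy status
Test-DbaLogShippingStatus -SqlInstance 'sql01' |
    Where-Object { $_.Status -ne 'All OK' } |
    Select-Object SqlInstance, Database, Status
```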

SQL Server Agent

I don’t like to click around and if I can do something in a console I choose to do that. In most cases it is faster than the alternative.

The same goes for the SQL Server Agent. You have your jobs, schedules, job steps etc. For all these objects I have created commands in dbatools. The commands enable you to create, edit or remove any of those objects with ease.
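As a sketch, creating a job with a step and a daily schedule could look like this (the instance, job and procedure names are made up for the example):

```powershell
# Create the job itself
New-DbaAgentJob -SqlInstance 'sql01' -Job 'NightlyMaintenance'

# Add a T-SQL job step that runs a (hypothetical) maintenance procedure
New-DbaAgentJobStep -SqlInstance 'sql01' -Job 'NightlyMaintenance' `
    -StepName 'Run checks' -Subsystem TransactSql `
    -Command 'EXEC dbo.usp_NightlyChecks;'

# Schedule the job to run daily at 02:00
New-DbaAgentSchedule -SqlInstance 'sql01' -Job 'NightlyMaintenance' `
    -Schedule 'Daily 2AM' -FrequencyType Daily -StartTime '020000' -Force
```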

So these are the scripts I cannot live without.

Maybe these scripts can help you in your journeys through SQL Server.






Reflect on 2017 And My Learning Goals for 2018


Unfortunately I missed the last T-SQL Tuesday of 2017, but I’ll share my learning goals for 2018. I also want to reflect on last year because I think it’s good to see if I was able to improve.

Reflect on last year

At the end of 2016 I set myself the following goals:

  • Do more presentations
  • Get better in PowerShell development
  • Play a bigger part in the SQL Server community

I got really excited in 2016 by getting drawn into the SQL Server community through presenting and helping people out. That gave me a lot of energy and I wanted it to continue in 2017. And oh boy did it continue!

I got really involved with the dbatools project. I developed a bunch of functions to support the SQL Server Agent, log shipping and a bunch of others. With that I hit two birds with one stone: playing a bigger part in the SQL Server community and getting better at PowerShell development.

I also wanted to do more presentations. I submitted sessions to lots of events and got to present a couple, and not the smallest ones either: a session at SQL Saturday Oregon and a lightning talk at the PASS Summit.

One thing I really wanted to get better at was unit testing. As a DBA with development experience I had never come into contact with this concept, but I got excited about it thanks to Rob Sewell (b | t), who showed me some interesting uses of PowerShell unit testing with Pester.
The unit testing concept was a real eye-opener with a big learning curve. At the end of 2017 I was able to create sophisticated unit tests for SQL Server (tSQLt) and PowerShell (Pester), which helped me a lot with my development.
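To give an idea of what got me hooked, a minimal Pester test (v4 syntax) for a trivial, made-up function looks like this:

```powershell
# A made-up function, just to have something to test
function Get-SquareRoot ([double]$Number) {
    [math]::Sqrt($Number)
}

# The Pester tests describing the expected behaviour
Describe 'Get-SquareRoot' {
    It 'returns 3 for 9' {
        Get-SquareRoot -Number 9 | Should -Be 3
    }
    It 'returns 0 for 0' {
        Get-SquareRoot -Number 0 | Should -Be 0
    }
}
```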

Goals for next year

So the T-SQL Tuesday post had the following questions:

  • What do you want to learn?
  • How and when do you want to learn?
  • How do you plan to improve on what you learned?

What do you want to learn?

I always want to learn a lot of things, but if I were to make a shortlist it would contain the following:

  • Query Store
  • SQL Server 2017
  • Continuous Integration
  • More consistent blogging

I’m a database administrator with knowledge of database development and other processes. I’ve done small projects with business intelligence, but that subject does not really suit me. I understand the technology and the thought process, and that’s good, but I could not switch to it full time.
I get excited when I see an inefficient process or piece of code and I’m able to improve that process through automation or by redeveloping the procedures.
That’s where my strength lies and that’s where I want to continue.

Query Store

I went to PASS Summit 2017 and there were a lot of talks about Query Store. I have not yet had the chance to take a deep dive into the material, but I will very soon. I think this is one of the things that will impact my work the most.

SQL Server 2017

That’s a big topic and something I’ve been looking forward to. I have been doing tests with SQL Server 2017 since it was first released, but this year we’re going to put it into production.

Continuous Integration

You already read about me being active with unit testing with tSQLt and Pester. Now I want to take the next step and implement CI for the database development.

It costs me a lot of time to create new release scripts and to make sure everything works as expected. I just want to automate that and make my life and that of the other developers easier.

More consistent blogging

Last year was a really busy year for me, both work-wise and personally. Because of all the pressure I was not able to blog as much as I wanted, and the only thing I was able to do was create some quick drafts. I have about 50 drafts lying around that I could finish, and I probably should.

My goal for this year is to publish a new blog post at least every two weeks.

How and when do you want to learn?

Like many other people in the tech industry, I have a full-time job, and finding time to study can be tough. We never have a 40-hour work week, and sometimes we put in 10 or more hours a day.

I have the privilege that my employer enables me to do self-study. That means I can spend about 10% of my time learning new things or improving my current skills.

But that’s mostly not enough. When I want to take a deep dive I get into the details of a subject. I’ll ask questions that in most cases would not make sense, or I’ll try to break things to see what would happen in a production environment.

That answers the question about when I learn things, but not how. The “how” largely depends on the topic at hand. In most cases I search for already published articles and google for any information that I can find.

In most cases that will give me enough information to get started. Another place that has a lot of content is the Microsoft Academy. If all of that still does not satisfy me, which does not happen very often, I will turn to the SQL Server community and ask questions. One place with lots of IT professionals present is the SQL Server Community Slack channel.

How do you plan to improve on what you learned?

The subjects I explained earlier are important to me because they can make my life easier. They will improve my skills as a DBA and will make sure fewer mistakes are made.

I already mentioned that I want to spend more time on my blog. I think blogging is an important part of my career and it has enabled me to connect with the rest of the world.
If you blog about something you need to know all the ins and outs of the subject. You cannot just write about something without being confident about the information you’re presenting. Diving into the subject to make sure I have all the information will make me better at it.

Someone once told me: if you can explain a particular subject in children’s language, you know enough about it to present it to others. That’s true for both blogging and presenting a session.

Talking about presenting, I’m also going to spend more time developing presentations and submitting them to events. I love to present about the stuff I do, and if I can help just one person, that’s already a win for me.
All the new stuff I’ll be learning could also end up in a session so expect me to submit lots of sessions this year.

This is going to be a busy year but I like that. Lots of things to get better at.

Thoughts on PASS without a SQL Saturday

SQL Saturday

I was kind of surprised when I heard about PASS’s plans to no longer spend money on SQL Saturday and I want to share my thoughts about it.

Constantine wrote a blog post about this matter and I felt that I should do a blog post about it too.

There is a quote in the #SQLSaturday channel in the SQL Community Slack that blew my mind.

They [the PASS board] are already signaling they don’t want to spend $$ on SQL Sat and a few of the board members would just as soon SQL Sats die.

Let me be clear, if there were no local user groups and/or SQL Saturdays I do not think that PASS would exist and let me explain why.

I went to my first SQL Saturday a long time ago and I immediately felt that I was part of something big. The amount of effort that was put in by all the volunteers and sponsors to let us professionals grow was incredible.

For any data professional to grow you need content that’s up-to-date. You need interaction with other experts to get new ideas.
The SQL Saturday volunteers, the presenters, the organizers, the sponsors and all the people in the background make it possible to get the most up-to-date content there is.
There is no other organization in the world that provides that amount of content for free (or sometimes a small fee) and lets you interact with professionals in such a pleasant manner, without having to spend thousands of dollars to attend the PASS Summit.

Although PASS does offer lots of content through virtual chapters I think that’s only partially important to data professionals. SQL Saturdays give us a platform to connect with other people from our field, provide networking opportunities and also job opportunities.

I can surely say that without the SQL Saturdays I would not be at the point where I am right now.

The volunteers have a really tough job arranging venues, food, drinks and so on to organize the events, and they all do it out of love for the SQL Server community.
Some of the SQL Saturdays would not survive without the help of the PASS organization. If PASS were to cut the funding of those events, I bet they would disappear really fast.

To conclude: cutting funds for SQL Saturdays will cause the events to disappear.
The disappearing events will cause the organization to no longer have the platform it had to communicate with data professionals.
No longer having that platform will make people find other ways to get the content, and they will probably skip the PASS Summit, causing a loss in revenue that in the end could be the end of the PASS organization.

Looking back at my first lightning talk


I had the opportunity to speak at the PASS Summit 2017 with a lightning talk.

This was the first time I ever did a lightning talk and it was different than a normal session.

It all boils down to the fact that you only have about 10 to 15 minutes for the entire talk.
This brings a couple of complications because suddenly you have to make sure your content fits within that time frame. You also have to make sure that, with the amount of content, you don’t go too fast. Going too fast in a lightning talk is disastrous because attendees will not be able to follow you.

I normally do 60-minute sessions where I have the time to dig a little deeper than originally planned.

So how can I do such a short session and still make sure that the audience gets bang for their buck?

After thinking about it I came up with a couple of steps:

  1. Write down the subjects I wanted to talk about
  2. Create the content for each subject and make sure it is short and to the point
  3. Present the content out loud and record it

During the recording I would watch the timer in PowerPoint to see when I would hit the 10 minute mark.

If I had gone over I would go into the content again and try to adjust it.
If I made it within those 10 minutes I would watch the recording and pay attention to how fast I was talking. I had to adjust multiple times to make sure I wasn’t going too fast.

After a couple iterations I was satisfied with the preparation and I could go into the presentation with confidence.

How did the session go?

This was actually really funny. I noticed on the itinerary for my flight that I would have to leave for the airport before 10:00 AM. The lightning talks were scheduled from 9:30 AM to 10:30 AM, so I had to make sure I was the first one to present, because otherwise I wouldn’t make it to my flight.

Fortunately the other presenters were kind enough to let me go first, and I thank them for it.

Because I prepared this session pretty well, everything went smoothly. I was able to do my talk, show a couple of demos and finish within the 10-minute frame.

A couple of weeks later I got the feedback from several people from the audience and I was excited about that. Lots of people liked the content and the overall session so that was a big win for me.


Looking back, this was a very good experience for me. I find doing a lightning talk way more difficult than a normal session. It all comes down to preparation and placing yourself in the audience’s shoes.

If you attended the lightning talks at PASS Summit 2017, please leave a comment, because I’d really like to know your opinion about what went well and what didn’t. There is always an opportunity to learn and I like to get better with every session I do.


Why I love dbatools


I’ve been working on the dbatools project for a while now and I felt like telling you why I love this project.

A little background about me: I’m not a full-time programmer. I learned programming with Java years ago and did little personal projects with PHP, C# and a couple of other languages. I started using PowerShell about 7 years ago and thought I was capable of delivering solid code. That all changed with dbatools.

Being part of the team

So for the last year I’ve been part of the project, from debugging code and adding new features to existing functions, to adding new functions.

A world opened up when I first joined the project. I had no idea I would learn this much in such a short time. I had to deal with Git, QA, advanced modules, styling, Pester, etc.

My first command was a little command called Find-DbaOrphanedFile. At one point Chrissy LeMaire asked if someone could make a function to find orphaned database files. I jumped in because I knew this was something I could do, and I hadn’t yet had the chance to help out with the project. In about two weeks I had the function done, as far as I knew. It did the job, and now I wanted to present my awesome code to the other developers.

My first challenge was dealing with Git. I had never used Git, and the only source control my company had at the time was Visual SourceSafe. Don’t judge me! I wasn’t the one who decided to use an out-of-date piece of software. Of course, when you do things for the first time you’re going to fail, and I failed big time. I made a branch from the wrong fork, committed stuff but didn’t synchronize it, created a pull request (PR) against the wrong branch and more. I did everything wrong you could possibly do wrong, and still Chrissy was nice as always, trying to help me get everything on track.

After the Git hurdle I finally submitted the PR, and after about a day I got a kind but long comment back from one of the members who did the QA. Before I started, I had read some of the standards the project put in place, but as a developer you want to get started, and as a result I forgot some of them. The funny thing was that I learned more about PowerShell, modules, functions, standards and so on in that one comment than I had in the previous 4 years.

What struck me was the way the members dealt with people like me who weren’t familiar with a more professional way of developing. The members understood that if they reacted the wrong way, I would’ve quit helping out with the project because it would have been too overwhelming.

That’s one of the strengths of the project: embracing everyone who wants to help out and finding a way to make everyone a functional member of the team, whether as a developer, doing QA, writing articles, and so on.

That made me more enthusiastic about the project and I started to contribute more. Now I’ve become one of the major contributors.

In the last year I learned more about PowerShell than in all my years of using PowerShell before. I’ve become more precise when it comes to my code, I go over my tests in a meticulous way and I try to keep to my coding standards. I looked back at some code I’d written over the years and imagined that some crazed-out monkey with a brain fart, high on crack, made it. Now I go through all the code I’ve written over the years and redo everything that’s no longer up to my new standards.

Being a user of the module

The other reason I love dbatools is that it has made my life so much easier. I see myself as one of those lazy DBAs who would rather spend a couple of hours automating their work than do the same thing over and over again.

The project has about 200 different functions and is close to releasing version 1.0. That is a big deal, because a lot of standardizing, testing and new functions are going to be released with it. With that amount of functionality in one single module, there is bound to be a solution in there to make your work easier.

Nowadays I test my backups every day using the Test-DbaLastBackup function. I see the status of all the backups on all my database servers within seconds. I retrieve information about many servers without having to go through each one of them. And migrations have been a blast.

If you aren’t excited about the project yet, please visit the website and look at what has already been accomplished. Go see all the commands and then decide if it’s worth implementing in your daily work. If you’re wondering whether there is a command that could help you out, the command index can help you find more information. It is still in beta, but we’re working on getting all the information in there.

Thank you!

I want to thank the dbatools team for making me feel welcome and for boosting my knowledge to a point where it has made a significant impact on the way I do things in my daily life.
I also want to thank all the contributors that put in all the effort to get the project where it is today. Without all the people putting in their valuable time this wouldn’t have worked.

No One Listens To Me! Why? Part 2 – Colleagues


In part 1 of the “No One Listens To Me! Why?” series I described a situation where I had to convince my manager to act on some issues.

This will again be a true story, but this time I had to convince my colleagues.


I went on the SQL Cruise, which is a fantastic way to learn and meet new people. I attended a session by Argenis Fernandez (tl) that touched on the subject of page verification.

Coming back to the office, I wanted to start implementing my newly acquired knowledge. I first investigated some of our main databases. What I found was that some of them were not configured with the right page verification: they still had page verification set to “NONE”. These databases were not installed by me, and I had not expected them to be set up this way.
If you know anything about corruption, you’ll know that this setting is crucial for finding corruption in your database.
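Checking and fixing the setting only takes a couple of statements. The database name below is a placeholder:

```sql
-- Find databases that do not use CHECKSUM page verification
SELECT name, page_verify_option_desc
FROM sys.databases
WHERE page_verify_option_desc <> 'CHECKSUM';

-- Enable it; only pages written after the change get a checksum
ALTER DATABASE [YourDatabase] SET PAGE_VERIFY CHECKSUM;
```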

Due to the impact this could have, I went to the application engineers and arranged a meeting with them. There was a reluctance to come to the meeting, but it was important to discuss, so with upside-down smiles they arrived 10 minutes late.

The following conversation took place:

Me: I did some investigating on a couple of databases and found that there are some settings that need changing. We’re now on SQL Server 2012 and we have databases with settings dating back to SQL 2000. One of the things I found is that our main database is not being checked for corruption.

Application Engineer (AE) 1: What do you mean it doesn’t get checked for corruption? We do a weekly DBCC check, don’t we?

Me: Yes we do, but that has little use because the pages in the databases are not marked with a checksum. Besides, it only runs once a week because of the weekly and monthly processes. There is a setting that enables the verification of pages in the database, and this helps us find corruption.

AE 2: Ok but do we need it? Why should we enable it?

Me: As I just told you, it’s for finding out if corruption took place in the database. Any corruption in the database is a really bad thing and should be avoided at any cost. If we don’t enable this, we won’t know about corruption at an early stage. We would only find the corruption if the page is severely corrupted or if someone accidentally selects the data. To ask you a question: do you want to vouch for a database that could potentially give you corrupt information?

AE 1: When was this setting first released? And you still haven’t given an answer why we should enable this.

Me: You’re not listening to what I’m saying. The feature was first released back in SQL Server 2005 and was so important that Microsoft made it the default setting for all new databases. As I’ve answered many times already, we need this to find corruption at the earliest possible moment.

AE 1: What’s the downside to this? I don’t know if our processes will be hit if we change this.

Me: There is no downside; it’s a setting that helps us. The only thing it’s going to do is, from this point on, create a checksum for all the data that’s written. The next time a page is read, it will compare the checksum with the page and see if it matches.
But I understand you; I want this setting to be enabled and tested thoroughly before we put it in production.

AE 1 and 2: I’m still not convinced.

Me: You know what I’ll do? I’ll set up a demonstration to show you how this process works and how easy it is to corrupt some data in a database. Based on that, we can decide what to do from there.

As you can tell from the conversation, I had to deal with people who were scared of change and reluctant to listen to anything I told them. I still had to cooperate with them to get this change done, so I moved on.

Like I said in the previous article, you have to have the facts to build your case and that was the reason I wanted to show them how a corruption could work and why this setting was so important.

I created a demo and gave the presentation to the application engineers.
I showed them a healthy database, corrupted a database with the setting we were currently using, and we didn’t find the corruption. I did the same thing with a database with the new setting and, of course, it showed me the corruption.

The outcome was far from satisfying because they were still not convinced.

Up to this point, let’s check what went wrong:

  1. Strong arguments did not work
  2. Nobody cared about the fact that this situation could actually happen.

Strong arguments did not work

Even with the evidence in front of them I couldn’t convince my colleagues. The question they asked me in the end was: “But how many times would this actually happen?”

If I had wanted to, I could’ve just changed the setting and let nobody know. In previous testing I saw no performance loss and had no other symptoms either.

But that’s not what I wanted to do. If something went wrong for some reason, I would get the blame, and that wasn’t what I had in mind.

From this point on I should’ve gone higher up the chain of command to my manager.

Nobody cared about the fact that this situation could actually happen.

This hit me right after I did the corruption demo: they didn’t care, and as long as it all worked they were not keen on changing anything. I understand the last part, because I don’t want to change something that works, but this was different because we hadn’t done any thorough checks.

You’ll never be able to convince these people. If you get to this point and you’ve done all the work, laying out the facts (and even demos), then take your losses and go higher up.

I did everything but still they won’t listen

OK, so this situation was an extreme one. I was not able to convince my colleagues, and this was a scenario that I couldn’t have won.

Instead I went higher up the chain, because in the end my supervisor would be responsible if something went wrong. My supervisor at first reacted the same way as my colleagues. To make sure I got my point across, I went to the IT manager.
I explained the situation and showed him the possible downtime and how much it would cost if we didn’t put this setting in place.

In the end the IT manager was not pleased with the whole situation, and my colleagues and supervisor were called in to explain their reasons. That last thing came back to bite me.
That didn’t stop me, because if we didn’t put the setting in place, I would have been the one responsible for fixing everything, even though my colleagues were the ones who decided not to change anything.


Like I said, this situation was an extreme one. In normal circumstances, with people with the right attitudes, this would never happen. I’ve had other situations that were the opposite of this, where my colleagues would listen to reason.

In the end it was my responsibility to make sure the data is protected. You’ll always have people who will not agree with you about something. In most situations you’ll be able to convince them with the cold hard facts.

If that still doesn’t work you have to go higher up the chain of command. I’ve always been rigorous when it came to my work and if someone was in the way of me doing my work properly I would go around them.

That doesn’t mean you have to do that all the time. Choose your battles wisely and put your effort where it will make the biggest impact, so you can deliver the best work you can.

Work After Hours

After hours email

It’s 8 PM, the kids are in bed, and the wife and I are finally able to get some time for ourselves, until I get an e-mail from work. Do I open the e-mail after hours or will I get to it in the morning? But what if it’s important and I have to act now? These days we have to deal with blurring boundaries between work and life, which has an impact on the so-called work/life balance.

I recently read an article about France introducing a new law under which French workers are no longer obliged to respond to work-related e-mails or phone calls after hours.

What’s not really clear is whether this bill limits the communication to e-mail and phone calls only, or whether it also includes messaging apps. These days we also use apps like Slack, HipChat and WhatsApp to communicate with colleagues, which could be a loophole.

The thing that comes up when I read this is that in most situations it will not work, because you’re removing all the flexibility. The other thing is that most companies evaluate employees based on their availability and flexibility.

There is also a side note that employers are allowed to make different arrangements with employees.
Employers will probably adjust contracts from this point on so that, if you’re in some sort of position where availability is important, you’re obliged to answer, which will render the law useless in a lot of situations.

I never had a problem responding to work-related e-mails after hours, because I think in my profession as a DBA it’s part of the job. Usually you’re the lone DBA (or part of a small team), and when shit hits the fan there’s nobody else able to fix it. That’s not only the case in the field of the DBA but across most of the colors of the IT rainbow.

In my opinion you’re responsible for setting your own boundaries, to make sure your work/life balance doesn’t get distorted in such a way that it affects your personal life in a bad way.

One thing I’ve always done, when I was in a situation where I had to be available after hours, is carry a separate phone. If I’m on vacation and I’m not able to respond, or don’t want to respond, I leave that phone at home. I would give my personal phone number to a very limited group of people who could try to contact me only in a really bad situation.
This has always worked out for me and fortunately I’ve had employers that respected that arrangement.

Still, France passed this law on January 1, and the thought behind it is admirable, but whether it will ever work is something we’ll see in the future.



No One Listens To Me! Why? Part 1 – Management


What if you’re in a situation where you see that something needs fixing, but no one listens to what you have to say?

It seems unreal but I’ve been in situations where I had the hardest time convincing managers, colleagues and even third parties that there was something seriously wrong.

I’m going to share my experience with each of the three parties and how I made sure I was always in the clear when something went wrong.

This is not an article about how to spin the wheel of blame, because that’s never a healthy situation. This article will give you tips and tricks on how to get things done and how to avoid becoming the victim of a situation where you’re to blame.

These are actual situations I have been in, so there is no fiction in the stories that follow.


I started fresh at a new company as a DBA, and I was the only DBA, a.k.a. the Lone DBA. The former DBA had left (I never got to know the reason) and I had little to no documentation.

The first things I did were check the backups and collect information from all the instances.
I noticed that all the backups were full backups. The backups ran for hours, and there was a lot of contention because other processes wanted to process their data.
It seemed the databases were all in simple recovery mode. Normally that’s not a good sign, but there could be a good explanation.

I sent an e-mail to my colleagues in the IT department asking if anyone knew why it was set up this way. Of course nobody knew, and it was a dead end. No documentation was there, and I was already happy that the backups had run for the last few weeks.
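Checking this for yourself only takes a couple of queries. The database name and backup paths below are placeholders, and keep in mind that a full backup has to follow the switch to full recovery before the log backup chain starts:

```sql
-- Check the recovery model of every database
SELECT name, recovery_model_desc
FROM sys.databases;

-- Switch a database to full recovery to enable point-in-time restores
ALTER DATABASE [YourDatabase] SET RECOVERY FULL;

-- A full backup starts the log chain; after that, schedule frequent log backups
BACKUP DATABASE [YourDatabase] TO DISK = N'D:\Backup\YourDatabase.bak';
BACKUP LOG [YourDatabase] TO DISK = N'D:\Backup\YourDatabase.trn';
```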

I was the only DBA and therefore responsible for the situation. Unfortunately, because of the cost of disk space and other processes, I wasn’t able to implement my solution without the consent of my manager, who was also the change manager at the time.

I went to my manager and we had the following conversation:

Me: “Are you aware of the fact that all our databases are in simple recovery mode, and that we make full backups of all our databases which take a considerable amount of time and make other processes run longer than needed?”

Manager: “Yes! I know why we did that. The transaction log backups were too difficult to recover so we make full backups all the time. It’s easier right?!”

Me: “You’re also aware that due to the fact that we run in simple recovery mode we have no way of recovering to a point in time and in the case of data loss can only return to the full backup?”

Manager: “Yes I’m aware of that but it doesn’t matter because we have calculated that it doesn’t matter if we lose 24 hours of data because we’ll just redo all the work that’s lost.”

Me: “That’s ridiculous because it’s not that difficult to implement a backup strategy that could avoid that situation. Why wouldn’t we want to do that?”

Manager: “Because we don’t need it and why put in the extra effort if nobody in the company cares.”

Me: “We’re clearly not on the same page and I think you underestimate the situation.”

I stopped the meeting and walked out of the room to think of a plan to make sure this wouldn’t get back to me when all hell broke loose.

So let’s evaluate what went wrong:

  1. Nobody in the company knows why things were set up the way they are.
  2. The company has no idea what the impact of a data disaster could be.
  3. Nobody cared about the fact that this situation could actually happen.
  4. I couldn’t convince my manager at that moment with good arguments.

First of all, let’s be blunt: if shit hits the fan, you as the DBA are ALWAYS, read it again, ALWAYS, responsible for recovering the data, even if you’re not responsible for the situation at hand. Management will not care that you mentioned all this months ago; they want everything fixed yesterday. It could even backfire (I have been in that situation) because you could be blamed for not taking responsibility.

So how can we act on the points in the evaluation?

Nobody in the company knows why things were set up the way they are

Start documenting the backups, the schedules, the databases and the servers. If you don’t have something like that already, read my article on how to document all your servers in minutes.
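As a minimal sketch of such an inventory, dbatools can dump the basics of every database to a CSV. The instance names and file path below are placeholders, and the selected properties are just a starting point:

```powershell
# Inventory all databases on a list of instances with dbatools
# and store the result next to the rest of your documentation.
$instances = 'SQL01', 'SQL02'   # placeholder instance names

Get-DbaDatabase -SqlInstance $instances |
    Select-Object SqlInstance, Name, RecoveryModel,
                  LastFullBackup, LastLogBackup, SizeMB |
    Export-Csv -Path 'C:\Docs\database-inventory.csv' -NoTypeInformation
```

Run it on a schedule and you have a history of your estate instead of a one-off snapshot.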

Also document the architecture of the different applications and their relationships with each other: which interfaces are running between the systems, and so on.
Are there any financial applications that rely on interfaces to the main database, for instance? What processes run during the day that could be impacted?

Make a diagram of the connected applications/processes that are dependent on the database(s). Most people understand things better when they’re presented visually.

Try to make sense of the current situation and make sure you have everything in writing. If it’s not documented you can’t prove that something is wrong.

I know this all sounds like a lot of work, but if nobody knows, you should. In the end this will save you loads of time and let you become the person that took the responsibility.

The company has no idea what the impact of a data disaster would be

Here is where documentation is important to get the facts straight. I’ve seen a lot of companies underestimate situations where there was a real problem. Like I said before, if nobody knows, make sure you do.

If there is a disaster recovery plan, check whether it’s still up to date and, if not, make it so. Based on that information, try to estimate how long it would take to recover all the dependent databases/processes when the main database is down.

Make sure you know how long it takes to get everything back in order, and make sure you have a procedure ready for when disaster does happen. This not only shows you’re proactive in your work, but also that you can act when needed.

And one thing you should always do is test your DR plan. Your plan is worthless if it doesn’t work. Test it periodically to see if it’s still up-to-date.
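The dbatools command Test-DbaLastBackup automates a big part of this: it restores the last backup of each database onto a test instance, runs DBCC CHECKDB against the copy and cleans up afterwards. A minimal periodic test could look like this (instance names and path are placeholders):

```powershell
# Restore the last backup of every database onto a test instance,
# verify it with DBCC CHECKDB and drop the test copy afterwards.
Test-DbaLastBackup -SqlInstance 'SQL01' -Destination 'SQLTEST01' |
    Export-Csv -Path 'C:\Docs\backup-test-results.csv' -NoTypeInformation
```

Schedule it and keep the results; a backup you have never restored is only a hope, not a plan.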

Nobody cared about the fact that situation could actually happen

One thing I would do is manage expectations. I want everyone in the company to be on the same page about what happens in the case of an emergency.

The manager in this situation thought the loss of one day of data was acceptable to the other departments. But those decisions were made years ago; the entire landscape had changed since, and the DR plan hadn’t.

I asked the managers of several departments the same questions I had asked my manager, and their reactions were rather different. Several managers explained that they would be in a lot of trouble if the application were offline for even half a day, and others for even several hours.

Because this was not going to be a healthy situation, I called a meeting with all the managers. In this meeting I explained the situation using the documentation (like the diagrams) and came up with a plan to get the DR plan up-to-date.

I did everything but still they won’t listen

If you did everything to convince people and they still don’t want to set things up the way you would like, either because of costs or for other reasons, make sure you protect yourself.

Make sure all the decisions that were made are in writing, both the good ones and the less good ones. I would send an e-mail to my manager with the decisions and explain the consequences. After that, I would ask my manager to acknowledge the e-mail.

You don’t want a decision that was outside of your control to come back to haunt you. I’ve been in such a situation and you don’t want to end up there.


It all starts with taking responsibility. If you don’t take responsibility for the data as a DBA, I suggest you go look for another kind of job. After that, it’s important to get the facts straight. You can’t build a solution based on assumptions. One of the quotes I use: to “assume” makes an “ass” out of “u” and “me”.

It’s very important to feel comfortable in your work environment and you should do everything to make sure you go to work with a good feeling. You spend more time at work than you would spend at home (remote workers excluded 😉 ).