Log Shipping With dbatools – Part 4: Recover a Log Shipped Database


Reading Time: 3 minutes

In the third part of the series we discussed how to retrieve log shipping errors with the command "Get-DbaLogShippingError". This blog will be about how to recover a log shipped database.

Out of the box solutions

I can be very short in this paragraph: there is no out-of-the-box solution to recover a log shipped database.

Why this command

Initially log shipping was meant to be used as a warm standby. You have your data on another instance, but some human intervention is still needed to get it all back up.

Imagine the following situation. You have set up log shipping using either the GUI or the commands in dbatools. You have about 15 databases and everything is working fine.

Until one day the primary instance goes down and is not recoverable. For production to continue, you have to bring the log shipped databases online fast.

You have to figure out what the last transaction log backup was, check whether it was copied to the secondary instance and whether it has been restored.

Doing this by running a couple of queries, copying the files if needed and running the log shipping jobs takes time. I'd rather run a single command to recover one or more databases and get back to the problem of the primary instance.

Invoke-DbaLogShippingRecovery

The Invoke-DbaLogShippingRecovery command executes the following steps:

  1. Check the SQL Server Agent status and start the agent if it is not running.
  2. Retrieve the latest transaction log backup and copy it to the secondary instance if it is not present, waiting and checking the log shipping status until the backup file has been copied.
  3. Retrieve the last restored transaction log backup and execute the restore process to bring the database up to date.
  4. Disable the jobs after the copy and restore process.
  5. After all these actions, restore the database to a normal (online) state.

To execute the command:
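The snippet itself is not preserved in this archive, so here is a minimal sketch of the call; the instance and database names are placeholders:

    # Recover the log shipped database DB1 on the secondary instance SQL2
    Invoke-DbaLogShippingRecovery -SqlInstance SQL2 -Database DB1 -Verbose

The -Verbose switch is optional, but it shows the copy and restore progress while the command runs.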

The result of the command

Recover Log Shipping Result Command

The image below shows the database in a normal state after the command in the SQL Server Management Studio.

Recover Log Shipping Result GUI

The result of the jobs being disabled

Recover Log Shipping Result Jobs

More options

In my example I showed how to recover a single database, but the -Database parameter also accepts multiple databases.

Besides specifying individual databases, you can also let the command recover all the log shipped databases.

In some cases you may want to run the copy and restore steps but not bring the database back to a normal state. The sketch below shows these variations.
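A hedged sketch of these variations; both the -NoRecovery switch and the idea that omitting -Database (combined with -Force) targets every log shipped database are assumptions worth verifying with Get-Help:

    # Recover several databases in one call
    Invoke-DbaLogShippingRecovery -SqlInstance SQL2 -Database DB1, DB2, DB3

    # Recover every log shipped database on the instance (assumption: no -Database filter, -Force added)
    Invoke-DbaLogShippingRecovery -SqlInstance SQL2 -Force

    # Run the copy and restore steps but leave the database in a restoring state
    Invoke-DbaLogShippingRecovery -SqlInstance SQL2 -Database DB1 -NoRecovery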

 

This concludes the command "Invoke-DbaLogShippingRecovery". This was the final post in this series. If you want to look back at the other commands, follow the links below:

  1. Setup Log Shipping
  2. Test Log Shipping Status
  3. Get Log Shipping Errors

 

Log Shipping With dbatools – Part 3: Get Log Shipping Errors


Reading Time: 3 minutes

In the second part of the series we discussed how to test the log shipping status with the command "Test-DbaLogShippingStatus". This blog will be about how to get the log shipping errors so you can analyze why your log shipping isn't working.

Out of the box solutions

There are several options to get the log shipping errors right out-of-the-box. One of them is using queries and the other one is the event viewer.

Queries

There are queries that can help you get the log shipping errors. One that is likely to be used queries the log_shipping_monitor_error_detail table in msdb.
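The query itself is not shown here; a minimal sketch, assuming the msdb table dbo.log_shipping_monitor_error_detail and run through dbatools (Invoke-DbaQuery) instead of SSMS, could look like this:

    # Return the most recent log shipping errors recorded on this instance
    Invoke-DbaQuery -SqlInstance SQL2 -Database msdb -Query "
        SELECT database_name, log_time, message
        FROM dbo.log_shipping_monitor_error_detail
        ORDER BY log_time DESC;"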

Although this is a good way to get the log shipping errors, unless I'm using CMS or a monitoring server I have no way to find errors on multiple instances with log shipped databases.
Furthermore, I want to keep all my tools together and make it easy to solve any errors.

Event Viewer

The other way to see if there are any errors in the log shipping is by using the event viewer.

Get Log Shipping Errors Event Viewer

Personally I don't like using the event viewer, because there could be other errors I would have to filter through before getting to the root cause. It also doesn't give me a complete view, because it doesn't register any errors that occurred during the restore of the backups.

Why this command

When there is something wrong with the log shipping I want to know about it as soon as possible, and the options above don't give me the flexibility to do that.
For instance, I cannot check multiple instances at once unless I'm using CMS or a monitoring server.

To make the tool set complete, this command was needed so a user can get an overview of the log shipping errors.

Get-DbaLogShippingError

By default the command returns every error that has ever occurred. It collects all the information and returns it to the user in a table structure.

To execute the command:
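A minimal sketch of the call, with a placeholder instance name:

    # Return every log shipping error recorded on the instance
    Get-DbaLogShippingError -SqlInstance SQL2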

The result is an overview of the errors.

Get Log Shipping Errors No Filter

In my play area I made sure I would have a lot of errors to test the command. It turned out I had more than 3100 errors!

Get Log Shipping Errors No Filter Count

This brings us to one of the reasons I created this command: filtering the results.

I may only want to see the errors that occurred in the last hour.
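A minimal sketch of that filter, with a placeholder instance name:

    # Only show the errors of the last hour
    Get-DbaLogShippingError -SqlInstance SQL2 -DateTimeFrom (Get-Date).AddHours(-1)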

The result is still quite a big list, but more manageable.

Get Log Shipping Errors Filter Date From

In this example we're only using the "-DateTimeFrom" parameter, but we can also use "-DateTimeTo" to filter between certain periods.

More filtering

Of course there are other filters that make it easier to zoom in on a specific problem.

It’s possible to filter on the databases:
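A minimal sketch, with placeholder database names:

    # Only return errors for specific databases
    Get-DbaLogShippingError -SqlInstance SQL2 -Database DB1, DB2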

Get Log Shipping Errors Filter Database

To filter on the type of instance you can use the "-Primary" and "-Secondary" parameters.

To filter on the specific actions use the “-Action” parameter:
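Hedged sketches of both filters; the exact values accepted by -Action (here assumed to be the backup, copy and restore steps) are worth checking with Get-Help:

    # Only look at the primary or the secondary side of the log shipping
    Get-DbaLogShippingError -SqlInstance SQL1 -Primary
    Get-DbaLogShippingError -SqlInstance SQL2 -Secondary

    # Zoom in on a specific action
    Get-DbaLogShippingError -SqlInstance SQL2 -Action Restore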

 

This concludes the command "Get-DbaLogShippingError". The next post will be about the command "Invoke-DbaLogShippingRecovery".

Log Shipping With dbatools – Part 2: Test Log Shipping Status


Reading Time: 3 minutes

In the first part of the series I described the command “Invoke-DbaLogShipping”. This makes it possible to set up log shipping. This blog post will be about the command to test log shipping status.

Before I describe the command, I want to discuss the options that are available in the SQL Server Management Studio (SSMS) and SQL Server.

Out of the box monitoring

Microsoft has made it possible to check the log shipping using SSMS. This can be done using queries or by using the “Transaction Log Shipping Status” report.

Queries

To return the log shipping status using T-SQL, execute the following code in a query window in SSMS:
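The original query is not preserved in this archive. A likely candidate, and the same source the SSMS report uses, is the msdb procedure sp_help_log_shipping_monitor; paste the T-SQL into a query window, or run it through dbatools as sketched here:

    # Status overview of all log shipped databases on the instance
    Invoke-DbaQuery -SqlInstance SQL2 -Database msdb -Query "EXEC dbo.sp_help_log_shipping_monitor;"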

The result is as follows:

Log Shipping Status Query Result

This is a good method to find possible issues, but you need to check multiple columns to find out if something is wrong.

Transaction Log Shipping Status Report

By default SSMS has a report that shows the status of the log shipping. To open the report, right-click on the server, go to "Reports", then "Standard Reports", and click on "Transaction Log Shipping Status".

Log Shipping Status Report

The result is an overview of each database with the status of that process:

Log Shipping Status Report Result

The report shows red colored text if something is wrong, which is a great way to find problems. Still, we need to go into SSMS and click around before we get to this information.

Why this command

Monitoring your log shipping processes is important. You need to know the synchronization status of the log shipped databases.

The log shipping process consists of three steps: backup, copy and restore. Log shipping tracks the status of these processes.
It registers the last transaction log backup, the last file copied and the last file restored. It also keeps track of the time since the last backup, copy and restore.

But that's not all. Log shipping also checks whether the thresholds for the backup and restore have been exceeded.

During the log shipping setup, thresholds are set for the backup and restore process.
The default thresholds are 60 minutes for the backup and 45 minutes for the restore; there is no threshold for the copy process.
If, for example, the time since the last backup exceeds the backup threshold, an alert will be triggered.

That's a lot of information to keep track of, and that's why this command was developed. It enables the user to get a complete overview of the log shipping status without having to know all the details.

Test-DbaLogShippingStatus

The command returns a lot of information by default. It collects all the information and, based on that, returns a status.

To execute the command:
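A minimal sketch of the call, with a placeholder instance name:

    # Check the log shipping status of every log shipped database on the instance
    Test-DbaLogShippingStatus -SqlInstance SQL2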

The result is a detailed overview of each database including the status.

Log Shipping Status Command Detailed

This looks a lot like the results you get from the queries or the report we talked about earlier. All this information can be a little overwhelming and it's not always needed.
If you don't need all that detail, there is an option to format and filter the output.

It will only show you the core information needed to know what the status of the log shipping is.
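A hedged sketch, assuming the -Simple switch is the parameter that trims the output down to those core columns:

    # Return only the instance, database, instance type and status
    Test-DbaLogShippingStatus -SqlInstance SQL2 -Simple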

The result of the command:

Log Shipping Status Command Simple

As you can see there is a lot less information. It only shows the instance, database, instance type and the status. In most cases that's all you need.

More filtering

The command also allows you to filter on just the primary or just the secondary databases.

There is a way to filter on specific databases using the “-Database” parameter.

Of course there is also an option to exclude certain databases by using the “-ExcludeDatabase” parameter.
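Hedged sketches of these filters; the -Primary and -Secondary switch names are assumptions based on the description above:

    # Only the primary or only the secondary databases
    Test-DbaLogShippingStatus -SqlInstance SQL1 -Primary
    Test-DbaLogShippingStatus -SqlInstance SQL2 -Secondary

    # Limit the check to specific databases, or exclude some
    Test-DbaLogShippingStatus -SqlInstance SQL2 -Database DB1, DB2
    Test-DbaLogShippingStatus -SqlInstance SQL2 -ExcludeDatabase DB3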

 

This concludes the command “Test-DbaLogShippingStatus”. The next post will be about the command “Get-DbaLogShippingError“.

Log Shipping With dbatools – Part 1: Setup Log Shipping


Reading Time: 7 minutes

This post is the first in a series of four describing the different commands. We'll discuss the commands to set up log shipping with dbatools, check the status, check for errors and recover in case of an emergency.

What is log shipping

Before we go into the why I want you to know what log shipping is.

Log shipping dates back to the early versions of SQL Server. The first article I could find dates back to the year 2000 and explains how to set up log shipping with SQL Server version 7.

Microsoft has done a lot of work on the log shipping setup process and we no longer have to set up the "poor man's log shipping". The setup contains a nice GUI and makes it easier to set up log shipping for your database(s).

Log shipping is the process of creating a transaction log backup, copying that backup to a secondary server and restoring the log there to keep the databases synchronized. The image below shows the different elements of a log shipping architecture.

Log Shipping

https://docs.microsoft.com/en-us/sql/database-engine/log-shipping/media/ls-typical-configuration.gif

I'm not going to explain in detail what log shipping is or how to set it up using the traditional method. There are lots of websites that have already done a great job describing everything related to log shipping. Here are a couple of websites that could help:

Why create a command set

Like I said before, Microsoft has done a great job of automating the log shipping process and making it easier for the user to set up. The GUI describes all the settings needed. Even someone inexperienced could set up log shipping in a short amount of time (given that all the requirements have been met).

Although Microsoft has simplified the process since the earlier versions of SQL Server, setting up log shipping still takes a lot of time. You have to set up each database individually, and most of the time you are doing almost the same thing.
There are ways to automate this using dynamic T-SQL. However, I found that, at least in my situation, it was prone to errors and it took me more time to troubleshoot than to set it up manually.

Why we needed to setup log shipping

A couple of months ago my company wanted to create a high availability solution for our production databases.
Researching solutions like clustering and Availability Groups, we looked at architecture decisions, licensing and costs. The outcome was that we weren't able to use any of the SQL Server Enterprise HA solutions.
Log shipping is not a so-called hot standby that immediately recovers in case a service or database goes down. Instead it is a warm standby, where human intervention is needed to recover.

We have about 20 production databases and, looking at the database sizes and the amount of transactions, in my opinion we are a small shop.
We decided to go with log shipping. It's well established within SQL Server and, unlike mirroring, not deprecated.

So on a Monday morning, after all the backups were done, I started to configure our databases for log shipping. For me the process of setting up log shipping was very tedious. I basically had to go through each of the databases and use the same settings over and over. The biggest difference would be the database name in the backup or copy directory.

The reason for the command set

In the end it took me about 3 to 4 days to set it up. This was mostly because I didn’t want to stress the network too much by setting up multiple databases at once. At some point I made some mistakes and had to start over again. I really don’t like to do repetitive tasks and in my daily work I automate most of those processes.

To set up log shipping for a single database in the GUI you need about 40 clicks and have to insert lots of information in different windows.

That's when I contacted Chrissy LeMaire ( b | t ) from the dbatools project and asked if it would be interesting to create log shipping commands for the PowerShell module. She was really enthusiastic about it, and I started to develop the commands.

The command set

Initially it started with the command "Invoke-DbaLogShipping", but soon after a couple of other commands followed. The command set contains the following commands:

This blog is the first of the four posts and dives into "Invoke-DbaLogShipping", because we first need to have log shipping set up before we can use the other commands.

The Command Invoke-DbaLogShipping

During the development of the command I chose to make the process for setting up the log shipping as easy as possible for the user. My main requirement for the command was that as long as you have the minimal requirements for log shipping you would be able to set it up.

The main command is “Invoke-DbaLogShipping”.

Within log shipping there are a lot of decisions that can be made, and all those decisions ended up as parameters. The command has over 90 different parameters, ranging from setting the backup folder to setting the name of the schedules.

I will explain a couple of the most important ones that I think will be used the most:

General

  • Force: When this parameter is used, lots of assumptions will be made and you only need a minimum of settings to set up log shipping.

Server

  • SourceSqlInstance: The primary instance
  • DestinationSqlInstance: The secondary instance

Database

  • Database: Database (or databases) that need to be log shipped

Backup

  • BackupNetworkPath: Network path where the backups will be saved. This is the parent folder; a child folder will be created in it for each database.
  • BackupLocalPath: The local path for the backups. Only used by the primary instance and not mandatory.
  • BackupRetention: How long to keep the backups. The default is 3 days.
  • BackupScheduleStartDate: Start date of the backup schedule.
  • BackupScheduleStartTime: Start time of the backup schedule. Maybe the backups should only take place during a certain period of the day.
  • BackupScheduleEndTime: End time of the backup schedule. Maybe the backups should only take place during a certain period of the day.
  • CompressBackup: Whether you want the backup to be compressed. By default it looks at the SQL Server edition and server settings.
  • FullBackupPath: Path to an existing full backup. Only used when log shipping is being set up.
  • GenerateFullBackup: Instead of using an existing full backup, you can use this parameter to create a full backup on the fly during setup.
  • UseExistingFullBackup: Besides FullBackupPath and GenerateFullBackup, you can also let the command retrieve the last full backup made for the database.

Copy

  • CopyDestinationFolder: Where the backups need to be copied to.

Restore

  • DisconnectUsers: Important setting if your secondary database is in read-only mode. Users will be disconnected when the restore process starts if this parameter is set.
  • Standby: Whether you want your databases to be in standby mode. By default the database will be in no-recovery mode.
  • StandbyDirectory: Directory where the TUF (transaction undo file) for a database needs to go. This is needed when using the -Standby parameter.

Setup log shipping using Invoke-DbaLogShipping

Before we look into the command we need to go through the prerequisites of log shipping. The items below are the bare minimum to set up log shipping.

Prerequisites

  • At least SQL Server 2000 standard edition
  • The primary database needs to be in Full recovery mode.
  • A network share/drive that’s accessible for the primary and secondary server

A more sophisticated setup would be:

  • A primary server containing the primary databases
  • A secondary server containing the secondary databases
  • A monitoring server logging all the information
  • At least SQL Server 2000 standard edition
  • The primary database needs to be in Full recovery mode.
  • A separate network share for the backup process
  • A separate network share for the copy process

Technically you don't need multiple servers to set up log shipping. You can set it up with just one single SQL Server instance. In an HA solution this wouldn't make sense, but technically it's possible.

Having a separate server acting as the monitoring server ensures that when one of the servers goes down, the logging of the actions still takes place.

Having a separate network share for both the backup and the copy makes it easier to set up security and decide which accounts can access the backups. The backup share needs to be readable and writable by the primary instance and readable by the secondary instance.
The copy share only needs to be accessible and writable for the secondary instance.

Additional information

  • Backup Compression
  • Database Status
  • Schedules
  • Cleanup

Backup Compression

Backup compression was introduced in SQL Server 2008 Enterprise. Beginning in SQL Server 2008 R2, backup compression is supported by SQL Server 2008 R2 Standard and all higher editions. Every edition of SQL Server 2008 and later can restore a compressed backup.

Backup Compression in SQL Server

Database Status

You can decide whether you want the secondary database to be in a restoring (no-recovery) or read-only (standby) state. Read-only gives you the ability to let other processes, like SSRS or ETL, read production data without interfering with the actual production instance.

Schedules

When to run the backup of the transaction log. Does it need to run every 15 minutes, which is the default, every hour, once a day, etc.? The same goes for the copy and restore schedules.

Cleanup

How long do you need to hold on to the backups? The default is three days, which in some cases can be quite long.

Setup

This is where it's going to be fun. As an example I'll be using the command with the minimum of parameters. To do this, a couple of requirements need to be met. Let's go through these prerequisites:

  1. Set up the shares for the backup and copy destination.
  2. Set the privileges on the backup and copy shares for the primary and secondary instance.
  3. Decide which database(s) need to be log shipped.
  4. Decide whether the secondary database should be in read-only or no-recovery mode.
  5. Decide whether or not to compress the backup.
  6. Decide whether to generate a full backup or use an existing one.

The command below will log ship a database from instance SQL1 to SQL2. The backups will be compressed, and to initialize the secondary database a new full backup is generated.
No other settings are made; to make sure all the requirements are met, the -Force parameter is used to enable all the assumptions.
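The snippet is not preserved here; a minimal sketch of such a call, where the backup share path is a placeholder:

    # Log ship database DB1 from SQL1 to SQL2, initialized with a freshly generated, compressed full backup
    $params = @{
        SourceSqlInstance      = 'SQL1'
        DestinationSqlInstance = 'SQL2'
        Database               = 'DB1'
        BackupNetworkPath      = '\\SQL1\logshipping\backup'   # placeholder share
        GenerateFullBackup     = $true
        CompressBackup         = $true
        Force                  = $true
    }
    Invoke-DbaLogShipping @params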

The result:

logshipping example force

This is only a fraction of the parameters available. The remaining parameters make it possible to configure your log shipping exactly the way you want it to be.

Execute the following command to get more information:
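For example, the built-in help lists every parameter with its description and examples:

    # Show the detailed help, including all parameters and examples
    Get-Help Invoke-DbaLogShipping -Detailed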

The next blog will be about the command “Test-DbaLogShippingStatus“. It’s responsible for testing if all of your log shipped databases are in a synchronized state.

T-SQL Tuesday #98 – Your Technical Challenges Conquered


Reading Time: 6 minutes

T-SQL Tuesday is back around again and this time it’s the first one of 2018!

This wonderful idea by Adam Machanic ( Blog | Twitter ), is this time hosted by Arun Sirpal ( Blog | Twitter ).

 

 

The topic Arun has chosen is:

Write about and share with the world a time when you faced a technical challenge that you overcame and you can go really technical with both the issue and solution if you like.

Over the years I have faced lots of technical challenges, and so have lots of other DBAs. But the one that still sticks with me the most is the story of the failing cluster setup.

It was back in 2008/2009 and I was a consultant at a hospital, hired to set up a failover cluster with SQL Server 2005. If you're wondering why SQL Server 2005: SQL Server 2008 had just been released, and the medical industry is not that keen on being an early adopter when it comes to technology.

My task was to set up a failover cluster on a couple of HP servers. They were already prepared by the system administrators and installed with Windows Server. Seemed simple enough; I had done this many times before and it shouldn't give me any trouble. Oh boy, was I wrong about that!

Having set up all the different parts of Windows Server, I started the installation of SQL Server 2005 on the first node. Everything went smoothly and the installation was successful.

Then came the installation of the second node. I started the installation to add the node to the cluster; it got to about 90% and stopped. It gave me a general error that something in the installation went wrong and that I should check the installation logs to find out what.

I checked the log files and didn't find anything special that could explain why the installation failed. Let's go through all the settings and try again. Did I enter the wrong credential or miss some other setting?
After starting the installer, it again failed at the 90% mark. The installer rolled back the entire installation and I was not sure what had happened.

Let's check the installation logs again; hopefully this time they show me what went wrong. Unfortunately they didn't show me anything.

I wasn't going to give up. I tried different variations of the installation settings to see if I had made some sort of mistake.
I checked the hardware for defects, replaced memory and swapped the CPUs of the machines.

Nothing there could point out why the setup failed.
At this point it was a good idea to get a higher power involved, and we created a ticket with Microsoft support. It got escalated a few levels, and even they weren't able to figure out why the installation failed. Their answer was: "Just use SQL Server 2008, that should work".

Besides the fact that this was not the answer I was looking for, it was not possible for me to install SQL Server 2008 because it was not yet proven technology.

So as the consultant I was kind of in a pickle. How was I going to get the job done? My reputation was on the line. I was supposed to be the one with the knowledge, and there I was, struggling for over a week to get a cluster set up.

At this point I had to start thinking outside the box, so I looked at the hardware again. The hardware was all approved and supported by Microsoft. We didn't need any third-party drivers or anything like that.

Looking at the hardware, besides the motherboard, power supply and some other parts, we were only able to control the CPU, memory and hard disks. The servers were identical, with the following configuration:

  • CPU: 4 x 6 cores
  • Memory: 64 GB
  • Hard disks:
    System disk: 40 GB
    Application disk: 30 GB
    Data disk: 300 GB
    Log disk: 100 GB

I first examined the disks. There was nothing special about them; the system and application disks were the only local disks, and the data and log disks were LUNs.
The memory was nothing special either and was set up as 4 x 16 GB modules.
The CPU was something I had not seen before, and what caught my attention was the 6-core specification. In total we had 24 cores, and for some reason that number seemed odd to me.

Let me explain why 24 is an odd number to me. When you're used to working with computers, you are used to working with powers of 2: 1, 2, 4, 8, 16, 32, 64 and so on, but not 24. That's right between 16 and 32.

It's possible to make Windows believe there is a different number of processors in your server. You can change this setting in the MSCONFIG utility by setting the number of processors on the BOOT.INI tab.

At this point I was open to any new idea, so I changed the number of processors to 2. The number 2 is a power of 2, so let's see what happens.
I restarted the server, looked in Task Manager and saw two processors. That all worked; now let's run the setup of the SQL Server node again.

I would not have believed it if it hadn't happened to me, but for some reason the setup was successful. It could have been a fluke and I wanted to be sure, so I removed the node from the cluster, reset the number of processors to the original value and ran the setup again. This time it failed as before.

That was still not good enough for me, but I was getting closer to a conclusion. Let's see what happens if we change the number of processors to 8. I changed the number, rebooted the server and started the setup again. Again it was successful.

My conclusion was that for some reason the setup couldn't handle a number of processors that was not a power of 2. In my mind this was insane, because I had never encountered such a problem before.

Now that everything was installed I could set the number of processors back to 24 and get on with installing the updates, like Service Pack 3. I changed the number of processors, rebooted the server and, as soon as the server booted up, my cluster installation failed. It couldn't be the number of processors, right? Again there was no clear error, and because I knew what went wrong I changed the number of processors back to 8 and rebooted. This time the server and cluster came up.

I contacted Microsoft support and told them about my findings. Their reply was that indeed something odd had been fixed in SQL Server and that the fix was released in Service Pack 3.

Back with SQL Server 2005 you weren't able to slipstream your service packs. You first had to install the initial release and then apply the service pack.

The next step was to install SP3 with the 8 cores. I started the setup and of course things didn't go as expected. The installation failed and gave no clear reason why.

Again I was stuck. I had an incomplete cluster installation because we couldn't use the full amount of resources, and I couldn't install the update that was supposed to fix it.

I started to dig into the updates that would be applied in SP3 and noticed some changes to the system databases, especially the mssqlsystemresource (Resource) database.

The mssqlsystemresource database is a system database that is not visible to the user and is only used internally by SQL Server.

I ran the setup again, and this time I wanted to see what happened to the system databases during the install. I saw the master database being replaced by a new version, then the msdb database, and as soon as the installation tried to finish the update of the mssqlsystemresource database, it disappeared, the original version was placed back and the installation failed.

That's it! That was the reason the service pack was failing. With a little research and some gut feeling I saw what had happened. But why did it happen? Was the installation not able to rename the database?

To be sure, I did something drastic (don't ever do this in production!). I went into the SQL Server settings and updated the physical name of the mssqlsystemresource database. I shut down the instance and renamed the database files on disk.
I started the instance successfully, checked the name of the database and it showed the renamed files.

I started the installation and watched the system database files being changed: first the master database, then msdb and finally the mssqlsystemresource database. I saw a new mssqlsystemresource database being created next to the original one, the old one being removed, and the installation finished!

That was the problem with SP3: it was not able to do an in-place upgrade of the mssqlsystemresource database.

We were almost done. The only thing left was to change the number of processors back and reboot the system. I changed the setting, rebooted the server and everything came up successfully.

This was one hell of a journey and it gave me the grey hairs I have now. It took about two weeks of my life for something that could have been an afternoon.

This was an experience that changed my way of thinking as a DBA, and I learned a lot along the way.
The moral of the story is that there is always a solution. Dig into the material, think outside the box and do not let any challenge get the best of you.

Let's get all Posh! Log Shipping


Reading Time: 5 minutes

This month's T-SQL Tuesday is brought to us by my good friend Rob Sewell (b | t) and the subject is "Let's get all Posh – What are you going to automate today?".

My post will be about log shipping, because setting up log shipping using the GUI is a very tedious task and doing it with T-SQL scripting is no fun either. So let's see how we can do this with PowerShell.

 

 

Log shipping explained

If you've been a SQL Server DBA for a while you've probably heard about log shipping.

Log shipping makes it possible to increase a database's availability by automatically backing up the transaction log, copying the backup file and restoring the backup file to another database instance (or the same instance with a different database name).

Log shipping is an older technology but still has a purpose in lots of environments where it’s being used for reporting, ETL or to give developers and users access to production without the risk of them breaking anything.

Why use PowerShell

If you've ever had to set up log shipping in SQL Server Management Studio (SSMS), then you know how tedious it can be. I actually counted the number of clicks needed to set up a single database for log shipping, and you need about 40 of them to get the job done.

So one day we decided to create a warm standby for one of our production servers. I had set up log shipping before, and if you know what to do it's not that difficult.
It becomes a problem when you have to set up log shipping for more than 20 databases ranging from 10 GB to 800 GB in size.
I ended up spending about a week setting it up (in between other tasks; I do have more work to do), and for me that's way too much time for something as simple as this.

So why use PowerShell? Because anything I can do with SSMS can be done with SMO. This would save a lot of time for me and anyone else using the functionality.

How did I do it

I contacted Chrissy LeMaire, the creator of the dbatools PowerShell module, and she was instantly excited to have this functionality within the module.

The first thing I had to do was find out what SQL Server set up in the background when I configured log shipping through the GUI. Yeah, I know, using the GUI is bad when you use a lot of PowerShell, but sometimes I like to do it old school.

I quickly found out that SQL Server does a lot of things to set up log shipping. It creates jobs, job steps and job schedules, uses stored procedures to set values in specific tables, and so on.
I went to the dbatools module and used the Find-DbaCommand function to see if there was any functionality to create the agent objects (jobs, steps and schedules), but there wasn't, so the first step was to create a full set of functions to manage those objects.
A couple of thousand lines later I finished the agent functionality and could continue developing the log shipping function.

That function started pretty small but, like the agent functionality, quickly grew into some sort of monster with over 70 different parameters to support every kind of situation.

After a couple of hundred iterations it was finally committed to the development branch of the dbatools module to get tested. I have great respect for the people who tested that functionality, because I didn't envy them. I knew how complicated it was to build everything, and testing all of that must have been a lot of work. Thank you to all the testers who accepted the challenge.

The solution

The entire log shipping functionality is now split across five functions. Four of them are used internally and are not visible as public functions, because you can easily break stuff if they're not used correctly.

The main function, Invoke-DbaLogShipping, is available in the dbatools module for anyone to use.

If you open the GUI for log shipping you have lots of choices, but most of them already have defaults supplied in the GUI itself and you can decide whether to use them or not.
The whole idea behind the functionality was that it would allow you to quickly set up log shipping using a lot of defaults, like you can in the GUI, but if you're more experienced you can change any setting to your preferences.

It will take you about a minute to set all the parameters, and after that the only thing that will slow you down is the size of your database files when the restore runs.

Enough talk, I want to see some action!

I will demonstrate how to set up log shipping using the least amount of parameters. You actually need six parameters to make it work:

  1. SourceSqlInstance – The instance that needs to be used as the source (primary) server
  2. DestinationSqlInstance – The instance the database needs to be log shipped to
  3. BackupNetworkPath – Where the backups need to be placed
  4. Database – The database, or databases, that need to be log shipped
  5. GenerateFullBackup or UseExistingFullBackup – Generate a full backup or use an existing one

The command below will log ship a database from server1 to server2.
It will log ship one database called db1.
The network path for the backups is \\server1\logshipping\backup.
The backups will be created with backup compression.
No new full backup will be created; instead it will use the last existing full backup.
In this example the suffix for the secondary database is "LS", because I want to differentiate the log shipped databases.
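A hedged sketch of that call; the name of the suffix parameter (-SecondaryDatabaseSuffix) is an assumption worth verifying with Get-Help:

    # Log ship db1 from server1 to server2, reusing the last existing full backup
    $params = @{
        SourceSqlInstance       = 'server1'
        DestinationSqlInstance  = 'server2'
        Database                = 'db1'
        BackupNetworkPath       = '\\server1\logshipping\backup'
        CompressBackup          = $true
        UseExistingFullBackup   = $true
        SecondaryDatabaseSuffix = 'LS'   # assumed parameter name for the database suffix
    }
    Invoke-DbaLogShipping @params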

Below is the result of the log shipping.

What else can be done with this function?

When I developed this function it went through a lot of iterations. I'm a developer who has a good idea of what to build and, during the process, thinks of other functionality that needs to go in.
I had a lot of iterations where I constantly thought of new ideas that would make other people's work easier.

A couple of examples:

  • To monitor or not to monitor log shipping
  • Enable or disable the log shipping jobs
  • Naming convention of the jobs
  • Standby or read-only mode for the secondary database
  • To compress or not to compress the backup files
  • Generate a full backup for initialization
  • Log ship multiple databases in one command (this is executed serially)
  • Force every setting and discard anything that’s there

The last one needs a little more explanation. I want this function to be really easy to use and to work even when there are existing settings. The -Force parameter is a really powerful parameter that assumes a lot of defaults, but it will also overwrite existing settings like jobs, schedules, database settings, etc.

Conclusion

I will never use the GUI to set up log shipping again, and I hope you won't either. It's just way too much work, and I've had it fail on me lots of times without knowing what went wrong, having to set everything up from scratch.

There is more functionality coming for the log shipping functions, like status checks and failover initialization.

If you find that something is missing, works inefficiently or doesn't work at all (I hope not), please let us know. You can find us on Slack, GitHub, Twitter, YouTube and LinkedIn. I'm on the Slack channel most of the time, so if you have any questions you know where to find me.

The last thing I want to say is that dbatools is a community-driven module that enables database administrators to work more efficiently. It's because of the hard work of all the volunteers that this module is such a success.
It's a group of very hard-working and smart people who I have great respect for. It is because of the enthusiasm of all those people, and especially Chrissy, that I love working on the module.

Thank you again Rob for hosting this month's T-SQL Tuesday, it was really fun.

Testing Log Shipping with PowerShell and Pester

Standard

Reading Time: 3 minutes

Thanks to my good friend Rob Sewell (b | t) I got into testing PowerShell with Pester.
He showed me how to use the Pester module to test just about anything you can reach with PowerShell.

In my environment I have lots of databases being log shipped. To see the current status of the log shipping I either have to execute a procedure or run a standard report from SSMS. I know there are other solutions, but in most cases it comes back to these two.

The procedure returns all the databases, the primary and the secondary, with the values for last backup time, copy time, restore time and the thresholds. This doesn't show me the problems right away, but it's fast.
The report shows me in nice green and red colors what's good and what's not, but I'd have to run it for every server manually. So it's pretty, but it's slow.

Why isn't there anything that has the best of both worlds, where I get the speed of the query with the clear results of the report?

That’s where Pester comes in!

I can use Pester to test the values that come back from the procedure and do basically the same thing the report does.

Besides the values for the last backup, copy and restore times, the procedure also returns an overall value called "status", which shows a 0 when everything is fine and a 1 when something is wrong.

I wanted to keep the check as simple as possible. If everything is fine, I don't need to go into much detail. If there is something wrong, I want to know which part of the log shipping failed: the backup, the copy or the restore.

The Overall Test

This test only looks at the marker log shipping sets to indicate whether everything is OK or not.
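The test script itself can be downloaded from GitHub (see the end of this post); here is a hedged sketch of what such an overall test could look like, using Pester 4 syntax, msdb's sp_help_log_shipping_monitor standing in for the procedure mentioned above, and placeholder instance names:

    Describe "Log shipping overall status on SQL2" {
        # One row per log shipped database, including the overall status flag (0 = fine, 1 = problem)
        $results = Invoke-DbaQuery -SqlInstance SQL2 -Database msdb -Query "EXEC dbo.sp_help_log_shipping_monitor;"

        foreach ($result in $results) {
            It "Database $($result.database_name) should have status 0" {
                $result.status | Should -Be 0
            }
        }
    }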

Log Shipping Pester Overall

Uh oh! Something failed. Good thing we have our test.

The Detailed Test

The detailed test goes into the detail of the last backup, last copy and last restore time. We want to know what went wrong and where.

In this case I only want to zoom in on the database with the failed test, so I use the -Database parameter to give the name of the database.
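A hedged sketch of such a detailed test; the $Database variable and the threshold comparisons are illustrative, not the actual script:

    $Database = 'DB1'

    Describe "Log shipping details for $Database on SQL2" {
        # Pick the monitor row for the database we want to zoom in on
        $status = Invoke-DbaQuery -SqlInstance SQL2 -Database msdb -Query "EXEC dbo.sp_help_log_shipping_monitor;" |
            Where-Object database_name -eq $Database

        It "Time since the last backup should not exceed the backup threshold" {
            $status.time_since_last_backup | Should -Not -BeGreaterThan $status.backup_threshold
        }

        It "Time since the last restore should not exceed the restore threshold" {
            $status.time_since_last_restore | Should -Not -BeGreaterThan $status.restore_threshold
        }
    }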

Log Shipping Pester Detailed Single DB

Of course it's possible to execute the detailed test for all the databases. Just remove the -Database parameter and the same tests are run for every database.

Log Shipping Pester Detailed All DB

The entire test took less than two seconds and made it possible for me to quickly see if everything is fine with the log shipping.

The code can be downloaded from GitHub. I hope this helps you out and makes your life a little easier, as it did for me.

 

 

T-SQL Tuesday: The times they are a-changing


Reading Time: 4 minutes

This month's T-SQL Tuesday is inspired by the blog post Will the Cloud Eat My DBA Job? by Kendra Little. Technology has changed a lot in the past years, especially with cloud/globalization/automation. What impact has this had on your job? Do you feel endangered?

Over the years I've seen a lot of things change with SQL Server. I can remember that somebody told me when I started with SQL Server 2000 that T-SQL was never going to be a big thing. How wrong was that guy?!

I personally have not yet had the chance to do a lot with the cloud, like Azure SQL Database. I did get the chance to fiddle around with a trial period of Azure to see what I could do and to get a feel for the interface. Unfortunately it was just a trial, and after it ended I didn't pick it up again. I should, because I like to learn new skills.

What impact has this had on your job?

Some things have had a great impact on my work. Take PowerShell, for instance. If I hadn't started with it when it first came out, I would still be clicking around SSMS like crazy. I like to automate everything that's repetitive, because I like to spend my time efficiently doing things that give me energy, not doing things over and over again.

At the moment the cloud has no impact on me. Past employers did not see any benefit in it at the time, and my current employer does not see a use for it yet either. Maybe something will change in the near future, but for now it's not going to happen 🙁
I would really love to start working with the cloud and move some of our databases, because I think it is a valuable addition to traditional architectures.

Do you feel endangered?

No! And I’ll tell you why.

When I first started to work with servers and databases you had a physical server where you had to put in a CD or DVD and run the install from a console to get Windows Server installed. As soon as everything was set up you could remotely log in and do the rest of the work from the office.

The server room would look a little like this:

Servers in serverroom

Nowadays you, or your system administrator, logs in to VMware or Hyper-V and creates a new server, and if they work smart it's done with cloning or scripting and the server is ready within minutes.
The last time I physically touched a server to do database administration work was about 8 years ago. If you're in the same situation, you already kind of work in a cloud-like environment.

The second reason I don't feel endangered is that as a DBA I have to deal with everything in and around SQL Server. Bluntly put, a VM for me is nothing more than a box of resources where my instances do their work. I know people will disagree with me, but if you're not the system administrator the underlying hardware layer is invisible to you.

If your employer is a company with one or more database servers, there will always be a need for a DBA. There will be performance issues, new installations, high availability, reporting and so on.

Or do you have more exciting features/toys to work with?

In the last couple of years so much has changed in SQL Server that it’s impossible to comprehend everything.

I’m excited to work with SQL Server vNext on Linux for example. There are a lot of new features for Business Intelligence with Analysis Services and Power BI.

You can create stretched databases where parts of the database are in the cloud and others are on premises. Imagine a hybrid environment where parts of the network are local/on premises and other parts are deployed in the cloud.

Microsoft has embraced PowerShell for SQL Server and has now made it open-source. How cool is that! There are more and more people developing in PowerShell and creating modules like dbatools.

There is so much new stuff that I have to cherry pick what to do first.

Do you embrace the change and learn new skills?

I embrace change and love to learn new skills.

The world is changing for the DBA and we as DBAs have to change with it. We're no longer the strange guy in the cubicle who only shows up when something goes wrong. I see more and more situations where we have to be at the forefront, taking action and being visible to our colleagues.

If your employer wants to work in the cloud, don't be afraid of it; embrace it and learn everything you can about it. If you have processes that are inefficient, automate or optimize them and make your life easier.

Be the one that has the vision to get the company to a higher level and you'll see that everything will work out.