Log Shipping With dbatools – Part 2: Test Log Shipping Status


Reading Time: 3 minutes

In the first part of the series I described the command "Invoke-DbaLogShipping", which makes it possible to set up log shipping. This blog post is about the command that tests the log shipping status.

Before I describe the command, I want to discuss the options that are available in the SQL Server Management Studio (SSMS) and SQL Server.

Out of the box monitoring

Microsoft has made it possible to check the log shipping status from SSMS. This can be done using queries or the "Transaction Log Shipping Status" report.

Queries

To return the log shipping status using T-SQL execute the following code in a query window in SSMS:
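The original query is not shown here, so below is a sketch of the kind of query you could run; it reads from the log shipping monitor tables in msdb (the column selection is my own and may differ from what the original post used):

    -- Backup status as registered on the primary
    SELECT primary_server, primary_database, last_backup_file, last_backup_date
    FROM msdb.dbo.log_shipping_monitor_primary;

    -- Copy and restore status as registered on the secondary
    SELECT secondary_server, secondary_database,
           last_copied_file, last_copied_date,
           last_restored_file, last_restored_date
    FROM msdb.dbo.log_shipping_monitor_secondary;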

The result is as follows:

Log Shipping Status Query Result

This is a good method to find possible issues, but you need to check multiple columns to find out if something is wrong.

Transaction Log Shipping Status Report

By default SSMS has a report that shows the status of the log shipping. To open the report, right-click on the server, go to "Reports", then "Standard Reports" and click on "Transaction Log Shipping Status".

Log Shipping Status Report

The result is an overview of each database with the status of that process:

Log Shipping Status Report Result

The report shows red-colored text if something is wrong, which is a great way to spot problems. Still, we need to go into SSMS and click around before we get to this information.

Why this command

Monitoring your log shipping processes is important. You need to know the synchronization status of the log shipped databases.

The log shipping process consists of three steps: backup, copy and restore. Log shipping tracks the status of each of these processes.
It registers the last transaction log backup, the last file copied and the last file restored. It also keeps track of the time since the last backup, copy and restore.

But that’s not all. Log shipping also checks if the threshold for the backup and restore has been exceeded.

During the log shipping setup, thresholds are set for the backup and restore process.
The default thresholds are 60 minutes for the backup and 45 minutes for the restore. There is no threshold for the copy process.
If, for example, the time since the last backup exceeds the backup threshold, an alert will be triggered.

That’s a lot of information to consider and that’s why this command was developed. It will enable the user to get a complete overview of the log shipping status without having to know all the details.

Test-DbaLogShippingStatus

The command returns a lot of information by default. It collects all the relevant data and, based on that, returns a status.

To execute the command:
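A minimal example, assuming an instance named sql1 (substitute your own instance name):

    Test-DbaLogShippingStatus -SqlInstance sql1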

The result is a detailed overview of each database including the status.

Log Shipping Status Command Detailed

This looks a lot like the results you get from the queries or the report we talked about earlier. All this information can be a little overwhelming and it's not always needed.
If you don't need all of it, there is an option to format and filter the output.

It will only show the core information you need to know the status of the log shipping.
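A sketch of how that looks; if I recall correctly the switch is called -Simple, but check Get-Help Test-DbaLogShippingStatus if your version differs:

    Test-DbaLogShippingStatus -SqlInstance sql1 -Simple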

The result of the command:

Log Shipping Status Command Simple

As you can see there is a lot less information. It only shows the instance, database, instance type and the status. In most cases that's all you need.

More filtering

The command also allows you to filter on just the primary or just the secondary databases.
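Something along these lines; the switch names -Primary and -Secondary are from memory, so verify them with Get-Help:

    # Only check the databases that act as primary on this instance
    Test-DbaLogShippingStatus -SqlInstance sql1 -Primary

    # Only check the secondary databases
    Test-DbaLogShippingStatus -SqlInstance sql2 -Secondary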

There is a way to filter on specific databases using the “-Database” parameter.
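For example (db1 and db2 are placeholder names):

    Test-DbaLogShippingStatus -SqlInstance sql1 -Database db1, db2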

Of course there is also an option to exclude certain databases by using the “-ExcludeDatabase” parameter.
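For example:

    Test-DbaLogShippingStatus -SqlInstance sql1 -ExcludeDatabase db3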

 

This concludes the command “Test-DbaLogShippingStatus”. The next post will be about the command “Get-DbaLogShippingError”.

Log Shipping With dbatools – Part 1: Setup Log Shipping


Reading Time: 7 minutes

This post is the first one of a series of four describing all the different commands. We'll discuss the commands to set up log shipping with dbatools, how to check the status, how to check for errors and how to recover in case of an emergency.

What is log shipping

Before we go into the why I want you to know what log shipping is.

Log shipping dates back to the early versions of SQL Server. The first article that I could find dates back to the year 2000 and explains how to set up log shipping with SQL Server version 7.

Microsoft has done a lot of work on the log shipping setup process and we no longer have to set up the "poor man's log shipping" anymore. The setup contains a nice GUI and it makes it easier to set up log shipping for your database(s).

Log shipping is the process of creating a transaction log backup, copying that backup to a secondary server and restoring the log there to synchronize the databases. The image below shows the different elements of a log shipping architecture.

Log Shipping

https://docs.microsoft.com/en-us/sql/database-engine/log-shipping/media/ls-typical-configuration.gif

I’m not going to explain in detail what log shipping is or how to set it up using the traditional method. There are lots of websites that have already done a great job describing everything related to log shipping. Here are a couple of websites that could help

Why create a command set

Like I said before, Microsoft has done a great job of automating the log shipping process and making it easier for the user to set up. The GUI describes all the settings needed. Even someone inexperienced could set up log shipping in a short amount of time (given that all the requirements have been met).

Although Microsoft has simplified the process since the earlier versions of SQL Server, setting up log shipping still takes a lot of time. You have to set up each database individually and most of the time you are doing almost the same thing.
There are ways to automate this using dynamic T-SQL. However, I found that, at least in my situation, it was prone to errors and it took me more time to troubleshoot than to set it up manually.

Why we needed to setup log shipping

A couple of months ago my company wanted to create a High Availability solution for our production databases.
Researching solutions like clustering and Availability Groups, we looked at architecture decisions, licensing and costs. The outcome was that we weren't able to use any of the SQL Server Enterprise HA solutions.
Log shipping is not a so-called hot standby that immediately recovers in case a service or database goes down. Instead it is a warm standby, where human interaction is needed to recover.

We have about 20 production databases and, looking at the database sizes and the number of transactions, in my opinion we are a small shop.
We decided to go with log shipping. It's well established within SQL Server and, unlike mirroring, not deprecated.

So on a Monday morning, after all the backups were done, I started to configure our databases for log shipping. For me the process of setting up log shipping was very tedious. I basically had to go through each of the databases and use the same settings over and over. The biggest difference would be the database name in the backup or copy directory.

The reason for the command set

In the end it took me about 3 to 4 days to set it up. This was mostly because I didn’t want to stress the network too much by setting up multiple databases at once. At some point I made some mistakes and had to start over again. I really don’t like to do repetitive tasks and in my daily work I automate most of those processes.

To set up log shipping for a single database in the GUI you need about 40 clicks and have to insert lots of information in different windows.

That’s where I contacted Chrissy LeMaire ( b | t ) from the dbatools project and asked if it would be interesting to create log shipping commands for the PowerShell module. She was really enthusiastic about it and I started to develop the commands.

The command set

Initially it started with the command "Invoke-DbaLogShipping", but soon after a couple of commands followed. The command set contains the following commands:

This blog will be the first of four posts where we dive into "Invoke-DbaLogShipping", because we first need to have log shipping set up before we can use the other commands.

The Command Invoke-DbaLogShipping

During the development of the command I chose to make the process for setting up the log shipping as easy as possible for the user. My main requirement for the command was that as long as you have the minimal requirements for log shipping you would be able to set it up.

The main command is “Invoke-DbaLogShipping”.

Within log shipping there are a lot of decisions that can be made. All those decisions ended up with lots of parameters. The command has over 90 different parameters ranging from setting the backup folder to setting the name of schedules.

I will explain a couple of the most important ones that I think will be used the most:

General

  • Force: When this parameter is used, lots of assumptions will be made and you need only a minimum of settings to set up log shipping.

Server

  • SourceSqlInstance: The primary instance
  • DestinationSqlInstance: The secondary instance

Database

  • Database: Database (or databases) that need to be log shipped

Backup

  • BackupNetworkPath: Network path where the backups will be saved. This is the parent folder, a child folder for each database will be created in this folder.
  • BackupLocalPath: The local path for the backups. Only used by the primary instance and not mandatory
  • BackupRetention: How long to keep the backups. The default is 3 days.
  • BackupScheduleStartDate: Start date when the backup schedule needs to start
  • BackupScheduleStartTime: Start time of the backup schedule. Useful when the backups should only take place during a certain period of the day.
  • BackupScheduleEndTime: End time of the backup schedule. Useful when the backups should only take place during a certain period of the day.
  • CompressBackup: Whether you want the backup to be compressed. By default it looks at the SQL Server edition and server settings.
  • FullBackupPath: Path to the full backup. Only used when log shipping is being set up.
  • GenerateFullBackup: Instead of using an existing full backup you can use this parameter to create a full backup on the fly during setup.
  • UseExistingFullBackup: Besides FullBackupPath and GenerateFullBackup you can also set the option to let the command retrieve the last full backup made for the database.

Copy

  • CopyDestinationFolder: Where do the backups need to be copied to

Restore

  • DisconnectUsers: Important setting if your secondary database is in read-only mode. Users will be disconnected when the restore process starts and this parameter is set.
  • Standby: Whether you want your databases to be in stand-by mode. By default the database will be in no-recovery mode.
  • StandbyDirectory: Directory where the TUF (transaction undo file) file needs to go for a database. This is needed when using the -Standby parameter.

Setup log shipping using Invoke-DbaLogShipping

Before we look into the command we need to go through the prerequisites of log shipping. The items below are the bare minimum to set up log shipping.

Prerequisites

  • At least SQL Server 2000 standard edition
  • The primary database needs to be in Full recovery mode.
  • A network share/drive that’s accessible for the primary and secondary server

A more sophisticated setup would be:

  • A primary server containing the primary databases
  • A secondary server containing the secondary databases
  • A monitoring server logging all the information
  • At least SQL Server 2000 standard edition
  • The primary database needs to be in Full recovery mode.
  • A separate network share for the backup process
  • A separate network share for the copy process

Technically you don’t need multiple servers to setup log shipping. You can set it up with just one single SQL Server instance. In an HA solution this wouldn’t make sense but technically it’s possible.

Having a separate server acting as the monitoring server ensures that when one of the servers goes down, the logging of the actions still takes place.

Having a separate network share for both the backup and copy makes it easier to setup security and decide which accounts can access the backups. The backup share needs to be readable and writable by the primary instance and readable by the secondary instance.
The copy share needs to be accessible and writable for only the secondary instance.

Additional information

  • Backup Compression
  • Database Status
  • Schedules
  • Cleanup

Backup Compression

Backup compression was introduced in SQL Server 2008 Enterprise. Beginning in SQL Server 2008 R2, backup compression is supported by SQL Server 2008 R2 Standard and all higher editions. Every edition of SQL Server 2008 and later can restore a compressed backup.

Backup Compression in  SQL Server

Database Status

You can decide whether you want the database to be in a recovery or read-only state. Read-only gives you the ability to let other processes, like SSRS or ETL, read production data without interfering with the actual production instance.

Schedules

When to run the backup of the transaction log. Does it need to run every 15 minutes, which is the default, or every hour, once a day, etc.? The same goes for the copy and restore schedule.

Cleanup

How long do you need to hold on to the backups? The default is three days, which in some cases can be quite long.

Setup

This is where it’s going to be fun. As an example I’ll be using the command with the minimum of parameters. To do this a couple of the requirements need to be met. Let’s go through these prerequisites:

  1. Setup the shares for the backup and copy destination
  2. Privileges for primary and secondary instance to the backup and copy shares are set
  3. Database(s) that need to be log shipped
  4. Decide whether the database should be in read-only or no-recovery mode
  5. Decide whether or not to compress the backups
  6. Generate a full backup or use an existing one

The command below will log ship the database from instance SQL1 to SQL2. The backups will be compressed and, to initialize the secondary database, a new full backup is generated.
No other settings are made. The -Force parameter is used so the command fills in the remaining settings with sensible defaults.
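A sketch of what that call could look like, using only parameters described above (the instance names come from the example and the network path is a placeholder):

    Invoke-DbaLogShipping -SourceSqlInstance sql1 -DestinationSqlInstance sql2 `
        -Database db1 `
        -BackupNetworkPath '\\sql1\logshipping' `
        -GenerateFullBackup -CompressBackup -Force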

The result:

logshipping example force

This is only a fraction of the parameters available. The remaining parameters make it possible to configure your log shipping exactly the way you want it to be.

Execute the following command to get more information:
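The built-in help shows all the parameters and examples:

    Get-Help Invoke-DbaLogShipping -Detailed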

The next blog will be about the command “Test-DbaLogShippingStatus”. It’s responsible for testing if all of your log shipped databases are in a synchronized state.

T-SQL Tuesday #98 – Your Technical Challenges Conquered


Reading Time: 6 minutes

T-SQL Tuesday is back around again and this time it’s the first one of 2018!

This wonderful idea by Adam Machanic ( Blog | Twitter ), is this time hosted by Arun Sirpal ( Blog | Twitter ).

 

 

The topic Arun has chosen is:

Write about and share with the world a time when you faced a technical challenge that you overcame and you can go really technical with both the issue and solution if you like.

Over the years I have faced lots of technical challenges, and so have lots of other DBAs. But the one that still sticks with me the most is the story of the failing cluster setup.

It was back in 2008/2009 and I was a consultant at a hospital, hired to set up a failover cluster with SQL Server 2005. If you're wondering why SQL Server 2005: SQL Server 2008 had only just been released, and the medical industry is not that keen on being an early adopter when it comes to technology.

My task was to set up a failover cluster on a couple of HP servers. They were already prepared by the system administrators and installed with Windows Server. Seems simple enough; I had done that many times before and it shouldn't give me any trouble. Oh boy, was I wrong about that!

Having set up all the different parts of Windows Server, I started the installation of SQL Server 2005 on the first node. Everything went smoothly and the installation was successful.

Then came the installation of the second node. I started the installation to add the node to the cluster; it got to about 90% and stopped. It gave me a general error that something in the installation went wrong and that I should check the installation logs to find out what.

I checked the log files and didn't find anything special that could explain why the installation failed. Let's go through all the settings and try again. Had I entered the wrong credential or some other setting?
After starting the installer it again failed at the 90% mark. The setup rolled back the entire installation and I still wasn't sure what had happened.

Let’s check the installation logs again, hopefully this time it shows me what went wrong. Unfortunately it didn’t show me anything.

I wasn’t going to give up. I tried different variations of the installation settings to see if I made some sort of mistake.
I checked the hardware for any defects, replaced memory, swapped the CPUs of the machines.

Nothing there could point out why the setup failed.
At this point it was a good idea to get a higher power involved, so we created a ticket with Microsoft support. It got escalated a few levels and even they weren't able to figure out why the installation failed. Their answer was: "Just use SQL Server 2008, that should work".

Besides the fact that was not the answer I was looking for, it was not possible for me to install SQL Server 2008 because it was not yet proven technology.

So as the consultant I was kind of in a pickle. How was I going to get the job done? My reputation was on the line. I'm supposed to be the one with the knowledge, and there I was, struggling for over a week to get a cluster set up.

At this point I had to start thinking outside the box, so I looked at the hardware again. The hardware was all approved and supported by Microsoft. We didn't need any third-party drivers or anything like that.

Looking at the hardware, besides the motherboard, power supply and some other parts, we were only able to control the CPU, memory and the hard disks. The servers were identical, with the following configuration:

CPU: 4 x 6 core
Memory: 64 GB
Hard disks:
System disk: 40 GB
Application disk: 30 GB
Data disk: 300 GB
Log disk: 100 GB

I first examined the disks. There was nothing special about them: the system and application disks were the only local disks and the data and log disks were LUNs.
The memory was nothing special either and was set up as 4 x 16 GB modules.
The CPU was something I had not seen before, and what caught my attention was the 6-core specification. In total we had 24 cores, and for some reason that number seemed odd to me.

Let me explain why 24 is an odd number to me. When you're used to working with computers you are used to working with powers of 2: 1, 2, 4, 8, 16, 32, 64 and so on, but not 24, which sits right between 16 and 32.

It’s possible to let Windows believe there is a different amount of processors in your server. You can change this setting in the MSCONFIG utility by setting the amount of processors in the BOOT.INI tab.

At this point I was open to any new idea, so I changed the number of processors to 2. The number 2 is a power of 2, so let's see what happens.
I restarted the server, looked in Task Manager and saw two processors. That worked, so let's run the setup of the SQL Server node again.

I would not have believed it if it hadn't happened to me, but for some reason the setup was successful. It could've been a fluke, and I wanted to be sure, so I removed the node from the cluster, reset the number of processors to the original value and ran the setup again. This time it failed, just as before.

That was still not good enough for me, but I was getting closer to a conclusion. Let's see what happens if we change the number of processors to 8. I changed the value, rebooted the server and started the setup again. This time it was successful again.

My conclusion was that, for some reason, the setup couldn't handle a number of processors that was not a power of 2. In my mind this was insane because I had never encountered such a problem before.

Now that everything was installed I could set the number of processors back to 24 and get on with installing the updates, like Service Pack 3. I changed the number of processors, rebooted the server and, as soon as it came back up, my cluster installation failed. It couldn't be the number of processors, right? Again there was no clear error, but because I now knew what went wrong I changed the number of processors back to 8 and rebooted. This time the server and the cluster came up fine.

I contacted Microsoft support and told them about my findings. Their reply was that indeed something odd had been fixed in SQL Server and that the fix was released in Service Pack 3.

Back with SQL Server 2005 you weren't able to slipstream your service packs. You first had to install the initial release and then apply the service pack.

The next step was to install SP 3 with the 8 cores. I started the setup and of course things didn’t go as expected. The installation failed and gave no clear reason why.

Again I was stuck. I had an incomplete cluster installation because we couldn't use the full amount of resources, and I couldn't install the update that should fix it.

I started to dig into the updates that would be applied in SP3. I noticed some changes to the system databases, especially the mssqlresource database.

The mssqlresource database is a system database that's not visible to the user and is only used internally by SQL Server.

I ran the setup again and this time I watched what happened to the system databases during the install. I saw the master database being replaced by a new version, then msdb as well, but as soon as the installation tried to finish the update of the mssqlresource database, the new version disappeared, the original was put back and the installation failed.

That’s it! That’s the reason the service pack was failing. By a little research and some gut feeling I saw what happened. But why did it happen. Was the installation not able to rename the database?

To be sure, I did something drastic (don't ever do this in production!). I went into the SQL Server settings and updated the physical name of the mssqlresource database. I shut down the instance and renamed the files for the database on disk.
I started the instance successfully. I checked the name of the database and it showed the renamed files.

I started the installation and watched the system database files being changed. First the master database, then msdb and finally the mssqlresource database. I saw a new mssqlresource database being created next to the original one, the old one was removed and the installation finished!

That was the problem with SP3: it was not able to do an in-place upgrade of the mssqlresource database.

We’re almost done. The only thing I have to change back is the amount of processors and reboot the system. I changed the setting, rebooted the server and everything booted up successfully.

This was one hell of a journey and it gave me the grey hairs I have now. It took about two weeks of my life for what could've been an afternoon.

This was an experience that changed my way of thinking as a DBA and I learned a lot along the way.
The moral of the story is that there is always a solution. Go and dig into the material, think outside the box and don't let any challenge get the best of you.

Reflect on 2017 And My Learning Goals for 2018


Reading Time: 4 minutes

Unfortunately I missed the last T-SQL Tuesday of 2017, but I'll share my learning goals for 2018 anyway. I also want to reflect on last year because I think it's good to see if I was able to improve.

Reflect on last year

At the end of 2016 I set myself the following goals:

  • Do more presentations
  • Get better in PowerShell development
  • Play a bigger part in the SQL Server community

I got really excited in 2016 by getting drawn into the SQL Server community, presenting and helping people out. That gave me a lot of energy and I wanted it to continue in 2017. And oh boy, did it continue!

I got really involved with the dbatools project. I developed a bunch of functions to support the SQL Server Agent, log shipping and a bunch of other things. With that I killed two birds with one stone: playing a bigger part in the SQL Server community and getting better at PowerShell development.

I also wanted to do more presentations. I submitted sessions to lots of events and got to present a couple of them. And not the smallest ones either: I got a session at SQL Saturday Oregon and a lightning talk at the PASS Summit.

One thing I really wanted to get better at was unit testing. As a DBA with development experience I had never come into contact with this concept, but I got excited about it thanks to Rob Sewell (b | t), who showed me some interesting uses of PowerShell unit testing with Pester.
The unit testing concept was a real eye-opener with a big learning curve. At the end of 2017 I was able to create sophisticated unit tests for SQL Server (tSQLt) and PowerShell (Pester), which helped me a lot with my development.

Goals for next year

So the T-SQL Tuesday post had the following questions:

  • What do you want to learn?
  • How and when do you want to learn?
  • How do you plan to improve on what you learned?

What do you want to learn?

I always want to learn a lot of things, but if I had to make a shortlist it would contain the following:

  • Query Store
  • SQL Server 2017
  • Continuous Integration
  • More consistent blogging

I’m a database administrator that has knowledge in database development and other processes. I’ve done small projects with business intelligence but that subject does not really suit me. I understand the technology and the thought process, and that’s good, but I could not switch to it full time.
I get excited when I see an inefficient process or code and when I’m able to improve that process by automation or by redeveloping the procedures.
That’s where my strength lies and that’s where I want to continue at.

Query Store

I went to PASS Summit 2017 and there were a lot of talks about Query Store. I have not yet had the chance to take a deep dive into the material, but I will very soon. I think this is one of the things that will impact my work the most.

SQL Server 2017

That’s a big topic and something I’ve been looking forward for. I have been doing tests with SQL Server 2017 when it was first released but this year we’re going to put into production.

Continuous Integration

You already read about me being active with unit testing with tSQLt and Pester. Now I want to take the next step and implement CI for the database development.

It costs me a lot of time to create new release scripts and to make sure everything works as expected. I want to automate that and make my life, and that of the other developers, easier.

More consistent blogging

Last year was a really busy year for me, both at work and personally. Because of all the pressure I was not able to blog as much as I wanted, and the only thing I managed to do was create some quick drafts. I have about 50 drafts lying around that I could finish, and I probably should.

My goal for this year is to publish a new blog post at least every two weeks.

How and when do you want to learn?

Like many other people in the tech industry I have a full-time job, and finding time to study can be tough. We rarely have a 40-hour work week and sometimes we put in 10 or more hours a day.

I have the privilege that my employer enables me to do self-study. That means that I can spend about 10% of my time learning new things or improve my current skills.

But that’s mostly not enough. When I want to take a deep dive I go and get into the details of a subject. I’m going to ask questions that in most cases would not make sense or I’ll try to break things to see what would happen in a production environment.

That answers the question of when I learn things, but not how. How I learn largely depends on the topic at hand. In most cases I search already published articles and google for any information that I can find.

In most cases that will give me enough information to get started. Another place that has a lot of content is the Microsoft Academy. If all of that still does not satisfy me, which does not happen very often, I will turn to the SQL Server community and ask questions. One place with lots of IT professionals present is the SQL Server Community Slack channel.

How do you plan to improve on what you learned?

The subjects I explained earlier are important to me because they can make my life easier. They will improve my skills as a DBA and will make sure fewer mistakes are made.

I already mentioned that I want to spend more time on my blog. I think blogging is an important part of my career and it has enabled me to connect to the rest of the world.
If you blog about something you need to know all the ins and outs of the subject. You cannot just write about something and not be confident about the information you're presenting. Making sure I have all the information by diving into the subject will make me better at that.

Someone once told me that if you can explain a particular subject in children's language, you know enough about it to present it to others. That's true for both blogging and presenting a session.

Talking about presenting: I'm also going to spend more time developing presentations and submitting them to events. I love to present about the stuff I do, and if I can help just one person, that's already a win for me.
All the new stuff I’ll be learning could also end up in a session so expect me to submit lots of sessions this year.

This is going to be a busy year but I like that. Lots of things to get better at.

Thoughts on PASS without a SQL Saturday


Reading Time: 2 minutes

I was kind of surprised when I heard about PASS’s plans to no longer spend money on SQL Saturday and I want to share my thoughts about it.

Constantine wrote a blog post about this matter and I felt that I should do a blog post about it too.

There is a quote in the #SQLSaturday channel in the SQL Community Slack that blew my mind.

They [the PASS board] are already signaling they don’t want to spend $$ on SQL Sat and a few of the board members would just as soon SQL Sats die.

Let me be clear, if there were no local user groups and/or SQL Saturdays I do not think that PASS would exist and let me explain why.

I went to my first SQL Saturday a long time ago and I immediately felt that I was part of something big. The amount of effort that was put in by all the volunteers and sponsors to let us professionals grow was incredible.

For any data professional to grow you need content that’s up-to-date. You need interaction with other experts to get new ideas.
The SQL Saturday volunteers, the presenters, the organizers, the sponsors and all the people in the background make it possible to get the most up-to-date content there is.
There is no other organization in the world that offers that amount of content for free (or sometimes a small fee) and lets you interact with professionals in such a pleasant manner, without having to spend thousands of dollars to attend the PASS Summit.

Although PASS does offer lots of content through virtual chapters I think that’s only partially important to data professionals. SQL Saturdays give us a platform to connect with other people from our field, provide networking opportunities and also job opportunities.

I can surely say that if the SQL Saturdays weren't there, I would not be at the point where I am right now.

The volunteers have a really tough job arranging venues, food, drinks and so on to organize the events, and they all do it out of love for the SQL Server community.
Some of the SQL Saturdays would not survive without the help of the PASS organization. If PASS cut the funding of those events, I bet they would disappear really fast.

To conclude: cutting funds for SQL Saturdays will cause the events to disappear.
Without the events, the organization no longer has the platform it had to communicate with data professionals.
Without that platform, people will find other ways to get the content and will probably skip the PASS Summit, causing a loss in revenue that in the end could mean the end of the PASS organization.

Looking back at my first lightning talk


Reading Time: 2 minutes

I had the opportunity to speak at the PASS Summit 2017 with a lightning talk.

This was the first time I ever did a lightning talk and it was quite different from a normal session.

It all boils down to the fact that you only have about 10 to 15 minutes for the entire talk.
This brings a couple of complications, because suddenly you have to make sure your content fits within that time frame. You also have to make sure that, with that amount of content, you don't go too fast. Going too fast in a lightning talk is disastrous because attendees will not be able to follow you.

I normally do 60-minute sessions where I have the time to dig a little deeper than originally planned.

So how can I do such a short session and still make sure that the audience gets a bang for their buck?

After thinking about it I made a couple of steps:

  1. Write down the subjects I wanted to talk about
  2. Create the content for each subject and make sure it was short and to the point
  3. Present the content out loud and record it

During the recording I would watch the timer in PowerPoint to see when I would hit the 10 minute mark.

If I had gone over I would go into the content again and try to adjust it.
If I made it within those 10 minutes I would watch the recording and pay attention to how fast I was talking. I had to adjust multiple times to make sure I wasn’t going too fast.

After a couple iterations I was satisfied with the preparation and I could go into the presentation with confidence.

How did the session go?

This was actually really funny. I noticed on the itinerary for my flight that I would have to leave for the airport before 10:00 AM. The lightning talk sessions ran from 9:30 AM to 10:30 AM, so I had to make sure I was the first one to present, because otherwise I wouldn't be able to make my flight.

Fortunately the other presenters were so kind to let me go first and I thank them for it.

Because I prepared this session pretty well everything went smoothly. I was able to do my talk, show a couple of demos and finish within the 10-minute frame.

A couple of weeks later I got the feedback from several people from the audience and I was excited about that. Lots of people liked the content and the overall session so that was a big win for me.

Conclusion

Looking back this was a very good experience for me. I find doing a lightning talk is way more difficult than a normal session. It all comes down to preparation and placing yourself into the audience.

If you attended the lightning talks at PASS Summit 2017, please leave a comment because I'd really like to know your opinion about what went well and what didn't. There is always an opportunity to learn and I like to get better with every session I do.

 

Let's get all Posh! Log Shipping


Reading Time: 5 minutes

This month’s T-SQL Tuesday is brought to us by my good friend Rob Sewell (b | t) and the subject is “Let’s get all Posh – What are you going to automate today?

My post will be about log shipping, because setting up log shipping using the GUI is a very tedious task and doing it with T-SQL scripting is no fun either. So let's see how we can do this with PowerShell.

 

 

Log shipping explained

If you’ve been a SQL Server DBA for a while you probably heard about log shipping.

Log shipping makes it possible to increase a database's availability by automatically backing up the transaction log, copying the backup file and restoring it to another database instance (or to the same instance with a different database name).

Log shipping is an older technology but still has a purpose in lots of environments where it’s being used for reporting, ETL or to give developers and users access to production without the risk of them breaking anything.

Why use PowerShell

If you’ve ever had to setup log shipping in the SQL Server Management Studio (SSMS) than you know how tedious it can be. I actually counted the amount of clicks to setup a single database for log shipping and you need about 40 of them to get the job done.

So one day we decided to create a warm standby for one of our production servers. I had set up log shipping before, and if you know what to do it's not that difficult.
It becomes a problem when you have to set up log shipping for more than 20 databases ranging from 10 GB to 800 GB in size.
I ended up spending about a week setting it up (in between other tasks; I do have more work to do) and for me that's way too much time for something as simple as this.

So why use PowerShell? Because anything I can do with SSMS can be done with SMO. This would save a lot of time for me and anyone else using the functionality.

How did I do it

I contacted Chrissy LeMaire, the creator of the dbatools PowerShell module, and she was instantly excited to have this functionality within the module.

The first thing I had to do was find out how SQL Server set up the log shipping in the background when I used the GUI. Yeah, I know using the GUI is bad when you use a lot of PowerShell, but sometimes I like to do it old school.

I quickly found out that SQL Server does a lot of things to set up log shipping. It creates jobs, job steps and job schedules, uses stored procedures to set values in specific tables, and so on.
I went to the dbatools module and used the Find-DbaCommand function to see if there was any functionality to create the agent objects (jobs, steps and schedules), but there wasn't, so the first step was to create a full set of functions to manage those objects.
A couple of thousand lines later I finished the agent functionality and could continue developing the log shipping function.

That function started pretty small, but like the agent functionality, quickly grew to be some sort of monster with over 70 different parameters that would support every kind of situation.

After about a couple of hundred iterations it was finally committed to the development branch of the dbatools module to get tested. I have great respect for the people who did the testing of that functionality because I didn’t envy them. I knew how complicated it was to build everything and to test all that must have been a lot of work. Thank you to all the testers that accepted the challenge.

The solution

The entire log shipping functionality is now split across five functions. Four of them are used internally and are not visible as public functions, because you can easily break things if they're not used correctly.

The main function, Invoke-DbaLogShipping, is available in the dbatools module for anyone to use.

If you open the GUI for log shipping you have lots of choices, but most of them are already supplied by the GUI itself and you can decide whether to use them or not.
The whole idea behind the functionality was that it would allow you to quickly set up log shipping using a lot of defaults, like you can in the GUI, but if you're more experienced you can change any setting to your preferences.

It will take you about a minute to set all the parameters, and after that the only thing that will slow you down is the size of your database files when the restore runs.

Enough talk I want to see some action!

I will demonstrate how to set up log shipping using the least amount of parameters. You actually only need the following parameters to make it work:

  1. SourceSqlInstance – Which instance needs to be used as the source (primary) server
  2. DestinationSqlInstance – Which instance the database needs to be log shipped to
  3. BackupNetworkPath – Where the backups need to be placed
  4. Database – Which database, or databases, need to be log shipped
  5. GenerateFullBackup or UseExistingFullBackup – Generate a full backup or use an existing one

The command below will log ship a database from server1 to server2.
It will log ship one database called db1.
The network path for the backups is \\server1\logshipping\backup.
The backups will be created with backup compression.
No new full backup will be created; instead, the last existing full backup will be used.
In this example the suffix for the secondary database is "LS", because I want to differentiate the log shipped databases.
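Put together, the call could look something like this. The parameters match the description above; only the name of the suffix parameter (-SecondaryDatabaseSuffix) is from memory, so check Get-Help Invoke-DbaLogShipping if it differs in your version:

    Invoke-DbaLogShipping -SourceSqlInstance server1 -DestinationSqlInstance server2 `
        -Database db1 `
        -BackupNetworkPath '\\server1\logshipping\backup' `
        -CompressBackup -UseExistingFullBackup `
        -SecondaryDatabaseSuffix LS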

Below is the result of the log shipping.

What else can be done with this function?

When I developed this function it went through a lot of iterations. I'm a developer who has a good idea of what to build and, during the process, keeps thinking of other functionality that needs to go in.
I constantly thought of new ideas that would make other people's work easier.

A couple of examples:

  • To monitor or not to monitor log shipping
  • Enabled or disabled log shipping jobs
  • Naming convention of the jobs
  • Standby or read-only mode for the secondary database
  • To compress or not to compress the backup files
  • Generate a full backup for initialization
  • Log ship multiple databases in one command. This will be executed in serial.
  • Force every setting and discard anything that’s there

The last one needs a little more explanation. I want this function to be really easy to use, even when there are already existing settings in place. The -Force parameter is a really powerful parameter: it assumes a lot of the defaults, but will also overwrite existing settings like jobs, schedules, database settings and so on.

Conclusion

I will never use the GUI to set up log shipping again and I hope you won't either. It's just way too much work, and I've had it fail on me lots of times without knowing what went wrong, forcing me to set everything up from scratch.

There is more functionality coming for the log shipping functions like status checks and fail-over initialization.

If you find that something is missing, works inefficiently or doesn't work at all (I hope not) in this function, please let us know. You can find us on Slack, GitHub, Twitter, YouTube and LinkedIn. You can find me in the Slack channel most of the time. If you have any questions you know where to find me.

The last thing I want to say is that the dbatools module is a community-driven module that enables database administrators to work more efficiently. It's because of the hard work of all the volunteers that this module is such a success.
It's a group of very hard-working and smart people who I have great respect for. It is because of the enthusiasm of all those people, and especially Chrissy, that I love working on the module.

Thank you again Rob for hosting this month’s TSQL Tuesday, it was really fun.

Create Your Own PowerShell Profile


Reading Time: 3 minutes

Being a fan of automation, I like to create my own PowerShell profile. It enables me to load various settings that would normally take more time to configure. The PowerShell profile resides in your home directory, and if you work in an AD environment with roaming data you'll have the same profile on every computer.

PowerShell profiles are not new and date back to PowerShell v2.0. Other people have written about this subject before, but I wanted to share my take on it.

Create the profile

To check if you have a profile execute the following command:
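The usual way to do that is:

    Test-Path $PROFILE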

Profile Test Existence

If this returns false it means you don’t have a profile and you have to create one. The easiest way to do that is by executing the following:
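For example:

    New-Item -Path $PROFILE -ItemType File -Force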

Profile Create

From this point on you can put in anything that helps you speed up.

PowerShell drives

I have several directories which I use regularly. Instead of having to type the entire path I can create a PowerShell drive.
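A sketch of what that looks like in the profile (the drive name and path are just examples):

    # Map a frequently used folder to its own PowerShell drive
    New-PSDrive -Name projects -PSProvider FileSystem -Root 'D:\Projects' | Out-Null

    # Jump to it like any other drive
    Set-Location projects: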

Profile PS-Drive Example

Now, instead of having to type the entire path, I can simply use the new drive. This not only speeds up the way you enter your directory but also simplifies the reference to your current working directory.

Credentials

The next thing I do: when I'm part of a domain I usually need another user account to access my database servers, so I call the next piece of code to enter the credentials for my administrative account:
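Something like this (the prompt message is my own addition):

    # Ask once for the administrative account and keep it for the rest of the session
    $Credential = Get-Credential -Message 'Credentials for the administrative account'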

Get-Credential Window

You can pass the $Credential variable to other functions that need more privileges. For instance, if you use dbatools, a lot of the commands have a credential parameter to make sure you connect to the SQL Server instance with the right account.

Color schemes

Color schemes are important. There have been studies that show that certain fonts and colors make working on a console easier for the eyes.

I don’t like the colors for errors and warnings and if I have to go through a lot of them it’s hard to read the bright red on the blue background.

Profile Color Example

At least I can read this better than the original color scheme. Play around with this and you’ll see it makes handling messages from the console a lot easier.

Window and buffer size

It’s possible to change these values to make sure all your data fits inside your console.

Setting the location

For me it’s not always a good thing that the working directory is set to my home folder. This happens by default when you open a PowerShell console. If you open it with administrative privileges it points to  system32 folder of you Windows directory.

Conclusion

In the end you can load all kinds of settings in your profile to make your life a little easier. From the settings above to creating aliases, starting other scripts etc.

I hope this was informative for you and maybe you’ll get started with profiles too.