Frequent monitoring of server status is a critical DBA task. This is compounded when there are multiple servers running many jobs through the SQL Server Agent. Recently we were faced with assessing the job load on multiple servers, particularly the number and frequency of these jobs.

SQL Server exposes virtually all of this information and it can be easily accessed through queries. My first humble attempt looked like this:
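The query itself did not survive in this copy of the post, but a minimal sketch of a first attempt, reconstructed from the result columns shown below and using the msdb system tables, might look like this:

```sql
-- Sketch only: list every SQL Server Agent job and its steps.
-- Column aliases match the result table shown below.
USE msdb;

SELECT
    j.name      AS [Name],
    s.step_id   AS [Step],
    s.step_name AS [Step Name],
    s.subsystem AS [Type],      -- e.g. 'TSQL', 'SSIS'
    CASE j.enabled WHEN 1 THEN 'Yes' ELSE 'No' END AS [Enabled]
FROM dbo.sysjobs     AS j
JOIN dbo.sysjobsteps AS s ON s.job_id = j.job_id
ORDER BY j.name, s.step_id;
```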

This is the result:

Name                     Step  Step Name                Type  Enabled
Update DW                1     Create Snapshot          TSQL  Yes
Backup Transaction Logs  1     Backup Transaction Logs  SSIS  Yes

So far I know which jobs are defined on the server and whether they are enabled. What is missing is how often each job is scheduled to run. These servers have been operational for years, and during that time new jobs were created without considering the load already placed on the server.

The schedule dialog for jobs in SQL Server Management Studio is very straightforward. This is not the case for the SQL required to return the same information. There are several joins required to access the schedule data and there are dependent columns that require manipulation. The Microsoft documentation for dbo.sysschedules clearly defines these columns.
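Going by that documentation, the join path runs from dbo.sysjobs through dbo.sysjobschedules to dbo.sysschedules. A sketch that surfaces the raw schedule columns before any decoding might be:

```sql
-- Sketch: expose the raw schedule columns that need decoding.
USE msdb;

SELECT
    j.name,
    sch.freq_type,            -- 4 = daily, 8 = weekly, 16 = monthly, ...
    sch.freq_subday_type,     -- 1 = at the specified time, 4 = minutes, 8 = hours
    sch.freq_subday_interval, -- the "every x" value for minutes/hours
    sch.active_start_time,    -- int, HHMMSS with leading zeros dropped
    sch.active_end_time       -- same encoding
FROM dbo.sysjobs         AS j
JOIN dbo.sysjobschedules AS jsch ON jsch.job_id     = j.job_id
JOIN dbo.sysschedules    AS sch  ON sch.schedule_id = jsch.schedule_id;
```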

This is the information I need to retrieve:

  • Interval: Monthly, Weekly, Daily
  • Frequency: every x days, hours, minutes
  • Start time
  • End time

Surprisingly, the start and end times were much harder to decipher than the rest. Both are stored as integers in HHMMSS format. If the value were a string it would be simple to parse, since it would always be six characters long. Integers do not store leading zeroes, however, so 2:35 am is represented as 23500 and midnight is simply 0.
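One way to normalize the integer (an illustration of the idea, not necessarily the exact expression from the original post) is to split out the hour, minute, and second components arithmetically and pad each back to two digits:

```sql
-- Decode an HHMMSS integer such as 23500 (2:35:00 AM) or 0 (midnight)
-- into a 12-hour time string.
DECLARE @t int = 23500;

SELECT RIGHT('0' + CAST(CASE WHEN @t / 10000 = 0  THEN 12                -- midnight hour
                             WHEN @t / 10000 > 12 THEN @t / 10000 - 12   -- afternoon hours
                             ELSE @t / 10000 END AS varchar(2)), 2)
     + ':' + RIGHT('0' + CAST((@t / 100) % 100 AS varchar(2)), 2)        -- minutes
     + ':' + RIGHT('0' + CAST(@t % 100        AS varchar(2)), 2)         -- seconds
     + CASE WHEN @t / 10000 < 12 THEN ' AM' ELSE ' PM' END AS [Start Time];
-- Returns 02:35:00 AM
```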

Searching on 'active_start_time' did turn up a post here, but it (a) did not handle all permutations and (b) returned everything in 24-hour rather than 12-hour format. By implementing a few nested CASE expressions I was able to return exactly the information we needed.
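The nested CASE expressions are not reproduced in this copy of the post, but based on the documented freq_type and freq_subday_type codes in dbo.sysschedules, the Frequency column could be derived roughly like this:

```sql
-- Sketch: derive a human-readable Frequency from dbo.sysschedules codes.
SELECT
    CASE sch.freq_type
        WHEN 4 THEN                        -- daily schedule
            CASE sch.freq_subday_type
                WHEN 1 THEN 'Once daily'   -- runs once, at active_start_time
                WHEN 4 THEN 'Every ' + CAST(sch.freq_subday_interval AS varchar(3)) + ' minute(s)'
                WHEN 8 THEN 'Every ' + CAST(sch.freq_subday_interval AS varchar(3)) + ' hour(s)'
            END
        WHEN 8  THEN 'Weekly'
        WHEN 16 THEN 'Monthly'
        ELSE 'Other'
    END AS [Frequency]
FROM msdb.dbo.sysschedules AS sch;
```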

This is the result:

Name                     Step  Step Name                Type  Enabled  Frequency           Start Time   End Time
Update DW                1     Create Snapshot          TSQL  Yes      Every 15 minute(s)  12:00:00 AM  11:59:59 PM
Backup Transaction Logs  1     Backup Transaction Logs  SSIS  Yes      Once daily          02:30:00 AM  11:59:59 PM

Handling Multiple Servers

This script is fine for a single server, but in our case we need to monitor approximately 40 of them. How can this be automated? Of the several possible approaches, the one I find by far the most scalable and customizable uses SSIS. The post entitled Centralizing Data Collecting provides step-by-step instructions for creating the SSIS project and log files. It took me about 10 minutes and worked flawlessly the first time.