by William Hartz

Validate Views

Sometimes after moving databases between servers, some views end up broken. This query generates a test statement for every view in your database.

SELECT
'SELECT TOP 1 * FROM ' + QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name)
FROM sys.views
ORDER BY name;
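If there are a lot of views, you can go a step further and have SQL Server run the tests for you. This is a sketch of that idea rather than something from the original post – the cursor name and the error-handling choices are mine:

```sql
-- Sketch: execute a TOP 1 test against every view and report the ones that fail.
DECLARE @v NVARCHAR(600), @sql NVARCHAR(700);

DECLARE view_tests CURSOR LOCAL FAST_FORWARD FOR
    SELECT QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name)
    FROM sys.views;

OPEN view_tests;
FETCH NEXT FROM view_tests INTO @v;
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        SET @sql = N'SELECT TOP 1 * FROM ' + @v;
        EXEC sp_executesql @sql;
    END TRY
    BEGIN CATCH
        -- The view is broken; print the name and the reason.
        PRINT @v + ' failed: ' + ERROR_MESSAGE();
    END CATCH;
    FETCH NEXT FROM view_tests INTO @v;
END;
CLOSE view_tests;
DEALLOCATE view_tests;
```

Anything that prints is a view to fix; silent means the view still compiles and runs.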

Easier SSRS log querying

I’m working on setting up a scale-out SSRS installation. We’ll be shutting down the old URL soon, and everyone should be off it by then. This query tells me who is still hitting the old URL, which leads to the old server.

USE ReportServer;

SELECT *
FROM ExecutionLog2
WHERE InstanceName = 'OLDSERVER\MSSQLSERVER'
AND RequestType = 'Interactive'
AND TimeStart > GETDATE() - 1
--AND ReportPath NOT LIKE '/CIS%'
ORDER BY TimeStart DESC

We’re using the ExecutionLog2 view in the ReportServer database. Much easier than joining the underlying tables together and trying to build report paths with a CTE.
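To turn that into a quick list of who still needs to move, the same view aggregates nicely. The column names are the real ExecutionLog2 columns; the seven-day window is just my choice:

```sql
-- Who is still using the old server, and when did they last hit it?
USE ReportServer;

SELECT UserName,
       COUNT(*)       AS Hits,
       MAX(TimeStart) AS LastHit
FROM ExecutionLog2
WHERE InstanceName = 'OLDSERVER\MSSQLSERVER'
  AND RequestType = 'Interactive'
  AND TimeStart > GETDATE() - 7
GROUP BY UserName
ORDER BY Hits DESC;
```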

More information on this view.

http://blogs.msdn.com/b/robertbruckner/archive/2009/01/05/executionlog2-view.aspx


SQL Server Reporting Services scale-out cluster problems

I’ve been trying to get an SSRS scale-out cluster working the past few months.  I’d given up today and resorted to calling MSFT support.  I’d read all the manuals and supplemental articles like a proper admin should.  I’d posted questions on the MSFT forums to no avail.  Time to bite the bullet and sit on the phone with an expert.

I was getting 401 errors with not much help from the error logs. A 401 is a security error. The problem was intermittent, sometimes happening, sometimes not, and it always went away when a node was removed from the load balancer rotation. Major hint there. When you connect to the load balancer, you are routed to the same SSRS node until you time out at 10 minutes – this is called a “sticky session”. The SSRS servers themselves also got this treatment from the load balancer – and that was the problem.

The tech and I went through hours of checking configurations until he asked me to add an entry to the hosts file for the F5 name which reporting services was using.  This would make node 1 always request data from SSRS on node 1 and node 2 always request data from node 2.  Fixed the issue.  I imagine we can configure this on the load balancer as well, which I will probably have done.
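For the record, the hosts-file workaround looks something like this on each node – the name and address below are placeholders, not our real ones. Node 1 maps the load-balanced SSRS name to its own IP, and node 2 does the same with its own address:

```
# C:\Windows\System32\drivers\etc\hosts on node 1
# "reports.example.com" stands in for the F5 name SSRS uses;
# 10.0.0.11 stands in for node 1's own IP
10.0.0.11    reports.example.com
```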


DBase

Here at work, we need to import from dBase files every now and again. dBase is a strange breed of database if you’ve been working in SQL Server. To import into SQL Server, you use an OLE DB connection and then pick the folder that contains your dBase file – the file itself is actually just one table. http://msdn.microsoft.com/en-us/library/aa337084.aspx

I seem to remember running into a problem with the length of the DBase file name a year or so ago, so if you’re still having problems try shortening the name.
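For a quick one-off query you can also skip the import wizard and use OPENROWSET. This is a sketch, not something I have in production – it assumes the 32-bit Jet OLE DB provider is installed and ‘Ad Hoc Distributed Queries’ is enabled, and the folder and file names are placeholders. Remember that the “database” is the folder and each .dbf inside it is one table:

```sql
-- Sketch: read a dBase file directly. 'C:\ImportFiles' and customers.dbf
-- are placeholders; swap in your folder and file.
SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'dBase IV;Database=C:\ImportFiles',
                'SELECT * FROM customers.dbf');
```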

Inserting results of FILELISTONLY to a table

I was looking for a code example of this on the web but couldn’t find one, so here it is. You would need to build @cmd with your own variables, of course. Note that this column list matches SQL Server 2008/2008 R2; newer versions return additional columns, so adjust the table to match your version.

SELECT  @cmd = 'restore filelistonly  from disk = ''' + @localBackupPath
+ @dbname + '\' + @filename + ''''
PRINT @cmd
CREATE TABLE #files
(
[LogicalName] NVARCHAR(128) ,
[PhysicalName] NVARCHAR(260) ,
[Type] CHAR(1) ,
[FileGroupName] NVARCHAR(128) ,
[Size] NUMERIC(20,0) ,
[MaxSize] numeric(20,0) ,
[FileId] BIGINT ,
[CreateLSN] NUMERIC(25, 0) ,
[DropLSN] NUMERIC(25, 0) ,
[UniqueId] UNIQUEIDENTIFIER ,
[ReadOnlyLSN] NUMERIC(25, 0) ,
[ReadWriteLSN] NUMERIC(25, 0) ,
[BackupSizeInBytes] BIGINT ,
[SourceBlockSize] INT ,
[FileGroupId] INT ,
[LogGroupGUID] UNIQUEIDENTIFIER ,
[DifferentialBaseLSN] NUMERIC(25, 0) ,
[DifferentialBaseGUID] UNIQUEIDENTIFIER ,
[IsReadOnly] BIT ,
[IsPresent] BIT ,
[TDEThumbprint] VARBINARY(32)
)
INSERT #files
EXEC ( @cmd
)

Restore Database From Latest Restore

Here is a bit of code, adapted from Nigel Rivett’s original. You can use this to copy one database over another or restore the most recent database over itself.

--BEGIN TRAN

IF EXISTS ( SELECT *
FROM dbo.sysobjects
WHERE id = OBJECT_ID(N'[dbo].[s_RestoreLatestBackup]')
AND OBJECTPROPERTY(id, N'IsProcedure') = 1 )
DROP PROCEDURE [dbo].[s_RestoreLatestBackup]
GO

CREATE PROC s_RestoreLatestBackup
@dbname VARCHAR(128) ,
@destdbname VARCHAR(128) ,
@localBackupPath VARCHAR(200) ,
@localDBPath VARCHAR(200) ,
@localLogPath VARCHAR(200) ,
@recipients VARCHAR(128),
@filenamesuffix VARCHAR(50)
AS /*
exec s_RestoreLatestBackup
@dbname = 'hrmprot' ,
@destdbname = 'HRMPROT2',
@localBackupPath = 'K:\Backups\' ,
@localDBPath = 'I:\Data\' ,
@localLogPath = 'J:\Logs\' ,
@recipients = 'myemailaddress',
@filenamesuffix = '_2'
*/

DECLARE @cmd VARCHAR(2000) ,
@filename VARCHAR(128) ,
@s VARCHAR(128) ,
@i INT ,
@d DATETIME ,
@sql NVARCHAR(2000) ,
@StartDate DATETIME

SELECT @StartDate = GETDATE()

-- drop database
IF EXISTS ( SELECT *
FROM master..sysdatabases
WHERE name = @destdbname )
BEGIN
SELECT @cmd = 'drop database ' + @destdbname
EXEC (@cmd)
END

-- get latest backup filename
SELECT @cmd = 'dir /B ' + @localBackupPath + @dbname + '\*.*'
PRINT @CMD
CREATE TABLE #a ( s VARCHAR(2000) )
INSERT #a
EXEC master..xp_cmdshell @cmd

DELETE #a
WHERE s IS NULL
OR s NOT LIKE '%_backup_%'

SELECT *
FROM #a

SELECT @filename = MAX(s)
FROM #a

-- Get files in backup
SELECT @cmd = 'restore filelistonly from disk = ''' + @localBackupPath
+ @dbname + '\' + @filename + ''''
PRINT @cmd
CREATE TABLE #files
(
[LogicalName] NVARCHAR(128) ,
[PhysicalName] NVARCHAR(260) ,
[Type] CHAR(1) ,
[FileGroupName] NVARCHAR(128) ,
[Size] NUMERIC(20,0) ,
[MaxSize] numeric(20,0) ,
[FileId] BIGINT ,
[CreateLSN] NUMERIC(25, 0) ,
[DropLSN] NUMERIC(25, 0) ,
[UniqueId] UNIQUEIDENTIFIER ,
[ReadOnlyLSN] NUMERIC(25, 0) ,
[ReadWriteLSN] NUMERIC(25, 0) ,
[BackupSizeInBytes] BIGINT ,
[SourceBlockSize] INT ,
[FileGroupId] INT ,
[LogGroupGUID] UNIQUEIDENTIFIER ,
[DifferentialBaseLSN] NUMERIC(25, 0) ,
[DifferentialBaseGUID] UNIQUEIDENTIFIER ,
[IsReadOnly] BIT ,
[IsPresent] BIT ,
[TDEThumbprint] VARBINARY(32)
)
INSERT #files
EXEC ( @cmd
)

SELECT *
FROM #files
-- build the restore command
SELECT @cmd = NULL ,
@s = ''

WHILE @s < ( SELECT MAX(LogicalName)
FROM #files
)
BEGIN
SELECT @s = MIN(LogicalName)
FROM #files
WHERE LogicalName > @s


SELECT @cmd = COALESCE(@cmd + ', move ', '') + '''' + @s
+ ''' to '''
+ CASE WHEN #files.Type = 'D' THEN @localDBPath
ELSE @localLogPath
END
+ CASE WHEN #files.Type = 'D' THEN LogicalName + @filenamesuffix + '.mdf'
ELSE LogicalName + @filenamesuffix + '.ldf'
END
+ ''''
FROM #files
WHERE LogicalName = @s
END
SELECT @cmd = 'restore database ' + @destdbname + ' from disk = '''
+ @localBackupPath + @dbname + '\' + @filename + ''' with move '
+ @cmd -- + ', standby = ''' + @localBackupPath + 'standby.fil'''

PRINT @cmd

--restore the database
select (@cmd)

exec (@cmd)

drop table #files
drop table #a

-- Check that the restore was successful
select @sql = 'select @i = count(*) from ' + @destdbname + '..sysobjects select @d = crdate from master..sysdatabases where name = ''' + @destdbname + ''''
exec sp_executesql @sql, N'@i int out, @d datetime out', @i out, @d out

/*
if @d > @StartDate and @i > 20
begin
if @recipients is not null
begin
select @cmd = 'restore ' + @filename + ' completed successfully - started ' + convert(varchar(8), @StartDate, 112) + ' ' + convert(varchar(8), @StartDate, 108)
exec master..xp_sendmail @subject = @cmd, @recipients = @recipients, @message = @@servername
end
end
else
begin
select @cmd = '********************* restore ' + @filename + ' failed - started ' + convert(varchar(8), @StartDate, 112) + ' ' + convert(varchar(8), @StartDate, 108)
if @recipients is not null
begin
exec master..xp_sendmail @subject = @cmd, @recipients = @recipients, @message = @@servername
end
raiserror(@cmd, 16, -1)
end
*/
go

--COMMIT TRAN

Creating Restore Statements With Powershell

WARNING: Use caution in running scripts you find on the internet. You are responsible for the well-being of your systems!

We’re sitting around waiting for the SAN monkeys to fling some crap our way. Our primary cluster is down, which runs 100+ databases. Maybe the disks will come up – or maybe they won’t.

Assessing the situation, I found that msdb was not in full recovery mode. People, please don’t run around switching databases to simple recovery to ‘save space’ without thinking the ramifications through. In this case, it cost us our backup history, so we now need to figure out which backup file is the most recent and then which log files come after that .bak file – then we slam the restores together into one script, leaving the NORECOVERY clause off the last log restore statement. If you think I’m doing this by hand, you’re crazy.

We make an assumption here. Each file holds only one backup set. This is how we do our backup files, so I’m OK with this assumption.


$DatabaseBackupRoot = "\\MYSERVA\e$\Backups\"
$RestoreAllDatabases = ""

$DatabaseBackupDirs = gci $DatabaseBackupRoot

foreach ($DatabaseBackupDir in $DatabaseBackupDirs)
{
$DB = $DatabaseBackupDir.Name

$BackupRestore = ""
$LogRestore = ""
# Parameters

$Path = "\\MYSERVA\e$\Backups\$DB";
$FullPattern = "*.bak";
$LogPattern = "*.trn";

# Enumerate the last full backup
$FullBackup = Get-ChildItem $Path -Filter ($DB + $FullPattern) `
| sort Name `
| Select-Object -Last 1;

# Enumerate Log Backups
$LogBackups = Get-ChildItem $Path -Filter ($DB + $LogPattern) `
| Where-Object { $_.LastWriteTime -gt $Fullbackup.LastWriteTime} `
| sort Name;

$LogBackupsCount = $LogBackups.Count

$BackupFileName = $FullBackup.Name
# if the backupfile is null, skip this
if ($BackupFileName)
{

if ($LogBackupsCount -gt 0) # -gt, not >, which PowerShell treats as redirection
{
$BackupRestore = "RESTORE DATABASE [$db] FROM DISK = N'$DatabaseBackupRoot$DB\$BackupFileName' WITH FILE = 1, NORECOVERY, NOUNLOAD, STATS = 10
GO
"
}
else
{
$BackupRestore = "RESTORE DATABASE [$db] FROM DISK = N'$DatabaseBackupRoot$DB\$BackupFileName' WITH FILE = 1, NOUNLOAD, STATS = 10
GO
"
}

}
else{
#Empty directories end up listed in the output.
Write-Output $DB
}

$counter = 0
$len = $LogBackups.Count

foreach ($LogBackup in $LogBackups)
{
$counter++

$LogBackupFile = $LogBackup.Name;
#Test for no log/empty directory
if($LogBackupFile)
{
if($counter -lt $len)
{
$LogRestore = $LogRestore + "
RESTORE LOG [$db] FROM DISK = N'$DatabaseBackupRoot$DB\$LogBackupFile' WITH FILE = 1, NORECOVERY, NOUNLOAD, STATS = 10
GO
"
}
else
{
$LogRestore = $LogRestore + "
RESTORE LOG [$db] FROM DISK = N'$DatabaseBackupRoot$DB\$LogBackupFile' WITH FILE = 1, NOUNLOAD, STATS = 10
GO
"

}
}

}

$RestoreOneDatabase = $BackupRestore + $LogRestore

$RestoreAllDatabases = $RestoreAllDatabases + $RestoreOneDatabase

}

$RestoreAllDatabases | out-file "c:\temp\restore-MYSERVA.sql"

So now we’ve got a SQL statement with all these restores compiled together and separated with GO statements. Pretty neat. I’d probably run small blocks of them and fix any issues the script might have.

Monitoring SSAS for process failures

When a cube fails to process, you don’t want to hear about it from your customer. As the last step in our data warehouse load, I was processing a cube using the script generated from the SSMS dialog. That worked fine most of the time, but sometimes we’d have a failure in development after changes had been made during the day.

Here is what worked. A note on this – don’t use $( anywhere in your PowerShell code, or the SQL Agent job step will parse it as a token and tell you to escape it. My first thought was to send an email to the DBAs when the process failed – or rather, hadn’t happened within the last 12 hours – but I opted for a throw instead. Throw raises a nice error in the job and brings it to the attention of our central monitoring system.


## Add the AMO namespace
$loadInfo = [Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")

## Connect and get the edition of the local server
$connection = "SERVER01"
$server = New-Object Microsoft.AnalysisServices.Server
$server.connect($connection)

$databases = $server.get_Databases()
$WF_DB = $databases.GetByName("SSASDB1")

$d = Get-Date # split onto its own line because the SQL job step doesn't like the dollar sign in (Get-Date).AddHours(...)
$date = $d.AddHours(-12)

if ( $WF_DB.LastProcessed -lt $date)
{# Cube not updated since our threshold date

$dateprocessed = $WF_DB.LastProcessed
$lastupdate = $WF_DB.LastUpdate
$parent = $WF_DB.Parent

$message = "SSASDB1 on $parent has not processed since $date. The last process was on $dateprocessed. Manually reprocess the cube to see the error message or view the previous job step for additional information. Thanks!"
throw $message
}
else
{
echo "Cube updated after $date"

}

SQL Powershell

I’m getting a little tired of always clicking around in SSMS. Here goes a shot at PowerShell. I’ve done a few PowerShell tutorials, but don’t use it enough to remember much. The only thing I remember is Get-ChildItem and its alias GCI. That’s a start. Follow along with me.

User asks if a database was backed up last night. Of course it was, but let’s just make sure.

Right click on the databases node of the server I’m interested about in ssms and ‘Start PowerShell’

I end up at the databases path and try to copy the text out of the window with a mouse swipe to paste it in here. No dice, so I pop into properties and enable quick edit mode – I prefer it on in the CMD window. A highlight and a right-click will copy text out of the window.

PS SQLSERVER:\SQL\SERVER1\DEV08\Databases>

You can see the Server, instance name and then the databases node.

I enter my one command: GCI

A long list of databases fills the screen, but not the backup information I’m looking for. I know there is a PowerShell cmdlet (“command-let”, a PowerShell command object) which will tell me more about the children of the databases node – the databases themselves. Fortunately, everyone on the web likes to talk about it. It’s the Get-Member cmdlet, which will tell you the methods and properties of whatever object you feed, or “pipe”, it. Let’s pipe those database objects to Get-Member via its alias GM.

GCI | GM

Whoa. That’s a lot of output. I could use a little help sorting through it. Maybe I can pipe that list to another cmdlet and search for a string. There is probably a cmdlet called Find.

GCI | GM | FIND 'backup'
FIND: Parameter format not correct

So how do I know what a cmdlet is expecting?

find /?

That gives me some help info, but changing the single quotes to double quotes doesn’t seem to do the trick.

Get-Help find

There’s a listing of somewhat unrelated commands. What did I just do there? What happens if I don’t give it a parameter.

Get-Help

TOPIC
Get-Help

SHORT DESCRIPTION
Displays help about Windows PowerShell cmdlets and concepts.

LONG DESCRIPTION

SYNTAX
get-help {&lt;CmdletName&gt; | &lt;TopicName&gt;}
help {&lt;CmdletName&gt; | &lt;TopicName&gt;}
&lt;CmdletName&gt; -?

I expected to see a help page more like this when I typed Get-Help find, but instead I saw a listing – Find was being interpreted as a TopicName. Scanning that listing again, maybe there’s a more appropriate cmdlet I can use to find my search string in the output of Get-Member. Select-String stands out to me – “Finds text in strings and files.”

GCI | GM | Select-String "Backup"

System.Void DropBackupHistory()
System.Data.DataTable EnumBackupSetFiles(int backupSetID), System.Data.DataTable EnumBackupSetFiles()
System.Data.DataTable EnumBackupSets()
System.Boolean IsDbBackupOperator {get;}
System.DateTime LastBackupDate {get;}
System.DateTime LastDifferentialBackupDate {get;}
System.DateTime LastLogBackupDate {get;}

There it is – LastBackupDate – and even more.

So I’ll accomplish my task at hand.

GCI | SELECT NAME, LASTBACKUPDATE, LASTLOGBACKUPDATE

But I’m only interested in a couple databases which start with “Risk”.

GCI | where { $_.Name -like "Risk*" } | SELECT NAME, LASTBACKUPDATE, LASTLOGBACKUPDATE

The $_ represents the object being passed down the pipeline from the previous cmdlet, so you can filter on any property you found in the Get-Member listing.

Mission accomplished! That gives me what I need and I can feel smug about not clicking around in SSMS. True, it took me five minutes to learn how to do this and would take me a little longer than clicking on the two databases which were returned, but learning this will let me complete other more complex tasks.

Oh yeah, one other thing. Out-Host -Paging will let you page through the sometimes lengthy help output. oh -p also works.