
Tag Archives: Problem Solving

DPM… and “Why are my SQL Transaction Logs filling up?” (the answer… really)


Here at Avanade we run a lot of pre-release software from Microsoft, so we can work the bugs out, so to speak, and deliver better solutions to our customers.

One of the packages we use is the System Center suite – including, in this case, the Data Protection Manager (DPM) piece.  It has been a released, production product for some time now, and we run all our backups with it.

The thing that is totally weird for me as a DBA is that all of the backup jobs I had on the boxes got disabled, and DPM took over. It was like driving a car with an automatic transmission for the first time. “What do you mean I don’t have to do backups?”

If you have things set up properly (which is a deeper subject than this blog post is meant for) – it’s close to hands off – which, come to think of it, is still weird.

So, over the time we've had it, I've found several things to be aware of in the implementation.  This isn't an all-inclusive list of surprises – just a couple of the things I found out after researching them myself and wading through a lot of bad information out there.

The information below is from my own experience, from conversations with Microsoft, and from research I’ve done.  That said, my goal is to help keep you out of the DPM weeds by helping you:

  1. Understand how DPM handles SQL backups in the varying recovery models you can have (full, simple, etc.)
  2. Understand where it needs drive space. (this can be an absolutely evil ‘gotcha’ if you’re not careful)
  3. Edge cases.
  4. How items 2 and 3 can intertwine and get you into deep doo-doo and what you want to do to stay out of it.

So, ready?

Oh – assumptions:

You’ve got DPM installed, and for the most part, configured.  It’s working, but you have transaction log drives filling up on some of your servers, and it’s not really clear why.

Wanna know why?

Here’s the answer:

It’s because the UI is very unclear, because the documentation is unclear (there was a hint of it on page 83), and because the things that would be obvious to a DBA simply aren’t mentioned.

So, having said that – let’s explain a little.

After a few years of running it – and we flog it to within an inch of its life – I’ve come to, if not make friends with it, then at least give it grudging respect.

But you have to know what you’re doing.

So first off: Settings.

You organize your backups in DPM in what are called protection groups.

Think of it as a glorified schedule, with lots of properties you can adjust.

Now when you’re creating the protection group, you can create it to back up file servers, Exchange servers, and SQL servers.  We’ll talk about SQL servers only here.

So when you back up a SQL box, you might have some databases (let’s say the system ones: Master, Model, and MSDB) in the simple recovery model, and the user databases in full recovery.

What you’d do is create two protection groups.

One for the system databases, for simple recovery.

And one for the user databases, for full recovery.
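
If you’re not sure which bucket a given database belongs in, the recovery model is right there in sys.databases – a quick query like this (plain T-SQL, nothing DPM-specific) will sort them for you:

--List every database and its recovery model, so you know which
--protection group (simple vs. full) it belongs in.
SELECT   name, recovery_model_desc
FROM     sys.databases
ORDER BY recovery_model_desc, name;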

And this is where the first gotcha comes into play.

See, when you create that protection group, you’re going to come across a screen during creation that gives you this choice… It’s like The Matrix… Blue pill? Red pill?

And, when we were setting it up, we scoured the documentation to try to figure out how to set it up for databases that were in full recovery as well as being mirrored.

And we kept running into problems with full transaction logs.

It turned out I wasn’t alone in this.

I googled the problem…

I binged the problem..

Heck, I even duckduckgo’ed the problem.

Everyone, it seemed, had the same question.

And the answers were surprisingly varied.

And most, honestly, were wrong.  (Note: that’s not to sound arrogant, this was a tough one to crack, so a lot of folks were coming up with creative ways to try to work around this issue)

Some folks were doing what we’d done initially *just* to keep the systems running (manual backup, flip to simple recovery, shrink the logs, flip back to full recovery) – yes, we absolutely shredded the recovery chain, just shredded it – but the data in that application was very, very transient, so keeping it up and functioning was more important than keeping the data forever.

So while we were frantically – okay, diligently – searching for answers and managing the problem, we were also looking for a cure, because there was no possible way this could be an enterprise-level application if it was behaving so badly… Right? There had to be some mistake, some setting we (and everyone else in those searches above) weren’t seeing, and it finally ended up in a support call.

My suspicion was that the transaction logs weren’t being backed up at all, even though that’s what we thought we were setting.

I’d been around software long enough to know that clicking on an insignificant little button can have catastrophic results if it’s not the button you meant to push.

And this was one of them.

See, the databases that were in full recovery (namely those in that app that had the mirroring) were our problem children.  Databases in simple recovery weren’t.

It made me wonder why.

And one day, I was on the phone about another DPM issue (one for another post), and I asked the question, “So what exactly is happening if I click on this button versus that one? Because my suspicion is that the tlog is not getting backed up at all.”

And then I asked the more crucial question for me, and likely for you who are reading this:  “What code is being executed behind these two options?”

And the fellow over at Microsoft researched it for me and came back with this:

“In DPM, when we setup synchronization for SQL protection, those Syncs are log backups. DPM achieves that by running the following SQL Query

BACKUP LOG [<SQL_DB>] TO DISK = N'<Database_location_folder>\DPM_SQL_PROTECT\<Server_Name>\<Instance>\<DB_Name>_log.ldf\Backup\Current.log'

Recovery Points are Express Full backups. If we setup a group to run sync just before recovery points, DPM will create a snapshot from the replica and then create a Full Database backup (sync).

BACKUP DATABASE [<SQL_DB>] TO VIRTUAL_DEVICE='<device_name>'

WITH SNAPSHOT,BUFFERCOUNT=1,BLOCKSIZE=1024

In this case we will never have log backup thus no log truncation should be expected.”

What does this mean?

It means that if you have a database in full recovery, you will want to put it in a protection group that is set to schedule the backup every X minutes/hours like this:

In DPM, click on “Protection” tab (lower left), then find the protection group.

Right click on it and choose ‘Modify’ as below.

[Screenshot: right-clicking the protection group and choosing ‘Modify’]

Expand the protection group and pick the server you’re trying to set up backups for – you’ll do some configuring, and you’ll click Next a few times, but below is the deceptively simple thing you have to watch…  This dialog box – which will have the protection group name up at the very top (where I’ve got ‘GROUP NAME HERE’ stenciled in) – can bite you in the heinie if you’re not careful.  So, given what I’ve written above, and from looking at this, can you tell whether this is backing up databases in full or simple recovery?

[Screenshot: synchronization frequency set to every 1 hour – the full recovery setup]

See how it’s backing up every 1 hour(s) up there?

That means the code it’s running in the background is this:

BACKUP LOG [<SQL_DB>] TO DISK = N'<Database_location_folder>\DPM_SQL_PROTECT\<Server_Name>\<Instance>\<DB_Name>_log.ldf\Backup\Current.log'

We’ll get into more detail in a bit, but this means your transaction logs won’t fill up.  This is the setting you want for the protection group that backs up your databases in full recovery mode (including the ones that are mirrored or in Availability Groups).  The other option you have is to back up “Just before a recovery point” – which, if you’re thinking in terms of SQL and transaction logs, really doesn’t make a lot of sense.  We went through the documentation at one point, and I think we were about 83 pages in before it gave any indication of what it *might* be doing here – and even then it wasn’t clear, but now we know.  So what you’d want in this protection group is a bunch of databases in full recovery mode.  You might want to create different protection groups for different servers, or different schedules – that’s all up to you.  The crux is: if it’s a database in full recovery mode, this is how you want to set it up, by backing up every X minutes/hours… Making sense?
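
Incidentally, you don’t have to take the radio button’s word for it.  Every backup taken against the instance – DPM’s included – gets recorded in msdb, so a quick query like this (a minimal sketch, assuming you can read msdb on the protected server) will show you whether you’re actually getting log backups (‘L’) or only the snapshot-based express fulls (‘D’):

--Most recent backups recorded on this instance, newest first.
SELECT TOP (50)
       bs.database_name,
       bs.type,                    -- D = full, I = differential, L = log
       bs.is_snapshot,             -- 1 = the VDI/snapshot 'express full'
       bs.backup_finish_date,
       bmf.physical_device_name
FROM   msdb.dbo.backupset AS bs
JOIN   msdb.dbo.backupmediafamily AS bmf
       ON bmf.media_set_id = bs.media_set_id
ORDER BY bs.backup_finish_date DESC;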

Okay, let’s take a look at the other option…

[Screenshot: synchronization set to ‘Just before a recovery point’ – the simple recovery setup]

If you have a database in simple recovery, you’ll want to put it in a protection group that does its backups just before the recovery point – and that’s what the screenshot above shows.  When you click on that radio button, the code it runs in the background (if you’re backing up SQL databases) is this:

BACKUP DATABASE [<SQL_DB>] TO VIRTUAL_DEVICE='<device_name>'

WITH SNAPSHOT,BUFFERCOUNT=1,BLOCKSIZE=1024

And you should be set.

You can change the frequency of the express full backups by clicking the ‘Modify’ button in the dialog above, and you’ll have quite a few options there.

Understand, you have several different buckets to put your databases in.

  1. Simple recovery (see above)
  2. Full recovery (see above)
  3. Whatever frequency you need for your systems (from above)
  4. Whatever schedule you need for your systems (from above)

Believe it or not, that’s it.

Put the right things in the right place, and DPM is close to a ‘set it and forget it’ kind of a deal.

However…

…there are some Gotchas and some fine print.  This is stuff I’ve found, and your mileage may vary – but just be aware of the below:

  • If you put a db that’s in simple recovery into the protection group meant for databases in full recovery, you’ll likely get errors, with DPM complaining that it can’t back up the log of a database that’s in the simple recovery model. Since you manually put that db into that protection group, it will be your job to get it out and into the right one.  That will make the alerts go away.
  • If you put a db that’s in full recovery mode into the protection group meant for simple, you’ll fill up your transaction logs, fill up your disks, and your backups will fail – and you may, depending on a few factors, hork up your database pretty badly (this is what most people complain about). Since you (or someone on your team) likely put the db in the wrong protection group, putting it in the right one is the first thing to do, and that will solve the disk space issue… Having enough space on your log drives is critical at this point, because DPM will start to make copies of your transaction logs as part of its backup process and will need the room (as in, half of your transaction log drive).  More details below.
  • I’ve found a couple of Corollaries to go with this:
    • Corollary 1: DPM creates a folder on your transaction log drive called “DPM_SQL_PROTECT” – it stores copies of the transaction logs in there.  Those are the ‘backups’.
      • You have a choice between compressing backups and encrypting them…
      • If you encrypt them, they’re full sized, even if they’re empty.
      • So if you have transaction logs filling up 50% of your t-log drive – guess what’s filling up the other half?  (DPM’s t-log backups.)  That DPM_SQL_PROTECT folder is a staging folder and is sometimes full, sometimes not (making it devilishly hard to monitor for), but you need to be aware that if that folder fills up half the drive, you’re running very close to disaster, and that’s when you have to start getting creative in your problem solving (see the examples below).
    • Corollary 2: DPM can be configured to put the DPM_SQL_PROTECT folder on a separate drive, which may suit your needs and is a topic for a separate post – but if you run your transaction log drives pretty full and have cheaper storage available, this might be an option to consider.  We don’t have ours architected that way, so it’s an option I’ve not tried.
  • Examples of things that can go very wrong (like the disaster mentioned above)
    • If you are working with a clustered SQL server and are getting alerts because your log file can’t grow, chances are it’s because your transaction log drive (or mountpoint) is full – and it will be full of both transaction logs and DPM’s staged backups of the transaction logs.  To fix this, you will need to either:
      • Extend the drive/make it bigger  (assuming that’s an option for you) and then restart your DPM backups.
        • Note: DPM will likely want to run a ‘validation’ at this point, which will take some time.  My recommendation is to let it do that, but there’s a huge “it depends” associated with this one.  Sometimes – depending on how long things have been broken before you were able to get to them – you might find yourself essentially taking that database out of DPM’s protection group and starting over.  It breaks the recovery chain, but it can be faster than letting DPM validate what it thinks your latest backup is against your existing one (where you likely broke the recovery chain with the manual log backups)… Like I said, it depends.
      • (Not advised, but if you’ve run out of options) back up the database once, then back up the log manually and repeatedly (via SQL, not DPM, because you’re trying to empty the drive that DPM has filled up) until you can shrink the transaction log, so you have space on the drive for DPM to actually stage a copy of the log file for backup.  There’s a rough sketch of this sequence just after this list.
        • Once you’ve done that, remember, you’ll have fixed one issue but created another, namely, your recovery chain ends in DPM where you started doing manual SQL backups.  Now you have backups in DPM, and a full + a bunch of log backups in SQL.  Make sure you have a full set of backups in case things go wrong.
    • You’re working with a mirrored server, or a server that’s part of an availability group, and the databases are in the wrong protection group (simple instead of full recovery)… You’ve got transaction logs filling up from the replication that’s involved, and transaction logs filling up because they’re not being backed up… It gets ugly.  I ran into this (and wrote about our resolution to it here) where we had issues with log file growth, high numbers of virtual log files, and an availability group with multiple geographically dispersed secondaries… It was, um… “fun…” <ahem>
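
For what it’s worth, here’s a rough T-SQL sketch of that last-resort sequence from the list above.  The database name, logical log file name, and paths are placeholders, and remember – this deliberately breaks the DPM recovery chain, so only reach for it when the drive is about to fill and you’re out of better options:

--1. Take one full backup so you have a starting point for recovery.
BACKUP DATABASE [YourDb] TO DISK = N'X:\Emergency\YourDb_full.bak';

--2. Back up the log (repeat as needed) so the inactive portion can be reused.
BACKUP LOG [YourDb] TO DISK = N'X:\Emergency\YourDb_log_01.trn';

--3. Shrink the log file to free space on the drive so DPM can stage its copy again.
USE [YourDb];
DBCC SHRINKFILE (N'YourDb_log', 1024);   --target size in MB; pick what fits your drive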

So…  All this to say something very, very simple: pick the right protection group, know what DPM is doing behind the curtain, and honestly, you should be good.  If you understand which radio button to select when you’re configuring the protection group, you as a DBA are about 90% of the way there.  Make sure your transaction log drives are twice as big as you think they should be (or configure DPM to store its staging copies elsewhere), because chances are you’ll be using half of the drive for the logs themselves, and the other half for temporary storage of the backups of those transaction logs.

Know what your protection groups will do for you… Know the gotchas, and DPM will, strangely enough, be your friend…

Take care out there – and keep your electrons happy.

 

Posted on June 21, 2016 in Uncategorized

 


Thinking outside the Box in Disaster Recovery


I had an interesting ‘aha’ moment a while back.

I was trying to take a well-earned day off from work, visit my mom on her birthday, and in general, do a little decompressing.

I’d recently taken her portrait (a profession from a former life) and my goal was to give it to her and later take her to an appointment and then lunch.  We were between the appointment and lunch when my phone buzzed.

It was one of my Faithful Minions from the land of Eva Perón, who was in a bit of a bind.

Seems that a server had a database that had gone corrupt.

No, not the Chicago style “vote early, vote often” corruption.

Database level corruption.

It seems that this server had been giving us some trouble with backups (it didn’t want to back certain databases up, depending on what node of the cluster it was on) – and the team responsible for fixing that had been working on it for a few days – with limited success.

Aaaand another one of my Faithful Minions, from the land of Chicken Curry, had tried several things, one of which included detaching the database.

Which worked.

It’s just that once you detach it, you likely want to reattach it.

But trying to reattach a database that’s been munged to some degree is a bit of an issue, as SQL tries to recover the database and chew through the transaction log file prior to bringing it online.

That’s as it should be.

Problem is, this db had some level of corruption that would cause an issue when it got to 71% recovered. Stack dump, the whole nine yards.

It was, in a word (or several) a perfect example of a disaster recovery exercise. Yay! I’d always wanted to have one of my very own…

(well, not really, and not on my day off, and…)

Hmm…

I gave my Faithful Minion from the land of Perón some instructions while sitting in the car in front of the restaurant Mom and I were going to have lunch at, then tried to enjoy birthday lunch with her – but my mind kept getting pulled, dripping, out of the clam chowder and back to the database while Minion worked his Google Fu to try to figure the error out. Mom and I finished up lunch and I took her home, where I logged in to see what Minion and I could accomplish.

So the thing is, we use DPM (part of Microsoft’s System Center suite of products) to do backups. That’s Microsoft’s Data Protection Manager, a program – no – a system that really takes some getting used to. There will be another post in this series about it – but one of the big things you have to get used to as a DBA is that you don’t do backups anymore…

You don’t use Maintenance Plans.

You don’t use Ola Hallengren’s or any of the other amazing tools out there to handle backups.

You really don’t do backups any more.

DPM does.

No backup jobs on your servers…

No backup drives…

No…

Backups…

DPM handles them all semi-invisibly…

It is – well… weird.

It took me a long time to wrap my head around it.

But it works.

And that’s where this situation kind of got a little strange.  (Well, strange*r*)

See, DPM creates Recovery Points (its version of a backup), and it stages the backup on disk – either locally on the SQL box or on the DPM server, where MDF and LDF files are created with some sort of wizardry – before it’s shipped off to tape.

So, we poked and prodded, and did all sorts of technical things, until my next set of Minions from The Pearl of the Orient came online – and we tried to find the backup of the database from DPM.  This took a good while.

Longer than we were expecting, actually.

While they were looking and I was troubleshooting, much time had passed in the land of rain (a bit south of Seattle) and lunch was long, long past.  Mom made some dinner while I worked on my laptop, sitting in my dad’s old chair, and I said goodbye to and thanked a very tired Minion in the land of Perón.  Meanwhile, Minions in the Pearl of the Orient, and Minions (and a Minion in Training) in the land of Curry, were all trying to help, trying to troubleshoot, and in the case of Minion in Training, trying to learn by soaking up individual droplets of knowledge from the fire hose of a real live DR exercise.

Which is where it got interesting…

See, with DPM, you can choose a recovery point you want to restore to, and what it will do is simply replace the MDF and LDF files on your SQL box with the ones it’s reconstructed in all its magic.

The good news about that? Once you’ve learned how DPM works, anyone can pretty much restore anything that’s been backed up. That means Exchange servers, file servers, SQL servers, you name it…

It is (and I hesitate to use this word at all) kind of cool.

You need a restore?

All you have to do is open up DPM, find the database you need…

<click>

<typety type>

<find the backup that happened just prior to whatever blew up.>

<click> <click>

<click>

<pick the recovery point you want>

<click> … scrollllllllll… scrollscrollscroll… <click> <click>

Then you wait till DPM says it’s ready.

And tadaa…

That is…

If DPM had backed that database up.

But for that particular database…

On that particular node…

Of that particular cluster…

That database had not been backed up.

By DPM.

For… Let’s just say it was out of SLA, shall we?

It was truly a perfect storm.

The team *had* been working on it – it just hadn’t gotten actually fixed yet, and the crack that this database had slipped through grew wider and wider as it became clear to us what had happened.

It’s the kind of storm any one of us could run into, and this time it was me.

We looked high and low for a DPM backup set, and found one that was supposed to be on tape.  But, given that this was Thursday where I was and Friday where various minions and co-minions were, the earliest we could get the tape would be Monday…

More likely Tuesday…

On a scale of one to bad, we were well past bad.  We’d passed Go, had not collected our $200.00 – and…

Well, it was bad.

Several days’ worth of waiting on top of the existing time lost wasn’t really going to cut it.

Then, Co-Minion found that there was a DPM backup on disk – but a day older than the one on tape.

I could have that one in two hours (the time it took to copy the MDF/LDF files over and put them in the right location).

Hmmmm. A backup in the hand is worth two in the – um… Tape library?

So we copied it over.

And I attached it…
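
(For the curious, attaching copied data and log files comes down to a single statement – something like the sketch below, where the database name and file paths are just placeholders.)

--Attach the copied MDF/LDF files as a database.
CREATE DATABASE [RecoveredDb]
ON (FILENAME = N'D:\Data\RecoveredDb.mdf'),
   (FILENAME = N'L:\Logs\RecoveredDb_log.ldf')
FOR ATTACH;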

And now we had a database on the box that worked.

Yay, system was up and running.

But…

it wasn’t the database I wanted.

Like any of you out there, I wanted the backup from just before the database had blown up. But because of all the troubleshooting that had happened before I got involved – some of which actually made the situation quite a bit more challenging – it turned out I couldn’t do anything with the files I had… so, reluctantly, I went with the older files, simply because I could get them quicker.

And the next day, while trying to fix another issue on that cluster, I got to talking to the guy who runs the SAN and the VM’s, and I explained what all we’d gone through over the course of getting that database back, and he said, “Wait, when did you need it for?”

I told him.

“Oh, I’ve got copies on the SAN of those drives – we could mount it if you need – 12 hours ago, 24 hours ago, whatever…”

He… You…

What?

I realized what he was telling me was that if I’d contacted him (or, frankly, known to contact him at that time of night) – I could have had a much more recent backup sooner instead of spending so many hours trying to fix the existing, corrupt database.

What it meant, was I was thinking inside the various boxes I was dealing with…

Trying to get a *SQL* backup…

Trying to get a *DPM* backup…

When what I needed, frankly, was bits on disk.

I needed MDF and LDF files.

Not only could DPM get me MDF and LDF files, but so could the SAN.

And the SAN backups had them.

I just didn’t know it.

By the time I found out, the old database we’d attached had had data written into it and had been up for a good while. Merging the two would have been far, far more of an issue than it was worth (I’d had experience with that once before as an end user).

So where does this leave us?

It means that if you’ve got a SAN as your storage mechanism and it’s backing itself up, you might find yourself able to think outside the box you’re used to when trying to restore or recover a database.  Go buy the person or people running your SAN a cup of decent coffee and just chat with them, and see if this kind of thing is possible in your shop.

They might be able to help you out (and save you substantial amounts of time) in ways you quite literally couldn’t have imagined.

In the end, I learned a lot…

My team learned a lot.

My mom learned a lot.

I will be making some changes based on what we learned, with the goal of having more resiliency in both the hardware resources we have and in my faithful Minions, who worked hard at trying to fix things.

Ultimately, one of the takeaways I got from this was something simple, but also something very profound.

Make sure the stuff you’re doing in your day job continues to work, so you can actually have a day off to spend with your loved ones when you need it.

Oh – and the portrait of mom? She loved it.

It’s here:

[Photo: the portrait of Mom I delivered to her.]

 

Take care out there, folks…

Tom

 

 

Posted on April 1, 2016 in Uncategorized

 


Adventures in Fixing VLF issues on a database in an Availability Group, backed up with DPM, with multiple secondaries


Hey all,

We had a server failover recently during a demonstration to one of the C-level folks, and as you might imagine, it was brought to my attention. The reason for the failover wasn’t something I could control, but the length of time the relevant database took to come up was something I could – so when it finally came up, I found this little note in the errorlog:

Database <dbname> has more than 1000 virtual log files which is excessive. Too many virtual log files can cause long startup and backup times. Consider shrinking the log and using a different growth increment to reduce the number of virtual log files.

Hmm…

More than 1000, huh?

Slow recovery from failover, huh?

Hmm…

Where had I seen this before?  Oh yeah – some years back, when we were doing datacenter maintenance, and the *one* SharePoint database that had all our (ahem) “How to bring up a datacenter” documentation in it took half an hour to come back – I remember now!

So if you don’t know – having a metric buttload (note – that’s a real term there, go look it up) of virtual log files means that every time your server fires up a database, it has to replay the transaction log to get all up to date and everything, and if the transaction log’s got a bunch of virtual log files, it’s like trying to run data backwards through a shredder… it takes time, and that’s the issue.

I remembered Michelle Ufford’s (blog | twitter) post that I’d found back then about counting virtual log files, and tried it on this server now that it was up.

And got this nice little message:

Msg 213, Level 16, State 7, Line 1

Column name or number of supplied values does not match table definition.

Back to “Hmmm…”

Well, I knew dbcc loginfo(dbname) from her script was inserting into a table that the script created, and it had worked before. It had been a while since I’d run it – so I figured maybe there was some change in what it returned based on the SQL version. So I decided to see what it was inserting…

Well hey – the original script’s staging table was missing a field that DBCC LOGINFO now returns as its first column.

So I added it (RecoveryUnitID – the first column in the table definition below)

…and it worked fine.

Turns out SQL 2008R2 (where the original script worked) returns different fields than 2012 and 2014 (where it didn’t).

I figured I didn’t want to find out which version of the script to use every time I needed to run it on a server, so I told the script to figure that out by itself, and then run the appropriate hunk of code (example below)

So what you see in that code is a combination of this:

https://msdn.microsoft.com/en-us/library/ms174396.aspx from Microsoft to check the version number,

and this

http://sqlfool.com/2010/06/check-vlf-counts/ from Michelle Ufford.

A couple of things to understand here:

  1. This code won’t fix the problem (read Kimberly’s posts below for that)
    1. http://www.sqlskills.com/blogs/kimberly/transaction-log-vlfs-too-many-or-too-few/
    2. http://www.sqlskills.com/blogs/kimberly/8-steps-to-better-transaction-log-throughput/
  2. It will, however, show you that a problem exists.
  3. …that you have to fix and manage (Back to Kimberly’s posts)

You’ll want to do some fixing that involves the following (there’s a rough T-SQL sketch just after this list):

  1. Lots of log backups and log file shrinking, then log growing (See Kimberly’s posts for more info on this)
  2. Adjusting the auto-growth setting from the 1 MB/10% defaults to something significantly larger (Meanwhile, back in Kimberly’s posts… Seriously, check them out)
  3. Oh, and make those adjustments on the Model database so that every new database you create has decent growth patterns instead of the 1 MB/10% defaults.
  4. Tadaa – you should find much faster failovers on your clustered systems (assuming you’re still using them and not Availability groups)
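
Here’s roughly what steps 1 through 3 look like in T-SQL.  Treat it as a sketch: the database name, logical file names, sizes, and growth increments are placeholders – read Kimberly’s posts before you pick real numbers for your environment.

--1. Back up the log (repeat until the active portion moves to the front), then shrink it.
BACKUP LOG [YourDb] TO DISK = N'X:\Backups\YourDb_log.trn';
USE [YourDb];
DBCC SHRINKFILE (N'YourDb_log', 1);      --shrink as far as it will go for now

--2. Grow the log back out in large chunks and set a sane growth increment
--   instead of the tiny defaults.
ALTER DATABASE [YourDb]
    MODIFY FILE (NAME = N'YourDb_log', SIZE = 8192MB, FILEGROWTH = 4096MB);

--3. Fix model so new databases don't start life with the 1 MB/10% settings
--   (assumes the default logical log file name of 'modellog').
ALTER DATABASE [model]
    MODIFY FILE (NAME = N'modellog', SIZE = 1024MB, FILEGROWTH = 512MB);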

So… that made it look easy, right? Well, let me give you the Paul Harvey Version, “The Rest of the Story.”

What I learned, and what complicated things a bit…

My system was an Availability group with two nodes running synchronously, and two nodes in two different datacenters running asynchronously.

They’re backed up by Microsoft’s Data Protection Manager (DPM) that is configured to back up the secondary node.

So my plan was to back up the secondary – only when I did, it told me that the last recovery point (what a backup’s called in DPM) had something to do with Marty McFly, namely that it had been backed up in the future – about 8 hours into the future.

Hmm… Okay, that told me the secondary it was backing up was one of the asynchronous ones – either in the land of fish and chips or the land of beer and pretzels. Either way, it was a weird way to think about trying to preserve the recovery chain.

I created recovery points through DPM, which did it on one of those secondaries, which, strangely, helped to clear the log on the primary.

Then I shrunk the log, in stages, on the primary, which then replicated out to the various secondaries locally and across the pond.

I was able to get the VLF count down from well north of 33,000 to just over 600. The log wouldn’t shrink any more than that, which told me there was a transaction hanging on toward the end of one of the virtual log files – and frankly, for right then, that was okay.

To fix it further, I almost entertained the idea of flipping the recovery model from full to simple, clearing the log file completely, then flipping it back to full, but that would have totally broken my recovery chain, and with three secondaries, I had no desire to try to fix all the fun that would have created. (Translation: Rebuilding the availability group – Uh… No.)

I then grew the log file to an appropriate size for that database, then increased the growth rate on the database files to something far more in line with what Kimberly mentioned above.

All of this was done on the primary, and the changes I made propagated out nicely, so I only had to make them in one spot.

Oh, while I was researching this…

(I’d done this before – reducing VLF counts, but never on an Availability Group, much less one with multiple secondaries in different time zones where DPM is backing up one of the secondaries, where it’s not obvious which one of the secondaries is the one being backed up)

…I noticed other people had clearly run into this same issue (reducing VLF counts) – and had not only entertained the idea of automating VLF reduction, but taken the idea out drinking, dancing, and somehow ended up with a receipt for a cheap motel room – and then put it all into a stored procedure.

That ran…

In a job…

On a schedule…

Um.

I decided that breaking the recovery chain was bad enough, but shredding it was so much worse – so this part will not be automated.

At all.

Oh – and here’s the code. It’s Michelle’s, with a couple of tweaks from me and one from Microsoft.

--Code to determine how many virtual log files there are in a database.
--modified from the original code here: http://sqlfool.com/2010/06/check-vlf-counts/
--to check for multiple versions of SQL because of schema changes between versions
--if any of the results are > 1000, do multiple log backups (or DPM recovery points)
--to clear out the transaction log, plus shrink the tlog to get rid of
--the extra virtual log 'fragments'.
--As always - test this in your test environments first.  Your mileage may vary. Some 
--settling of contents may have occurred during shipping.  You know the drill folks.
--Best reference out there for managing this:
-- http://www.sqlskills.com/blogs/kimberly/transaction-log-vlfs-too-many-or-too-few/
-- http://www.sqlskills.com/blogs/kimberly/8-steps-to-better-transaction-log-throughput/
--the code below handles SQL 2008R2, 2012, and 2014. If you're running something other than that,
--just run a dbcc loginfo(YourDatabaseName) and make sure the #stage table has the same
--columns in it
--if it doesn't, add a section below.
DECLARE          @sqlversion SYSNAME
SET              @sqlversion =(SELECT CONVERT(SYSNAME,SERVERPROPERTY('ProductVersion')))
PRINT            @sqlversion
-- SQL 2008R2

IF @sqlversion LIKE '10.50.%'
     BEGIN
          CREATE TABLE #stage
                                  (
                                     FileID        INT
                                   , FileSize      BIGINT
                                   , StartOffset   BIGINT
                                   , FSeqNo        BIGINT
                                   , [Status]      BIGINT
                                   , Parity        BIGINT
                                   , CreateLSN     NUMERIC(38)
                                  );

          CREATE TABLE #results
                                  (
                                     Database_Name SYSNAME
                                   , VLF_count     INT
                                  );
          EXEC sp_msforeachdb  N'Use [?];
                                Insert Into #stage
                                Exec sp_executeSQL N''DBCC LogInfo([?])'';
                                Insert Into #results
                                Select DB_Name(), Count(*)
                                From #stage;
                                Truncate Table #stage;'
         SELECT         *
         FROM           #results
         ORDER BY       VLF_count DESC
         DROP TABLE     #stage
         DROP TABLE     #results
     END
     --sql 2012/2014 - note - new temp table names because sql thinks old ones are there.
IF @sqlversion >= '11'
     BEGIN
          CREATE TABLE #stage2
                                  (
                                     RecoveryUnitID INT
                                   , FileID         INT
                                   , FileSize       BIGINT
                                   , StartOffset    BIGINT
                                   , FSeqNo         BIGINT
                                   , [Status]       BIGINT
                                   , Parity         BIGINT
                                   , CreateLSN      NUMERIC(38)
                                   );
          CREATE TABLE #results2
                                  (
                                   Database_Name SYSNAME
                                   , VLF_count INT
                                   );
          EXEC        sp_msforeachdb N'Use [?];
                                     Insert Into #stage2
                                     Exec sp_executeSQL N''DBCC LogInfo([?])'';
                                     Insert Into #results2
                                     Select DB_Name(), Count(*)
                                     From #stage2;
                                     Truncate Table #stage2;'
          SELECT         *
          FROM           #results2
          ORDER BY       VLF_count DESC
          DROP TABLE     #stage2
          DROP TABLE     #results2
     END

Thanks to the #sqlhelp folks for their advice, suggestions, good-natured harassment, and ribbing:

Allen McGuire (blog | twitter)
Argenis Fernandez (blog | twitter)
Tara Kizer @tarakizer (blog | twitter)
Andre Batista @klunkySQL (blog | twitter)

and of course, the ones whose reference material I was able to use to put this together:

Michelle Ufford (blog | twitter)
Kimberly Tripp (blog | twitter)
Paul Randal (blog | twitter)

<outtakes>
When I told Argenis about the DPM involvement and secondaries in multiple timezones, his response 🙂

 

Posted on January 5, 2016 in Uncategorized

 


Problem Solved: Lync Reporting issues


A while back we had a pretty significant series of issues with our Lync implementation that took some time to resolve. Now that it is resolved, I thought I’d write up one of them to help others who have run into the same problem, so it can be fixed once and for all.

Note: these notes come from my own personal experience with a dash of a case we had open with MSFT at the time.

Lync, for those of you who aren’t familiar with it, is Microsoft’s corporate instant messaging suite – it brings voice (both individual calls and conferencing), sharing, and IM into one package. If there was ever a program that changed how I work, this is it.  Calls, conferences, desktop sharing – all from the same app.  Truly well done.

That’s on the front end.

On the backend, however, it gets a little more involved. For those running the app, there’s a lot of reporting that lets you monitor things like call quality and what the system is doing – and if there is trouble in paradise, as it were, you can use that reporting to start narrowing down the issues you’re running into, which helps you both troubleshoot and resolve them.

And that’s the part that was giving us trouble.

It’d work for a bit, then give us weird errors, and the folks trying to troubleshoot global call/connection issues were completely blocked. So they were frustrated, and we needed to figure out how to fix it – not just this once, but fix it period.

The problem:

Lync reporting, when we got to the Monitoring Dashboard report, would often render an error like this:

[Screenshot: what you’ll get if the stored procedures haven’t been run]

If you can’t read the image, there are two error messages:

“Report processing stopped because too many rows in summary tables are missing in the call detail recording (CDR) database.  To resolve this issue, run dbo.RtcGenerateSummaryTables on the LcsCDR database.”

“Report processing stopped because too many rows in summary tables are missing in the Quality of Experience (QoE) database.  To resolve this issue, run dbo.RtcGenerateSummaryTables on the QoEMetrics database.”

You can find that I’m not the only one who ran into this error by just using your favorite search engine – Google, Bing, DuckDuckGo, whatever (the results are pretty similar).

Bottom line – the error is the same: Some level of activity that’s been happening on the server is not reflected in the troubleshooting reports you want – and the error message has both good and bad parts to it.

The good: As far as error messages go, this one is surprisingly clear. Basically do what it says, and, depending on how much it has to summarize, it takes a few minutes and sums up a bunch of information that is amazingly useful in the reports our Lync team needs.

The bad: It exposes names of objects that someone might not actually want exposed – though the people who see this are often the people who need to, so it’s a bit of a two-edged sword.  The other thing is that there’s nothing to give us any indication of how often the procedure mentioned in the message needs to run. I figured (as did many of the others who’ve run into this) there was some process that called this set of stored procedures at some defined interval, but from what I saw in my research and what I experienced myself, that was not happening for the folks who were running into this. On a hunch, while I was on a call with MSFT on a related issue, I asked – and discovered that the stored procedure referenced in the screenshot above needs to be run at least daily.

Well that’s not hard…

So my suggestions below are based on the following assumptions:

That you’ve got SQL installed on your server with SQL Agent running – this was something that seemed to be the culprit in a lot of the issues in the links above.

We depend on SQL Agent to run our automation, so it was running but the process/job/scheduler to run the needed code wasn’t there at all.  The below instructions fix that.

So I created a SQL job, and scheduled it to run once daily. Since the stored procedures are in different databases, I just wrote the execution commands (below) fully qualified, so you could do the same.

I also created some output files on the jobs just to be sure I could monitor what the jobs were doing for the first little bit, and guess what?

It worked.

Problem solved.

So – if you’re experiencing the issues described above and don’t have a job scheduled on your Lync SQL Server, do this (if you’d rather script it than click through the UI, there’s a sketch after the list):

  1. Create a SQL job on the SQL server that has your QoEMetrics and LcsCDR databases on it.
  2. Give the job a name that makes sense and fits your job naming conventions.  I named mine: Lync Reporting Monitoring Dashboard Summary Numbers.
  3. Put something like the below in the description field, so that once the brain cells you’re now using to solve this problem have been overwritten, you’ll still have a clue as to why you did this:
    1. LYNC – Error appears in reporting when tables haven’t been populated frequently enough. This code was missing; it was confirmed with MSFT that it needs to run daily, and it is scheduled as such. Questions? Contact <you or your SQL operations team>
  4. Click on ‘Steps’.
  5. Click ‘New’.
  6. Add one step – you’ll have to name it… I named mine: RTC Generate Summary Tables – because that’s what it does.
  7. Leave the database set to master and paste the following code in.

--this code should be run daily per MSFT

EXEC      QoEMetrics.dbo.RtcGenerateSummaryTables

GO

EXEC      LcsCDR.dbo.RtcGenerateSummaryTables

GO

  8. If you want an output file, find your log drive and paste this in there (editing the <drivepath> below as appropriate):
    1. <drivepath>\MSSQL\Log\LyncPopulateSummaryTables_$(ESCAPE_SQUOTE(JOBID))_$(ESCAPE_SQUOTE(STEPID))_$(ESCAPE_SQUOTE(STRTDT))_$(ESCAPE_SQUOTE(STRTTM)).txt (that’s a little snippet from some of the best code I’ve seen, at http://ola.hallengren.com)
  9. I scheduled mine to run at some hour when I’m asleep – and it now takes about 12 seconds to run daily.  You may want to adjust the schedule as needed for your environment.
  10. Do that, and you should be able to forget about that report, and your Lync team will know that it’s there every day – whenever they need it.
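
If you’d rather script the job than click through the UI, something like the sketch below should produce the same thing.  The job name, schedule, and start time are just examples – adjust them to your conventions:

--Scripted version of the job described above (names and schedule are examples).
USE msdb;
GO
EXEC dbo.sp_add_job
     @job_name    = N'Lync Reporting Monitoring Dashboard Summary Numbers',
     @description = N'LYNC - populates the CDR/QoE summary tables daily per MSFT.';
EXEC dbo.sp_add_jobstep
     @job_name      = N'Lync Reporting Monitoring Dashboard Summary Numbers',
     @step_name     = N'RTC Generate Summary Tables',
     @subsystem     = N'TSQL',
     @database_name = N'master',
     @command       = N'EXEC QoEMetrics.dbo.RtcGenerateSummaryTables; EXEC LcsCDR.dbo.RtcGenerateSummaryTables;';
EXEC dbo.sp_add_jobschedule
     @job_name          = N'Lync Reporting Monitoring Dashboard Summary Numbers',
     @name              = N'Daily - overnight',
     @freq_type         = 4,          --daily
     @freq_interval     = 1,          --every day
     @active_start_time = 30000;      --03:00:00
EXEC dbo.sp_add_jobserver
     @job_name    = N'Lync Reporting Monitoring Dashboard Summary Numbers',
     @server_name = N'(LOCAL)';
GO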

Take care – and good luck.

Tom

 

Posted on September 15, 2014 in Uncategorized

 


Communication, Snapshots, and Chickens (no, really)


I saw a little note the other day about “snapshots” – which reminded me of a situation we had at work a while back.

It seems that one of the things that’s really helpful at work is being on the same page as your colleagues/coworkers.

And the way you do that is by communicating and understanding each other – and the fact that we often use different words to mean the same thing, or sometimes the same words to mean different things, can present a problem.

Allow me to explain – and of course, I’ll do it with a very non-technical story…

A number of years ago, my mom was at a church social event with one of those icebreaker activities: everyone formed groups, they were all handed cards with the name of a farm animal on them, they had to make the sounds of those animals, and all of the like ‘animals’ were supposed to find each other and gather together in groups.

Mom’s group was chickens.

Chickens are chickens, right?

Because chickens – or roosters, anyway – go cock-a-doodle-doo, right?

Well… If you grew up in America, roosters go cock-a-doodle-doo.

So there were a bunch of middle-aged ladies, walking around this room, flapping their arms and sounding just like a barnyard. (Okay, I just checked with her – they weren’t flapping their arms, but that image is too fun to let go… so with apologies to Mom, I’ll let that burn in for a moment… 🙂 )

There were cows, horses, pigs – and chickens, well, roosters.

There were cock-a-doodle-doo roosters, and then there was this one gickerigeek rooster, and –

Waitaminute…

What the heck was a gickerigeek rooster?

Well, it turns out that if you’re a chicken (well, rooster) in Germany, you go ‘gickerigeek’.

You don’t go ‘cock-a-doodle-doo’.

And because of that, all of the animals who were looking for each other, found each other.

Except this one little forlorn German chicken (well, rooster), running around the room, flapping her arms (okay, not really), making the most plaintive ‘gickerigeek’ you’ve ever heard.  Come to think of it, it’s likely the only ‘gickerigeek’ you’ve ever heard.  But the thing was, as accurate as this sound was in describing a rooster’s sound, it was a sound that no one recognized.

Eventually this got someone’s attention – and suddenly there was this entire barnyard full of little old ladies interrogating a very accurate, very fun loving, and yet, very stubborn little old lady (my mom).

They asked her what was up with this whole gickerigeek thing, and the truth came out.  It became clear to them that there are different words to mean the same thing, it just depends on where you come from, and from there on out, they knew that the sound that a rooster made in the morning was heard as ‘gickerigeek’ by some, and as ‘cock-a-doodle-doo’ by others.

I ran into something like this at work the other day, where the same word was being used to mean two radically different things.  There was this rather heavy-duty discussion about snapshot backups and databases – and my take was that they were absolutely not a valid backup solution… I was thinking of it from a SQL perspective.

The fellow I was talking to was the guy who runs our SAN – and he was thinking of the word ‘snapshot’ from a totally different perspective, that of the SAN itself. Used that way, the way he was doing it, it was indeed a valid backup solution.  Not what I would have liked, but valid nonetheless.

Problem was, we were both hearing the same sound, but those working on the hardware end of things were essentially talking English, thinking ‘cock-a-doodle-doo’, while I, working in SQL, was hearing it in German, thinking that sound only meant ‘gickerigeek’.

Interestingly, it turns out that we were both right, but it took the digital equivalent of me running around the room, flapping my arms going ‘gickerigeek’ for quite some time before we were able to clear it up.

The end result was that everyone learned why, when we were doing our hourly full snapshot backups through the SAN and everything else looked right, the transaction log kept getting full.  The thought was that the log file shouldn’t grow, but it did – to the point of filling up a good percentage of the drive.  Some config changes, a lot of learning and understanding, and we were ready to go – problem solved.
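
(If you ever find yourself in the same spot, the quickest way to see why a log won’t clear is to ask SQL Server directly.  Assuming you can query the instance, something like this shows what each database’s log is waiting on – LOG_BACKUP, REPLICATION, ACTIVE_TRANSACTION, and so on:)

--Why can't each database's transaction log be truncated?
SELECT   name, recovery_model_desc, log_reuse_wait_desc
FROM     sys.databases
ORDER BY name;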

Moral of the story? Well, just like the rooster my mom was hearing, it was so important to discover and understand that the “rooster” we were hearing really didn’t sound the same to everyone.  All the technical smarts in the world won’t solve your problems if you can’t get on the same page, and eventually we did. It was good to have it cleared up, and it saved us hundreds of gigs of drive space as a result.

H/T to David Klee (T|B) for the spark to write this, which included a link to the technical explanation in more detail.

 

Posted on January 9, 2014 in Uncategorized

 


Now WHERE did I put that backup?


Ever run into a situation where you’re in a hurry, working on a dev box for a presentation, and you back up your database to an “easy” place – and then, when it’s time to do the restore, you can’t remember where you put it?

Yeah, me neither…

<this is your cue to chuckle a bit, because I think we’ve all done it a time or two>

In fact, it was Karen Lopez (twitter | blog) who got me looking for this script, and I thought I’d put it out there.  Nothing proprietary about it – it’s just a script to find a backup – but it can be useful when you’re trying to figure out where, in all the clutter of files you’ve got on your boxes, the one backup you actually need really is.

At any rate, some time ago I wrote this script, which shows me where the last backup of a database went – run it from within that database.

Take a look – let me know what you think.  If you can improve it, by all means, feel free to let me know that too.  Enjoy.

--Shows the physical location of the most recent backup of the current database.
DECLARE @BackupName SYSNAME
SELECT @BackupName = (
SELECT     TOP (1) backupmediafamily.physical_device_name
FROM         msdb.dbo.backupmediaset AS backupmediaset_1
INNER JOIN        msdb.dbo.backupmediaset
ON                msdb.dbo.backupmediaset.media_set_id = backupmediaset_1.media_set_id
INNER JOIN        msdb.dbo.backupset
ON                msdb.dbo.backupmediaset.media_set_id = msdb.dbo.backupset.media_set_id
AND               backupmediaset_1.media_set_id = msdb.dbo.backupset.media_set_id
INNER JOIN        msdb.dbo.backupfile
INNER JOIN        msdb.dbo.backupfilegroup
ON                msdb.dbo.backupfile.backup_set_id = msdb.dbo.backupfilegroup.backup_set_id
ON                msdb.dbo.backupset.backup_set_id = msdb.dbo.backupfile.backup_set_id
AND               msdb.dbo.backupset.backup_set_id = msdb.dbo.backupfilegroup.backup_set_id
INNER JOIN        master.sys.sysdatabases
ON                backupset.database_name = master.sys.sysdatabases.name
INNER JOIN        msdb.dbo.backupmediafamily
ON                msdb.dbo.backupmediaset.media_set_id = msdb.dbo.backupmediafamily.media_set_id
AND               backupmediaset_1.media_set_id = msdb.dbo.backupmediafamily.media_set_id
WHERE             msdb.dbo.backupset.database_name = DB_NAME()     --the database you're running this in
ORDER BY          msdb.dbo.backupset.backup_finish_date DESC)      --most recent backup first
PRINT        'backupname '
PRINT        @BackupName

 

Posted on May 7, 2012 in Uncategorized

 


Male Logic, Female Logic, how it all applies to Tech


It’s been said that part of being smart is knowing what you’re dumb at.  Part of being smart is letting other people know it – but – how do you say this…  There are several ways of solving problems, and before anyone gets all PC on me on this next bit, hear me out.  There is nothing I admire more than someone who can think their way through a problem to a solution.  I love it if they can explain it – but if they get the answer, awesome.  Now, a word used to describe problem solving like this is logic.  Many years ago, Bill Cosby did a little schtick on Male Logic and Female Logic – and the way he told it, male logic was very – well, ‘logical’.  If there was a 10-step process to go through, well, it took a guy 10 steps to go through it.

It went something like this:

Male logic

One… <click>

Two…<click>

Three…<click>

… all the way up to 10.

It was simple, elegant, and – to be honest, some might consider it a little boring…

Female logic

Now female logic, he said, was a lot different – and, as much as people want to think men and women are the same, we’re not – not by a long shot.  He pointed out that given the very same 10-step process, she’d go at it like this:

One…<click>

Fourteen…<click>

10.

How the heck did that happen?

In fact, ladies out there – how do you do that?

See – that 14 is COMPLETELY outside our frame of reference; we don’t think about it – it’s not there.  We can ask you how or why it happens, but explaining the concept of 14 to someone when the numbers only go up to 10 is just baffling to us guys – and the way you relate things together, making connections and drawing conclusions that we might not see, gives you a stunning edge in some incredible areas.

Call it intuition.  Call it what you will, but it’s like women have a math coprocessor or something, so they can shove a problem over there, let it get solved, and come back in a bit with an answer that even they don’t know how they got.

And often, the guy’s on step 3 while the gal’s already at 10.

In fact, I gave the bulk of this blog post as part of a presentation once and got an astonishing piece of feedback from a woman who had “made it” in the industry.  It came to me sanitized, and I couldn’t tell you who it was if I wanted to, but what you read above was written by a guy, me.  What you’ll read below was written by a gal, is in her words, and she quite literally wished to remain anonymous.

When I ended up in the technology industry, I was usually the first female technical peer any of the guys had worked with, and the first technical female any of the (universally male) managers had tried to manage, and it gets ugly when you’re asked for status and all of the guys are at steps 2-4 and you’re doing something … completely different.   It’s ugly then, and it’s uglier when, without being able to explain it or teach it … you get at least the same result, faster.  And as you gain confidence, much faster. I still remember being assigned to teach a pretty talented guy, who had all the right experience and skills, how to triage a network sniffer trace.  In my mind it starts with “you scroll quickly through the packets looking for something that’s not quite right” … and I knew the exercise was doomed when I looked at his notebook and he’d neatly written “Step 1.” I knew then I would never be able to explain the concept of not really reading, just relaxing with a cup of coffee and scrolling through, knowing that your subconscious will throw up an alert if it sees something that’s “not quite right”. There’s your “skip to #14” bit – I can write or use a parsing app for a network trace, but before I go to the effort, I’ll just use my eyes.  Sit back with a cup of coffee and scroll through the trace, looking for patterns that “aren’t quite right”.  I’m not looking closely, I haven’t the slightest notion what I’m looking for, and when something strikes me as “not quite right” it can take me some time to figure out exactly why, but it’s often not the sort of thing I could have ever found using traditional analytics and parsing tools.

I was just thrilled to find this info.  It proved my point – not that men are better or women are better, but that they’re different, and that difference needs to be recognized and acknowledged, even if, as guys, we may not completely understand it.

In fact, please understand – this is not a jab at either gender.  There’s no way I’d say one is right or wrong; they’re just different.  And this whole “14” thing – women seeing/feeling/perceiving relationships intuitively where men might not see them the same way – is, I think, honestly, why database administration (hint: relational databases…) seems to have a greater percentage of women in it, even though IT is traditionally a male-dominated field.

So, is there more to the story? Yes, there is.

In a nutshell: just because a guy doesn’t understand the existence of 14 doesn’t mean her 14 isn’t valid.

And just because a gal can’t explain the 14 to a guy doesn’t mean he’s an idiot.

It takes a wise manager to see that the problem-solving skills employees bring to the technical table, whether 1-2-3-4-5-6-7-8-9-10 or 1-14-10, can actually be quite complementary.

 

Posted on March 9, 2012 in Uncategorized

 
