Tampa Bay Lightning – Stanley Cup Champions…

As a sports fan, I’m what you might call a “bandwagon jumper” (a cardinal sin for a ‘true sports fan’, I’m told).  I have my favorite teams just like everyone else, but once they’ve lost, I have no problem picking a new team to cheer for out of the remaining teams.  In my opinion, sports are purely for entertainment, so as long as I’m entertained, I’m a happy guy.  If ‘my team’ wins, so much the better.  For example, the St. Louis Rams are ‘my team’ in football.  A few years back, when they won the Super Bowl, boy, that was awesome.  Two years later, when they *lost* the Super Bowl to the New England Patriots, that was still a great game.  I was entertained, I was happy.  Once the team I’m rooting for is gone, I promptly switch to the team I think will be the most entertaining to me.  Last year, when the Rams were eliminated from the playoffs by the Carolina Panthers, the very next week I was cheering on the Panthers to beat the Eagles (which they did).  I’ve never understood that whole ‘loyalty’ thing with sports teams, but that’s just me.

Anyway, since I rarely watch sports (any sport) outside of the playoffs, it’s always exciting when I finally get around to seeing a game.  I don’t really think there is anything better in sports than a game 7, and a game 7 in the finals for the Stanley Cup… well, that is the *best* thing going.  The only thing possibly better is if it goes into overtime.  There’s such a finality to it.  Given the last few minutes of the game tonight, I thought this one was going to overtime like the last two.  It would have been ‘poetic’ if it had.  In my opinion, Khabibulin should have won the Conn Smythe award, because without him, they never would have won this series.  They were way outplayed for long stretches, and he kept them in it.  Sure, the 10/16 points on game-winning goals is an awesome stat, but really…

Oh, and I was rooting for the Flames.  After all was said and done, though, it was a great game.  I was entertained, and that’s all that matters.

Congratulations to the Tampa Bay Lightning, Stanley Cup Champions.  Now, will rooting for the Lakers work out? =)

The Renderloop Re-Revisited…

Ah, the good ol’ render loop.  Everyone’s favorite topic of conversation.  As I’m sure everyone is aware, the Managed DirectX samples that shipped with the DirectX 9 SDK, as well as the Summer 2003 Update, used the ‘dreaded’ DoEvents() loop I speak so negatively about at times.  People have probably also realized my book used the ‘infamous’ Paint/Invalidate method.  I never really made any recommendations in the earlier posts about which way was better, and really, I don’t plan on it now.  So why am I writing this now?!
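For reference, here’s a minimal sketch of what those two old patterns look like; the OldLoops and PaintLoopForm names and the empty render placeholder are mine, not the SDK’s or the book’s:

using System;
using System.Windows.Forms;

class OldLoops
{
    // Pattern 1: the DoEvents() loop from the old SDK samples.  Render a
    // frame, pump any pending messages, repeat until the window dies.
    public static void RunDoEventsLoop(Form form, Action renderFrame)
    {
        form.Show();
        while (form.Created)
        {
            renderFrame();          // draw one frame
            Application.DoEvents(); // let Windows messages through
        }
    }
}

// Pattern 2: the Paint/Invalidate method.  Each paint renders a frame, then
// immediately invalidates the window to queue up the next WM_PAINT.
class PaintLoopForm : Form
{
    protected override void OnPaint(PaintEventArgs e)
    {
        // Render a frame here, then:
        this.Invalidate();
    }
}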

If you read David’s post about the upcoming 2004 Update, you may have noticed that he mentions the DoEvents() method the samples used to employ is gone.  In reality, along with the new sample framework, the samples themselves no longer use the Windows Forms classes at all.  The actual render window and render loop are all run through P/Invoke calls into Win32, and I figured I’d take a quick minute to explain the reasoning behind it.
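The heart of that approach looks roughly like the sketch below.  This is my own illustration of a PeekMessage-style pump, not the actual sample framework source; the Win32RenderLoop name and renderFrame delegate are made up, but the user32 signatures are the standard ones:

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct NativeMessage
{
    public IntPtr hWnd;
    public uint msg;
    public IntPtr wParam;
    public IntPtr lParam;
    public uint time;
    public int ptX;
    public int ptY;
}

static class Win32RenderLoop
{
    const uint PM_REMOVE = 0x0001;
    const uint WM_QUIT = 0x0012;

    [DllImport("user32.dll")]
    static extern bool PeekMessage(out NativeMessage msg, IntPtr hWnd,
        uint filterMin, uint filterMax, uint removeFlags);

    [DllImport("user32.dll")]
    static extern bool TranslateMessage(ref NativeMessage msg);

    [DllImport("user32.dll")]
    static extern IntPtr DispatchMessage(ref NativeMessage msg);

    public static void Run(Action renderFrame)
    {
        bool running = true;
        while (running)
        {
            NativeMessage msg;
            // Drain every pending message without blocking...
            while (PeekMessage(out msg, IntPtr.Zero, 0, 0, PM_REMOVE))
            {
                if (msg.msg == WM_QUIT)
                {
                    running = false;
                    break;
                }
                TranslateMessage(ref msg);
                DispatchMessage(ref msg);
            }

            // ...and render a frame whenever the queue is empty.
            if (running)
                renderFrame();
        }
    }
}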

Obviously, the idea of using DirectX is for game development.  Sure, there are plenty of other non-game development scenarios that DirectX is great for (data visualization, medical imaging, etc.), but what drives our API is game developers.  If you know any game developers (or are one yourself), you’re probably keenly aware that while the game is running (and rendering), things need to happen quickly and predictably.  With all the benefits of managed code, one thing that can be hard to achieve is that ‘predictability’, particularly when you’re dealing with the garbage collector.

So let’s say you decided to use Windows Forms for your rendering window, and you wanted to watch what the mouse was doing, so you hook the MouseMove event.  Aside from the ‘cost’ of the Invoke call into your handler, a managed object (the mouse event arguments) is created.  *Every* time.  Now, the garbage collector is quite efficient, and very speedy, so this alone could be easily handled.  The problem arises when your own ‘short lived’ objects get promoted to a new generation due to the extra collections these event allocations trigger.  Generation 0 collections won’t have any effect on your game; generation 2 collections, on the other hand, will.
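To illustrate one way around that per-event garbage (my sketch, not the sample framework’s actual code): read WM_MOUSEMOVE straight out of WndProc and never let the MouseEventArgs get created.

using System;
using System.Windows.Forms;

class RenderForm : Form
{
    const int WM_MOUSEMOVE = 0x0200;

    int mouseX, mouseY; // plain fields: nothing allocated per move

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_MOUSEMOVE)
        {
            // lParam packs the client coordinates: low word = x, high word = y.
            int lp = unchecked((int)m.LParam.ToInt64());
            mouseX = (short)(lp & 0xFFFF);
            mouseY = (short)((lp >> 16) & 0xFFFF);
            return; // skip the base handler, so no MouseEventArgs is created
        }
        base.WndProc(ref m);
    }
}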

Thus the new sample framework doesn’t rely on these constructs at all.  This is probably one of the most efficient render loops available in the managed space currently, but the code doesn’t necessarily follow many of the conventions you see in the managed world.  So, when deciding on the method you want to use to drive your rendering, you need to ask yourself what’s more important: performance, or conformance?  In the case of the updated sample framework, we’ve chosen performance.  Your situation may be different.

Direct3D and the FPU…

I had an email this morning about Managed Direct3D ‘breaking’ the math functions in the CLR.  The person who wrote in discovered that this method:

public void AssertMath()
{
    double dMin = 0.54797677334988781;
    double dMax = 4.61816551621179;
    double dScale = 1 / (dMax - dMin);
    // Inverting the scale should recover dMax exactly (in double precision).
    double dNewMax = 1 / dScale + dMin;
    System.Diagnostics.Debug.Assert(dMax == dNewMax);
}

behaved differently depending on whether or not a Direct3D device had been created.  It worked before the device was created, and failed afterwards.  Naturally, he assumed this was a bug, and was concerned.  Since I’ve had to answer questions similar to this multiple times now, it pretty much assures this needs its own blog entry.

The short of it is that this is caused by the floating point unit (FPU).  When a Direct3D device is created, the runtime will change the FPU to suit its needs: by default it switches to single precision, while the default for the CLR is double precision.  This is done because single precision has better performance than double precision (naturally).

Now, the code above works before the device is created because the CLR is running in double precision.  Then you create a Direct3D device, the FPU is switched to single precision, and there are no longer enough digits of precision (roughly seven significant decimal digits instead of fifteen or sixteen) to accurately calculate the above code.  Thus the ‘failure’.
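You don’t need Direct3D to see the effect; forcing the intermediate value through single precision by hand shows the same thing.  This is my own illustration, not code from the email:

using System;

class FpuPrecisionDemo
{
    static void Main()
    {
        double dMin = 0.54797677334988781;
        double dMax = 4.61816551621179;

        // Round the scale to single precision, roughly what happens once
        // the FPU has been switched to single-precision mode.
        float fScale = (float)(1 / (dMax - dMin));
        double dNewMax = 1 / (double)fScale + dMin;

        Console.WriteLine(dMax == dNewMax); // False: only ~7 digits survived
    }
}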

Luckily, you can avoid all of this by simply telling Direct3D not to touch the FPU at all.  When creating the device, use the CreateFlags.FpuPreserve flag to keep the CLR’s double precision and have your code function as you expect.
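In Managed DirectX, that looks something like the sketch below.  The form parameter and present parameters here are placeholder choices, but CreateFlags.FpuPreserve itself is the real flag:

using System.Windows.Forms;
using Microsoft.DirectX.Direct3D;

class DeviceFactory
{
    // Creates a device that leaves the CLR's double-precision FPU state alone.
    public static Device CreateFpuPreserveDevice(Form form)
    {
        PresentParameters presentParams = new PresentParameters();
        presentParams.Windowed = true;
        presentParams.SwapEffect = SwapEffect.Discard;

        return new Device(0, DeviceType.Hardware, form,
            CreateFlags.SoftwareVertexProcessing | CreateFlags.FpuPreserve,
            presentParams);
    }
}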