You can download the first beta of Visual Studio 2005 right now.
Tampa Bay Lightning – Stanley Cup Champions..
As a sports fan, I'm what you might call a "bandwagon jumper" (a cardinal sin for a 'true sports fan', I'm told). I have my favorite teams just like everyone else, but once they've lost, I have no problem picking a new team to cheer for out of the remaining teams. In my opinion, sports are purely for entertainment, so as long as I'm entertained, I'm a happy guy. If 'my team' wins, so much the better.. For example, the St. Louis Rams are 'my team' in football. Back a few years ago when they won the Super Bowl, boy, that was awesome. Two years later, when they *lost* the Super Bowl to the New England Patriots, that was still a great game. I was entertained, I was happy. Once the team I'm rooting for is gone, I promptly switch to the team I think will be the most entertaining to me. Last year, when the Rams were eliminated from the playoffs by the Carolina Panthers, the very next week I was cheering on the Panthers to beat the Eagles (which they did).. I've never understood that whole 'loyalty' thing with sports teams, but that's just me.
Anyway, since I rarely watch sports (any sport) outside of the playoffs, it's always exciting when I finally get around to seeing a game. I don't really think there is anything better in sports than a game 7, and a game 7 in the finals for the Stanley Cup.. Well, that is the *best* thing going. The only thing possibly better would be if it went into overtime. There's such a finality to it. Given the last few minutes of the game tonight, I thought this one was going to overtime like the last two. It would have been 'poetic' if it had. In my opinion, Khabibulin should have won the Conn Smythe award, because without him they never would have won this series. They were way outplayed for long stretches, and he kept them in it. Sure, the 10/16 points on game-winning goals is an awesome stat, but really..
Oh, and I was rooting for the Flames. After all was said and done though, it was a great game. I was entertained, and that's all that matters.
Congratulations to the Tampa Bay Lightning. Stanley Cup Champions.. Now, will rooting for the Lakers work out? =)
The Renderloop Re-Revisited…
Ah, the good ol' render loop. Everyone's favorite topic of conversation. As I'm sure everyone is aware, the Managed DirectX samples that shipped with the DirectX 9 SDK, as well as the Summer 2003 update, used the 'dreaded' DoEvents() loop I speak so negatively about at times. People have also probably realized my book used the 'infamous' Paint/Invalidate method. I never really made any recommendations in the earlier posts about which way was better, and really, I don't plan on it now. So why am I writing this now?!?!
If you read David's post about the upcoming 2004 Update, you may have noticed he mentions that the DoEvents() loop the samples used to employ is gone. In reality, along with the new sample framework, the samples themselves never actually use the Windows Forms classes anymore either. The actual render window and render loop are all run through P/Invoke calls into Win32, and I figured I'd take a quick minute to explain the reasoning behind it.
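For reference, the loop those older samples used looked roughly like this. It's a simplified sketch; Form1 and Render() are just placeholders for whatever the sample's form and drawing method were actually called:

using System.Windows.Forms;

static void Main()
{
    using (Form1 frm = new Form1())
    {
        frm.Show();

        // Render as long as the window exists, letting DoEvents pump the
        // Windows Forms message queue in between frames.
        while (frm.Created)
        {
            frm.Render();
            Application.DoEvents();
        }
    }
}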
Obviously the main idea behind using DirectX is game development. Sure, there are plenty of non-game scenarios that DirectX is great for (data visualization, medical imaging, etc.), but it's the game developers who drive our API. If you know any game developers (or are one yourself), you're probably well aware that while the game is running (and rendering), things need to happen quickly, and predictably. With all the benefits of managed code, one thing that can be hard to achieve is that 'predictability', particularly when you're dealing with the garbage collector.
So let's say you decided to use Windows Forms for your rendering window, and you wanted to watch what the mouse was doing, so you hook the MouseMove event. Aside from the 'cost' of the Invoke call to call into your handler, a managed object (the mouse event arguments) is created. *Every* time. Now, the garbage collector is quite efficient, and very speedy, so this alone could be easily handled. The problem arises when your own 'short lived' objects get promoted to a new generation due to the extra collections all of these event allocations cause. Generation 0 collections won't have any effect on your game; generation 2 collections, on the other hand, will.
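To make that concrete, here's a small illustration (the form, field, and method names are all made up for the example): hooking the event means a brand new MouseEventArgs object for every mouse-move message, while polling the cursor once per frame just copies a Point value type around.

using System.Drawing;
using System.Windows.Forms;

public class GameForm : Form
{
    private Point lastMouse;

    public GameForm()
    {
        // Event-driven: a new MouseEventArgs is allocated for every move message.
        this.MouseMove += new MouseEventHandler(OnMouseMove);
    }

    private void OnMouseMove(object sender, MouseEventArgs e)
    {
        lastMouse = new Point(e.X, e.Y);
    }

    // Polled alternative, called once per frame from the render loop.
    // Point is a struct, so nothing new lands on the garbage collected heap.
    private void UpdateInput()
    {
        lastMouse = PointToClient(Cursor.Position);
    }
}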
Thus the new sample framework doesn't rely on these constructs at all. This is probably one of the most efficient rendering loops available in the managed space currently, but the code doesn't necessarily follow many of the constructs you see in the managed world. So, when deciding on the method you want to use to drive your rendering, you need to ask yourself what's more important: performance, or conformance? In the case of the updated sample framework, we've chosen performance. Your situation may be different.
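For the curious, the general shape of that kind of loop looks something like the sketch below. This isn't the sample framework's actual code; the Message struct, the P/Invoke declarations, and Render() are written out by hand here just to show the idea of pumping the Win32 message queue yourself and rendering whenever it's empty.

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
public struct Message
{
    public IntPtr hWnd;
    public uint msg;
    public IntPtr wParam;
    public IntPtr lParam;
    public uint time;
    public System.Drawing.Point p;
}

public class RenderLoop
{
    const uint PM_REMOVE = 1;       // have PeekMessage remove the message from the queue
    const uint WM_QUIT = 0x0012;

    [System.Security.SuppressUnmanagedCodeSecurity] // skip the security stack walk on this per-frame call
    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern bool PeekMessage(out Message msg, IntPtr hWnd,
        uint filterMin, uint filterMax, uint flags);

    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern bool TranslateMessage(ref Message msg);

    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern IntPtr DispatchMessage(ref Message msg);

    public void Run()
    {
        Message msg;
        bool running = true;
        while (running)
        {
            // Pump any pending window messages first...
            if (PeekMessage(out msg, IntPtr.Zero, 0, 0, PM_REMOVE))
            {
                if (msg.msg == WM_QUIT)
                    running = false;

                TranslateMessage(ref msg);
                DispatchMessage(ref msg);
            }
            else
            {
                // ...and when the queue is empty, the app is 'idle': render a frame.
                Render();
            }
        }
    }

    void Render()
    {
        // device.Clear / draw / device.Present would go here
    }
}

The SuppressUnmanagedCodeSecurity attribute is part of that 'performance over conformance' trade-off: it skips the security stack walk on a call that happens every single frame.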
Direct3D and the FPU..
I had an email this morning about Managed Direct3D 'breaking' the math functions in the CLR. The person who wrote in discovered that this method:
public void AssertMath()
{
    double dMin = 0.54797677334988781;
    double dMax = 4.61816551621179;
    double dScale = 1 / (dMax - dMin);
    double dNewMax = 1 / dScale + dMin;

    // Passes in the CLR's default double precision; fails once the FPU
    // has been switched to single precision.
    System.Diagnostics.Debug.Assert(dMax == dNewMax);
}
Behaved differently depending on whether or not a Direct3D device had been created. It worked before the device was created, and failed afterwards. Naturally, he assumed this was a bug, and was concerned. Since I've had to answer questions similar to this multiple times now, well, that pretty much assures it needs its own blog entry.
The short of it is that this is caused by the floating point unit (FPU). When a Direct3D device is created, the runtime will change the FPU to suit its needs (by default it switches to single precision, while the default for the CLR is double precision). This is done because single precision has better performance than double precision (naturally).
Now, the code above works before the device is created because the CLR is running in double precision. Then you create a Direct3D device, the FPU is switched to single precision, and there are no longer enough digits of precision to calculate the values above accurately. Thus the 'failure'.
Luckily, you can avoid all of this by simply telling Direct3D not to mess with the FPU at all. When creating the device, use the CreateFlags.FpuPreserve flag to keep the CLR's double precision and have your code function as you expect.
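Here's a minimal sketch of what that looks like at device creation time (the windowed-mode presentation settings are just illustrative, and the method assumes it lives on the Form being used as the render window):

using Microsoft.DirectX.Direct3D;

// Assumes 'this' is the System.Windows.Forms.Form used as the render window.
public Device CreateDevice()
{
    PresentParameters presentParams = new PresentParameters();
    presentParams.Windowed = true;
    presentParams.SwapEffect = SwapEffect.Discard;

    // FpuPreserve keeps the FPU in the CLR's default double precision mode,
    // giving up the single precision speed advantage mentioned above.
    return new Device(0, DeviceType.Hardware, this,
        CreateFlags.SoftwareVertexProcessing | CreateFlags.FpuPreserve,
        presentParams);
}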
Time flies when a house is being built..
While in reality it’s only been a few months since I first mentioned I was having a new house built, in my mind it’s been decades. I’m quite anxious to move into the new place, and all the waiting around is agonizing to say the least. Aside from the fact that I will be saving about an hour a day on my commute, there is something to be said for actually having the house built that *you* wanted. Start with nothing, and then end up with a house. It’s an interesting proposition.
Of course, then you have all of ‘Nothing’. And you have this ‘nothing’ for what seems like a long time.
Then all of a sudden, you have a foundation down!
So you'd think that with something going on, you'd be happy.. Like, WOOHOO! Something's going on! But that's not really what happens.. Instead you're thinking "Man, what's taking so long! Aren't they done yet?!!" Maybe that wouldn't be so bad, but it's only been 15 minutes since the foundation was down..
Next thing you know, they’re actually building a structure.
Then it becomes maddening. You can physically see the thing there, and you still can't do anything but wait.. It is quite awesome to see it coming together piece by piece ("Hey, if you imagine a wall right here, I'm gonna put my TV right there!"), and one of the 'coolest' experiences I've had, but there's still two months left before it's done and I'll be able to move in.
I’m convinced these two months will take at least another 8 years to get here.
Hey, is it difficult to write a book?
Ever since I published my first book, this question seems to come up a lot. In all honesty, I asked that same question a few times of other authors I knew before I wrote that book. Now that my second book is coming out in a few short weeks, I figured I'd write a bit about these experiences.
First, let me point out that the differences between writing the first book and the second book were enormous. The basic thing I kept hearing when I was asking around about writing a book was that it was a major time sink. It was difficult, and time consuming. Yet minor things like that weren't going to deter me!
Anyone who knows me probably knows that I'm verbose. I tend to 'type' a lot, and many times people think it's just because I like to read the words I'm thinking on screen. For me, writing the first book wasn't difficult at all. I found the experience rewarding, and while it's not something I would consider 'easy', given my knowledge of the subject and my knack for rambling on while writing, it just seemed to come naturally.
The second book, on the other hand, was much more difficult. For one, my real job beckoned, and the little free time I had when writing the first book basically disappeared. It doesn't matter how good of an author you are; you suck if you can't find the time to write. When you only have a short period of time on any given day or week to write, you barely get anything written, and with long stretches between writing sessions you forget where you were in your train of thought and have to go back and re-read the sections you've already written just to get back into writing mode.
So, to answer the underlying question, I suppose the best answer is ‘It depends!’. If you have a passion for writing and know you’ll have the time to dedicate to writing (at least an hour or two a night, 3-5 days a week), then no, not really. On the other hand, if you lack those two things, it can be a taxing experience to say the least.
It’s definitely not something I regret, let me tell you that much.
I’ve been scooped!
So you say that startup time is slow?
One of the first things people may notice when they're running managed applications is that the startup time is slower than that of a 'native' application. Without delving into details, the major cause of this 'slow down' is the compilation that needs to happen (called JIT, for just-in-time compilation). Since managed code has to be compiled into native code before it's executed, this is an expected delay when starting the application. Since Managed DirectX is built on top of this system, code you write using the Managed DirectX runtime will have this behavior as well.
Since JIT compilation can do a lot of optimizations that just can't be done at compile time (taking advantage of the actual machine the code is running on, rather than the machine it was compiled on, etc.), the behavior here is desired; the side effect (the slowdown) is not. It would be great if there were a way to have this cost removed. Luckily for us, the .NET Framework includes a utility called NGen (Native Image Generator) which does exactly this.
This utility will natively compile an assembly and put the output into what is called the 'Native Assembly Cache', which resides within the 'Global Assembly Cache'. When the .NET Framework attempts to load an assembly, it will check to see if a native version of the assembly exists, and if so, load that instead of doing the JIT compilation at startup, potentially decreasing the startup time of the application dramatically. The downsides of using this utility are two-fold. One, there's no guarantee that the startup or execution time will actually be faster (although in most cases it will be – test to find out), and two, the native image is very 'fragile'. There are a number of factors that can cause a native image to become invalid (such as a new runtime installation, or security settings changes). Once the native image is invalid, it will still exist in the 'Native Assembly Cache', but it will never be used. Plus, if you want to regain the benefits, you'll need to ngen the assemblies once more, and unless you're watching closely, you may not even notice that the original native assemblies are now invalid.
If you’ve decided you would still like to ngen your Managed DirectX assemblies, here are the steps you would take:
- Open up a Visual Studio .NET 2003 Command Prompt window
- If you do not wish to open that command window, you could simply open up a normal command prompt window, and ensure the framework binary folder is in your path. The framework binary folder should be located at %windir%\Microsoft.NET\Framework\v1.1.4322, where %windir% is your Windows folder.
- Change the directory to %windir%\Microsoft.NET\Managed DirectX, where %windir% is your Windows folder.
- Go into the folder for the version of Managed DirectX you wish to ngen from here (the later the version, the more recent the assembly).
- Run the following command line for each of the assemblies in the folder:
- ngen microsoft.directx.dll (etc)
- If your command prompt supports it, you may also use this command line instead:
- for /R %i in (*.dll) do ngen %i
If you later decide you do not want the assemblies compiled natively, you can use the ngen /delete command to remove the native images (there's an example after the note below).
- Note that not all methods or types will be natively compiled by ngen, and these will still need to be JIT’d. Any types or methods that fall into this category will be output during the running of the ngen executable.
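For example, removing the native image for a single assembly looks like this (run it once per assembly, or adapt the for loop from above):
- ngen /delete microsoft.directx.dll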
Shows you how much I pay attention..
It wasn’t until I saw a comment from someone on one of my earlier posts that I realized my second book was now listed on the major book sites (although it won’t be ‘available’ until this summer)..
You can read more about it here.
Now I'll try to answer some of the questions raised by that reply.. First, this book is *not* the one I discuss during the .NET Show episode (that's coming after this one).. Second, Space Tag also isn't included (if you want a 3D space game, David Weller's book has a 3D version of space war).. The three games discussed in this text are a simple puzzle game, a 3D 'tank' game, and a racing game. The last game is written entirely using HLSL, and avoids the fixed function pipeline..
Exciting!
Managed DirectX – Have you used it?
So I asked before what types of features you would like to see in Managed DirectX (and the feedback was awesome – I'm still interested in this topic).. What I didn't ask about back then, though, was what types of things people are currently using it for.
Are you using it to write some tools? Game engines? Playing around on the weekends? What experiences have you had working with the API?