Now that the Whidbey Beta is out, what about Managed DirectX?

With the release of the Whidbey (err, I mean Visual Studio.NET 2005) beta a few weeks ago, my thoughts have drifted towards how Managed DirectX may work in this new release.  For those of you who haven’t seen VS2005, or the new CLR, the changes there are magnificent.  Across the board, improvements have been made, and the thing is looking wonderful.

So my thoughts roll over to MDX, and I think to myself, "Self: wouldn't it be awesome if we took advantage of some of those features (generics, anyone)?"  Who wouldn't want to declare the vertex data they're about to use like:

VertexBuffer<PositionNormalTextured> vb = null;
IndexBuffer<short> ib = null;

Of course that just scratches the surface of the possibilities that would lie in a VS2005-specific version of Managed DirectX.  Which raises the question: what do you think?

The Render Loop Re-Revisited…

Ah, the good ol' render loop.  Everyone's favorite topic of conversation.  As I'm sure everyone is aware, the Managed DirectX samples that shipped with the DirectX 9 SDK, as well as the Summer 2003 update, used the 'dreaded' DoEvents() loop I speak so negatively about at times.  People have probably also realized my book used the 'infamous' Paint/Invalidate method.  I never really made any recommendations in the earlier posts about which way was better, and really, I don't plan on it now.  So why am I writing this now?!?!
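For anyone who hasn't seen them, the two styles look roughly like this (a minimal sketch, with Render standing in for your actual drawing code):

// The 'dreaded' DoEvents() loop:
while (Created) // a Form property; false once the window is destroyed
{
    Render();
    Application.DoEvents(); // pump the message queue, allocating along the way
}

// The 'infamous' Paint/Invalidate method:
protected override void OnPaint(PaintEventArgs e)
{
    Render();
    this.Invalidate(); // immediately queue another paint message
}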

If you read David's post about the upcoming 2004 Update, you may have noticed that he mentions the DoEvents() method the samples used to employ is gone.  In all reality, along with the new sample framework, the samples themselves never actually use the Windows Forms classes anymore either.  The actual render window and render loop are all run through P/Invoke calls into Win32, and I figured I'd take a quick minute to explain the reasoning behind it.

Obviously the main draw of DirectX is game development.  Sure, there are plenty of non-game scenarios that DirectX is great for (data visualization, medical imaging, etc.), but what drives our API is game developers.  If you know any game developers (or are one yourself), you're probably well aware that while the game is running (and rendering), things need to happen quickly, and predictably.  With all the benefits of managed code, one thing that can be hard to achieve is that 'predictability', particularly when you're dealing with the garbage collector.

So let's say you decided to use Windows Forms for your rendering window, and you wanted to watch what the mouse was doing, so you hook the MouseMove event.  Aside from the 'cost' of the Invoke call into your handler, a managed object (the mouse event arguments) is created.  *Every* time.  Now, the garbage collector is quite efficient, and very speedy, so this alone could be easily handled.  The problem arises when your own 'short-lived' objects get promoted to a higher generation due to the extra collections these allocations trigger.  Generation 0 collections won't have any effect on your game; generation 2 collections, on the other hand, will.
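Here's the pattern in question (the handler and fields are just illustrative names):

private int mouseX, mouseY;

// Hooking the event looks harmless enough...
this.MouseMove += new MouseEventHandler(OnMouseMove);

// ...but a new MouseEventArgs object is allocated for every single movement.
private void OnMouseMove(object sender, MouseEventArgs e)
{
    mouseX = e.X;
    mouseY = e.Y;
}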

Thus the new sample framework doesn't rely on these constructs at all.  This is probably one of the most efficient rendering loops available in the managed space currently, but the code doesn't necessarily follow many of the conventions you see in the managed world.  So, when deciding on the method you want to use to drive your rendering, you need to ask yourself what's more important: performance, or conformance?  In the case of the updated sample framework, we've chosen performance.  Your situation may be different.
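For the curious, the heart of that approach looks something like the sketch below.  This is my own minimal reconstruction of the general idea, not the actual framework code; the Message struct mirrors the Win32 MSG structure, and Render stands in for your drawing code.

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
public struct Message
{
    public IntPtr hWnd;
    public uint msg;
    public IntPtr wParam;
    public IntPtr lParam;
    public uint time;
    public System.Drawing.Point p;
}

class RenderLoop
{
    const uint WM_QUIT = 0x0012;
    const uint PM_REMOVE = 1;

    [DllImport("user32.dll")]
    static extern bool PeekMessage(out Message msg, IntPtr hWnd,
        uint filterMin, uint filterMax, uint flags);

    [DllImport("user32.dll")]
    static extern bool TranslateMessage(ref Message msg);

    [DllImport("user32.dll")]
    static extern IntPtr DispatchMessage(ref Message msg);

    // Pump any waiting messages; when the queue is empty, render.
    // No Windows Forms events fire, so no per-frame allocations occur.
    public static void Run()
    {
        bool running = true;
        while (running)
        {
            Message msg;
            if (PeekMessage(out msg, IntPtr.Zero, 0, 0, PM_REMOVE))
            {
                if (msg.msg == WM_QUIT)
                    running = false;
                TranslateMessage(ref msg);
                DispatchMessage(ref msg);
            }
            else
            {
                Render(); // the queue is empty, so draw a frame
            }
        }
    }

    static void Render() { /* your drawing code here */ }
}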

Direct3D and the FPU..

I had an email this morning about Managed Direct3D 'breaking' the math functions in the CLR.  The person who wrote in discovered that this method:

public void AssertMath()
{
    double dMin = 0.54797677334988781;
    double dMax = 4.61816551621179;
    double dScale = 1 / (dMax - dMin);
    double dNewMax = 1 / dScale + dMin;
    System.Diagnostics.Debug.Assert(dMax == dNewMax);
}

Behaved differently depending on whether or not a Direct3D device had been created.  It worked before the device was created, and failed afterwards.  Naturally, he assumed this was a bug, and was concerned.  Since I've had to answer questions similar to this multiple times now, it pretty much assures the topic needs its own blog entry.

The short of it is that this is caused by the floating point unit (FPU).  When a Direct3D device is created, the runtime changes the FPU to suit its needs: by default it switches the FPU to single precision, while the default for the CLR is double precision.  This is done because single precision has better performance (naturally).

Now, the code above works before the device is created because the CLR is running in double precision.  Then you create a Direct3D device, the FPU is switched to single precision, and there are no longer enough digits of precision to accurately calculate the code above (single precision carries roughly seven significant decimal digits, while those constants need about seventeen).  Thus the 'failure'.
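You can get a feel for the effect without Direct3D by forcing the constants into single precision yourself.  This is only an analogy (the real mechanism is the FPU control word, not the C# float type):

float fMin = 0.54797677334988781f; // only about 7 significant digits survive
float fMax = 4.61816551621179f;
float fScale = 1 / (fMax - fMin);
float fNewMax = 1 / fScale + fMin;
// With so few digits of precision, fNewMax is no longer guaranteed
// to round-trip back to fMax the way the double version does.
Console.WriteLine("{0} vs {1}", fMax, fNewMax);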

Luckily, you can avoid all of this by simply telling Direct3D not to mess with the FPU at all.  When creating the device, pass the CreateFlags.FpuPreserve flag to keep the CLR's double precision and have your code function as you expect.
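A device creation call with the flag might look something like this (a minimal sketch; renderForm and the presentation parameters are just typical windowed-mode placeholders):

using Microsoft.DirectX.Direct3D;

PresentParameters presentParams = new PresentParameters();
presentParams.Windowed = true;
presentParams.SwapEffect = SwapEffect.Discard;

// FpuPreserve keeps the FPU in the CLR's double precision mode.
Device device = new Device(0, DeviceType.Hardware, renderForm,
    CreateFlags.SoftwareVertexProcessing | CreateFlags.FpuPreserve,
    presentParams);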

Managed DirectX – Have you used it?

So I asked before what types of features you would like to see in Managed DirectX (and the feedback was awesome; I'm still interested in this topic).  What I didn't ask about back then, though, was what types of things people are currently using it for.

Are you using it to write some tools?  Game engines?  Playing around on the weekends?  What experiences have you had working with the API?

Post-show ramblings after the Game Developers Conference…

All in all, I think it was a really good show this year.  I met lots of interesting people doing lots of interesting projects.  I also heard rumblings that the show next year might be in San Francisco rather than San Jose.  It was awfully crowded, so maybe a change of venue is in order, but really I don't know if it's going to happen or not.

The expo itself was decently sized this year, with lots of good booths.  Renderware had a large booth once again, but it seemed a little more enclosed this time, so I didn't actually go in.  Nokia's N-Gage booth (which was huge and popular last year) was smaller and much less popular this year.  Part of that probably has to do with its sub-prime location compared to last year's, but only part.  ATI and nVidia's booths both had interesting presentations happening throughout the day, and the AMD64 booth was quite popular as well.  The Intel booth was huge as always, and they once again had the 'contests' where six people would play an online game for five minutes and the winner would get the game (this year it was 'Call of Duty').  I played once and came in third place (I sucked), but did get a stuffed Intel bunny-man doll.

My talk seemed to be received very well too.  I covered most of the basic areas for managed code in gaming, showed some demos, failed in showing other demos (doh!), and got some good questions.  One demo in particular really stood out for the crowd, and I was asked many questions about it after the talk and throughout the show.  It'll be released in an upcoming DirectX SDK Update.

I loved the awards show Wednesday night, we announced XNA, and I think it was an all-around great show.

Game Developers Conference…

So next week is the Game Developers Conference, which is always an exciting time around here.  The 'main' conference runs from Wednesday through Friday, although there are tutorials and sessions on Monday and Tuesday as well, just no show floor, etc.

I will be giving a talk on managed code in gaming during one of the sessions Tuesday morning, which should be pretty exciting.  It's always great to get the chance to actually talk with customers, find out what issues they may be having, and answer questions.  Every time I've given a talk, I'd say the Q&A session at the end has been the best part.  People always come up with some great questions, and most times it gives good insight into the types of things they're trying to accomplish, and how they expect things to work.

As for the rest of the show, I'm pretty excited about that as well.  There's always lots of interesting things to see: the expo floor, literally hundreds of different sessions, booths for everything, and an all-around great vibe.  I'm looking forward to a great show.

Test Driven Development…

Recently, during the 'first official meeting of the Managed DirectX fanclub' (as Dave called it), Craig mentioned something he's been doing called 'Test Driven Development'.

It *sounds* like a lot more upfront work, but the process intrigues me.  Anything that can help eliminate bugs and regressions has got to be a good thing.  I'm curious what other people's experience in this field is.  Craig even mentioned that he found he was more productive, which was at least somewhat surprising given the extra work involved.

Of course, it makes me wonder...  If I have to write the test before I implement the method, and I'm designing a library, I can't even make the test compile until I've defined the method.  It's like a catch-22! =)
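For instance, the first test in that style might look like this (an NUnit-style sketch; the MathHelper class and its Lerp method are hypothetical and don't exist yet, which is exactly the point):

using NUnit.Framework;

[TestFixture]
public class MathHelperTests
{
    [Test]
    public void LerpReturnsTheMidpoint()
    {
        // This won't even compile until MathHelper.Lerp is defined;
        // the test forces you to pin down the signature and the
        // expected behavior before writing the implementation.
        Assert.AreEqual(0.5f, MathHelper.Lerp(0.0f, 1.0f, 0.5f));
    }
}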

To shader or not to shader, that is the question…

So I'm finishing up my second book (an introduction to 3D game development), which is intended to be a 'beginners' book, and I find myself continually arguing with myself about whether or not I should use shaders in the last 'sample game'.  Couple this with the fact that my 'advanced' book, which will be out a few short months after this beginner book, is virtually entirely shader driven, with next to nothing using the fixed function pipeline.

The argument I'm having with myself is that the shader code in the beginners book could be too difficult to be classified as 'beginner', while at the same time I don't want to simply ignore shaders, because they can be quite powerful.  Right now I'm leaning towards some basic shaders for the last game, just as a small 'introduction' that hopefully won't catch anyone off guard.

I’d rather have someone complain about too much (or too difficult) information than not enough.

What would stop you from using Managed DirectX?

This is a question that is interesting in more ways than one.  One of the more common answers I hear is naturally centered around performance, even though many times the person with the 'fear' about performance hasn't actually tried to see what kind of performance they could get.  I would love to hear about specific areas where people have found the performance to be lacking, and the goals they're trying to accomplish when hitting these 'barriers'.

But above and beyond that, what other reasons would you have for not using Managed DirectX?  Do you think the working set is too high?  Do you not like the API design?  Do you just wish that feature 'XYZ' was supported, or supported in a different way?

At the same time, what about the users who are using Managed DirectX currently?  What do you like, and why?

You can consider this my highly unscientific survey on the current state of the Managed DirectX runtime. =)

The speed of Managed DirectX

It seems that at least once a week I'm answering questions directly regarding the performance of managed code, and Managed DirectX in particular.  One of the more common questions I hear is some paraphrase of "Is it as fast as unmanaged code?"

Obviously, in a general sense, it isn't.  Regardless of the quality of the Managed DirectX API, the fact remains that it still has to run through the same DirectX API that the unmanaged code does.  There is naturally going to be a slight overhead for this, but does it have a large negative impact on the majority of applications?  Of course not.  No one is suggesting that one of the top-of-the-line polygon-pushing games coming out today (say, Half Life 2 or Doom 3) should be written in Managed DirectX, but that doesn't mean that there isn't a whole slew of games that could be.  I'll get to that later.

I'm also asked quite a few things along the lines of "Why is it so slow?"  Sometimes the person hasn't even run a managed application; they just assume it has to be slow.  Other times, they may have run numerous 'scenarios' comparing against the unmanaged code (including running the SDK samples) and found that in some instances there are large differences.

Like I've mentioned earlier in this blog, all of the samples in the SDK use the dreaded 'DoEvents' loop, which can artificially slow down the application due to allocations and the subsequent large number of collections.  The fact that most of the samples run at frame rates similar to the unmanaged API is a testament to the speed of the API to begin with.

The reality is that many of the developers out there today simply don't know how to write well-performing managed code.  This isn't through any shortcoming of the developer, but rather the newness of the API, combined with not enough documentation on performance and how to get the best out of the CLR.  Luckily, this is changing; for example, see Rico Mariani's blog (or his old blog).  For the most part, we are all newbies in this area, but things will only get better.

It’s not at all dissimilar to the change from assembler to C++ code for games.  It all comes down to a simple question.  Do the benefits outweigh the negatives?  Are you willing to sacrifice a small bit of performance for the easier development of managed code?  The quicker time to market?  The greater security?  The easier debugging?

Like I said earlier, there are certain games today that aren't good fits for having the main engine written in managed code, but there are plenty of titles that are.  The top 10 selling PC games a few weeks ago included two versions of The Sims, Zoo Tycoon (+ expansion), Age of Mythology, Backyard Basketball 2004, and Uru: Ages Beyond Myst, any of which could have been written in managed code.

Anyone who's taken the time to write some code in one of the managed languages normally realizes the benefits pretty quickly.