
Monday, July 14, 2014

Clipping Lines to a Rectangle using the Cohen-Sutherland Algorithm

For the last six or eight months, off and on, I’ve been trying to write some code that will create a Voronoi diagram from a set of random points, inspired by Amit Patel’s Polygonal Map Generation demo.  In the last week or so, I had a bit of a break-through, in that I finally managed to get an implementation of Fortune’s Algorithm put together that would actually work and generate the Voronoi edges and vertices correctly so that I could render them.  Since most of the existing implementations I’ve been trying to work from are either broken or an enormous pain in the ass to try to understand, I’m planning on writing up my implementation, once I’m finally happy with it.  I’ve still got a fair way to go, since right now I’m only outputting the graph edges and vertices successfully, and while that renders prettily, I haven’t gotten to the point that I have all of the really useful graph connectivity information being output yet.  Hopefully, if everything goes well, I’ll manage to finish that up by the end of the week, but this has proven to be a real bugger of a problem to solve.

As I mentioned, most of the implementations I’ve studied are either incomplete or broken.  In particular, I haven’t yet found an example that correctly clips the edges of the generated Voronoi polygons to a rectangular area.  So long as all you care about is rendering the graph to the screen, this isn’t a big problem, since most 2D graphics libraries will happily draw lines that extend beyond the drawable area of the screen.  However, if you’re trying to subdivide a rectangular region into Voronoi polygons, it’s kind of nice to have your edges actually clipped to that region.  What I’m envisioning doing with this code eventually is using it to render 3D maps, subdivided into territories, but with an overlaid rectangular grid for moving units – think of the strategic map in Total War games or Lords of the Realm.

After I got frustrated with trying to make the broken clipping code I was working from perform correctly, I trolled Google and came across the Cohen-Sutherland line-clipping algorithm.  This looked to be exactly what I needed, and wonder of wonders, the Wikipedia page actually featured a readable, reasonable example, rather than the obtuse academic pseudo-code you usually find there (see the Fortune’s Algorithm article…).  The only thing I would caution you about with the Wikipedia example is that it uses a coordinate system where the origin is the lower-left bounds of a rectangle as usually encountered in mathematics, rather than the upper-left origin we commonly use in computer graphics, so some tweaking is necessary.

The code for this example can be downloaded from my github repository, at https://github.com/ericrrichards/dx11.git.  The algorithm is included in the Algorithms project, while the example code is in the CohenSutherlandExample project.  The example code is a quick-and-dirty mess of GDI drawing code, so I’m going to focus on the algorithm code.
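To give a sense of the shape of the algorithm before digging into the repository, here is a minimal sketch of Cohen-Sutherland clipping in C#, written for the top-left, y-down origin we normally use in screen space.  The names and signatures are illustrative; the version in the Algorithms project is organized a little differently.

// Minimal Cohen-Sutherland sketch for a top-left origin (y grows downward).
// Illustrative only - not the exact code from the Algorithms project.
public static class CohenSutherland {
    private const int Inside = 0;  // 0000
    private const int Left = 1;    // 0001
    private const int Right = 2;   // 0010
    private const int Top = 4;     // 0100 - the top edge has the smaller y in screen space
    private const int Bottom = 8;  // 1000

    // Classify a point against the clip rectangle [xMin,xMax] x [yMin,yMax]
    private static int ComputeOutCode(float x, float y, float xMin, float yMin, float xMax, float yMax) {
        var code = Inside;
        if (x < xMin) code |= Left;
        else if (x > xMax) code |= Right;
        if (y < yMin) code |= Top;
        else if (y > yMax) code |= Bottom;
        return code;
    }

    // Clips the segment (x0,y0)-(x1,y1) in place; returns false if it lies entirely outside.
    public static bool ClipLine(ref float x0, ref float y0, ref float x1, ref float y1,
                                float xMin, float yMin, float xMax, float yMax) {
        var outCode0 = ComputeOutCode(x0, y0, xMin, yMin, xMax, yMax);
        var outCode1 = ComputeOutCode(x1, y1, xMin, yMin, xMax, yMax);

        while (true) {
            if ((outCode0 | outCode1) == 0) {
                return true;   // both endpoints inside - trivially accept
            }
            if ((outCode0 & outCode1) != 0) {
                return false;  // both endpoints share an outside zone - trivially reject
            }
            // At least one endpoint is outside; move it onto the rectangle edge it violates
            var outCode = outCode0 != 0 ? outCode0 : outCode1;
            float x = 0, y = 0;
            if ((outCode & Bottom) != 0) {
                x = x0 + (x1 - x0) * (yMax - y0) / (y1 - y0);
                y = yMax;
            } else if ((outCode & Top) != 0) {
                x = x0 + (x1 - x0) * (yMin - y0) / (y1 - y0);
                y = yMin;
            } else if ((outCode & Right) != 0) {
                y = y0 + (y1 - y0) * (xMax - x0) / (x1 - x0);
                x = xMax;
            } else if ((outCode & Left) != 0) {
                y = y0 + (y1 - y0) * (xMin - x0) / (x1 - x0);
                x = xMin;
            }
            if (outCode == outCode0) {
                x0 = x; y0 = y;
                outCode0 = ComputeOutCode(x0, y0, xMin, yMin, xMax, yMax);
            } else {
                x1 = x; y1 = y;
                outCode1 = ComputeOutCode(x1, y1, xMin, yMin, xMax, yMax);
            }
        }
    }
}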

image

After clipping:

image

Thursday, July 10, 2014

One Year Later…

Tuesday was the anniversary of my first real post on this blog.  For the most part, I’ve tried to keep my content here on the technical side of things, but, what the hell, this is a good time to reflect on a year of blogging – what went well, what went poorly, and where I’m going from here. 

What I Meant to Accomplish (And What I Actually Accomplished…)

Content

I restarted this blog about a year ago to document my attempt at learning DirectX 11, using Frank Luna’s Introduction to 3D Game Programming with Direct3D 11.0.  In my day job, I mostly code ASP.NET and Winforms applications using C#, so I decided to convert the C++ examples from Mr. Luna’s book into C#, using SlimDX as my managed DirectX wrapper.  SlimDX appeared to me to be slightly more mature than its main competitor, SharpDX, as its documentation was a little more complete, and it had been out in the wild a little bit longer, so there was a bit more third-party information (StackOverflow, other blogs, GameDev.net forum postings, etc.) on it.  I suppose I could have also gone with XNA, although Microsoft appears to have abandoned any new development on it (last release 9/16/2010…), and I felt like SlimDX’s simple wrapper around the DirectX library would be easier to translate to than shoehorning the examples into the XNA model, not to mention wrassling with the XNA Content Pipeline.

My initial goal was to work through the book, converting each of the examples presented and then blogging about the process.  Except for the chapters on Compute Shaders and Quaternions (which I have yet to tackle, mostly because the examples are not terribly interesting), I completed that goal by the middle of November of last year.  From there, I started incorporating elements from Carl Granberg’s Programming an RTS Game with Direct3D into the terrain rendering code that Luna’s book presented, as well as dabbling in integrating Direct2D and SpriteTextRenderer to handle 2D drawing.

After that, my intention was to start working my way through Ian Millington’s book, Game Physics Engine Development.  This is about where I ran out of steam.  Between the hassle of trying to reconcile the code examples from this book, which were based on OpenGL and somewhat less self-contained than what I had been working on previously, various issues in my personal and work life, and the general malaise of an especially cold, dark winter here in New England, my impetus to work on my side projects faded away spectacularly.  If any of you followed this regularly, I’m sure you’ve noticed that it’s been almost four months since I’ve posted anything new, and before that there was another dry spell of more than a month. 

With the arrival of summer and a move that considerably reduces both the strain on my finances and the number of hours per day I spend driving back and forth to work, I’ve found that my mood has improved by leaps and bounds, and I finally have the extra energy to expend on coding outside of work again.  Right now, I’ve got a number of new things I’m working through, and a number of ideas in the pipeline that should be making their way up here in the near future.

Sharing

In addition to learning things myself, another of my goals was to share what I learned with anyone else out there interested in graphics and game programming.  Putting this content up here was the first step, but the internet is littered with interesting information buried and inaccessible because the GoogleBots either can’t find it or deem it unimportant.  I don’t pretend to know anything about SEO, and I’m not sure it’s something I really want to get involved in – everything I’ve read seems to indicate that you either need to get lucky and go viral, or else spend a bunch of time or money doing vaguely unethical things, like spamming links or hiring a shady outfit in the Ukraine or China to do it for you.

So, my limited efforts at promotion concentrated on GameDev.net, Facebook, Twitter, and HackerNews. 

  • GameDev.net was probably the most effective.  The main source of views came from cross-posting each post I made here on my developer journal there (http://www.gamedev.net/blog/1703-richards-software-ramblings/).  A couple of these posts got picked up by the admins and made the front page, under the Featured Developer Journals section, which resulted in a pretty big boost in views.  I also added links to here on my forum signature, which I don’t believe amounted to much.  To some extent, I also trolled the Beginner and DirectX forums, suggesting that anybody that was having a problem I had covered check out the relevant post.  I hope that this was somewhat helpful, and not just spammy…
  • Facebook was probably not worth bothering with…  Generally my friends on Facebook are split between people I knew in high school, family members, my fraternity brothers from college, and other Dartmouth people who trend towards being law students, med students, teachers, or I-bankers.  It’s not exactly a fertile demographic for tutorials on 3D graphics programming.  Now, if I was writing Top-X lists of Marvel characters, I might do better there, but that’s the nature of Facebook…
  • Twitter is also kind of a non-starter.  Given that I have a grand total of 11 followers, and I don’t really believe in the whole idea of Twitter, that’s probably not surprising.
  • HackerNews was very hit or miss.  Most of the time I would post a link, and it would languish on the back pages forever.  Once in a while, however, something would get some bizarre traction with one of the Hacker News aggregators (usually Feedly), and I’d see a big spike in views for a post.  I have no idea what the rhyme or reason for this is; my best guess is that I happened to post at a particularly dead time, and my link stayed near the top of the newest links page for longer than normal.

However, the vast majority of people who made it to my site came from Google searches.  I’m not sure how, but my site has come to rank pretty highly for certain Google searches.  Some examples, as of today:

Some Charts…

image

Oddly enough, the biggest growth in traffic came after I ran out of steam and stopped posting.  I wonder if this was a missed opportunity to continue growing.

image

Kind of an interesting split in the most popular pages on the site.  Mostly, this is my more interesting and more advanced content, although the two most basic examples on the site are also represented.  Also, my tutorial index page is doing pretty well.

Finances

Making money isn’t really my goal with this site, which is a good thing, since it certainly has not been a mint.  Fortunately, maintaining the site really doesn’t cost me anything: I use Blogger for hosting, which is free, and my GoDaddy registration only costs $10 for the year.

Hopefully, the banner ads I’ve got on the site are not that obnoxious.  Probably most of you who would see this use AdBlock anyway.  They have more than doubled my monetary investment in the site over the past year – as of today, I’ve earned just over $25 from AdSense.  At this rate, I’ll hit the minimum payout threshold in another three years, haha.

If I were to take a wild guess at the number of hours I’ve spent coding and writing up these posts over the last year, and then calculate an hourly rate based on my AdSense revenue, I’m probably earning about a nickel an hour… Even mowing grass or stacking firewood would be orders of magnitude more lucrative, but whatever – I’ve arguably learned more about programming and computer science in a year of porting C++ to C# and fighting with shaders than I did during my courses for my Comp Sci minor.  Even better, kicking around the dusty corners of the .NET library to match up with C++ constructs has broadened my knowledge of what is available, and spending so much time in Visual Studio coding and debugging has taught me all sorts of tricks that I can use in the day job too.

What’s Next?

In a perfect world, I’d continue to churn out high-quality, interesting content on a regular basis, and this site would become a go-to resource for SlimDX and general C# game programming, similar to the LazyFoo or RasterTek tutorial series.  I’ll settle for just getting back to a more consistent posting schedule, though.  There are so many interesting topics to consider, and both coding and then explaining what I’ve done is the best way I have found so far to cement my understanding of algorithms.

As I mentioned, I have a bunch of stuff in the pipeline that I’m hoping to finish and write up in the near future, so hopefully there will be some new content here shortly.

I’ve been toying with the idea of writing a book, since SlimDX does not appear to be very well covered in print (At present, Amazon only lists one title).

Anyway, thanks for reading over the past year, and I hope that this site has been useful.  Here’s to an even better second year of http://www.richardssoftware.net/!

Thursday, March 20, 2014

A Dynamic ASP.NET MVC Controller using CSScript

At my day job, we are working on a large enterprise system.  One of the feature areas of this product is an array of different charts of performance metrics.  All of these charts accept a common set of parameters, and generate some json that is used by our client side to render a chart for the requested metric.

At the moment, we are using a standard MVC controller to define the actions necessary to calculate and collate these stats.  The other day, we were kicking around the idea of how we would be able to add new charts in for a client, after they already have the product installed.  With the current design we are using, there isn’t really an easy way to do that without dropping in the updated build that has the updated controller.  That’s not really an ideal situation, since reinstalling and reconfiguring the application is kind of a heavy-handed approach when all we want to do is add a couple of new charts in, without impacting the rest of the application.  It also doesn’t give us many options if our customer, for whatever reason, wants to disable certain charts.

What we’ll probably end up doing is using the database to define the chart calculation SQL, either by storing the SQL statements that retrieve the metrics in a table, or via stored procedures.  I’ll admit, I’m not crazy about this approach, since I’ve worked on a couple of other products that used this method, and I have some slight PTSD from trying to troubleshoot convoluted SQL that was poorly written and not checked into source control other than in the master database creation script.  With a sane process, this will probably be an effective option; one of the reasons we are likely going to need to go this route is that we are developing parallel .NET and Java versions, due to API restrictions of the underlying technology stacks that we are targeting, while trying to use a common UI and database.

This did get me thinking about how it would be possible to dynamically populate the actions of an MVC controller.  I had come across some posts on CSScript, a library that allows you to run arbitrary C# code interpretively.  Depending on how you use it, you can load up entire classes, methods, or even statements, and invoke them at run-time, without the script code being compiled and embedded in your .dll.  So, theoretically, it should be possible to define the controller action that calculates the metrics for a given chart in a CSScript file, load that up, execute it using the CSScript interpreter, passing in any parameters needed, and return the results.

The other piece of this is figuring out how to get an MVC controller to accept action routes that are not explicitly defined for it.  Fortunately, with the latest version of MVC, you are able to tweak the default behavior of the controller to a considerable degree, if you are willing to delve into overriding some of the virtual methods that the Controller class gives you access to.  So, after a little digging, I was able to figure out a slightly crazy way to wrestle MVC into doing what I wanted it to do – accepting dynamic controller actions, executing a scripted action, and then returning the result to the webpage.

This is a very quick and dirty example, really just a proof of concept prototype, so I make no promises as to its performance or robustness.  But, it was cool to figure out that such a thing was possible, and I figured that I would share it as a jumping-off point in case anyone else encounters a similar crazy requirement.  The code is available on my GitHub repository, at https://github.com/ericrrichards/MVCScriptedController.git.

As a simple example, let’s assume that we know that we are going to have to support controller actions with a signature like this:

ActionResult Action( string i )

An example implementation of an action like this would look like this (from Scripts/Reports/Foo.cs in the project):

public ActionResult Foo(string i) {
    return new JsonResult() {
        Data = "Foo - " + i,
        JsonRequestBehavior = JsonRequestBehavior.AllowGet
    };
}

We’ll start with an empty MVC5 project, and then add a new empty Controller to the project.  Deleting the auto-generated Index action, that leaves us with this:

using System.Web.Mvc;

namespace ScriptedController.Controllers {
    public class ReportController : Controller {

    }
}

The Controller class offers us a virtual method, void HandleUnknownAction(string actionName), which is fired when a request is routed to the controller and the requested action does not match any of the defined controller actions.  By default, this method just throws an HttpException with a 404 error code.  Interestingly enough, there must be something else in the Controller class which catches this error, because if you hit a non-existent controller action without overriding HandleUnknownAction, you’ll get a 200 response with no content.  However, we can override this method to allow us to do something else, which in our case will be to look for a dynamically loaded action method delegate and execute that instead.


To support this, we are going to add to our controller a delegate type matching the action signature that we are expecting, and a dictionary mapping action names to these delegates. We’ll make this dictionary static, so that it is shared between all instances of the controller – since MVC creates a new Controller instance for each incoming request, we would otherwise need to load up our dynamic actions on every request.

private volatile static Dictionary<string, MvcAction> _actions;
private delegate ActionResult MvcAction(string i);

We can now override the HandleUnknownAction method, so that it checks this dictionary for an action which matches the passed actionName.  If the action is found, we will execute it, and write the result to the HTTP response, using the ActionResult.ExecuteResult() method.  Otherwise, we will attempt to invoke the base HandleUnknownAction method.  For whatever reason, when we do this, the resulting exception is not caught, so we need to handle that and set the response code to 404.

protected override void HandleUnknownAction(string actionName) {
    if (_actions.ContainsKey(actionName)) {
        _actions[actionName](HttpContext.Request.Params["i"]).ExecuteResult(ControllerContext);
    } else {
        try {
            base.HandleUnknownAction(actionName);
        } catch (Exception ex) {
            HttpContext.Response.StatusCode = 404;
        }
    }
}

Getting the parameters to the action is somewhat hacky.  We can access any of the parameters passed along with the HTTP request by using the HttpContext.Request.Params dictionary.  In this situation, where we know that all of the dynamic actions for this controller will share a common set of parameters, we can hard-code the parameters that we extract – at the moment, I’m not totally sure how I would handle a more general case, so that bears some thought.


At this point, we have a controller that can handle actions requested of it that are not defined explicitly in the class, assuming that they are contained in its dictionary of dynamic actions.  The next step is figuring out how to populate that dictionary with the dynamic actions.  For that, we need to use CSScript.  We are going to add another method to our controller, named EnsureActionsLoaded(), which will handle this for us. 


What this method will do is check whether the actions dictionary has already been loaded, and if not, load the actions from a directory which contains a series of script files, where each file contains a single action method, with the filename matching the action name.  We’ll use CSScript to dynamically load each method as a delegate, which we can then push into our dictionary.

private void EnsureActionsLoaded() {
    if (_actions != null) {
        return;
    }
    lock (SyncRoot) {
        if (_actions == null) {
            _actions = new Dictionary<string, MvcAction>();
            var path = Server.MapPath(ScriptsDirectory);

            foreach (var file in Directory.GetFiles(path)) {
                try {
                    var action = CSScript.Evaluator.LoadDelegate<MvcAction>(System.IO.File.ReadAllText(file));
                    var actionName = Path.GetFileNameWithoutExtension(file);
                    if (!string.IsNullOrEmpty(actionName)) {
                        _actions[actionName] = action;
                    }
                } catch (Exception ex) {
                    Console.WriteLine(ex.Message);
                }
            }
        }
    }
}

If you have dealt with the singleton pattern in C#, you may have noticed that we are using a similar double-checked locking approach to determine if the action dictionary has already been loaded.  Since the dictionary is static, and because of the multi-threaded nature of web applications, it is possible that two or more simultaneous requests will arrive at once, and if both attempt to load the actions dictionary at the same time, they have the potential to stomp on each other.


Since we are using the Server.MapPath() function to determine the location of our script directory, we need to also override the Initialize() method of the controller to call our EnsureActionsLoaded method.  Ideally, we could do this in the constructor, but at that point, the Server variable is not yet initialized.

protected override void Initialize(RequestContext requestContext) {
    base.Initialize(requestContext);
    EnsureActionsLoaded();
}

At this point, we have a working controller which can load CSScript files as controller actions, and execute them.  There are a couple of simple examples of these scripts in the Scripts/Report folder of the project, like the following Fizz.cs:

using System.Web.Mvc;

public ActionResult Fizz(string i) {
    return new JsonResult() {
        Data = "Fizz - " + i,
        JsonRequestBehavior = JsonRequestBehavior.AllowGet
    };
}

One thing to note is that you will either need to use fully-qualified type names in your scripts, or else include the necessary using statements for CSScript to properly parse and compile your scripts.  I haven’t tried anything more complicated yet using assemblies that are not referenced in the project itself, so be aware that there may be some gotchas there.


Altogether, our dynamic controller looks like this:

using System;
using System.Collections.Generic;
using System.Web.Mvc;
using System.IO;

namespace ScriptedController.Controllers {
    using System.Web.Routing;

    using CSScriptLibrary;

    public class ReportController : Controller {
        private const string ScriptsDirectory = "~/Scripts/Reports";

        private static readonly object SyncRoot = new object();
        private volatile static Dictionary<string, MvcAction> _actions;
        private delegate ActionResult MvcAction(string i);

        protected override void Initialize(RequestContext requestContext) {
            base.Initialize(requestContext);
            EnsureActionsLoaded();
        }

        private void EnsureActionsLoaded() {
            if (_actions != null) {
                return;
            }
            lock (SyncRoot) {
                if (_actions == null) {
                    _actions = new Dictionary<string, MvcAction>();
                    var path = Server.MapPath(ScriptsDirectory);

                    foreach (var file in Directory.GetFiles(path)) {
                        try {
                            var action = CSScript.Evaluator.LoadDelegate<MvcAction>(System.IO.File.ReadAllText(file));
                            var actionName = Path.GetFileNameWithoutExtension(file);
                            if (!string.IsNullOrEmpty(actionName)) {
                                _actions[actionName] = action;
                            }
                        } catch (Exception ex) {
                            Console.WriteLine(ex.Message);
                        }
                    }
                }
            }
        }

        protected override void HandleUnknownAction(string actionName) {
            if (_actions.ContainsKey(actionName)) {
                _actions[actionName](HttpContext.Request.Params["i"]).ExecuteResult(ControllerContext);
            } else {
                try {
                    base.HandleUnknownAction(actionName);
                } catch (Exception ex) {
                    HttpContext.Response.StatusCode = 404;
                }
            }
        }
    }
}

If you run the project, you should be able to hit both /Report/Foo and /Report/Fizz, and see results like the below:


image


image


Attempting to hit an unregistered controller action will give you the lovely ASP.NET 404 error page:


image




I’m not sure if I will ever get a chance to actually use this, but it was a fun little experiment.  Hopefully if any of you find yourself with a similar problem, this will give you a starting point to work from.

Wednesday, February 5, 2014

Rendering Text using SlimDX SpriteTextRenderer

Howdy.  Today, I’m going to discuss rendering UI text using the SlimDX SpriteTextRenderer library.  This is a very nifty and light-weight extension library for SlimDX, hosted on CodePlex.  In older versions of DirectX, it used to be possible to easily render sprites and text using the ID3DXSprite and ID3DXFont interfaces, but those have been removed in newer versions of DirectX.  I’ve experimented with some other approaches, such as using Direct2D and DirectWrite or the DirectX Toolkit, but wasn’t happy with the results.  For whatever reason, Direct2D doesn’t interop well with DirectX 11, unless you create a shared DirectX 10 device and jump through a bunch of hoops, and even then it is kind of a PITA.  Likewise, I have yet to find C# bindings for the DirectX Toolkit, so that’s kind of a non-starter for me; I’d either have to rewrite the pieces that I want to use with SlimDX, or figure out the marshaling to use the C++ dlls.  So for that reason, the SpriteTextRenderer library seems to be my best option at the moment, and it turned out to be relatively simple to integrate into my application framework.

If you’ve used either the old DirectX 9 interfaces or XNA, then it’ll be pretty intuitive how to use SpriteTextRenderer.  The SpriteRenderer class has some useful methods to draw 2D sprites, which I haven’t explored much yet, since I have already added code to draw screen-space quads.  The TextBlockRenderer class provides some simple and handy methods to draw text up on the screen.  Internally, it uses DirectWrite to generate sprite font textures at runtime, so you can use any installed system fonts, and specify the weight, style, and point size easily, without worrying about the nitty-gritty details of creating the font.

One limitation of the TextBlockRenderer class is that you can only use an instance of it to render text with a single font.  Thus, if you want to use different font sizes or styles, you need to create different instances for each font that you want to use.  Because of this, I’ve written a simple manager class, which I’m calling FontCache, which will provide a central point to store all the fonts that are used, as well as a default font if you just want to throw some text up onto the screen.
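As a sketch of what I mean, a minimal FontCache might look something like the code below.  This is illustrative only – the TextBlockRenderer constructor arguments and the exact members here are assumptions on my part, so check the repository for the actual implementation.

// Illustrative FontCache sketch; the TextBlockRenderer constructor shown here is assumed.
using System;
using System.Collections.Generic;
using SlimDX.DirectWrite;
using SpriteTextRenderer;

public class FontCache : IDisposable {
    private readonly SpriteRenderer _sprite;
    private readonly Dictionary<string, TextBlockRenderer> _fonts = new Dictionary<string, TextBlockRenderer>();

    public FontCache(SpriteRenderer sprite) {
        _sprite = sprite;
        // Register a default font, so callers can draw text without any setup
        RegisterFont("default", 16.0f, "Arial");
    }

    public void RegisterFont(string key, float size, string fontFamily,
                             FontWeight weight = FontWeight.Normal, FontStyle style = FontStyle.Normal) {
        if (!_fonts.ContainsKey(key)) {
            // One TextBlockRenderer per font family/size/style combination
            _fonts[key] = new TextBlockRenderer(_sprite, fontFamily, weight, style, FontStretch.Normal, size);
        }
    }

    // Fall back to the default font if the requested key was never registered
    public TextBlockRenderer this[string key] {
        get { return _fonts.ContainsKey(key) ? _fonts[key] : _fonts["default"]; }
    }

    public void Dispose() {
        foreach (var font in _fonts.Values) {
            font.Dispose();
        }
        _fonts.Clear();
    }
}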

The new code for rendering text has been added to my pathfinding demo, available at my GitHub repository, https://github.com/ericrrichards/dx11.git.

font

Saturday, January 18, 2014

Simple Particle Physics

As I mentioned last time, I’m going to move on from fiddling with my Terrain class for a little while, and start working on some physics code instead.  I bought a copy of Ian Millington’s Game Physics Engine Development some months ago and skimmed through it, but was too busy with other things to really get into the accompanying source code.  Now, I do have some free cycles, so I’m planning on working through the examples from the book as my next set of posts.

Once again, the original source code is in C++, rather than the C# I’ll be using.  Millington’s code also uses OpenGL and GLUT, rather than DirectX.  Consequently, these aren’t going to be straight ports like most of my conversions of Frank Luna’s examples; I’ll be porting the core physics code, and then for the examples, I’m just going to have to make something up that showcases the same features.

In any case, we’ll start off with the simple particle physics of Chapters 3 & 4, and build a demo that simulates the ballistics of firing some different types of projectiles.  You can find my source for this example on my GitHub page, at https://github.com/ericrrichards/dx11.git.

Here you can see the four projectile types: 1) a pistol-type round, 2) a large artillery shell, 3) a fireball, and 4) a bolt from a railgun or energy weapon.
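As a rough idea of where this is headed, the heart of the particle model ports to C# along the lines of the sketch below.  This is a simplified illustration using SlimDX’s Vector3, not the final class from the repository; the damping term and Euler integration step follow Millington’s basic particle.

using System;
using SlimDX;

// Bare-bones particle, loosely following Millington's basic particle.
// Field names and default values here are illustrative.
public class Particle {
    public Vector3 Position;
    public Vector3 Velocity;
    public Vector3 Acceleration;
    public float Damping = 0.999f;    // rough drag term to keep the integration stable
    public float InverseMass = 1.0f;  // 1/mass; zero means the particle is immovable

    // Advance the particle state by dt seconds using a simple Euler step
    public void Integrate(float dt) {
        if (InverseMass <= 0.0f || dt <= 0.0f) {
            return;
        }
        Position += Velocity * dt;
        Velocity += Acceleration * dt;
        Velocity *= (float)Math.Pow(Damping, dt);
    }
}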

Saturday, January 11, 2014

Pathfinding III: Putting it All Together

Watch the intrepid red blob wind its way through the mountain slopes!

Last time, we discussed the implementation of our A* pathfinding algorithm, as well as some commonly used heuristics for A*.  Now we’re going to put all of the pieces together and get a working example to showcase this pathfinding work.

We’ll need to slightly rework our mouse picking code to return the tile in our map that was hit, rather than just the bounding box center.  To do this, we’re going to need to modify our QuadTree, so that the leaf nodes are tagged with the MapTile that their bounding boxes enclose.

We’ll also revisit the function that calculates which portions of the map are connected, as the original method in Part 1 was horribly inefficient on some maps.  Instead, we’ll use a different method, which uses a series of depth-first searches to calculate the connected sets of MapTiles in the map.  This method is much faster, particularly on maps that have more disconnected sets of tiles.
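The core of that connectivity pass is just an iterative depth-first flood fill over the walkable tiles, along the lines of the sketch below.  The MapTile members (Set, Walkable) and the GetNeighbors() helper are stand-in names for illustration; the real code lives in the terrain/pathfinding classes in the repository.

// Illustrative sketch: label each connected set of walkable tiles with an id,
// using one iterative depth-first search per unvisited walkable tile.
private void CalculateConnectedSets(MapTile[] tiles) {
    foreach (var tile in tiles) {
        tile.Set = -1; // -1 means "not yet assigned to a set"
    }
    var setId = 0;
    foreach (var start in tiles) {
        if (!start.Walkable || start.Set >= 0) {
            continue;
        }
        // Flood-fill everything reachable from this tile with the current set id
        var stack = new Stack<MapTile>();
        stack.Push(start);
        while (stack.Count > 0) {
            var tile = stack.Pop();
            if (tile.Set >= 0) {
                continue;
            }
            tile.Set = setId;
            foreach (var neighbor in GetNeighbors(tile)) {
                if (neighbor.Walkable && neighbor.Set < 0) {
                    stack.Push(neighbor);
                }
            }
        }
        setId++;
    }
}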

We’ll also need to develop a simple class to represent our unit, which will allow it to update and render itself, as well as maintain pathfinding information.  The unit class implementation used here is based in part on material presented in Chapter 9 of Carl Granberg’s Programming an RTS Game with Direct3D.

Finally, we’ll add an additional texture map to our rendering shader, which will draw impassible terrain using a special texture, so that we can easily see the obstacles that our unit will be navigating around.  You can see this in the video above; the impassible areas are shown with a slightly darker texture, with dark rifts.

The full code for this example can be found on my GitHub repository, https://github.com/ericrrichards/dx11.git, under the 33-Pathfinding project.

Thursday, January 2, 2014

Pathfinding II: A* and Heuristics

In our previous installment, we discussed the data structures that we will use to represent the graph which we will use for pathfinding on the terrain, as well as the initial pre-processing that was necessary to populate that graph with the information that our pathfinding algorithm will make use of.  Now, we are ready to actually implement our pathfinding algorithm.  We’ll be using A*, probably the most commonly used graph search algorithm for pathfinding.

A* is one of the most commonly used pathfinding algorithms in games because it is fast, flexible, and relatively simple to implement.  A* was originally a refinement of Dijkstra’s graph search algorithm. Dijkstra’s algorithm is guaranteed to determine the shortest path between any two nodes in a directed graph; however, because Dijkstra’s algorithm only takes into account the cost of reaching an intermediate node from the start node, it tends to consider many nodes that are not on the optimal path.  An alternative to Dijkstra’s algorithm is Greedy Best-First search.  Best-First uses a heuristic function to estimate the cost of reaching the goal from a given intermediate node, without reference to the cost of reaching the current node from the start node.  This means that Best-First tends to consider far fewer nodes than Dijkstra, but is not guaranteed to produce the shortest path in a graph which includes obstacles that are not predicted by the heuristic.

A* blends these two approaches, by using a cost function (f(x)) to evaluate each node based on both the cost from the start node (g(x)) and the estimated cost to the goal (h(x)).  This allows A* to find the optimal shortest path while considering fewer nodes than pure Dijkstra’s algorithm.  The number of intermediate nodes expanded by A* is somewhat dependent on the characteristics of the heuristic function used.  There are generally three classes of heuristics that can be used to control A*, which result in different performance characteristics:

  • When h(x) underestimates the true cost of reaching the goal from the current node, A* will expand more nodes, but is guaranteed to find the shortest path.
  • When h(x) is exactly the true cost of reaching the goal, A* will only expand nodes along the shortest path, meaning that it runs very fast and produces the optimal path.
  • When h(x) overestimates the true cost of reaching the goal from the current node, A* will expand fewer intermediate nodes.  Depending on how much h(x) overestimates the true cost, this may result in paths that are not the true shortest path; however, this does allow the algorithm to complete more quickly.

For games, we will generally use heuristics of the third class.  It is important that we generate good paths when doing pathfinding for our units, but it is generally not necessary that they be mathematically perfect; they just need to look good enough, and the speed savings are very important when we are trying to cram all of our rendering and update code into just a few tens of milliseconds, in order to hit 30-60 frames per second.

A* uses two sets to keep track of the nodes that it is operating on.  The first set is the closed set, which contains all of the nodes that A* has previously considered; this is sometimes called the interior of the search.  The other set is the open set, which contains those nodes which are adjacent to nodes in the closed set, but which have not yet been processed by the A* algorithm.  The open set is generally sorted by the calculated cost of the node (f(x)), so that the algorithm can easily select the most promising new node to consider.  Because of this, we usually consider the open list to be a priority queue.  The particular implementation of this priority queue has a large impact on the speed of A*; for best performance, we need to have a data structure that supports fast membership checks (is a node in the queue?), fast removal of the best element in the queue, and fast insertions into the queue.  Amit Patel provides a good overview of the pros and cons of different data structures for the priority queue on his A* page; I will be using a priority queue derived from Blue Raja’s Priority Queue class, which is essentially a binary heap.  For our closed set, the primary operations that we will perform are insertions and membership tests, which makes the .Net HashSet<T> class a good choice.
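Putting those pieces together, the main A* loop looks roughly like the sketch below.  MapTile, Neighbors(), Cost(), Heuristic(), ReconstructPath(), and the PriorityQueue wrapper are all illustrative stand-ins for the actual classes used in the project.

// Simplified A* sketch; helper names are stand-ins, not the project's real API.
public List<MapTile> FindPath(MapTile start, MapTile goal) {
    var closed = new HashSet<MapTile>();              // the "interior" of the search
    var open = new PriorityQueue<MapTile>();          // open set, ordered by f(x) = g(x) + h(x)
    var gScore = new Dictionary<MapTile, float>();    // best known cost from the start node
    var parent = new Dictionary<MapTile, MapTile>();  // back-pointers for path reconstruction

    gScore[start] = 0;
    open.Enqueue(start, Heuristic(start, goal));

    while (open.Count > 0) {
        var current = open.Dequeue();                 // most promising open node (lowest f(x))
        if (current == goal) {
            return ReconstructPath(parent, goal);     // follow the back-pointers to the start
        }
        closed.Add(current);

        foreach (var neighbor in Neighbors(current)) {
            if (closed.Contains(neighbor)) {
                continue;
            }
            var tentativeG = gScore[current] + Cost(current, neighbor);
            if (!gScore.ContainsKey(neighbor) || tentativeG < gScore[neighbor]) {
                gScore[neighbor] = tentativeG;
                parent[neighbor] = current;
                // For this sketch we simply (re)insert the node with its improved priority
                open.Enqueue(neighbor, tentativeG + Heuristic(neighbor, goal));
            }
        }
    }
    return null; // the goal is not reachable from the start tile
}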

Monday, December 30, 2013

Pathfinding 1: Map Representation and Preprocessing

This was originally intended to be a single post on pathfinding, but it got too long, and so I am splitting it up into three or four smaller pieces.  Today, we’re going to look at the data structures that we will use to represent the nodes of our pathfinding graph, and at generating that graph from our terrain class.

When we were working on our quadtree to detect mouse clicks on the terrain, we introduced the concept of logical terrain tiles; these were the smallest sections of the terrain mesh that we wanted to hit when we did mouse picking, representing a 2x2 portion of our fully-tessellated mesh.  These logical terrain tiles are a good starting point for generating what I am going to call our map: the 2D grid that we will use for pathfinding, placing units and other objects, defining areas of the terrain, AI calculations, and so forth.  At the moment, there isn’t really anything to these tiles, as they are simply a bounding box attached to the leaf nodes of our quad tree.  That’s not terribly useful by itself, so we are going to create a data structure to represent these tiles, along with an array to contain them in our Terrain class.  Once we have a structure to contain our tile information, we need to extract that information from our Terrain class and heightmap, and generate the graph representing the tiles and the connections between them, so that we can use it in our pathfinding algorithm.
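Something along the lines of the sketch below is what I mean by a tile structure; the exact fields will firm up over the next couple of posts, so treat these names as illustrative rather than final.

using SlimDX;

// Illustrative MapTile; the fields actually used in the following posts may differ.
public class MapTile {
    public BoundingBox Bounds;    // world-space AABB carried over from the quadtree leaf
    public Vector2 MapPosition;   // (x, y) coordinates of the tile in the 2D map grid
    public float Height;          // sampled terrain height at the tile center
    public bool Walkable;         // false for tiles units cannot enter (steep slopes, water, ...)
    public int Set = -1;          // id of the connected set of tiles this tile belongs to

    // Scratch data used by the A* search in the later posts
    public float F;               // f(x) = g(x) + h(x)
    public float G;               // cost from the start tile
    public MapTile Parent;        // previous tile on the best known path
}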

The pathfinding code implemented here was originally derived from Chapter 4 of Carl Granberg’s Programming an RTS Game with Direct3D.  I’ve made some heavy modifications, working from that starting point, using material from Amit Patel’s blog and BlueRaja’s C# PriorityQueue implementation.  The full code for this example can be found on my GitHub repository, https://github.com/ericrrichards/dx11.git, under the 33-Pathfinding project.

graph_thumb4

Thursday, December 12, 2013

OutOfMemoryException - Eliminating Temporary Allocations with Static Buffers in Effect Wrapper Code

I came across an interesting bug in the wrapper classes for my HLSL shader effects today.  In preparation for creating a class to represent a game unit, for the purposes of demonstrating the terrain pathfinding code that I finished up last night, I had been refactoring my BasicModel and SkinnedModel classes to inherit from a common abstract base class, and after getting everything to the state that it could compile again, I had fired up the SkinnedModels example project to make sure everything was still rendering and updating correctly.  I got called away to do something else, and ended up checking back in on it a half hour or so later, to find that the example had died with an OutOfMemoryException.  Looking at Task Manager, this relatively small demo program was consuming over 1.5 GB of memory!

I restarted the demo, and watched the memory allocation as it ran, and noticed that the memory used seemed to be climbing quite alarmingly, 0.5-1 MB every time Task Manager updated.  Somehow, I’d never noticed this before…  So I started the project in Visual Studio, using the Performance Wizard to sample the .Net memory allocation, and let the demo run for a couple of minutes.  Memory usage had spiked up to about 150MB, in this simple demo that loaded maybe 35 MB of textures, models, code and external libraries…

memprofiling

Looking through the performance dump, the vast majority of the memory allocated appeared to be Matrix[]’s and byte[]’s, allocated in Core.FX.BasicEffect, inside of SetBoneTransforms(), SetDirLights(), and SetMaterial().  Looking at the code, which is posted below, there were evidently a bunch of temporary arrays being allocated, either to hold the results of Util.GetArray(), or as a result of transforming the List of bone matrices into an array, in order to pass them into the GPU.  I was under the assumption that these temporary arrays ought to be garbage collected, but after some googling around and trying to force the garbage collector to run, I was making no headway.

public void SetDirLights(DirectionalLight[] lights) {
    System.Diagnostics.Debug.Assert(lights.Length <= 3, "BasicEffect only supports up to 3 lights");
    var array = new List<byte>();
    foreach (var light in lights) {
        var d = Util.GetArray(light); // memory hotspot
        array.AddRange(d); // memory hotspot
    }

    _dirLights.SetRawValue(new DataStream(array.ToArray(), false, false), array.Count); // memory hotspot
}
public void SetMaterial(Material m) {
    var d = Util.GetArray(m); // memory hotspot
    _mat.SetRawValue(new DataStream(d, false, false), d.Length);
}
public void SetBoneTransforms(List<Matrix> bones) {
    _boneTransforms.SetMatrixArray(bones.ToArray()); // memory hotspot
}

At this point, I realized that the arrays of directional lights and bone matrices, at least, could be allocated once, and just updated, rather than recreated each time they change.  The underlying HLSL shader only supports a maximum of 3 lights and 96 bones, so there was really no need to support arrays larger than these maximums.  I realized that I could add additional statically allocated buffers, for the purposes of uploading the data to the GPU, and just copy over the passed-in arrays to this static buffer.  The resulting code below ended up reducing my memory growth significantly.  (DirectionalLight.Stride is the unmanaged byte size of my DirectionalLight structure, calculated and cached by using Marshal.SizeOf()).

public const int MaxLights = 3;
private readonly byte[] _dirLightsArray = new byte[DirectionalLight.Stride * MaxLights];

public const int MaxBones = 96;
private readonly Matrix[] _boneTransformsArray = new Matrix[MaxBones];

public void SetDirLights(DirectionalLight[] lights) {
    System.Diagnostics.Debug.Assert(lights.Length <= MaxLights, "BasicEffect only supports up to 3 lights");

    for (int i = 0; i < lights.Length && i < MaxLights; i++) {
        var light = lights[i];
        var d = Util.GetArray(light);
        Array.Copy(d, 0, _dirLightsArray, i * DirectionalLight.Stride, DirectionalLight.Stride);
    }

    _dirLights.SetRawValue(new DataStream(_dirLightsArray, false, false), _dirLightsArray.Length);
}

public void SetBoneTransforms(List<Matrix> bones) {
    for (int i = 0; i < bones.Count && i < MaxBones; i++) {
        _boneTransformsArray[i] = bones[i];
    }
    _boneTransforms.SetMatrixArray(_boneTransformsArray);
}

This cut down on my memory growth problem significantly, but I wasn’t all the way there.  Instead of seeing memory go up a couple megabytes every few seconds, I was now seeing growth of a few hundred kilobytes.  Firing up the VS profiling tools again, I saw that I was still getting a lot of small allocations as a result of Util.GetArray, inside of SetDirLights() and SetMaterial().  GetArray() seemed to be the culprit, as it was following the same pattern of allocating a new byte array on each call.

public static byte[] GetArray(object o) {
    var len = Marshal.SizeOf(o);
    var arr = new byte[len]; // hotspot
    var ptr = Marshal.AllocHGlobal(len);
    Marshal.StructureToPtr(o, ptr, true);
    Marshal.Copy(ptr, arr, 0, len);
    Marshal.FreeHGlobal(ptr);
    return arr;
}

I was a little more hesitant to use the same trick of statically allocating a buffer here, since I was somewhat concerned that I might copy the unmanaged memory over to the static array, and then overwrite it in another call before the data it contained was consumed by the calling code. However, after a little searching, I realized that this code was only used by my various HLSL Effect wrapping functions, along with some similar code from the first lighting example that was fundamentally the same.  So effectively, with my rendering code being single-threaded, the data in the returned array is only “live” for a couple of lines of C# code at most, with no real threat of thread-safety issues; as long as the calling code immediately uploads the returned data to the GPU or copies it to another array, this approach should work.

public static class Util {
    private static byte[] _unmanagedStaging = new byte[1024]; // 1024 is almost certainly overkill

    public static byte[] GetArray(object o) {
        Array.Clear(_unmanagedStaging, 0, _unmanagedStaging.Length);
        var len = Marshal.SizeOf(o);
        if (len >= _unmanagedStaging.Length) {
            // only if we need to, resize the static buffer
            _unmanagedStaging = new byte[len];
        }
        var ptr = Marshal.AllocHGlobal(len);
        Marshal.StructureToPtr(o, ptr, true);
        Marshal.Copy(ptr, _unmanagedStaging, 0, len);
        Marshal.FreeHGlobal(ptr);
        return _unmanagedStaging;
    }
}

One downside of this change is that the calling code needs to know the size of the structure passed in to be converted to raw bytes. In my case, this is no real issue, since I only use my own structures, and I know what I passed in. I should probably return the actual portion of the returned buffer that is used in an out parameter, but this is good enough for me right now.  With this change, my memory growth has shrunk to nearly nothing (I’m still seeing a very slow growth, but it’s pretty negligible, and I need to switch to a different (non-free) profiler to chase down these last bits).
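For reference, the out-parameter variant I have in mind would look something like the untested sketch below, working against the same static staging buffer.

// Sketch of the suggested variant: same staging buffer, but the caller is told
// how many bytes of the returned array are actually valid.
public static byte[] GetArray(object o, out int length) {
    length = Marshal.SizeOf(o);
    if (length > _unmanagedStaging.Length) {
        _unmanagedStaging = new byte[length];
    }
    var ptr = Marshal.AllocHGlobal(length);
    try {
        Marshal.StructureToPtr(o, ptr, true);
        Marshal.Copy(ptr, _unmanagedStaging, 0, length);
    } finally {
        Marshal.FreeHGlobal(ptr);
    }
    return _unmanagedStaging;
}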


So I solved my problem, but this does seem a little half-baked.  I’m not quite certain what the underlying issue with my original approach was (possibly that it lies at the cusp between managed C# code and unmanaged DirectX code?), but for whatever reason, the GC was not working for me, and I’ve had to fall back and implement an incredibly primitive memory management system myself.  Oh well…  If you’ve got a better idea, I’d love to hear it.

Refactoring Rendering Code out of the Terrain Class

Howdy, time for an update.  I’ve mostly completed the first cut of my terrain pathfinding code; I’m creating the navigation graph, and I’ve got an implementation of A* finished that allows me to create a list of terrain nodes representing the path between tile A and tile B.  I’m going to hold off a bit on presenting all of that, since I haven’t yet managed to put together a nice-looking demo to show off the pathfinding.  I need to do some more work to create a simple unit class that can follow the path generated by A*, and between work and life stuff, I haven’t gotten the chance to round that out satisfactorily yet.

I’ve also been doing some pretty heavy refactoring on various engine components, both for design and performance reasons.  After the last series of posts on augmenting the Terrain class, and in anticipation of adding even more functionality as I added pathfinding support, I decided to take some time and split out the code that handles Direct3D resources and rendering from the more API-agnostic logical terrain representation.  I’m not looking to do this at the moment, but splitting things out this way might also make implementing an OpenGL rendering system less painful, potentially.

Going through this, I don’t think I am done splitting things up.  I’m kind of a fan of small, tightly focused classes, but I’m not necessarily an OOP junkie.  Right now, I’m pretty happy with how I have split things out.  I’ve got the Terrain class, which contains mostly the rendering independent logical terrain representation, such as the quad tree and picking code, the terrain heightmap and heightmap generation code, and the global terrain state properties (world-space size, initialization information struct, etc).  The rendering and DirectX resource management code has been split out into the new TerrainRenderer class, which does all of the drawing and creates all of the DirectX vertex buffers and texture resources.
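In outline, the split looks something like the sketch below.  The member lists are heavily abbreviated and the names are only approximately those in the repository; it is meant to show which responsibilities landed on which side of the divide, not to stand in for the actual classes.

// Abbreviated, illustrative outline of the Terrain / TerrainRenderer split.
public class Terrain {
    // Rendering-agnostic state: heightmap, quadtree, logical tiles, world-space dimensions
    public HeightMap HeightMap { get; private set; }
    public QuadTree QuadTree { get; private set; }
    public InitInfo Info { get; private set; }
    public float Width { get; private set; }
    public float Depth { get; private set; }

    // The renderer is owned by the terrain, but all D3D work happens inside it
    public TerrainRenderer Renderer { get; private set; }

    public float Height(float x, float z) { /* sample/interpolate the heightmap */ return 0; }
    public bool Intersect(Ray ray, out Vector3 worldPos) { /* walk the quadtree */ worldPos = new Vector3(); return false; }
}

public class TerrainRenderer : IDisposable {
    // Direct3D-specific state: vertex/index buffers, textures, effect variables
    private SlimDX.Direct3D11.Buffer _quadPatchVB;
    private SlimDX.Direct3D11.Buffer _quadPatchIB;
    private ShaderResourceView _heightMapSRV;
    private ShaderResourceView _layerMapArraySRV;

    public void Init(Device device, DeviceContext dc, Terrain terrain) { /* create GPU resources */ }
    public void Draw(DeviceContext dc, CameraBase cam, DirectionalLight[] lights) { /* set state, issue draw calls */ }
    public void Dispose() { /* release the D3D resources */ }
}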

I’ll spare you all the intermediate gyrations that this refactoring push put me through, and just post the resulting two classes.  Resharper was invaluable in this process; if you have access to a full version of Visual Studio, I don’t think there is a better way to spend $100.  I shiver to think of how difficult this would have been without access to its refactoring and renaming tools.

Tuesday, December 3, 2013

Terrain Tile Picking

Typically, in a strategy game, in addition to the triangle mesh that we use to draw the terrain, there is an underlying logical representation, usually dividing the terrain into rectangular or hexagonal tiles.  This grid is generally what is used to order units around, construct buildings, select targets and so forth.  To do all this, we need to be able to select locations on the terrain using the mouse, so we will need to implement terrain/mouse-ray picking for our terrain, similar to what we have done previously, with model triangle picking.

We cannot simply use the same techniques that we used earlier for our terrain, however.  For one, in our previous example, we were using a brute-force linear searching technique to find the picked triangle out of all the triangles in the mesh.  That worked in that case; however, the mesh that we were trying to pick only contained 1850 triangles.  I have been using a terrain in these examples that, when fully tessellated, is 2049x2049 vertices, which means that our terrain consists of more than 8 million triangles.  It’s pretty unlikely that we could manage to use the same brute-force technique with that many triangles, so we need to use some kind of space partitioning data structure to reduce the portion of the terrain that we need to consider for intersection.

Additionally, we cannot really perform a per-triangle intersection test in any case, since our terrain uses a dynamic LOD system.  The triangles of the terrain mesh are only generated on the GPU, in the hull shader, so we don’t have access to the terrain mesh triangles on the CPU, where we will be doing our picking.  Because of these two constraints, I have decided to use a quadtree of axis-aligned bounding boxes to implement picking on the terrain.  Using a quadtree speeds up our intersection testing considerably, since most of the time we will be able to exclude three-fourths of our terrain from further consideration at each level of the tree.  This also maps quite nicely onto the concept of a grid layout for representing our terrain, and allows us to select individual terrain tiles fairly efficiently, since the bounding boxes at the terminal leaves of the tree will thus encompass a single logical terrain tile.  In the screenshot below, you can see how this works; the boxes drawn in color over the terrain are at double the size of the logical terrain tiles, since I ran out of video memory drawing the terminal bounding boxes, but you can see that the red ball is located in the upper quadrant of the white bounding box containing it.
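The traversal itself is pretty simple; a sketch of the recursive test is shown below.  The QuadTreeNode members (Bounds, Children) are assumed names rather than the exact ones in the project, and I am relying on SlimDX’s Ray/BoundingBox intersection helper.

// Illustrative recursive quadtree picking against axis-aligned bounding boxes.
private bool Intersects(Ray ray, QuadTreeNode node, ref float closestHit) {
    float t;
    // If the ray misses this node's box, everything beneath it can be skipped -
    // at each level this typically discards three-fourths of the remaining terrain.
    if (!Ray.Intersects(ray, node.Bounds, out t)) {
        return false;
    }
    if (node.Children == null) {
        // Leaf node: its bounding box encloses a single logical terrain tile
        if (t < closestHit) {
            closestHit = t;
        }
        return true;
    }
    var hit = false;
    foreach (var child in node.Children) {
        hit |= Intersects(ray, child, ref closestHit);
    }
    return hit;
}

The caller kicks this off at the root with closestHit set to float.MaxValue, and on a hit the picked world position is just ray.Position + ray.Direction * closestHit.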

bvh

Monday, November 18, 2013

A Terrain Minimap with SlimDX and DirectX 11

Minimaps are a common feature of many different types of games, especially those in which the game world is larger than the area the player can see on screen at once.  Generally, a minimap allows the player to keep track of where they are in the larger game world, and in many games, particularly strategy and simulation games where the view camera is not tied to any specific player character, allow the player to move their viewing location more quickly than by using the direct camera controls.  Often, a minimap will also provide a high-level view of unit movement, building locations, fog-of-war and other game specific information.

Today, we will look at implementing a minimap that will show us a bird’s-eye view of our Terrain class.  We’ll also superimpose the frustum for our main rendering camera over the terrain, so that we can easily see how much of the terrain is in view, and we’ll support moving our viewpoint by clicking on the minimap.  All of this functionality will be wrapped up into a class, so that we can render multiple minimaps and place them wherever we like within our application window.

As always, the full code for this example can be downloaded from GitHub, at https://github.com/ericrrichards/dx11.git.  The relevant project is the Minimap project.  The implementation of this minimap code was largely inspired by Chapter 11 of Carl Granberg’s Programming an RTS Game with Direct3D, particularly the camera frustum drawing code.  If you can find a copy (it appears to be out of print, and copies are going for outrageous prices on Amazon…), I would highly recommend grabbing it.

image

Thursday, November 14, 2013

Adding Shadow-mapping and SSAO to the Terrain

Now that I’m finished up with everything that I wanted to cover from Frank Luna’s Introduction to 3D Game Programming with Direct3D 11.0, I want to spend some time improving the Terrain class that we introduced earlier.  My ultimate goal is to create a two-tiered strategy game, with a turn-based strategic level and either a turn-based or real-time tactical level.  My favorite games have always been these kinds of strategic/tactical hybrids, such as (in roughly chronological order) Centurion: Defender of Rome, Lords of the Realm, Close Combat and the Total War series.  In all of these games, the tactical combat is one of the main highlights of gameplay, and so the terrain on which that combat occurs is very important, both aesthetically and for gameplay.

Our first step will be to incorporate some of the graphical improvements that we have recently implemented into our terrain rendering.  We will be adding shadow-mapping and SSAO support to the terrain in this installment.  In the screenshots below, we have our light source (the sun) low on the horizon behind the mountain range.  The first shot shows our current Terrain rendering result, with no shadows or ambient occlusion.  In the second, shadows have been added, which in addition to just showing shadows, has dulled down a lot of the odd-looking highlights in the first shot.  The final shot shows both shadow-mapping and ambient occlusion applied to the terrain.  The ambient occlusion adds a little more detail to the scene; regardless of its accuracy, I kind of like the effect, just to noise up the textures applied to the terrain, although I may tweak it a bit to lighten the darker spots up.

We are going to need to add another set of effect techniques to our shader effect, to support shadow mapping, as well as a technique to draw to the shadow map, and another technique to draw the normal/depth map for SSAO.  For the latter two techniques, we will need to implement a new hull shader, since I would like to have the shadow maps and normal-depth maps match the fully-tessellated geometry; using the normal hull shader that dynamically tessellates may result in shadows that change shape as you move around the map.  For the normal/depth technique, we will also need to implement a new pixel shader.  Our domain shader is also going to need to be updated, so that it creates the texture coordinates for sampling both the shadow map and the SSAO map, and our pixel shader will need to be updated to do the shadow and ambient occlusion calculations.

This sounds like a lot of work, but really, it is mostly a matter of adapting what we have already done.  As always, you can download my full code for this example from GitHub at https://github.com/ericrrichards/dx11.git.  This example doesn’t really have a stand-alone project, as it came about as I was on my way to implementing a minimap, and thus these techniques are showcased as part of the Minimap project.

Basic Terrain Rendering

image

Shadowmapping Added

image

Shadowmapping and SSAO

image

Friday, November 8, 2013

Rendering Water with Displacement Mapping

Quite a while back, I presented an example that rendered water waves by computing a wave equation and updating a polygonal mesh each frame.  This method produced fairly nice graphical results, but it was very CPU-intensive, and relied on updating a vertex buffer every frame, so it had relatively poor performance.

We can use displacement mapping to approximate the wave calculation and modify the geometry all on the GPU, which can be considerably faster.  At a very high level, what we will do is render a polygon grid mesh, using two height/normal maps that we will scroll in different directions and at different rates.  Then, for each vertex that we create using the tessellation stages, we will sample the two heightmaps, and add the sampled offsets to the vertex’s y-coordinate.  Because we are scrolling the heightmaps at different rates, small peaks and valleys will appear and disappear over time, resulting in an effect that looks like waves.  Using different control parameters, we can control this wave effect, and generate either a still, calm surface, like a mountain pond at first light, or big, choppy waves, like the ocean in the midst of a tempest.
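On the CPU side, the per-frame work mostly reduces to accumulating two sets of texture offsets and folding them into the texture transforms that the domain shader samples with.  The rates, directions, and member names in the sketch below are made up for illustration; the real values come from the demo’s wave settings.

// Illustrative per-frame update for the two scrolling displacement maps.
private Vector2 _waveOffset0;
private Vector2 _waveOffset1;
private Matrix _waveDispTexTransform0;
private Matrix _waveDispTexTransform1;

private void UpdateWaves(float dt) {
    // Scroll the two height/normal maps in different directions at different rates,
    // so their peaks and valleys drift in and out of phase and read as waves.
    _waveOffset0 += new Vector2(0.05f, 0.10f) * dt;
    _waveOffset1 += new Vector2(-0.02f, 0.07f) * dt;

    // The offsets become the translation part of the texture transforms used when
    // sampling the displacement maps in the domain shader; the scale controls tiling.
    _waveDispTexTransform0 = Matrix.Scaling(2.0f, 2.0f, 1.0f) * Matrix.Translation(_waveOffset0.X, _waveOffset0.Y, 0.0f);
    _waveDispTexTransform1 = Matrix.Scaling(1.0f, 1.0f, 1.0f) * Matrix.Translation(_waveOffset1.X, _waveOffset1.Y, 0.0f);
}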

This example is based off of the final exercise of Chapter 18 of Frank Luna’s Introduction to 3D Game Programming with Direct3D 11.0.  The original code that inspired this example is not located with the other example for Chapter 18, but rather in the SelectedCodeSolutions directory.  You can download my source code in full from https://github.com/ericrrichards/dx11.git, under the 29-WavesDemo project.  One thing to note is that you will need to have a DirectX 11 compatible video card to execute this example, as we will be using tessellation stage shaders that are only available in DirectX 11.

image

Sunday, November 3, 2013

SSAO with SlimDX and DirectX11

In real-time lighting applications, like games, we usually only calculate direct lighting, i.e. light that originates from a light source and hits an object directly.  The Phong lighting model that we have been using thus far is an example of this; we only calculate the direct diffuse and specular lighting.  We either ignore indirect light (light that has bounced off of other objects in the scene), or approximate it using a fixed ambient term.  This is very fast to calculate, but not terribly physically accurate.  Physically accurate lighting models can model these indirect light bounces, but are typically too computationally expensive to use in a real-time application, which needs to render at least 30 frames per second.  However, using the ambient lighting term to approximate indirect light has some issues, as you can see in the screenshot below.  This depicts our standard skull and columns scene, rendered using only ambient lighting.  Because we are using a fixed ambient color, each object is rendered as a solid color, with no definition.  Essentially, we are making the assumption that indirect light bounces uniformly onto all surfaces of our objects, which is often not physically accurate.

image

Naturally, if we were actually modeling the way that light bounces within our scene, some portions of the scene would receive more indirect light than others.  Some portions would receive the maximum amount of indirect light, while others, such as the nooks and crannies of our skull, should appear darker, since the surrounding geometry would, realistically, block many of the indirect rays from reaching those surfaces.

In a classical global illumination scheme, we would simulate indirect light by casting rays from each surface point in a hemispherical pattern, checking for geometry that would prevent light from reaching the point.  Assuming that our models are static, this could be a viable method, provided we performed these calculations off-line; ray tracing is expensive, since we would need to cast a large number of rays to produce an acceptable result, and performing that many intersection tests adds up quickly.  With animated models, this method very quickly becomes untenable; whenever the models in the scene move, we would need to recalculate the occlusion values, which is simply too slow to do in real-time.

Screen-Space Ambient Occlusion is a fast technique for approximating ambient occlusion, developed by Crytek for the game Crysis.  We will initially draw the scene to a render target, which will contain the normal and depth information for each pixel in the scene.  Then, we can sample this normal/depth surface to calculate occlusion values for each pixel, which we will save to another render target.  Finally, in our usual shader effect, we can sample this occlusion map to modify the ambient term in our lighting calculation.  While this method is not perfectly realistic, it is very fast, and generally produces good results.  As you can see in the screen shot below, using SSAO darkens up the cavities of the skull and around the bases of the columns and spheres, providing some sense of depth.

image

The code for this example is based on Chapter 22 of Frank Luna’s Introduction to 3D Game Programming with Direct3D 11.0.  The example presented here has been stripped down considerably to demonstrate only the SSAO effects; lighting and texturing have been disabled, and the shadow mapping effects in Luna’s example have been removed.  The full code for this example can be found at my GitHub repository, https://github.com/ericrrichards/dx11.git, under the SSAODemo2 project.  A more faithful adaptation of Luna’s example can also be found in the 28-SsaoDemo project.
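To give a flavor of the occlusion calculation itself, the pass that samples the normal/depth map uses a falloff function along these lines: a potential occluder contributes full occlusion when it sits just in front of the surface point, fading to nothing beyond a cutoff distance.  In the demo this logic lives in the HLSL shader; the C# translation below is only an illustration, and the epsilon and fade distances are example values.

using System;

public static class Ssao {
    // How much a potential occluder at view-space depth difference distZ
    // contributes to the occlusion of the surface point being shaded.
    public static float OcclusionFunction(float distZ) {
        const float surfaceEpsilon = 0.05f; // samples this close are treated as self-occlusion and ignored
        const float fadeStart = 0.2f;       // full occlusion up to this distance...
        const float fadeEnd = 2.0f;         // ...fading to zero occlusion at this distance

        if (distZ <= surfaceEpsilon) {
            return 0.0f;
        }
        var fadeLength = fadeEnd - fadeStart;
        // Linearly fade the occlusion contribution as the occluder gets farther away.
        var occlusion = (fadeEnd - distZ) / fadeLength;
        return Math.Max(0.0f, Math.Min(1.0f, occlusion));
    }
}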

Monday, October 28, 2013

Shadow Mapping with SlimDX and DirectX 11

Shadow mapping is a technique to cast shadows from arbitrary objects onto arbitrary 3D surfaces.  You may recall that we implemented planar shadows earlier using the stencil buffer.  Although that technique worked well for rendering shadows onto planar (flat) surfaces, it does not work when we want to cast shadows onto curved or irregular surfaces, which limits its usefulness considerably.  Shadow mapping gets around these limitations by rendering the scene from the perspective of a light and saving the depth information into a texture called a shadow map.  Then, when we are rendering our scene to the backbuffer, in the pixel shader we determine the depth value of the pixel being rendered, relative to the light position, and compare it to a sampled value from the shadow map.  If the computed value is greater than the sampled value, the pixel being rendered is not visible from the light, so the pixel is in shadow and we do not compute the diffuse and specular lighting for it; otherwise, we render the pixel as normal.  Using a simple point sampling technique for shadow mapping results in very hard, aliased shadows, since a pixel is either fully in shadow or fully lit.  Therefore, we will use a sampling technique known as percentage closer filtering (PCF), which uses a box filter to determine how shadowed the pixel is.  This allows us to render partially shadowed pixels, which results in softer shadow edges.

This example is based on the example from Chapter 21 of Frank Luna’s Introduction to 3D Game Programming with Direct3D 11.0. The full source for this example can be downloaded from my GitHub repository at https://github.com/ericrrichards/dx11.git, under the ShadowDemos project.
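As a rough sketch of the setup work for the shadow pass, assuming a directional light and SlimDX’s Matrix helpers, we can build a view matrix and an orthographic projection from the light’s point of view, plus a transform that maps projected coordinates into shadow map texture space.  The bounding-sphere parameters and names below are illustrative rather than lifted from the demo code.

using SlimDX;

public static class ShadowMatrices {
    public static void Build(Vector3 lightDir, Vector3 sceneCenter, float sceneRadius,
                             out Matrix lightView, out Matrix lightProj, out Matrix shadowTransform) {
        // Place the shadow-pass "camera" behind the scene, looking along the light direction.
        var lightPos = sceneCenter - lightDir * (2.0f * sceneRadius);
        lightView = Matrix.LookAtLH(lightPos, sceneCenter, Vector3.UnitY);

        // An orthographic volume just big enough to enclose the scene's bounding sphere.
        lightProj = Matrix.OrthoLH(2 * sceneRadius, 2 * sceneRadius, sceneRadius, 3 * sceneRadius);

        // Maps NDC x/y in [-1, 1] to texture coordinates in [0, 1] (with y flipped),
        // so a pixel's light-space projected position can be used directly as shadow map UVs.
        var ndcToTexture = new Matrix {
            M11 = 0.5f, M22 = -0.5f, M33 = 1.0f,
            M41 = 0.5f, M42 = 0.5f, M44 = 1.0f
        };
        shadowTransform = lightView * lightProj * ndcToTexture;
    }
}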

image

Friday, October 25, 2013

Expanding our BasicModel Class

I had promised that we would move on to discussing shadows, using the shadow mapping technique.  However, when I got back into the code I had written for that example, I realized that I was really sick of handling all of the geometry for our stock columns & skull scene.  So I decided that, rather than manage all of the buffer creation and litter the example app with all of the buffer counts, offsets, materials and world transforms necessary to create our primitive meshes, I would take some time and extend the BasicModel class with some factory methods to create geometric models for us, and leverage the BasicModel class to encapsulate and manage all of that data.  This cleans up the example code considerably, so that next time when we do look at shadow mapping, there will be a lot less noise to deal with.

The heavy lifting for these methods is already done; our GeometryGenerator class already does the work of generating the vertex and index data for these geometric meshes.  All that we have left to do is massage that geometry into our BasicModel’s MeshGeometry structure, add a default material and textures, and create a Subset for the entire mesh.  As the material and textures are public, we can safely initialize the model with a default material and null textures, since we can apply a different material or apply diffuse or normal maps to the model after it is created.
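As an outline of what one of these factory methods looks like, here is a hedged sketch for a box.  The member and type names below follow the conventions used in this series, but the exact signatures in the repository may differ, so treat this as a description of the process rather than the actual code.

public static BasicModel CreateBox(Device device, float width, float height, float depth) {
    var model = new BasicModel();

    // GeometryGenerator already does the heavy lifting of producing vertices and indices.
    var box = GeometryGenerator.CreateBox(width, height, depth);

    // Massage the generated geometry into the vertex format the model uses.
    foreach (var v in box.Vertices) {
        model.Vertices.Add(new PosNormalTexTan(v.Position, v.Normal, v.TexC, v.TangentU));
    }
    model.Indices.AddRange(box.Indices);

    // A single subset spanning the entire mesh, plus a neutral default material;
    // the diffuse and normal maps are left null, since they are public and can
    // be assigned after the model has been created.
    model.Subsets.Add(new MeshGeometry.Subset {
        VertexCount = box.Vertices.Count,
        FaceCount = box.Indices.Count / 3
    });
    model.Materials.Add(new Material {
        Ambient = new Color4(0.6f, 0.6f, 0.6f),
        Diffuse = new Color4(1.0f, 1.0f, 1.0f),
        Specular = new Color4(16.0f, 0.6f, 0.6f, 0.6f)
    });
    model.DiffuseMaps.Add(null);
    model.NormalMaps.Add(null);

    // Finally, push the vertex and index data into the MeshGeometry buffers.
    model.ModelMesh.SetVertices(device, model.Vertices);
    model.ModelMesh.SetIndices(device, model.Indices);

    return model;
}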

The full source for this example can be downloaded from my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the ShapeModels project.

image

Monday, October 21, 2013

Particle Systems using Stream-Out in DirectX 11 and SlimDX

Particle systems are a technique commonly used to simulate chaotic phenomena, which are not easy to render using normal polygons.  Some common examples include fire, smoke, rain, snow, or sparks.  The particle system implementation that we are going to develop will be general enough to support many different effects; we will be using the GPU’s StreamOut stage to update our particle systems, which means that all of the physics calculations and logic to update the particles will reside in our shader code, so that by substituting different shaders, we can achieve different effects using our base particle system implementation.
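On the C# side, the key piece of plumbing is a pair of vertex buffers created with the stream-output bind flag, so that each frame one buffer is drawn through the geometry shader while the updated particles are streamed into the other, and then the two are swapped.  A minimal sketch of that buffer setup, with an illustrative 32-byte particle stride, might look like this:

using SlimDX.Direct3D11;
using Buffer = SlimDX.Direct3D11.Buffer;

public static class ParticleBuffers {
    public static void Create(Device device, int maxParticles, out Buffer drawVB, out Buffer streamOutVB) {
        var desc = new BufferDescription {
            SizeInBytes = 32 * maxParticles,   // illustrative per-particle stride
            Usage = ResourceUsage.Default,
            // The same buffer must serve as a stream-output target one frame
            // and as an ordinary vertex buffer the next.
            BindFlags = BindFlags.VertexBuffer | BindFlags.StreamOutput,
            CpuAccessFlags = CpuAccessFlags.None,
            OptionFlags = ResourceOptionFlags.None,
            StructureByteStride = 0
        };
        drawVB = new Buffer(device, desc);
        streamOutVB = new Buffer(device, desc);
    }
}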

The code for this example was adapted from Chapter 20 of Frank Luna’s Introduction to 3D Game Programming with Direct3D 11.0, ported to C# and SlimDX.  The full source for the example can be found at my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the ParticlesDemo project.

Below, you can see the results of adding two particles systems to our terrain demo.  At the center of the screen, we have a flame particle effect, along with a rain particle effect.

image

Tuesday, October 15, 2013

Skinned Models in DirectX 11 with SlimDX and Assimp.Net

Sorry for the hiatus; I’ve been very busy with work and life the last couple of weeks.  Today, we’re going to look at loading meshes with skeletal animations in DirectX 11, using SlimDX and Assimp.Net in C#.  This will probably be our most complicated example yet, so bear with me.  This example is inspired by Chapter 25 of Frank Luna’s Introduction to 3D Game Programming with Direct3D 11.0, although with some heavy modifications.  Mr. Luna’s code uses a custom animation format, which I found less than totally useful; realistically, we would want to be able to load skinned meshes exported in one of the commonly used 3D modeling formats.  To facilitate this, we will again use the .NET port of the Assimp library, Assimp.Net.  The code I am using to load and interpret the animation and bone data is heavily based on Scott Lee’s Animation Importer code, ported to C#.  The full source for this example can be found on my GitHub repository, at https://github.com/ericrrichards/dx11.git under the SkinnedModels project.  The meshes used in the example are taken from the example code for Carl Granberg’s Programming an RTS Game with Direct3D.

Skeletal animation is the standard way to animate 3D character models.  Generally, a character model will be represented by two structures: the exterior vertex mesh, or skin, and a tree of control points specifying the joints or bones that make up the skeleton of the mesh.  Each vertex in the skin is associated with one or more bones, along with a weight that determines how much influence the bone should have on the final position of the skin vertex.  Each bone is represented by a transformation matrix specifying the translation, rotation and scale that determines the final position of the bone.  The bones are defined in a hierarchy, so that each bone’s transformation is specified relative to its parent bone.  Thus, given a standard bipedal skeleton, if we rotate the upper arm bone of the model, this rotation will propagate to the lower arm and hand bones of the model, analogously to how our actual joints and bones work.

Animations are defined by a series of keyframes, each of which specifies the transformation of each bone in the skeleton at a given time.  To get the appropriate transformation at a given time t, we linearly interpolate between the two closest keyframes.  Because of this, we will typically store the bone transformations in a decomposed form, specifying the translation, scale and rotation components separately, building the transformation matrix at a given time from the interpolated components.  A skinned model may contain many different animation sets; for instance, we’ll commonly have a walk animation, an attack animation, an idle animation, and a death animation.
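As a small illustration of that interpolation, here is a hedged C# sketch that blends two keyframes storing decomposed translation, scale and rotation components and then rebuilds the bone’s transform matrix; the Keyframe type and its members are illustrative, not the actual types used in the example.

using SlimDX;

public struct Keyframe {
    public float Time;
    public Vector3 Translation;
    public Vector3 Scale;
    public Quaternion Rotation;
}

public static class BoneAnimation {
    // Returns the bone's local transform at time t, where k0 and k1 are the
    // two keyframes that bracket t.
    public static Matrix Interpolate(Keyframe k0, Keyframe k1, float t) {
        var lerp = (t - k0.Time) / (k1.Time - k0.Time);

        var translation = Vector3.Lerp(k0.Translation, k1.Translation, lerp);
        var scale = Vector3.Lerp(k0.Scale, k1.Scale, lerp);
        // Rotations are interpolated spherically, which behaves much better than
        // lerping matrices or Euler angles directly.
        var rotation = Quaternion.Slerp(k0.Rotation, k1.Rotation, lerp);

        // Rebuild the transform from the interpolated components.
        return Matrix.Scaling(scale) * Matrix.RotationQuaternion(rotation) * Matrix.Translation(translation);
    }
}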

The process of loading an animated mesh can be summarized as follows:

  1. Extract the bone hierarchy of the model skeleton.
  2. Extract the animations from the model, along with all bone keyframes for each animation.
  3. Extract the skin vertex data, along with the vertex bone indices and weights.
  4. Extract the model materials and textures.

To draw the skinned model, we need to advance the animation to the correct frame, then pass the bone transforms to our vertex shader, where we will use the per-vertex bone indices and weights to transform the vertex position to the proper location.
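To make the skinning step concrete, here is a CPU-side illustration of the blend the vertex shader performs; in the real demo this runs on the GPU using bone transforms uploaded to the effect, so the method below is only a hedged sketch of the math.

using SlimDX;

public static class Skinning {
    // Blends the bind-pose position by the bones that influence the vertex;
    // the weights are assumed to sum to 1.
    public static Vector3 SkinPosition(Vector3 position, int[] boneIndices, float[] weights, Matrix[] finalBoneTransforms) {
        var result = Vector3.Zero;
        for (var i = 0; i < boneIndices.Length; i++) {
            // Transform the position by this bone's final transform and
            // accumulate it according to the bone's weight.
            var transformed = Vector3.TransformCoordinate(position, finalBoneTransforms[boneIndices[i]]);
            result += transformed * weights[i];
        }
        return result;
    }
}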

skinnedModels