This time, we are going to take the scene that we used for the Shapes Demo and apply a three-point lighting shader to it. We’ll replace the central sphere in the scene with the skull model that we loaded from a file in the Skull Demo, to make things a little more interesting. We will also do some work to encapsulate our shader in a C# class, since we will be using this effect as the basis that we extend when we look at texturing, blending and other effects. As always, the full code for this example can be found at my GitHub repository https://github.com/ericrrichards/dx11.git; the project for this example is in the DX11 solution under Examples/LitSkull.
[Screenshots: Rendering the LitSkull scene with 1 light (key only), with 2 lights (key and fill), and with 3 lights (key, fill and back)]
We'll start by writing our shader effect. We will make use of the light and material structures and the lighting functions that we previously defined in LightHelper.fx. The shader supports up to three directional lights and provides three techniques: one using just the first light, one using the first two, and one using all three. The number of lights is selected by the uniform parameter gLightCount passed to the pixel shader; because it is supplied at compile time for each technique, the lighting loop can be fully unrolled.
#include "LightHelper.fx"
cbuffer cbPerFrame
{
DirectionalLight gDirLights[3];
float3 gEyePosW;
};
cbuffer cbPerObject
{
float4x4 gWorld;
float4x4 gWorldInvTranspose;
float4x4 gWorldViewProj;
Material gMaterial;
};
struct VertexIn
{
float3 PosL : POSITION;
float3 NormalL : NORMAL;
};
struct VertexOut
{
float4 PosH : SV_POSITION;
float3 PosW : POSITION;
float3 NormalW : NORMAL;
};
VertexOut VS(VertexIn vin)
{
VertexOut vout;
// Transform to world space space.
vout.PosW = mul(float4(vin.PosL, 1.0f), gWorld).xyz;
vout.NormalW = mul(vin.NormalL, (float3x3)gWorldInvTranspose);
// Transform to homogeneous clip space.
vout.PosH = mul(float4(vin.PosL, 1.0f), gWorldViewProj);
return vout;
}
float4 PS(VertexOut pin, uniform int gLightCount) : SV_Target
{
// Interpolating normal can unnormalize it, so normalize it.
pin.NormalW = normalize(pin.NormalW);
// The toEye vector is used in lighting.
float3 toEye = gEyePosW - pin.PosW;
// Cache the distance to the eye from this surface point.
float distToEye = length(toEye);
// Normalize.
toEye /= distToEye;
//
// Lighting.
//
// Start with a sum of zero.
float4 ambient = float4(0.0f, 0.0f, 0.0f, 0.0f);
float4 diffuse = float4(0.0f, 0.0f, 0.0f, 0.0f);
float4 spec = float4(0.0f, 0.0f, 0.0f, 0.0f);
// Sum the light contribution from each light source.
[unroll]
for(int i = 0; i < gLightCount; ++i)
{
float4 A, D, S;
ComputeDirectionalLight(gMaterial, gDirLights[i], pin.NormalW, toEye,
A, D, S);
ambient += A;
diffuse += D;
spec += S;
}
float4 litColor = ambient + diffuse + spec;
// Common to take alpha from diffuse material.
litColor.a = gMaterial.Diffuse.a;
return litColor;
}
technique11 Light1
{
pass P0
{
SetVertexShader( CompileShader( vs_4_0, VS() ) );
SetGeometryShader( NULL );
SetPixelShader( CompileShader( ps_4_0, PS(1) ) );
}
}
technique11 Light2
{
pass P0
{
SetVertexShader( CompileShader( vs_4_0, VS() ) );
SetGeometryShader( NULL );
SetPixelShader( CompileShader( ps_4_0, PS(2) ) );
}
}
technique11 Light3
{
pass P0
{
SetVertexShader( CompileShader( vs_4_0, VS() ) );
SetGeometryShader( NULL );
SetPixelShader( CompileShader( ps_4_0, PS(3) ) );
}
}
To make this shader easier to use, we are going to encapsulate it in a C# class. We’ll begin by creating a base class that we can use for all of the shaders we create going forward. This base class just wraps the SlimDX Effect object, which is created in the constructor from the device and the path to a compiled effect file. We will also use our DisposableClass (covered in DirectX 11 Initialization with SlimDX) as the base for this Effect class, so that we can easily and cleanly dispose of the Effect COM pointer. We declare the class as abstract to prevent it from being used directly; without any public access to the Effect member, an instance of this class would be pretty useless on its own.
public abstract class Effect : DisposableClass {
    protected SlimDX.Direct3D11.Effect FX;
    private bool _disposed;

    protected Effect(Device device, string filename) {
        ShaderBytecode compiledShader = null;
        try {
            compiledShader = new ShaderBytecode(new DataStream(File.ReadAllBytes(filename), false, false));
            FX = new SlimDX.Direct3D11.Effect(device, compiledShader);
        } catch (Exception ex) {
            MessageBox.Show(ex.Message);
        } finally {
            Util.ReleaseCom(compiledShader);
        }
    }

    protected override void Dispose(bool disposing) {
        if (!_disposed) {
            if (disposing) {
                Util.ReleaseCom(FX);
            }
            _disposed = true;
        }
        base.Dispose(disposing);
    }
}
Now we create the subclass for our shader effect, which we will call BasicEffect. This is inspired by the XNA BasicEffect class, which provides most of the functionality that we will eventually add to our Basic.fx shader. We cache the pointers to our shader techniques and constant buffer variables, and provide methods to set the constants’ values.
public class BasicEffect : Effect {
    public EffectTechnique Light1Tech;
    public EffectTechnique Light2Tech;
    public EffectTechnique Light3Tech;

    private EffectMatrixVariable WorldViewProj;
    private EffectMatrixVariable World;
    private EffectMatrixVariable WorldInvTranspose;
    private EffectVectorVariable EyePosW;
    private EffectVariable DirLights;
    private EffectVariable Mat;

    public BasicEffect(Device device, string filename) : base(device, filename) {
        Light1Tech = FX.GetTechniqueByName("Light1");
        Light2Tech = FX.GetTechniqueByName("Light2");
        Light3Tech = FX.GetTechniqueByName("Light3");

        WorldViewProj = FX.GetVariableByName("gWorldViewProj").AsMatrix();
        World = FX.GetVariableByName("gWorld").AsMatrix();
        WorldInvTranspose = FX.GetVariableByName("gWorldInvTranspose").AsMatrix();
        EyePosW = FX.GetVariableByName("gEyePosW").AsVector();
        DirLights = FX.GetVariableByName("gDirLights");
        Mat = FX.GetVariableByName("gMaterial");
    }

    public void SetWorldViewProj(Matrix m) {
        WorldViewProj.SetMatrix(m);
    }
    public void SetWorld(Matrix m) {
        World.SetMatrix(m);
    }
    public void SetWorldInvTranspose(Matrix m) {
        WorldInvTranspose.SetMatrix(m);
    }
    public void SetEyePosW(Vector3 v) {
        EyePosW.Set(v);
    }
    public void SetDirLights(DirectionalLight[] lights) {
        System.Diagnostics.Debug.Assert(lights.Length <= 3, "BasicEffect only supports up to 3 lights");
        var array = new List<byte>();
        foreach (var light in lights) {
            var d = Util.GetArray(light);
            array.AddRange(d);
        }
        DirLights.SetRawValue(new DataStream(array.ToArray(), false, false), array.Count);
    }
    public void SetMaterial(Material m) {
        var d = Util.GetArray(m);
        Mat.SetRawValue(new DataStream(d, false, false), d.Length);
    }
}
We will also create a static class called Effects, which will hold global instances of any shaders that we develop. Right now we only have our BasicEffect, but later on we will develop other shaders for different effects. Global variables are generally considered poor practice, but this is a reasonable use for one: we only want a single instance of each shader type, and as we move into more complicated examples our drawing code may end up spread across several functions, so it is simpler to use a global than to create the effect in our main class and pass it around. This also gives us a central point to manage the lifecycle of our shader objects, which we create with the InitAll() function after we have created the Device and clean up with the DestroyAll() function.
public static class Effects {
    public static void InitAll(Device device) {
        BasicFX = new BasicEffect(device, "FX/Basic.fxo");
    }
    public static void DestroyAll() {
        BasicFX.Dispose();
        BasicFX = null;
    }
    public static BasicEffect BasicFX;
}
In a similar fashion, we will centralize the InputLayouts for our various vertex formats in a static InputLayouts class. We will additionally create a static class to hold the InputElement arrays for each vertex structure. I was tempted to remove this class and instead add the InputElement[] as a public static readonly member of the relevant vertex structure, but for the moment I am following Mr. Luna’s example and using a static class. If we ever develop a shader that uses the same vertex format but different semantics, this will be the more flexible approach.
public static class InputLayoutDescriptions {
    public static readonly InputElement[] PosNormal = new[] {
        new InputElement("POSITION", 0, Format.R32G32B32_Float, 0, 0, InputClassification.PerVertexData, 0),
        new InputElement("NORMAL", 0, Format.R32G32B32_Float, 12, 0, InputClassification.PerVertexData, 0),
    };
}

public static class InputLayouts {
    public static void InitAll(Device device) {
        var passDesc = Effects.BasicFX.Light1Tech.GetPassByIndex(0).Description;
        PosNormal = new InputLayout(device, passDesc.Signature, InputLayoutDescriptions.PosNormal);
    }
    public static void DestroyAll() {
        Util.ReleaseCom(PosNormal);
    }
    public static InputLayout PosNormal;
}
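One thing these listings do not show is where the matching DestroyAll() calls live. Here is a minimal sketch of the cleanup side, assuming the demo class overrides Dispose(bool) in the same pattern as the DisposableClass-based Effect above; the _disposed field and the exact placement are assumptions for illustration rather than code from the sample.

// Sketch: releasing the globally-held effect and input layout when the demo shuts down.
// Mirrors the Dispose pattern used by the Effect base class above; names are assumed.
protected override void Dispose(bool disposing) {
    if (!_disposed) {
        if (disposing) {
            // The demo's own vertex and index buffers would be released here as well.
            Effects.DestroyAll();
            InputLayouts.DestroyAll();
        }
        _disposed = true;
    }
    base.Dispose(disposing);
}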
With this foundational work completed, we can move on to implement our demo application. We will use the Shapes Demo as our starting point, and the changes we make to add lighting will be very similar to those from the previous lighting demo. We need to define our lights and materials in the constructor, as we did in the LitTerrain Demo (a sketch of that setup follows the Init() listing below). Our Init() function changes fairly significantly: we create our effect using the new Effects.InitAll() function after the device has been created, and then initialize our global InputLayouts with its InitAll() function. We must do this after the BasicEffect has been created, since we need the effect pass signature to bind the input layout to the appropriate semantics.
public override bool Init() {
    if (!base.Init()) {
        return false;
    }
    Effects.InitAll(Device);
    _fx = Effects.BasicFX;
    InputLayouts.InitAll(Device);

    BuildShapeGeometryBuffers();
    BuildSkullGeometryBuffers();

    Window.KeyDown += SwitchLights;
    return true;
}
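As mentioned above, the lights and materials are defined in the demo's constructor. Here is a rough sketch of what a three-point rig looks like; the field names, the helper method, and all of the specific color and direction values are illustrative assumptions rather than the exact values from the sample, and they assume the DirectionalLight and Material structures from LightHelper.

// Illustrative three-point light rig and skull material, set up from the constructor.
// Field names and values are assumptions for this sketch; see the GitHub sample for the real ones.
private DirectionalLight[] _dirLights;
private Material _skullMat;

private void BuildLightsAndMaterials() {
    _dirLights = new[] {
        // Key light: the brightest, carrying most of the diffuse and specular energy.
        new DirectionalLight {
            Ambient = new Color4(0.2f, 0.2f, 0.2f),
            Diffuse = new Color4(0.5f, 0.5f, 0.5f),
            Specular = new Color4(0.5f, 0.5f, 0.5f),
            Direction = new Vector3(0.57735f, -0.57735f, 0.57735f)
        },
        // Fill light: dimmer and roughly opposite the key, to soften its shadows.
        new DirectionalLight {
            Ambient = new Color4(0.0f, 0.0f, 0.0f),
            Diffuse = new Color4(0.2f, 0.2f, 0.2f),
            Specular = new Color4(0.25f, 0.25f, 0.25f),
            Direction = new Vector3(-0.57735f, -0.57735f, 0.57735f)
        },
        // Back light: rims the silhouette to separate the subject from the background.
        new DirectionalLight {
            Ambient = new Color4(0.0f, 0.0f, 0.0f),
            Diffuse = new Color4(0.2f, 0.2f, 0.2f),
            Specular = new Color4(0.0f, 0.0f, 0.0f),
            Direction = new Vector3(0.0f, -0.707f, -0.707f)
        }
    };
    _skullMat = new Material {
        Ambient = new Color4(0.8f, 0.8f, 0.8f),
        Diffuse = new Color4(0.8f, 0.8f, 0.8f),
        // Following Luna's convention, the alpha/w component of Specular holds the specular power.
        Specular = new Color4(16.0f, 0.8f, 0.8f, 0.8f)
    };
}

Because the key light does most of the work, the fill adds a dimmer contribution and the back light mostly just catches edges, the three shader techniques look progressively richer as lights are enabled, which is exactly what the screenshots at the top illustrate.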
We will also add functionality to this demo to allow the user to switch between 1, 2, or 3 lights using the keyboard. To do this, we add a KeyDown event handler, SwitchLights, to our main application window.
private void SwitchLights(object sender, KeyEventArgs e) {
    switch (e.KeyCode) {
        case Keys.D0:
            _lightCount = 0;
            break;
        case Keys.D1:
            _lightCount = 1;
            break;
        case Keys.D2:
            _lightCount = 2;
            break;
        case Keys.D3:
            _lightCount = 3;
            break;
    }
}
Beyond that, there is not much to change from our base examples. You will need to set the appropriate shader variables in DrawScene, but that should be straightforward after our last demo.
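To make that concrete, here is a stripped-down sketch of the relevant portion of DrawScene, covering only the skull: the per-frame variables (lights and eye position) are set once, the technique is chosen from _lightCount, and the per-object variables are set before each draw. Member names like _view, _proj, _eyePosW, _skullWorld, _skullVB, _skullIB, _skullIndexCount, the VertexPN stride, and the MathF.InverseTranspose helper are assumptions based on the earlier demos rather than exact code from the sample, and the clear, shapes, and Present calls are omitted.

// Sketch of the effect-variable plumbing in DrawScene (skull only, no clears or Present).
ImmediateContext.InputAssembler.InputLayout = InputLayouts.PosNormal;
ImmediateContext.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;

var viewProj = _view * _proj;

// Per-frame constants: the light array and the camera position.
_fx.SetDirLights(_dirLights);
_fx.SetEyePosW(_eyePosW);

// Pick the technique matching the current light count.
var activeTech = _fx.Light1Tech;
switch (_lightCount) {
    case 2:
        activeTech = _fx.Light2Tech;
        break;
    case 3:
        activeTech = _fx.Light3Tech;
        break;
}

for (var p = 0; p < activeTech.Description.PassCount; p++) {
    // Per-object constants for the skull.
    _fx.SetWorld(_skullWorld);
    _fx.SetWorldInvTranspose(MathF.InverseTranspose(_skullWorld));
    _fx.SetWorldViewProj(_skullWorld * viewProj);
    _fx.SetMaterial(_skullMat);

    activeTech.GetPassByIndex(p).Apply(ImmediateContext);
    ImmediateContext.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(_skullVB, VertexPN.Stride, 0));
    ImmediateContext.InputAssembler.SetIndexBuffer(_skullIB, Format.R32_UInt, 0);
    ImmediateContext.DrawIndexed(_skullIndexCount, 0, 0);
}

If the InverseTranspose helper is not available, Matrix.Transpose(Matrix.Invert(world)) does the same job. In the actual sample, the same per-object pattern repeats for the grid, box, cylinders, and spheres, each with its own world matrix and material.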
That wraps up our demos for Chapter 7. Next time, we’ll move on to Chapter 8 and start adding textures to our objects.
Comments

The number parsing in this code is done without a format provider, which makes it fail when the current locale uses "," as the decimal separator. The SkullDemo code takes care of that issue, but the LightingDemo does not. I would suggest using Convert.ToSingle(vals[0].Trim(), CultureInfo.InvariantCulture).
Yes, that is something I've been meaning to fix - as I'm sure you can tell, I'm based in the United States, and the code that I was working from was based on C++ with no thought for internationalization...
ReplyDeleteWhat I really would like to do is replace this home-brew text-based model format that I inherited from Luna's examples with a more standard model format and the Assimp-based model loading code that I developed subsequently. Although that would break the progression from simple to more complex examples somewhat.
Typo: It is the LitSkull project that has this problem, not LightingDemo.
This CultureInfo issue should be fixed now in the most recent version on GitHub.
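For anyone applying the fix by hand to an older checkout, it amounts to routing the float parsing through the invariant culture, roughly like this; the ParseFloat helper is just an illustration, not something in the repository, and it assumes the model-file tokens have already been split into strings as in the Skull Demo.

using System;
using System.Globalization;

// Locale-safe float parsing, as suggested in the comment above: always pass
// CultureInfo.InvariantCulture so "0.5" parses correctly even when the current
// culture uses "," as the decimal separator.
static float ParseFloat(string token) {
    return Convert.ToSingle(token.Trim(), CultureInfo.InvariantCulture);
}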