Categories: AI, Competitions, Development, Games

Ludum Dare 48h & Locomotion

It looks like there’s another Ludum Dare 48 hour game programming competition starting soon, so I’ll hopefully be taking part in that. Spread the word and join in – the more the merrier!

Meanwhile, back on my main project, I’m having a lot of trouble just trying to get a simple “chase player” behaviour working. Pathfinding (with A*) wasn’t too hard, but there are lots of icky low-level details involved in moving a character around the environment which are creeping into my behaviour and generally cluttering it up. And because behaviours are so temporary, stopping and restarting one (such as repathing when the player moves far enough to invalidate the current path) tends to lead to unpleasant animation jittering as the character rapidly switches between idle and running animations.

So I’m trying to pull some of the low-level movement and animation out into a “locomotion” layer. The idea being that it will take high-level orders like “run left to this point” or “jump to this waypoint” and automatically handle transitioning from the current animation to the next, as well as the low-level ground following and animation ticking.

The idea is that if a behaviour aborts or switches, the locomotion layer will retain its current action and state, so it’ll continue with its current animation until the behaviour issues a new order. So hopefully switching to or restarting the “chase” behaviour means that, instead of snapping to a halt and then starting to run again (often in the same direction), the enemy should continue to run as it was until the “chase” behaviour decides where it actually wants to go. The locomotion layer should probably also have a default animation to revert to should the current action finish, so if a behaviour is taking a particularly long time to respond it’ll start playing an idle animation rather than doing nothing.
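
To make that a bit more concrete, here’s roughly the shape of the interface I have in mind – just a minimal sketch, with all the names made up for illustration rather than taken from the actual code:

// A minimal sketch of the locomotion layer idea - every name here is a
// placeholder for illustration, not the real project code.
public interface Locomotion
{
	// High-level orders issued by behaviours
	void runTo(float x, float y);
	void jumpTo(float x, float y);

	// Ticked every frame whether or not a behaviour is active, so the
	// current action (and its animation) carries on between orders
	void update(int delta);

	// True once the current order has finished and we've dropped back
	// to the default idle animation
	boolean isIdle();
}

The important bit is that a behaviour only ever issues orders – the locomotion layer owns the animation state, so nothing should jitter just because a behaviour restarted.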

That’s the theory at least. In practice drawing the line between locomotion and AI logic is proving tricky, so we’ll see how it evolves as things progress…

Categories: Development, Games, Graphics, Shaders

2D Ambient Shadows

For the last few days I’ve taken some time off from the AI work to mess around with some graphical effects, and in particular I’ve been experimenting with a 2D ambient shadows effect. This is inspired by the Screen Space Ambient Occlusion (SSAO) effect which has become popular lately, and is largely a translation and adaptation of it into a 2D world.

To start with, here’s my test scene (unrelated to the current platformer/ai work):


(all screenshots are a quarter of full size, click to view the full-sized version)

That’s a whole bunch of tightly packed parallax layers with some trees and letters interleaved between them. The parallax is quite subtle, so it’s mostly lost in a static shot, but it creates a nice 3D effect when scrolling.

The first step is to also generate a depth map from this. Since we’re in 2D and don’t have a z-buffer, we can fake one with a simple shader that tints the sprites based on their depth.

(I’ve artificially tweaked the colour levels in the above to exaggerate the layers, as otherwise you only really see white objects on a black background). It looks a bit jaggier than the base colour version because the depth shader clamps the alpha values of the sprite textures to either zero or one, as otherwise the semi-transparent pixels introduce errors in the next step.

Next comes the real magic: we apply the ambient shader. This accepts the previously generated depth texture as an input. For each fragment it looks up its base depth, then samples a number of surrounding texels and finds their depths as well. Surrounding depths which are higher than our base depth (i.e. from a surface in front of us) darken our ambient shadow factor. We also apply a cutoff to this test, so depths which are really far in front get ignored – we decide their shadow won’t be cast onto our current pixel. Surrounding depths lower than (i.e. behind) our base depth are ignored.

Surrounding texels are found using precalculated Poisson disc offsets, in a similar way to a traditional growable blur. We also apply a constant offset to these samples so that the shadows appear dropped slightly down and to the left of their casters.

This produces the raw ambient map:

You can see how the grass layers are much more clearly defined and that letters both cast shadows onto trees behind them and receive shadows from trees in front as well.

Since this is a little noisy, we apply a simple blur to the raw ambient map:

Then as a final stage we combine the blurred ambient map with the colour map (and any other layers, like a bloom map) into the framebuffer:

A pretty neat effect I think – it’s certainly got a lot more depth than the basic colour version, and the shadows on moving objects really help them feel like they’re part of the world.

And if anyone wants to play around with this, here’s the GLSL shader to generate the raw ambient map:


uniform sampler2D depthMap;

const int numSamples = 16;
const float divisor = 1.0 / float(numSamples);

vec2 samples[numSamples];


void main()
{
	// Our generated poisson disc sample offsets
	samples[0] = vec2(0.007937789, 0.73124397);
	samples[1] = vec2(-0.10177308, -0.6509396);
	samples[2] = vec2(-0.9906806, -0.63400936);
	samples[3] = vec2(-0.5583586, -0.3614012);
	samples[4] = vec2(0.7163085, 0.22836149);
	samples[5] = vec2(-0.65210974, 0.37117887);
	samples[6] = vec2(-0.12714535, 0.112056136);
	samples[7] = vec2(0.48898065, -0.66669613);
	samples[8] = vec2(-0.9744036, 0.9155904);
	samples[9] = vec2(0.9274436, -0.9896486);
	samples[10] = vec2(0.9782181, 0.90990245);
	samples[11] = vec2(0.96427417, -0.25506377);
	samples[12] = vec2(-0.5021933, -0.9712455);
	samples[13] = vec2(0.3091557, -0.17652994);
	samples[14] = vec2(0.4665941, 0.96454906);
	samples[15] = vec2(-0.461774, 0.9360856);

	// Sample spread distance
	float spread = 0.007;

	// Offset to make shadows set slightly down and left from their caster
	vec2 depthOffset = vec2(0.001, 0.003);

	// Grab the base texture coord
	vec2 baseCoord = gl_TexCoord[0].xy;

	float baseDepth = texture2D(depthMap, baseCoord).r;

	float ambient = 1.0;
	for (int i=0; i<numSamples; i++)
	{
		float offsetDepth = texture2D(depthMap, baseCoord + depthOffset +
					(samples[i] * spread) ).r;
		float diff = offsetDepth - baseDepth;	// diff is positive if the offset depth is in front of us

		float cutoff = 0.08;	// limits how far objects can cast a shadow
		float threshold = 0.01;	// must cross this threshold to cast a shadow

		if (diff < cutoff && diff > threshold)
		{
			diff = clamp(diff, 0.0, cutoff);
			diff = cutoff - diff;

			ambient -= diff;
		}
	}

	gl_FragColor = vec4(ambient, ambient, ambient, 1.0);
}

All of the other shaders are trivial, so I won’t include those. And the above is probably pretty sub-optimal as it was written for clarity rather than speed but it seems to fly along at a nice smooth framerate regardless. 🙂

If anyone experiments further with this I’d be interested in hearing about your results and comments. Ta.

Categories: AI, Development, Games

Jumping, launch velocities and rounding errors

The physics behind a jump in a platformer is pretty simple and something I must have written a thousand times, but having an AI player jump and land where it wants to is considerably trickier. My initial attempt was to string a bezier curve between the start and end points and just make the enemy follow that – this is easy to do and very reliable (as it will always reach the exact end point), but it produces unconvincing motion. It really needs to use the proper physics, otherwise it looks out of place.

So if we’re using the proper physics, we want to calculate a launch velocity and then just let the simulation do the rest. However, finding a suitable launch velocity isn’t easy – there are lots of unknowns and variables, largely because there are many possible jump arcs between any two points. You need to nail down a few of the variables so that ideally only one solution pops out.

After several failed attempts I’ve found that specifying the jump apex gives nice, consistent and controllable results. I pick the apex by applying an offset from the highest of the start and end points, then split the jump into three components: vertically to the apex, vertically from the apex, and horizontally for the whole jump:

ImmutableVector2f startPos = enemy.getPosition();
ImmutableVector2f endPos = nextWaypoint.getPosition();

// First, find jump apex
final float highestY = Math.max(startPos.y(), endPos.y());
final float apexY = highestY + 100;

// Vertically, to apex
// t = sqrt( 2s / a )

final float s1 = apexY - startPos.y();
final float t1 = (float)Math.sqrt( (2 * s1) / -FallBehaviour.GRAVITY);

// What's our launch speed for this section?
final float vy = -FallBehaviour.GRAVITY * t1;

// Vertically from apex

final float s2 = apexY - endPos.y();
final float t2 = (float)Math.sqrt( (2 * s2) / -FallBehaviour.GRAVITY);

// Total time for entire jump
final float tTotal = t1 + t2;

// Horizontally
final float s3 = (endPos.x() - startPos.x());
final float vx = s3 / tTotal;

brain.jump( new Vector2f(vx, vy) );

Unfortunately this has two downsides. The structure of a level means that sometimes there is an intervening platform that the AI wants to jump “through”, but the simulation causes it to land on it instead. The other downside is that rounding errors mean you don’t always hit the target exactly, and since my collision is all sub-pixel, you can miss the edge of the platform by a fraction and end up plummeting to your doom.

The solution is to also calculate the expected duration of the jump and, when that time is up, snap the enemy to the target point (disabling all other ground collisions in between). This gives us the best of both approaches – the AI always ends up exactly where it wanted to, while still following a convincing curve. And since we’re running a proper simulation, if something happens during the jump (like we get hit) we can easily turn off the special collision handling and let the physics take over.
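
As a rough sketch of that timer (the field and method names here are invented for the example rather than lifted from the actual code):

// Illustrative only - tracks the expected flight time of the current jump
// and snaps to the target when it runs out.
private Vector2f jumpTarget;	// where this jump is meant to land
private float jumpTimeLeft;	// tTotal from the launch calculation above

public void updateJump(float delta)
{
	if (jumpTarget == null)
		return;	// not currently mid-jump

	jumpTimeLeft -= delta;
	if (jumpTimeLeft <= 0)
	{
		// Flight time is up: snap to the exact landing point so sub-pixel
		// rounding errors can't leave us just short of the platform edge
		enemy.setPosition(jumpTarget);	// hypothetical setter
		jumpTarget = null;	// normal ground collision resumes from here
	}
}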

On an entirely different note, Deaths (thanks indie gamer) is a really neat idea for a platformer with a twist (and, amusingly, no AI whatsoever). Passive online interaction like this is something I’ve been thinking about for a while but can never manage to come up with an idea where it’s an important part of the gameplay rather than just a novelty – Deaths manages this quite well I think.

Categories: AI, Development

Yet More On Behaviour Trees

So I’ve been going through as much material on behaviour trees as I can find, to try and figure out whether they’re going to be appropriate for what I’m doing and how they actually work. “Behavior Trees for Next-Gen Game AI” is very comprehensive and well worth a watch if you’re interested (despite my dislike for videos – text is just so much more practical), and I think I can see how it would come together to produce interesting behaviour.

In a way I’m actually feeling a little disappointed – the current game design doesn’t call for massively complex AI (you are fighting zombies after all), but now I have the urge to switch to something with fewer enemies and a much richer set of behaviours. That will have to wait though, and the current game will give me a chance to walk before I run anyway.

I’m not entirely sure how I’m going to handle interrupted states and animation (like when an enemy gets hit right in the middle of an attack). Currently I’m thinking of having behaviours listen to events, and making them fail and bail out if they take damage. It seems like it would work, but it sounds like it would require handling this event all over the tree, which could get tedious and error-prone. On the other hand it might be a good way of intelligently playing different damaged/death animations depending on what we were doing at the time.
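
For what it’s worth, here’s the kind of shape I’m imagining – just a sketch with invented names, not anything that exists in the project yet:

// Sketch only: a behaviour that listens for a damage event and reports
// failure on its next tick, letting the tree abort and react.
public abstract class Behaviour
{
	public enum Status { RUNNING, SUCCEEDED, FAILED }

	private boolean damaged = false;

	// Called by whatever event system the enemy ends up hooked into
	public void onDamage()
	{
		damaged = true;
	}

	public final Status tick(int delta)
	{
		if (damaged)
		{
			damaged = false;
			return Status.FAILED;	// bail out and let a parent node decide what to do
		}
		return update(delta);
	}

	// The actual behaviour logic lives in the subclasses
	protected abstract Status update(int delta);
}

Handling the event once in a base class like this might avoid scattering damage checks all over the tree, though whether a plain failure status carries enough information to pick the right reaction animation is another question.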

Categories: AI, Development

Behaviour / AI rambling

Enemy behaviour in my current (as yet unnamed) game hasn’t come on very far – just a single enemy with a traditional hard-coded finite state machine (done the old-school way, with a big switch statement). That was initially fine, as it let me get some of the more important low-level details into place, but now that I’m looking at adding more enemies it’s not looking so hot, so something more elaborate is called for.
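
For the curious, the old-school approach looks roughly like this (a simplified sketch rather than the actual enemy code – the states and helpers are made up):

// Simplified sketch of a hard-coded FSM enemy - states and helper
// methods are invented for the example.
class ZombieEnemy
{
	enum State { IDLE, CHASE, ATTACK }

	private State state = State.IDLE;

	public void update(int delta)
	{
		switch (state)
		{
			case IDLE:
				if (canSeePlayer())
					state = State.CHASE;
				break;

			case CHASE:
				moveTowardsPlayer(delta);
				if (inAttackRange())
					state = State.ATTACK;
				break;

			case ATTACK:
				attackPlayer(delta);
				if (!inAttackRange())
					state = State.CHASE;
				break;
		}
	}

	// Stubs standing in for the real game queries and actions
	private boolean canSeePlayer() { return false; }
	private boolean inAttackRange() { return false; }
	private void moveTowardsPlayer(int delta) { }
	private void attackPlayer(int delta) { }
}

Fine for one enemy, but every new attack or reaction means threading more cases and transitions through the same switch, which is exactly what’s starting to hurt.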

Ai Game Dev has been in my bookmarks for a while now, and provides a lot of interesting reading. The approach F.E.A.R. takes towards its AI is particularly interesting and probably something which would work well, but it’s a little beyond me at the moment. Instead what’s caught my eye is behaviour trees. In particular they seem to solve a problem that I’ve been having – how to write specific modules of behaviour (like a specific enemy attack) so that they can be reused and rearranged, rather than each having an explicit “next” behaviour.
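
As far as I understand it, the core of the idea is something like this – a tiny sketch with invented names, just to show how behaviours compose without knowing what comes next:

// Sketch only: behaviours share a common interface, and composite nodes
// like Sequence arrange them - no behaviour hard-codes its successor.
interface Behaviour
{
	enum Status { RUNNING, SUCCEEDED, FAILED }
	Status tick(int delta);
}

class Sequence implements Behaviour
{
	private final Behaviour[] children;
	private int current = 0;

	Sequence(Behaviour... children)
	{
		this.children = children;
	}

	public Status tick(int delta)
	{
		while (current < children.length)
		{
			Status status = children[current].tick(delta);
			if (status != Status.SUCCEEDED)
				return status;	// still running, or the sequence as a whole fails
			current++;
		}
		return Status.SUCCEEDED;
	}
}

A specific attack then becomes just another behaviour that can be dropped into a Sequence (or a selector, or whatever) wherever it’s wanted, which is exactly the sort of reuse I’m after.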

I’m not sure I entirely understand how it’s all going to fit together with some of the higher-level gameplay interactions, but it’s a promising direction. I think I shall leave my current FSM enemy as it is and code up the next enemy as a behaviour tree (or possibly redo the same one) and see what the resulting code is like.

Since I haven’t really done a scrolling beat-’em-up before, I’m expecting to take a few wrong turns with the AI before I find something that works. If anyone has any experience to share then feel free to leave a comment.