The next major stage of production is necessarily focused on tools and pipelines. This post covers design tools rather than art tools, as these have a fundamentally different focus when building simulation-driven games. Art tools and pipeline problems may be just as important, but they are better understood; best practices from other types of games apply to them readily.
Previous entries in the series:
- Part 1 – An overview of the problems applying common production processes to these types of games.
- Part 2 – Description of the concept phase & deliverables
- Part 3 – Guidelines and pitfalls for building isolated prototypes in the concept phase.
- Part 4 – The Connected Systems Playable
A Tool Is Not a Goal Unto Itself
As mentioned in the first post, EA’s XLevel is the phase of production that follows the first playable milestone. While it involves building a larger slice of gameplay, its real purpose is to build and prove tools and asset pipelines.
A similar goal is needed here to give this milestone focus, but because of the nature of simulation-driven gameplay, the scope is up to us to define. If it is a level-driven systemic game (like the Thief series), building finished production assets and gameplay for an entire level may still suffice.
Otherwise, we have to pick an area of the game to take on. The definition of area is going to change drastically by game type. In an open city game it can be literal, focusing on a neighborhood and the characters and vehicles in it.
Because of the scope, you may need to limit this to risky asset types. For instance, if you’re building a god game with dynamic terrain but very simple characters, you may choose to focus solely on terrain tools and systems, given the much lower fidelity requirements on characters.
By at least being consistent in which types of assets you finish (if not a spatial area of the game), you can still try to get players to focus on gameplay during playtesting. By clearly communicating which classes of assets are being polished and which are not, hopefully you can still partially avoid the problem discussed in the previous post under “Art Directing Your Prototype”.
I would claim the best area to focus on next is actually the tutorial, instead of following the more standard practice of waiting until the end of production. I need a lot more space to make that claim, though…
Focus on Design Tools
Design tools for simulation-driven games have unique challenges. They are less understood than art pipelines, yet, unlike in other game types, they are more crucial to building and testing gameplay effectively (due to the larger role systems play in the experience).
They typically need to focus more on adding and removing rules than on tweaking them. In a linear shooter like Call of Duty, weapons are added very rarely, but the parameters of an individual weapon’s systems are fine-tuned almost endlessly.
Since our simulation’s experience will be defined by how our systems interact, we need to add and remove minor rules much more frequently to enforce the right positive/negative feedback loops. That is the fundamental work designers have to do so gameplay events occur at the frequency they need to.
Scripted elements need to interact with systemic elements smoothly. Naive industry thinking sees these approaches as separate. Ultimately, it is more a question of code architecture whether or not designers can successfully and robustly inject scripted moments into dynamic systems. (Uncoincidentally, most game programmers are horrible at architecting code.) Even the most simulation-driven game needs some scripted elements, such as for tutorials.
A Rookie Mistake
Perhaps the simplest example I can think of goes back to my initial implementation of the pedestrian traffic spawning in Scarface. In order to choose randomly from a given set of character models to populate the street, I defined a list of percentages, one for each character archetype. My thinking was that this would be the most natural way of reasoning about how many of each type you’d want (e.g. 10% tourists compared to 30% suited business types).
In practice, whenever you consider any change or addition, these values need to be redistributed to add up to the required 100% in order to make sense. Even if this is done automatically, it becomes harder to reason about relative amounts, which is actually more important than reasoning about absolute proportions.
When you consider that models will often be added to and removed from the list, and that some models in the list might not be in memory when it is evaluated, the designer editing those values will have an easier time editing them by weight. So the tourist has a weight of 1 and the businessman a weight of 3. If we add a new character, or if for some reason a character isn’t available in memory, the relative proportions remain clearly defined for the designer, because that’s what they spend their time defining.
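To make that concrete, here’s a minimal sketch of weight-based selection; the `Archetype` struct and its fields are hypothetical, not the actual Scarface code:

```cpp
#include <cstdlib>
#include <string>
#include <vector>

// Hypothetical archetype entry: a relative weight plus availability.
struct Archetype {
    std::string model;
    float weight;    // relative proportion: tourist = 1, businessman = 3
    bool inMemory;   // unloaded models are simply skipped
};

// Pick an archetype in proportion to its weight, considering only the
// models actually available. Adding or removing an entry never forces
// the designer to re-balance the others to sum to 100%.
const Archetype* PickArchetype(const std::vector<Archetype>& list) {
    float total = 0.0f;
    for (const Archetype& a : list)
        if (a.inMemory) total += a.weight;
    if (total <= 0.0f) return nullptr;

    float roll = total * (std::rand() / (float)RAND_MAX);
    const Archetype* last = nullptr;
    for (const Archetype& a : list) {
        if (!a.inMemory) continue;
        last = &a;
        if (roll < a.weight) return &a;
        roll -= a.weight;
    }
    return last;  // float rounding can leave a sliver at the top end
}
```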
The moral of this story might seem to be to always use weights when making these kinds of choices. What it really means is to worry more about how to best represent changes to design data than about how to best represent a single static instance of that data.
Declarative Authoring
One way to build robust systems that allow heavy tweaking is to rely on declarative authoring for any sort of gameplay transition.
Normally, if the player completes a gameplay objective or mission, the code (whether in C++, a visual state machine scripting language, or whatever) that executes at the end of that objective unlocks the next one.
Instead, imagine that chunk of code sets a piece of data defining the world state with that mission complete. The subsequent mission has that same piece of data marked as a requirement, and a corresponding piece of code that activates when the requirement has been met.
Fundamentally these are two ways of expressing the same functionality. BUT, authoring these transitions by declaring requirements in data has one important difference: there are typically fewer places to change when adding an element. If you wanted to add an ancillary mission that also opened up after completing the first, coding transitions explicitly would mean editing the end of the first mission as well as adding the new one. Authoring declaratively, you simply add a new mission with the same requirement.
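A minimal sketch of what that could look like; the mission and fact names here are hypothetical:

```cpp
#include <set>
#include <string>
#include <vector>

// The world state is just a set of facts that completed content asserts.
using WorldState = std::set<std::string>;

// Hypothetical mission record: what it requires, and what it asserts.
struct Mission {
    std::string name;
    std::vector<std::string> requirements;  // facts that must hold to unlock
    std::vector<std::string> asserts;       // facts set on completion
};

// A mission is available when every required fact is present. Note that
// nothing here refers to any other mission directly.
bool IsUnlocked(const Mission& m, const WorldState& world) {
    for (const std::string& req : m.requirements)
        if (world.count(req) == 0) return false;
    return true;
}

void CompleteMission(const Mission& m, WorldState& world) {
    world.insert(m.asserts.begin(), m.asserts.end());
}

// Adding an ancillary mission that also opens after the first is one new
// data entry with the same requirement; the first mission is untouched:
//   Mission heist   { "heist",   { "intro_done" }, { "heist_done" } };
//   Mission sideJob { "sideJob", { "intro_done" }, { "sideJob_done" } };
```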
The fewer places that need to be changed when adding elements to systems, the more robust your game will be as you build it through production. The ultimate benefit, though, is the player experience – by authoring transitional requirements this way, the player is more likely to find their own path through a relatively complex network of elements, instead of just the one explicitly authored.
Tagging FTW
A best-practice design pattern that is slowly emerging (well, slower than it should anyway) is using declarative authoring to define all (or almost all) procedural asset selection (read: hooking up stuff to be used during gameplay).
You can see a concrete example in this GDC talk by Elan Ruskin (slides, GDC vault video). In Left 4 Dead 2 (and other Valve games), character dialogue is procedurally chosen by pattern matching assets tagged with various traits to traits applied to the current game context.
We had actually started working on a similar system for LMNO. It had its origins in the sound database for Thief – sounds for game events, like an arrow hitting the ground, would be chosen by factoring in all the tags involved (such as the surface material, the arrow type, etc.), with some randomized weighting.
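A toy version of the idea, to make it concrete (this is my own sketch with a hypothetical "key=value" tag format, not Valve’s or the Thief implementation):

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical tagged asset: e.g. an arrow-impact sound tagged
// { "surface=stone", "arrow=broadhead" }.
struct TaggedAsset {
    std::string name;
    std::vector<std::string> tags;  // each tag is "key=value"
};

// Score an asset by how many of its tags the current game context
// satisfies; any tag the context contradicts rejects the asset outright.
int MatchScore(const TaggedAsset& asset,
               const std::map<std::string, std::string>& context) {
    int score = 0;
    for (const std::string& tag : asset.tags) {
        size_t eq = tag.find('=');
        auto it = context.find(tag.substr(0, eq));
        if (it == context.end() || it->second != tag.substr(eq + 1))
            return -1;  // tag not satisfied by context
        ++score;
    }
    return score;  // more specific assets score higher
}

// Pick the most specific asset that matches the current context.
const TaggedAsset* PickBest(const std::vector<TaggedAsset>& assets,
                            const std::map<std::string, std::string>& context) {
    const TaggedAsset* best = nullptr;
    int bestScore = -1;
    for (const TaggedAsset& a : assets) {
        int s = MatchScore(a, context);
        if (s > bestScore) { bestScore = s; best = &a; }
    }
    return best;
}
```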
This abstraction helps workflow because people authoring assets can more easily integrate them into existing systems without additional code hook-up. Gameplay analysis can also be done to find the most frequent requests and make sure they have enough variants, or to determine whether certain combinations are not showing up when they should, etc.
Scripting
The easiest way to allow for scripted moments in your simulation is to ensure the scripted behaviors are actually part of the system. For instance, let’s say a designer wants to script some story-related behavior for an AI character. If the AI is selecting behaviors based on priority, the designer should be able to set the priority of the script. That way, it can override unimportant idle behaviors but still allow crucial reactions like combat to interrupt without issues. By giving non-simulated parts a full-fledged slot in the simulation, you’re able to resolve conflicts between them in code or data.
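As a sketch, assuming a simple priority-arbitrated AI (the names and numbers are illustrative):

```cpp
#include <string>
#include <vector>

// Hypothetical behavior entry. Scripted behaviors get a priority slot just
// like simulated ones, so the normal arbitration resolves conflicts.
struct Behavior {
    std::string name;
    int priority;   // e.g. idle = 10, scripted story beat = 50, combat = 90
    bool active;    // whether its trigger conditions currently hold
};

// Standard priority selection: the scripted behavior overrides idling but
// is itself interrupted the moment combat activates.
const Behavior* SelectBehavior(const std::vector<Behavior>& behaviors) {
    const Behavior* best = nullptr;
    for (const Behavior& b : behaviors) {
        if (!b.active) continue;
        if (!best || b.priority > best->priority) best = &b;
    }
    return best;
}
```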
Another negative impact of “code” scripting (again, even in the form of a visual state machine language like Unreal 3’s Kismet) is that it doesn’t conform well to the designer’s workflow. Adding and removing rules also means assigning rules to different types of objects. Instead of elaborating a sequence of events through script, think about how that sequence can be encapsulated as a small set of rules. Then let the designer’s workflow easily add and remove those small rule sets.
To go further, attempt to break down the requirements for scripted behavior and determine whether any aspects of them can be simulated. Systems must be designed modularly, with clean inputs and outputs between them. That way you can poke an individual system to provide an input that drives the desired behavior, without breaking how the other systems work.
In a series like The Sims, where characters try to meet goals by finding and using objects, a script may be as simple as assigning a goal that is only met by a specific object. Connected systems will work robustly with that script, because that system’s outputs are the same even though its inputs in this case were designer-driven.
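A rough sketch of the shape of that, under the simplifying assumption that objects advertise which goals they satisfy (the names are illustrative, not The Sims’ actual data):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Objects advertise the goals they can satisfy; characters hold a list of
// current goals.
struct WorldObject {
    std::string name;
    std::vector<std::string> satisfies;
};

struct Character {
    std::string name;
    std::vector<std::string> goals;  // simulated needs and scripted goals alike
};

// The simulation's normal search: the first object satisfying a current
// goal. It behaves identically whether the goal came from need decay or
// was injected by a designer's script.
const WorldObject* FindObjectForGoals(const Character& c,
                                      const std::vector<WorldObject>& world) {
    for (const std::string& goal : c.goals)
        for (const WorldObject& obj : world)
            if (std::find(obj.satisfies.begin(), obj.satisfies.end(), goal)
                    != obj.satisfies.end())
                return &obj;
    return nullptr;
}

// The "scripted moment": give a character a unique goal that only one
// object in the level advertises, and the existing systems route them there.
//   character.goals.push_back("answer_mysterious_phone_call");
```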
Next Time…
Hopefully this has given some insight into how to evolve your tools robustly. Next time, I’ll go into why the tutorial is one of the most important elements to build when entering production.