Flamme Rouge: A study of game variability

An essential part of the fun in gaming is not knowing what to expect. Suspense, surprise, and hard decisions are what we look for when playing. And for a game to stay fun, each play needs to be different from the last.

Here we will look at how games can make each play unique, and then at how this is achieved by one game in particular: Flamme Rouge, a game of professional cycling races.

There is nothing unique about Flamme Rouge's replayability, but since this post kind of grew out of the modular board game post I was working on, it is perfect for introducing the subject!

If you are new to Flamme Rouge, you can read my review and a run-down of how the game is played here: Analyzing Flamme Rouge.

What to expect in this article

A quick breakdown of today’s post content:

  1. An overview of sources of variation and unpredictability in games.
  2. A breakdown of game variations and unpredictability in Flamme Rouge.

In a follow-up article, I will add a thorough exploration of one particular source of game variation: the modular board of Flamme Rouge…

And because it’s fun to take light questions seriously, I’ll do my best to come up with some real numbers for how many unique game board configurations of Flamme Rouge are possible.

But for now, let’s take a look at game variations…

The mechanisms of game variations and unpredictability

There are several ways of adding unpredictability to a game, and not all of them involve rolling dice or drawing cards!

Luck-based mechanisms

Here are three ways to add unpredictability involving some form of luck:

  1. Using luck-based mechanics, such as drawing from a deck of cards or rolling a die.
  2. Hiding information from players (where players have to make decisions based on partial information).
  3. Having players make decisions simultaneously (think rock-paper-scissors).

All of these mechanics require players to make decisions based on partial information. The best you can do in such cases is to rely on an expectation of results, as you must whenever luck or, if you prefer, probability is involved.

Now, if the game involves player interaction, trying to predict the other players' choices can be quite fun, and can quickly transform the simplest decision into a torturous mind puzzle. (As illustrated in the famous 'Battle of Wits' scene from The Princess Bride.)

Apparently I'm not the first to think of this, as I just found out someone even made a board game out of it: Princess Bride: The Battle of Wits.

One last thing to note: when luck mechanisms are involved, the more often an event repeats, the less luck will affect who finally wins.

Let's take an example: tossing a fair coin.

The 1 event case

If everything depends on a single coin toss, you have a 50-50 chance of winning.

This is very luck dependent.

The 1000 events case

If there are 1000 coin tosses and nothing else to it, and the winner is determined simply by winning a majority of them, you still have a 50-50 chance of winning…

While this can extend the suspense a bit, knowing that there is nothing you can do to change the outcome usually just makes the whole process more tedious…

However, what happens when the coin is not fair?

The unfair twist

Let's say you somehow gain a small advantage. In a game, this could simply mean that you are better at planning, or that you found a good combo of moves to use. If you can increase your chances of winning by 1%, making it an unfair coin toss with odds of 51% against 49%, this can dramatically alter the outlook of the game.

One unfair coin toss

If only one coin toss determines the outcome of the game, your chances are almost unchanged: 51% to 49%. Not much better than 50-50, so the result will still be largely determined by luck…

A thousand unfair coin tosses

With 1000 coin tosses, however, you'll most likely end up winning about 510 of them and losing the remaining 490. If the game depends on winning a majority, your opponent's chances drop dramatically!

For big numbers like 1000, the odds of winning a specific number of tosses are narrowly centered on 510, the expected number of wins, with about a 2.5% chance of winning exactly 510 of them. Just have a look yourself at the probability of getting any specific number of wins, and see how clustered the probabilities are around the expected result:

For 1000 tosses, this 1% difference actually gives you a 75% chance of winning the majority of the tosses! For a 2% advantage, it goes up to a 90% chance of winning!

To calculate this I used something called the binomial distribution, a computation available in your favorite spreadsheet program.

For the math-curious

It is computed using the following information:

  • The probability of winning: P(win)
  • The probability of losing: P(losing) (which here is 1 - P(win))
  • The number of tosses: N
  • The exact number of wins: K

The chance of this happening is:

\frac{N!}{K! \times (N-K)!} \times P(win)^K \times P(losing)^{N-K}

In English: the number of possible ways a specific number of wins can occur, multiplied by the probability of any one such outcome actually happening.

For one win (K = 1), it works out like this: since the single win can occur on the first toss, or the second, all the way to the last, there are 1000 possible arrangements. So the chance is 1000, multiplied by the probability of winning once (0.51), multiplied by the probability of losing the other 999 tosses, (0.49)^{999}.

To know your chances of winning a majority of the tosses, you simply recompute this for all values of K where you win (501, 502, 503, etc.), and add the individual probabilities together to get the overall chance of winning!
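To make this concrete, here is a small Python sketch of the computation above, using the standard library's `math.comb` for the binomial coefficient. (One assumption of mine: I count a 500-500 tie as a "win" here, which is what appears to reproduce the ~75% figure quoted above.)

```python
from math import comb

def pmf(k, n, p):
    # N! / (K! * (N-K)!) * P(win)^K * P(losing)^(N-K)
    return comb(n, k) * p**k * (1 - p)**(n - k)

def p_at_least(k, n, p):
    # add the individual probabilities for K, K+1, ..., N
    return sum(pmf(i, n, p) for i in range(k, n + 1))

# ~2.5% chance of winning exactly 510 of 1000 tosses with a 51% coin
print(round(pmf(510, 1000, 0.51), 3))

# chance of winning at least half the tosses (ties counted as wins)
print(round(p_at_least(500, 1000, 0.51), 2))  # ~0.75 with a 51% coin
print(round(p_at_least(500, 1000, 0.52), 2))  # ~0.90 with a 52% coin
```

A spreadsheet's BINOM.DIST function does the same summation in one cell; the code above just makes the formula's pieces visible.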

What about a normal board game?

Obviously, few board games consist of tossing coins. But you could make the exact same argument, using slightly more complicated math, for any probability-based event in a board game.

In a typical board game, however, you won't be looking at a thousand events. But even with a more reasonable 100 events in one game, consider this:

  • A 51% chance of winning each individual event translates into a 62% chance of winning the game.
  • 52% individual event odds translate into a 69% chance of winning the game.
  • 60% odds bring you to a 98% chance of winning the game.
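As a back-of-the-envelope check of these numbers (my own sketch, not the original spreadsheet computation), the normal approximation to the binomial distribution gives the same game-level odds, where "winning the game" means winning at least half of the 100 events:

```python
from math import erf, sqrt

def p_win_game(p, n=100):
    # Normal approximation to Binomial(n, p), with continuity correction:
    # P(X >= n/2), i.e. winning at least half of the n events.
    mean = n * p
    sd = sqrt(n * p * (1 - p))
    z = (n / 2 - 0.5 - mean) / sd
    # standard normal tail probability P(Z >= z)
    return 0.5 * (1 - erf(z / sqrt(2)))

for p in (0.51, 0.52, 0.60):
    print(f"{p:.0%} per event -> {p_win_game(p):.0%} per game")
```

Running this prints roughly 62%, 69% and 98%, matching the list above.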

So the odds climb slowly, but repetition sure does take luck out of the equation!

The higher the number of events, the easier the results are to predict.

Or if you prefer:

The higher the number of events, the more the abilities of the players are taken into account, and the less the outcome depends on the traditional concept of luck.

But if any kind of luck is not your cup of tea, other ways to add variation from game to game can come from the game itself…

Non luck-based variations

Here are three ways to add non luck-based variations in a game:

  1. Having a very large number of possible game states (such as in Chess or Go).
  2. Allowing for rules or player power variability (such as offering different factions with different abilities to choose from).
  3. Having a variety of initial game setup… such as having a modular board game!
A line of chess pieces

Very large state spaces are a nice way to provide variability. In Chess and Go, the rules and the board are static, the initial setup is always the same, and there is no hidden information… but they provide so many choices at each step that it is possible no two identical games will ever be played! One caveat for the casual gamer is that player skill then becomes the deciding trait in those games. (One could argue that there is some luck involved in selecting moves whose consequences you cannot fully predict, but this is not a very strong argument!)

Providing different factions, or variations in player powers during a game, allows for added variability; how much depends on the number of available factions or variables to choose from. I usually quite enjoy multi-faction games, since each faction often provides a completely different game experience, but it comes with one possible drawback: balancing issues.

For dedicated players, factions offer the opportunity to find special cases that can break the game if they confer too great an advantage. This is, however, a whole topic in itself, and is generally game-specific…

Finally, modular boards allow for ever-changing games, without affecting the game's difficulty and without having to learn new rules! It is a very popular mechanism, used by well-known board games such as Catan, but also Forbidden Island, Betrayal at House on the Hill, Takenoko and many others. (Here is an exhaustive list: list of modular board games from BoardGameGeek.)

A game can use some or all of those mechanisms, from game variability to unpredictability, in order to make each play different.

This will evidently affect how different a game can be from one play to the next. But perhaps more significantly, those variability elements determine how important individual player skill is in deciding the winner.

Trying to determine the part luck plays and the part skill plays in a game is a non-trivial question, and one that inspired me to start this blog. But I'm far from having addressed it seriously, since it is quite the rabbit hole!

That being said…

I wanted to take a practical look at all of this. So let's turn to Flamme Rouge and see how the game implements variability through the lens of the above mechanisms.

Unpredictability and game variation in Flamme Rouge

In Flamme Rouge, each player controls two cyclists, each with its own deck of 15 cards. A card's value represents how many spaces the cyclist advances when it is played.

Here are the specific values for each deck:

For the Rouleur, 3 cards for each of the following values: 3, 4, 5, 6, 7.

For the Sprinter, 3 cards for each of the following values: 2, 3, 4, 5, 9.
During the game, each time a cyclist ends up at the front of a pack, its deck is augmented with a low-value card, aptly named a fatigue card.

Each turn, playing simultaneously, players draw four cards at random for one of their cyclists and secretly select one card to be played. They then repeat this process for the second cyclist, using the second deck of cards.

Herein lie a few sources of luck-based randomness for this game.

The card drawing

The 4 cards you draw for a cyclist determine what you can play this turn. This has the potential to limit you at crucial moments of the game.
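To put a rough number on this limitation, here is a quick sketch (my own calculation, and only exact on a turn where the full 15-card deck is available to draw from): the chance that at least one copy of a given value, which has 3 copies in the deck, shows up in a 4-card hand.

```python
from math import comb

def p_value_in_hand(copies=3, deck=15, hand=4):
    # Chance that at least one of `copies` identical cards appears in a
    # random `hand`-card draw from a `deck`-card deck:
    # 1 minus the chance that the hand avoids all copies.
    return 1 - comb(deck - copies, hand) / comb(deck, hand)

print(round(p_value_in_hand(), 3))  # ~0.637
```

So even with a fresh deck, any particular value you are counting on shows up only about two turns out of three, which is exactly the kind of crucial-moment limitation described above.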

The sequential card selection

Because you have to select the card for one cyclist BEFORE seeing the 4 possible cards for your second cyclist, you are limited in how effectively you can coordinate your cyclists.

To be explicit: this is what I called hidden information. When selecting the card for the first cyclist, you do not know what the 4 potential choices for your second cyclist will be.

Minor and major sources

Since the decks are relatively small and always have the same initial composition, the card drawing and sequential selection add some game variation, but they are not the major factor of unpredictability. Sure, they can often affect the outcome of a race, but a good player will play each deck so as to minimize those unknowns.

It is relatively easy to mentally keep track of what is left in your decks. And if you don't have too many fatigue cards, you will have a good idea of the cards you should be able to play on your next turn.

I would be willing to play this game without the drawing element. I believe that simply selecting the card to play each turn from the full deck would play similarly, and I think it would be just as fun!

Simultaneous selection

In Flamme Rouge, all the players select their cards at the same time, and wait for everyone to be ready before revealing their cards.

In my opinion, the biggest source of unpredictability in this game comes from this simultaneous selection of cards by the players. Decisions made by others are what really affect how the race unfolds, and it is not a neutral element of unpredictability.

If this were a random phenomenon, we could expect the simultaneous selection to affect all players equally. But there are certainly some people skills at play when trying to predict how others will choose their cards…

There are mainly 3 different mechanisms of interaction built into the game:

Slipstreaming is the act of using the cyclists in front of you to advance at no cost, provided you leave one space between you and them.

Blocking, accidental or planned: there is only space for 2 cyclists side by side. You need enough movement to get past them, or you'll be stuck behind them when advancing.

Fatigue: any cyclist that ends up at the front of a group, unable to slipstream behind another group, has to add a fatigue card to its deck, so players may try to stay behind to avoid accumulating too many fatigue cards.

During a race, thinking ahead and trying to minimize adverse effects on your cyclists, while improving your chances of benefiting from slipstreaming (or denying it to others), is the crux of the game. Something that can take a few plays to fully understand!

So simultaneous play is, I think, the most meaningful source of uncertainty; the others are simply mechanisms that interfere with the players' strategies. It is a fun mechanism because it places player interaction at the core of the game, and it comes with the added benefit of speeding up the game quite a bit compared to a more turn-based approach!

What about the number of events?

Note: Before proceeding with this specific explanation of statistical significance, I would like to note that I'm not an expert in statistics. It just seemed to make sense to talk about this a bit, given the coin toss discussion earlier in this post. So take the following with a grain of salt, and please let me know if you happen to be an expert with better insights.

As I explained earlier with the coin tossing example, repetition is what allows player skill to shine.

In Flamme Rouge, there are about 15 turns in a race, often a bit less, and each player draws each turn from 2 decks. So we could say about 30 events of drawing and selecting a card: 30 decisions that will interact to decide who will be the winner.

One important feature of my coin tossing example was that the events were independent. Each coin toss is in no way affected by earlier coin toss results, so the order of the wins does not matter.

This would not be the case in, say, Catan, Monopoly, or even Chess, where an early advantage tends to amplify through the game because of compounding effects (more resources early allow you to buy resource-producing assets earlier, or a strategic advantage enables further gains later in the game).

I would say that Flamme Rouge turns are roughly independent, since each turn you play, your cyclists have the opportunity to win or lose a few spaces relative to the card that was played. Since the sum of the card values in a deck is the same for everyone, and close to the total distance of a race, a "win" does not come from playing a higher-value card, but from benefiting from slipstreaming or from blocking others, which translates into increasing or diminishing the distance you can effectively travel with your deck.

The results accumulate towards the final win, but the effects are not amplified through time.

I'm stretching the application of sampling a bit here, but when random, independent events are sampled, 30 events are often enough to give an accurate measure of the underlying phenomenon, with a reasonable margin of error.

In polling, they do the same thing. You may be familiar with warnings such as: within 3.1 percentage points, 19 times out of 20. This quantifies the likelihood that the numbers just given are representative of the real opinion of the entire population. It still leaves a 5% chance of having bad luck when randomly selecting a sample and ending up with a non-representative group of people (assuming the random sampling is done properly)!

In our case, let's assume we are trying to measure who is the better player. My rudimentary knowledge tells me that 30 samples are quite sufficient for a measure to attain some significance. There is some randomness in the game, but if the players' skills are not evenly matched, this should be sufficient to tilt the balance in a statistically significant, and thus measurable, way toward the better player!

Unfortunately, this statement is not independent of my personal understanding of Flamme Rouge, so it's a bit like cheating! Having access to a few thousand games played on a Flamme Rouge app by hundreds of players would probably be a better approach to determining the skill factor in the game!

Given my limited situation, I'll have to explore this a bit before I can give you a more scientific answer! I hope to get some experimental data when I attempt to write a strategy guide involving some game simulation. But in the meantime, feel free to pitch in if you have good insights to share with me and others!

Let's examine the non luck-based sources of unpredictability

In Flamme Rouge, there is one element that adds huge variability from one game to the next without affecting any individual player's chance to win: the board itself!

The race track, with all its hills and slopes, is built from track pieces, making it a modular board game! The game comes with 6 example race tracks, but the potential number of different racetracks you can build is enormous!

Having a modular board adds a great deal of replayability without hiding any information from the players. Sure, some tracks will allow better-skilled players to outshine their peers more easily, but at least they do so transparently.

One could say that because of the player interactions and the hidden information, each race in Flamme Rouge has the potential to be different anyway. But always racing the same track could get quite repetitive. This is why I believe the idea of providing a modular race track is an excellent one.

I wanted to do an in-depth analysis of board modularity, and Flamme Rouge presented the perfect opportunity to do so. But how can we calculate how many board variations such a game offers?

Time to introduce one very powerful tool to add variability:

The combinatorial approach

When order matters, as in laying out a racetrack, combining a few items is an easy way to offer a large number of possibilities. With 2-3 items, maybe not so much, but as the number of items gets a bit bigger, the numbers grow really fast!

So fast, in fact, that the term used is not exponential growth but combinatorial explosion!

If you simply re-order items, the number of possible combinations grows as follows:

If you have 5 items and want to pick an ordering, you have 5 choices for the first piece. Then you have 4 pieces left to choose from, then 3, then 2, and finally one.

You can easily see the pattern here. To calculate the number of orderings, take the total number of items N and multiply it by N-1, then N-2, and so on, all the way down to one piece left.

  • 1 item = 1 possible ordering
  • 2 items = 2×1, or 2 possible orderings
  • 3 items = 3×2×1, or 6 possible orderings
  • 4 items = 4×3×2×1, or 24 possible orderings
  • 5 items = 5×4×3×2×1, or 120 possible orderings
  • 6 items = 720 possible orderings
  • 7 items = 5040 possible orderings
  • 8 items = 40 320
  • 9 items = 362 880
  • 10 items = 3 628 800
  • etc…

In mathematical terms, this is called a factorial, and it is usually written with an exclamation mark. So you can write 10! = 3 628 800.
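The list above is easy to verify; Python's standard library provides the factorial directly, and `itertools.permutations` can even enumerate the orderings for a small number of items:

```python
from math import factorial
from itertools import permutations

# the factorial counts the possible orderings
print(factorial(5))   # 120
print(factorial(10))  # 3628800

# for a handful of items, we can enumerate every ordering and count them
items = ["A", "B", "C", "D", "E"]
orderings = list(permutations(items))
print(len(orderings) == factorial(5))  # True
```

Enumerating is only practical for small counts, of course: listing all 21! orderings of the track pieces would never finish!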

For a programmer, the speed at which a problem increases in size as we add elements is called its complexity. Just for the fun of it, here is a chart comparing different complexities for problems dealing with 1 to 100 elements (the X-axis) and how fast their cost grows (the Y-axis).

Complexity Chart

Stack Overflow discussion of complexity

For the fun of it, let’s say you have a bunch of marbles…

The slowest possible growth of complexity for a problem is no growth at all, which is called constant complexity.

A good example of a problem of constant complexity would be determining the total weight of a group of marbles. Since you can weigh all the marbles at once, the complexity is independent of the number of marbles. Weighing 1 or 100 of them takes the same number of operations.

In complexity-growth speak (called Big-O notation in computer science), this is denoted by O(1), shown on the above graph by the flat line at the bottom.

A bit more complex

A problem showing linear growth, proportional to how many marbles you have, could be determining how many marbles of a certain type you have. For this, you have to look at all of them, but only once.

The graph shows this as O(n) complexity, in green near the bottom of the graph. From this we can expect that it takes about twice as long to process a lot of 20 marbles as a lot of 10.

From there, the complexity grows much faster...

If the marbles were of different colors and you wanted to order them according to their shade of blue, you would be facing O(n log n) or even O(n^2) complexity.

This time, the complexity actually depends on how you approach the problem… Knowing how to order things efficiently can save you a huge amount of time!

If your problem grows in complexity with the power of 2, the number of operations it takes is proportional to the square of the number of items you have. Sorting 20 marbles takes four times longer than sorting 10, and sorting 100 marbles takes 100 times longer than sorting 10. You quickly start looking at ways to improve your approach!

Finally, if you have a factorial-complexity problem, the leftmost curve in blue on the graph, you can see that it rises crazy fast, so usually you try as hard as you can to avoid this kind of problem. But that is exactly the case we have in our combinatorial race track situation: counting the number of distinct valid orderings!
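Just to make the chart's curves concrete, here is a small sketch (my own illustration, not a benchmark) of how many "operations" each growth class implies for a few input sizes:

```python
from math import log2, factorial

def growth(n):
    # raw operation counts for the growth classes from the chart
    return {
        "O(1)":       1,
        "O(n)":       n,
        "O(n log n)": round(n * log2(n)),
        "O(n^2)":     n * n,
        "O(n!)":      factorial(n),
    }

for n in (5, 10, 20):
    print(n, growth(n))
```

At n = 20, O(n^2) is a mere 400 operations while O(n!) is already past two quintillion, which is why factorial problems are avoided whenever possible.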

Let's not forget that in our case, big numbers are good, because they mean a very large number of ways to create different race tracks for the game! (It is just unfortunate for me that I'm the one trying to analyze it!)

So much for the theory, but a number of things affect how complexity emerges in a real analysis: using only a subset of the pieces, having some identical pieces, or constraints on where pieces can be placed are all factors that will affect the number of possible combinations, and may help us save some time!

Race track pieces type in Flamme Rouge

Finding the total number of possible race tracks

In the Flamme Rouge base game, we always use all the track pieces, but there are 21 of them… double-sided! That makes for quite a few possibilities.

In the box you'll find 6 initial race plans for players to try out. But you can arrange the pieces any way you want, giving you quite a few possible tracks! But how many?
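Without spoiling the full analysis, a naive upper bound is easy to sketch. Assuming (and these are big assumptions!) that the 21 double-sided pieces could be laid out in any order, with either side up, and ignoring identical pieces and placement constraints, you would get 21! × 2^21 arrangements, on the order of 10^26:

```python
from math import factorial

tiles = 21
# every ordering of the 21 pieces, times 2 side choices per piece
naive_upper_bound = factorial(tiles) * 2**tiles
print(f"{naive_upper_bound:.2e}")  # ~1.07e+26
```

The real count of buildable, distinct tracks is much smaller once the constraints are taken into account, which is exactly what makes the question interesting.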

This is the strange quest I decided to undertake, and you can read all about it in next week's post, hopefully learning a thing or two about games, math and computers: How many unique race tracks in Flamme Rouge? A combinatorial study of a modular board game.

In the meantime, have fun, thanks for reading, and let me know if you have any comments or things to add about this post!
