Candy Crush is at almost 10,000 levels. I know it makes a ton of money so they could afford to pay humans to design all of them, but it seems like it would be very difficult for a human to design a level and make sure that it is still “winnable” in a set number of moves. So are the levels automatically generated and then play tested by humans?
Making levels for these games is an exercise in economies of scale and consistency.
Levels in this genre are primarily authored by hand, but are then extensively categorized, classified, and automatically tested with heuristic, probabilistic, and decision-based methods such as Monte Carlo Tree Search. (King has published some of this research; you can Google the full papers.)
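In spirit, that kind of automated testing boils down to playing each level many times with a bot and recording outcomes. A minimal sketch of the idea (the function names and the toy level below are my own inventions; real pipelines use far smarter policies such as MCTS):

```python
import random

def estimate_win_rate(play_once, n_trials=1000, seed=0):
    """Estimate a level's win probability via repeated bot playthroughs.

    `play_once(rng) -> bool` is any bot policy (random, heuristic, MCTS)
    that plays the level to completion and reports win/loss.
    """
    rng = random.Random(seed)
    wins = sum(play_once(rng) for _ in range(n_trials))
    return wins / n_trials

# Toy stand-in for a level: win if 20 objectives clear within 25 moves,
# where each move clears 0-2 objectives at random.
def toy_level(rng):
    cleared = 0
    for _ in range(25):
        cleared += rng.randint(0, 2)
        if cleared >= 20:
            return True
    return False

print(f"estimated win rate: {estimate_win_rate(toy_level):.2f}")
```

A real classifier would run something like this per level and sort the whole batch by the resulting rate.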
Level design takes a certain skill, and new or amateur designers will often try to build “puzzle” levels or gimmick levels, or can’t make levels hard enough because they can’t yet play at the required difficulty themselves. So there’s a certain job security in being a good Match-3 designer 🙂 … and conversely, it might take a team a while to onboard a new designer onto “their” particular game.
The trick is not to design for a specific difficulty; the trick is to design many levels in one go, then order them by apparent difficulty using manual and automated testing, plus data coming from live players. Then the designers tweak them – the infamous Level 65 in the original Candy Crush Saga was nerfed this way – by changing the spawn probabilities of certain pieces, increasing the move allowance, or reducing objective counts.
Usually, for games with “endless” progression and several daily sessions, the level-to-level difficulty approximates a wavy or sawtooth-type curve, so players experience short runs of progressively harder levels.
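A sawtooth curve like that is easy to sketch as a sequence of target fail rates (every constant below is invented purely for illustration):

```python
def sawtooth_difficulty(n_levels, base=0.50, step=0.05, cycle=5, drift=0.004):
    """Target fail rate per level: short ramps of `cycle` levels that reset,
    plus a slow upward drift across the whole progression."""
    return [round(base + (i % cycle) * step + i * drift, 3)
            for i in range(n_levels)]

print(sawtooth_difficulty(10))
```

Each five-level ramp climbs and then resets, while the drift term keeps the long-run average creeping upward.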
At the various tile-matching puzzle companies I worked at, level difficulties are also nerfed at runtime when they occur as the first few levels of a play session. This helps players get into the flow and start their session off with a likely success. Another realization was that a player failing a level 20x doesn’t monetize any better than failing it maybe 5x, so many studios code their games so that after frequent failures the difficulty subtly (or sometimes not so subtly) drops. This also helps rein in problematic spending patterns when players keep failing despite buying boosters, a.k.a. cheats (though just as many companies go evil and lean into this instead).
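A toy model of that runtime nerfing could look like the following (every name, threshold, and number here is my own invention; real games tune these from live data):

```python
def session_win_boost(level_index_in_session, fails_on_level,
                      early_boost=0.10, fail_boost=0.05, fail_threshold=3,
                      max_boost=0.25):
    """Hidden win-chance bonus applied at runtime (all numbers invented).

    - The first few levels of a session are softened so the player gets
      into the flow with an early success.
    - After repeated failures, difficulty is dialed back, since failing a
      level 20x monetizes no better than failing it 5x.
    """
    boost = 0.0
    if level_index_in_session < 3:          # softer session opener
        boost += early_boost
    extra_fails = max(0, fails_on_level - fail_threshold)
    boost += extra_fails * fail_boost       # nerf after frequent failures
    return min(boost, max_boost)            # cap the total adjustment
```

The returned bonus would then bias something the player can’t directly see, like piece-spawn probabilities.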
Many of these games, like Homescapes, also don’t really get more difficult over time (otherwise they’d lose all their users!), but keep a relatively constant overall difficulty level indefinitely.
At 5th Planet, we considered automating our manual (still reasonably productive!) editing process. Along the way, an external company pitched us a level generator: you would paint in some template shapes and set up a pool of rule and goal choices, and an AI would fill in the remaining tiles (almost like wave function collapse), quickly playtest the level thousands of times, and rate the design and the spawned result levels accordingly. Those levels could then be deployed automatically along a desired difficulty curve, with real player stats overlaid on the AI’s predicted win/failure rates. It was quite impressive.
Sadly, our “Switcher” core at that studio had some unique (“cool”) features, like allowing new matches while the board resolves, plus some other advanced mechanics that the external editor couldn’t replicate in its AI testing. In hindsight, we should have just disabled those features in our switcher rules and waited for the company to implement the most important customized trickle-down rules we had in our game.
I gotta say, this AI editor tool was leaps and bounds better than anything King had while I was there. The AI that did the level classifications there was mostly heuristic/probabilistic, because an MCTS AI would easily look further ahead than even the best players, let alone the median player, and would also accumulate knowledge and bias from hidden factors like the ratios of piece colors in a level.
Speaking of AIs and bots for testing – my heuristic AI for a linker-type game (connect 4 or more tiles with a swipe) played so well that we put it up on the office screens and cheered it on for the cool, “confident” moves it made. It could only be heuristic (possibly some ML magic could make it better), because even the solution space for a single board state in linkers (e.g. Best Fiends) turns out to be NP-hard, whereas the possible number of moves in a switcher (e.g. Candy Crush) is much smaller (the maximum number of swaps to consider is board size × 4).
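To illustrate why the switcher move space is so small, here is a sketch that enumerates every candidate swap on a board. It lists each swap once (right and down neighbors only), which stays within the board-size × 4 bound quoted above, since that bound counts each swap from both of its ends:

```python
def candidate_swaps(width, height):
    """All swaps a switcher bot would need to consider: each cell paired
    with its right and down neighbor, so every swap appears exactly once.
    The count is at most width * height * 2."""
    swaps = []
    for y in range(height):
        for x in range(width):
            if x + 1 < width:
                swaps.append(((x, y), (x + 1, y)))  # horizontal swap
            if y + 1 < height:
                swaps.append(((x, y), (x, y + 1)))  # vertical swap
    return swaps

print(len(candidate_swaps(9, 9)))  # 144 candidate swaps on a 9x9 board
```

A bot then just tries each swap, checks whether it creates a match, and scores the result – a tiny search space compared to the exponential blowup of linker paths.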
I also once witnessed some research into using autoencoders to generate more levels (imagine using neural-network style transfer not for art, but for puzzle level design!). I don’t believe much came of it, but I think it’s super promising.
I worked on a cancelled Solitaire game for DreamWorks at one point, and our lead game designer and lead software engineer referenced Candy Crush heavily in the level design meetings. Using some simple core design rules, our one designer was able to make upwards of 70 levels in less than 6 months of production.
Essentially there were two separate but important aspects to how we approached this.
#1 Simple level builder tool
Our lead engineer made a pretty robust solitaire level simulator. It allowed the designer to place objects on a grid (to help design different layout shapes for the cards), to set the depth of a card and whether it sat below or above other cards, and it really worked as a simple template we repurposed over and over to great effect. I believe pre-selecting the cards in the deck was a factor here, as certain configurations made a level possible to complete or nearly impossible.
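A possible shape for such a grid/depth template, sketched from the description above (all names and the layering scheme are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class CardSlot:
    """One position in a hypothetical solitaire layout editor: a grid cell,
    a depth layer, and the slots that sit on top of this one."""
    col: int
    row: int
    depth: int = 0
    covered_by: list = field(default_factory=list)  # indices of covering slots

    def is_playable(self, removed):
        # A card becomes playable once every slot covering it is removed.
        return all(i in removed for i in self.covered_by)

# A tiny two-layer stack: slot 2 (depth 1) sits on top of slots 0 and 1.
layout = [CardSlot(0, 0, covered_by=[2]),
          CardSlot(1, 0, covered_by=[2]),
          CardSlot(0, 1, depth=1)]
print([s.is_playable(removed=set()) for s in layout])  # [False, False, True]
```

The editor would serialize a list like `layout` per level, and the pre-selected deck order would be stored alongside it.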
#2 Creating a Player AI that can play your levels and help make decisions based on its performance
The second and more interesting aspect was an AI our lead engineer architected that would play solitaire and could complete levels. I don’t know its nuances, but I assume it brute-force checked whether the current “pulled” card could be placed, as a player would; if it couldn’t place the card, it discarded it and brute-forced with the next pulled card, over and over, until the AI either won or lost.
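My reading of that bot, as a hedged sketch: I’m assuming tri-peaks-style rules (take an exposed card one rank away from the current card, wrapping K↔A), which may well differ from the actual game:

```python
def greedy_solitaire(board, deck):
    """Greedy bot: take any board card one rank away from the current card
    (ranks 0-12, wrapping K<->A); otherwise discard and draw the next card.
    Wins if the board empties, loses when the deck runs out first."""
    board = list(board)
    current = deck[0]          # first card flipped from the deck
    i = 1
    while board:
        match = next((b for b in board if (b - current) % 13 in (1, 12)), None)
        if match is not None:
            board.remove(match)
            current = match    # the matched card becomes the new top
        elif i < len(deck):
            current = deck[i]  # no play: discard, draw the next card
            i += 1
        else:
            return False       # deck exhausted with cards left on the board
    return True                # board cleared

# Rank 1 picks up the 2, which then picks up the 3, clearing the board.
print(greedy_solitaire(board=[2, 3], deck=[1]))  # True
```

A real version would also track which cards are still covered, but the win/loss loop is the part that matters for difficulty testing.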
The designer would have a difficulty curve in mind, run this AI maybe 10–20 times on a specific level, and, based on the AI’s success/failure rate, determine whether the level was appropriately easy or hard as a base concept; then he’d usually do a little more design work to make sure the level felt right.
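That workflow is easy to sketch: run the bot a handful of times and bucket the win rate (the bucket thresholds below are invented for illustration):

```python
import random

def classify_difficulty(play_level, runs=20, seed=0):
    """Run the bot `runs` times on one level and bucket the win rate into
    a rough difficulty label (thresholds are invented).

    `play_level(rng) -> bool` plays the level once and reports win/loss.
    """
    rng = random.Random(seed)
    wins = sum(play_level(rng) for _ in range(runs))
    rate = wins / runs
    if rate >= 0.8:
        return "easy", rate
    if rate >= 0.4:
        return "medium", rate
    return "hard", rate

print(classify_difficulty(lambda rng: rng.random() < 0.9))
```

With only 10–20 runs the estimate is noisy, which is presumably why the designer still did a manual pass to confirm the level “felt right.”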
Not sure if this is helpful, but we talked about Candy Crush A LOT on that project, and this question stirred up some interesting old memories of how we tried to approach a similar design.
I made a Candy Crush–inspired game once. I used a combination of hand-made levels and a simple procedural generator (I had layers, and I could place things above the board that would later drop onto it). I tested every level by hand to determine how many moves it requires; for some reason it turned out to be easy to approximate just by playing a few times – maybe because my game had few boosters and so was more predictable than Candy Crush. If you want to check it out, look for Magic Potion on Google Play; it is not commercial at this point and doesn’t make me any money, so I hope mentioning it doesn’t break any rules here.
The main point, though, is that I didn’t design levels for a specific number of moves – I designed a level, tested by hand how many moves it usually requires, and set that as the move limit. Players could bend this limit a bit using coins or boosters (no in-app payments required).
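That calibration step can be sketched in a couple of lines (the slack value is my own invention; the original author set the limit by feel):

```python
def move_allowance(playtest_move_counts, slack=2):
    """Set a level's move limit from a handful of hand playtests: take the
    worst (highest) observed move count and add a small slack, mirroring
    the 'play it a few times, then set the number' approach."""
    return max(playtest_move_counts) + slack

print(move_allowance([18, 21, 19]))  # 23
```

Coins and boosters then effectively raise this limit a little at play time.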
Level Design Saga: Creating Levels for Casual Games
Read more at Reddit.