Blog Entry: Game release comparison re-thought


One of my biggest mistakes as a newbie contributor at MobyGames was adding the Nintendo DS release of Jewel Match as a new platform for the Windows release instead of as a new game entry. I was told to change that because the two versions have noticeable differences. Quoting from my own description at MobyGames:

Due to the limited screen size of the handheld, the grid size of the levels had to be cut down from 14x14 to 12x12 tiles, thus forcing a complete redesign of all the levels. While at it the developers added some DS-only gameplay options like the new magical storm where the player can blow into the microphone to reshuffle the jewels.

Despite these differences, it seemed illogical to my younger self to separate the two versions and thereby clutter up the game list. I blamed it on the technical limitations MobyGames has in documenting differences between platforms and moved on. But the core issue remained in my head, and it surfaced again when I was thinking about the Oregami way.

This time, the game at hand was Tetris, a Russian classic that has arguably been ported to every gaming environment in existence and thus carries a complicated history of ports, licensing, re-ports, and clones. How to press all these releases into a sane adaptation of our Game→Release_Group→Release model became known at Oregami as "The Tetris Question", because solving it would probably solve the different game issue for all games.

One extreme of fitting Tetris into our data model would be a single game entry with countless release groups under it; the other extreme would be many, many game entries, one for every sub-family of releases. Neither solution is desirable. With solution one, we could arguably drop genre classification altogether, because every sub-genre (chess, soccer, etc.) would effectively become its own single game entry with countless release groups. With solution two, we would probably end up with a mess similar to the one at MobyGames. We need to find some middle ground.

Early warning: this blog post won't answer the Tetris Question directly. Instead, I will try to outline some basics of different game criteria for Oregami and introduce you to a draft comparison concept for a less opinionated approach to comparing two game releases.

What others do

Many decades of video game documentation have seen much fighting over the question of when a new release of a game is so different from former releases that it should no longer be grouped together with its predecessors. As a result, every game database out there has developed more or less vague criteria, written or unwritten, that are applied to daily business on a (mostly undocumented) case-by-case basis. Let's take a look at some examples:


Different game: When either game play, perspective, and/or storyline are different than the existing entry. Graphics/Sound that are merely improved are usually not a different game. This is of course evaluated on a case-by-case basis in comparison with other versions. Particular cases to be aware of are handheld versions of console or PC games which usually are different. In general, licensed titles (like those based on movies) often are different for different platforms, especially those released in the late 80s/early 90s. If you are not sure that a certain version of a game is the same as another, don't assume it is - always try to confirm it by playing it yourself or reading reviews.

This advice for MobyGames contributors basically means that a new game entry is due when the genre classification changes.

Leaving a true change of game play aside, which is the prime reason for a change of genre, the perspective and some special themes representing sub-genres are also part of the classification there and are thus also tied to the game entry. This means, for example, that one cannot contribute a change of perspective without creating a new game entry. A change of storyline, on the other hand, is a mixed bag that depends on the genre: while a new game entry may be due for an RPG or adventure, an alternate story in an action game probably won't make the game feel much different.

Oh, and I particularly like the either...and condition in that first sentence. (smile)


A game should only be split when it is clearly two different games. And by this we mean that the gameplay or genre are substantially different from each other, not that the graphics have been changed slightly or that some features have been stripped out. Separating a game entry into two when a significantly different version has been released has always been one of the gray areas of VGG. We developed a few rules, listed below, to help you determine when something should be split or not. Most of the time however, it will be individual calls.

As I said above, a true change of game play naturally involves a change of genre, so VGG's advice that game play OR genre be substantially different is a little pointless. And even if game play AND genre are exactly the same, the different game issue can still arise.

So, just from reading these two examples, it becomes quite clear that we need to rethink all this for Oregami and come up with more objective, documented decision-making based on clearer criteria. Of course, we must admit that this is anything but easy.

The third layer

But before we dive deeper into the glory of different game criteria, let's take a slightly deeper look at our data model for games. As explained above, the basic hierarchy in our database will be Game→Release_Group→Release, which means three layers of data, two of which are rather straightforward to explain:

The game (G) is the top data layer, i.e. the umbrella database entry holding all the information about a G together. In our example, this would be "Tetris".

The release (R) is a version of a G that you can own. It doesn't matter here whether this version was bought in a box, came as a paid download, is a freeware floppy disc, or was acquired as part of a compilation. For our example, this would mean something like "German release of Tetris for Amiga" or "English budget release of Tetris for C64".

But then there is the third layer of data at Oregami: the release group (RG). On this level, similar R's are bundled together to prevent data redundancy. For example, if we stuck to just the two data layers G and R, we would need to connect a review of Tetris' Amiga version - not knowing exactly which release was tested - to every Amiga R of Tetris in our database, because its content may apply to each of them. But if we bundle all those R's into one RG - for our example, "All Amiga releases of Tetris" - we can simply connect this review to the RG and be done with it.

And yes, at other sites this third data layer is called the platform - and indeed every platform also gets its own RG at Oregami - but our concept goes further in that there can be more than one RG per platform. If two groups of R's on the same platform somehow need to be distinguished from one another, we will create an RG for each of them. To give a simple example, the full version of a game would get its own RG, and the demo version on the same platform would get its own RG, too.

As a result, when a new R is contributed we not only need to decide whether it needs a new G entry in our database because it is so different from former R's; if the answer to this first question is no, we also need to check whether this release needs a new RG under the existing G entry. We need to bundle these two decisions together to find the right spot for a new R in our data world.
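As a rough illustration of this three-layer hierarchy, here is a minimal sketch in Python. The class and field names are my own invention for this post, not Oregami's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Release:
    """R: a concrete version of a game that you can own."""
    title: str
    platform: str
    region: str

@dataclass
class ReleaseGroup:
    """RG: bundles similar releases; reviews attach here, not to each R."""
    name: str
    platform: str
    releases: list = field(default_factory=list)
    reviews: list = field(default_factory=list)

@dataclass
class Game:
    """G: the umbrella entry holding everything together."""
    title: str
    release_groups: list = field(default_factory=list)

# Building the Tetris example from the text:
tetris = Game("Tetris")
amiga_rg = ReleaseGroup("All Amiga releases of Tetris", "Amiga")
amiga_rg.releases.append(Release("Tetris", "Amiga", "Germany"))
# The review is connected once, to the RG, instead of to every Amiga R:
amiga_rg.reviews.append("Amiga magazine review of Tetris")
tetris.release_groups.append(amiga_rg)
```

The point of the sketch is the placement of `reviews` on the RG level: adding another Amiga R later would not require touching the review at all.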

Criteria back and forth

But before we can even think about making a decision, we need to identify criteria on which to base our outcome. As the interested reader may have concluded above, there are two criteria which are so important for Oregami's data model that meeting them requires a certain outcome for a game release:

  1. If an R features a different (core) genre classification, it requires a new G entry.
  2. If an R is on a new gaming environment (platform), it requires (at least) a new RG entry.

With these two out of the way, let's take a look at the criteria that matter for our decision making. Oregami is mainly a database about games, so obviously game play is the core thing that matters for our decisions, and things that make no difference to game play don't matter. Thus, we are looking for criteria that "affect game play" to base our decisions on.

Each of the following six criteria affects game play in one way or another, in our humble opinion. Furthermore, we defined a weak case (W) and a strong case (S) for every criterion. The why will be explained below.

  1. Different visuals

    W: The graphics were updated / improved.
    S: The graphics were exchanged / remade / ported to a new engine, or video sequences were added.

  2. Different sound

    W: The sound effects or music were enhanced / improved.
    S: The sound effects or music were exchanged / remade / newly recorded, or digital speech was added.

  3. Different story line

    W: The story line was enhanced / improved at certain points, but stayed the same at its core.
    S: An alternate story line was implemented.

  4. Different user interface

    W: The user interface was improved / enhanced.
    S: The user interface was exchanged / reinvented / remade.

  5. Different game play elements

    W: Some minor game play elements were changed.
    S: At least one significant game play element was added / changed, but without a change in the genre classification.

  6. Different game content

    W: Some minor game content was added / changed / removed.
    S: Some major game content was added / changed / removed.

Having identified the criteria on which we shall base our decisions, let's now take a look at some criteria which shall not matter for any of the decisions to be made, either because they have found their place elsewhere in our data model or because they are simply irrelevant to game play.

  1. Time of release

    It does not matter if a new R of a game was released 6 months after the original, or 6 years after it.

  2. License

    It does not matter if the new R was released under the same official license as its predecessor or not. This information doesn't affect game play and will be covered elsewhere.

  3. Company setup

    A change of the publishing / developing company may show in different packaging, a missing printed manual, or whatever external stuff you can think of. But for game play, it's irrelevant.

  4. Special releases

    A fancy new box, a lavish manual, additional goodies, or other extras won't trigger a new RG or even a new G entry in Oregami's database.

  5. Localization

    A release adapted to a different country / language doesn't matter, as this information is documented elsewhere in our data model. A notable exception would be censorship that affects game play.

  6. Patches / Addons

    Only playable R's can be compared with one another. Patches or add-ons released later will get their own G entry in our database, and are thus irrelevant to the original R they add to.

With six relevant and six irrelevant criteria named, we still face the problem that some of the relevant criteria are more important for specific genres than others. For example, if a new R of an adventure game features an alternate story line, this alone might be enough for a new G entry. But if a new R of a shooter does so, would that even be enough for a new RG?

To solve this problem, we should define one or two primary criteria (P) for every core genre in our database. If a new R meets the weak or strong case of one of its genre's primary criteria, the chances go up that it requires a new G entry or a new RG.
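To make the idea concrete, such a primary-criteria assignment could be as simple as a lookup table. The genres and the criteria assigned to them below are purely illustrative assumptions for this sketch, not Oregami's final definitions:

```python
# Hypothetical primary-criteria table; the genre names and the one or two
# criteria picked per genre are illustrative assumptions only.
PRIMARY_CRITERIA = {
    "Adventure": {"story line", "game content"},
    "RPG": {"story line", "game play elements"},
    "Shooter": {"game play elements", "visuals"},
    "Match 3": {"game play elements", "game content"},
}

def is_primary(criterion: str, genre: str) -> bool:
    """Check whether a criterion counts as primary for a given genre."""
    return criterion in PRIMARY_CRITERIA.get(genre, set())

print(is_primary("story line", "Adventure"))  # True
print(is_primary("story line", "Shooter"))    # False
```

An alternate story line would then weigh heavily for an adventure game but not for a shooter, which is exactly the asymmetry described above.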

Decisions, decisions - and when to make them

Let's assume we have compared a new R to one of its predecessors and found the answers to all six criteria decisions above - what then?

As I wrote above, we will need to make two decisions based on the outcome: first decide whether the new R deserves a new G entry, and if not, decide whether a new RG under the existing G entry is needed. Both decisions shall be based on the W or S criteria the release has met, and on the primary criteria for its genre.

For the new G decision, I would suggest that one of the following criteria combinations must have been met:

  1. One primary criterion with S
  2. Two primary criteria with W
  3. One primary criterion with W and one non-primary criterion with S
  4. Two non-primary criteria with S.

For the new RG decision, I would suggest that one of the following criteria combinations must have been met:

  1. One primary criterion with W
  2. One non-primary criterion with S
  3. Two non-primary criteria with W.
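Taken together, the two suggested lists can be read as a small decision procedure. The following sketch is my own hypothetical encoding (criteria the new release meets are marked "W" or "S"); it implements the combinations exactly as listed above:

```python
from collections import Counter

def needs_new_game(met, primary):
    """met: dict mapping a met criterion to 'W' or 'S'.
    primary: set of the genre's primary criteria."""
    p = Counter(v for c, v in met.items() if c in primary)      # primary hits
    n = Counter(v for c, v in met.items() if c not in primary)  # non-primary hits
    return (p["S"] >= 1                       # 1. one primary criterion with S
            or p["W"] >= 2                    # 2. two primary criteria with W
            or (p["W"] >= 1 and n["S"] >= 1)  # 3. one primary W + one non-primary S
            or n["S"] >= 2)                   # 4. two non-primary criteria with S

def needs_new_release_group(met, primary):
    p = Counter(v for c, v in met.items() if c in primary)
    n = Counter(v for c, v in met.items() if c not in primary)
    return (p["W"] >= 1     # 1. one primary criterion with W
            or n["S"] >= 1  # 2. one non-primary criterion with S
            or n["W"] >= 2) # 3. two non-primary criteria with W

# Example: an adventure game where "story line" is assumed primary,
# and the new release implements an alternate story line (S):
met = {"story line": "S", "visuals": "W"}
print(needs_new_game(met, primary={"story line"}))  # True (combination 1)
```

In practice, the RG question would only be asked once the G question has been answered with no.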

But comparing two R's takes quite some time, so we can't possibly require that for every new R that a contributor wants to put into Oregami's database, can we?

Of course we can't, but there will be certain occasions where we might do it. Let's outline some:

a) We know that two or more main versions of a game exist, and we want to settle their treatment inside the database once and for all. An example here would be The Witcher and its Enhanced Edition. We would compare the original R's of both main versions to decide whether different RG's are enough to separate them or whether separate G entries are needed.

b) A game features a confusing maze of R's on different platforms and by different companies, with no sufficient information to be found elsewhere, and we feel the need to finally bite the bullet and clearly document and preserve its R history. This might apply to Tetris.

c) A contributor asks for it, or claims the data for a game to be erroneous.

d) We introduce the concept of finalized database entries to Oregami. A "finalized" G entry might mean that every R has been contributed with its main data, and that the distribution of all the R's among the different RG's has been documented by comparison. Changes to such entries after their finalization will be subject to special supervision.

Looking at some examples

Let's use three cases from video gaming release history to test the theory above. I chose three games from different genres, representing three standard cases where a comparison might be due:

  1. The Witcher vs. The Witcher Enhanced Edition (RPG; enhanced re-release)
  2. Doki Doki Panic vs. Super Mario Bros. 2 (Jump and Run; regionally different release)
  3. Jewel Match for Windows vs. Jewel Match for Nintendo DS (Match 3 game, port to different platform)

These examples have been researched for this blog post. Within Oregami's final comparison system, the outcome might be different.

The Witcher Enhanced Edition

After the initial release of The Witcher, CD Projekt took another year to iron out the game's problems and give players a flawless experience. The result of this work was released as The Witcher Enhanced Edition, which stands to this day as one of the best role-playing games ever made.

The pattern of re-selling games in debugged / improved versions is something we've seen again and again over the years. Other examples are the Extended and Reloaded versions of The Fall: Last Days of Gaia, or the re-releases of Dungeon Lords.

When reading the GameSpot review of the Enhanced Edition, we can identify the main differences from the initial version:

  • English dialogue has been rewritten and expanded upon
  • Engine performance has been dramatically improved
  • Color palette, character models, animations have been enhanced
  • 2 new stand-alone adventures have been added

Super Mario Bros. 2

The release history of Super Mario Bros. 2 is rather complicated, with the gory details described in this piece. Basically, the graphics of the Japanese NES game Dokidoki Panic were replaced with Super Mario content, the game play got some tweaks to make it feel more Mario-like, and the result was released outside Japan as Super Mario Bros. 2.

Later on, Japanese gamers also got an original Super Mario Bros. 2, which was basically a set of newly designed, much harder levels using the same game engine. And to add another lesson in how to make money, the above-mentioned US release of SMB2 was re-released in Japan as Super Mario USA, and the above-mentioned Japanese SMB2 was re-released in the US as Super Mario Bros.: The Lost Levels.

How to press this release history into our data model might be the subject of a future blog post; for now, we want to focus on comparing the original Dokidoki Panic with the US release of Super Mario Bros. 2. Besides the graphics replacement, the following changes were made (quoting the piece):

Tanabe's team made many improvements to the original for its American debut, adding more enemy characters, throwing in some visual nods to the Mario games and greatly enhancing the animation and sound effects. Because one of Mario's most notable features at the time was his ability to grow and shrink when he ate magic mushrooms, this was added to the game.

Jewel Match

I already mentioned this case in the introduction to this blog post. We want to compare the PC release of Jewel Match with its Nintendo DS port. Again quoting from my own description of the DS port:

Due to the limited screen size of the handheld, the grid size of the levels had to be cut down from 14x14 to 12x12 tiles, thus forcing a complete redesign of all the levels. While at it the developers added some DS-only gameplay options like the new magical storm where the player can blow into the microphone to reshuffle the jewels.


Having outlined the basics of a possible different game concept for Oregami, our next step is to actually compare some releases of known difficult cases, to see how such a system would behave in the real world.

While doing this, the concept needs much more thought and research. Are the six basic criteria relevant for all games out there? What are the primary criteria for the main genres? Are the criteria combinations given above suitable for finding the "middle ground" between one game entry per genre and too many game entries? What happens if a successor to a game features the exact same game play as its predecessor?

Many interesting questions will need to be answered along the way, and many interesting cases will need to be evaluated. We will see if the concept outlined above survives all this. And as always, if you wanna dive in, just let us know.