Sega Saturn vs PlayStation 1 vs N64

Forget Xbox One & PS4 – Fifth Gen Was The Best Gen

Tim White gives three reasons the fifth generation was better than the current gaming generation.

I know, I know. I get a lot of flak for this claim, but before you toss me into the dustbin of history, hear me out. Allow me to clarify a few things right away: I don’t think the modern gaming industry is (entirely) a dump, nor do I think everything was rosy and perfect twenty years ago. I believe in changing with the times, but I also believe some standards are objective and that newer is not, by that fact alone, always better.

I’m thirty-one. To save you the math, that means I was nine when the original PlayStation – still my favorite console – was released. Prior to that, I had regular access to a Sega Genesis, a SNES, a regular NES, and even an Atari 2600 someone found in my grandmother’s attic. I wasn’t around for the Odyssey or ColecoVision, but back then, games were little more than electronic versions of tic-tac-toe anyway (not to discount the amazing advances in computer science that made them possible). To some extent, the console that accompanied me through puberty is inevitably going to hold a special place in my heart, but I nonetheless think there are some compelling arguments to be made in support of the late ’90s being the true Golden Age of Gaming.

Controversial point one: when it comes to gaming, online connectivity has done more harm than good overall

I love co-op games, particularly in the survival horror, stealth, and RPG genres. I generally loathe PvP; it’s a legitimate hobby, just not for me. I’m certainly not out to claim that the internet is a universally bad thing for gaming. It’s brought tremendous positive value to the industry and to individual gamers – it’s just also dragged quite a bit of bad stuff in along with it.

I’ll first reach for the low-hanging fruit: the ability to patch a game over the internet after its release is a double-edged sword if ever there was one. I can’t even begin to guess how many games I’ve purchased on or shortly after launch day, only to find myself stuck with a broken mess that can’t go five minutes without freezing. It’s great that this kind of thing can be fixed, but the mere existence of the fix has, for many developers and publishers, greatly weakened the expectation of releasing a polished, professional product in the first place. I firmly believe that back when consoles were offline, the inability to fix significant errors in the code powerfully incentivized developers to get it right the first time, to lead with their absolute best.

Profit margins on video games have always been narrow, and they have only narrowed further as rising development costs chase diminishing returns in graphical fidelity. A single flagship title tanking can spell the end of a company – or at least it could, back before game makers had the option to retroactively turn a garbage pile into something marginally passable weeks or months after release. DLC packs shoveled out on a regular schedule sometimes create similar problems for similar reasons, particularly when the quality of the extra content fails to justify its price point or when it has clearly been carved out of the base game arbitrarily.

However, there is a subtler and more sinister side to being globally connected to other gamers. Groupthink is extraordinarily dangerous, and it is arguably more prevalent today than ever before; the pressure to agree with or fit in with the crowd has real power to overwhelm an individual’s better judgment, especially among young gamers who have been intellectually crippled by failing public schools that don’t teach them how to think. To put it another way: when the people around you are gigantic assholes, it takes real integrity not to act just like them, and multiplayer gaming has certainly proven that there is no shortage of gigantic assholes out there. Add to this the fact that many games released nowadays are more like virtual casinos than video games, and you have a real recipe for nastiness.

I’m certainly not implying that technology has some metaphysical power to turn good people into jerks. You are, of course, entirely responsible for your own thoughts and actions, no matter how many other people are trying to push you in a certain direction. And while ultra-competitive PvP grind-fests designed to maximize revenue at the expense of artistic value (or even just entertainment) are not responsible for the behavior of the people who choose to play them, the developers who create these kinds of titles certainly aren’t helping by explicitly incentivizing that behavior. Among modern gamer communities, there is little to be found in the way of mutually respectful competition that doesn’t devolve into insults and death threats, and that’s just sad. In at least one respect, gaming with others was a healthier and more enjoyable activity when your opponent was three feet away; people are a lot less likely to be senselessly mean when they aren’t anonymous.

Controversial point two: the idea that games can be art is now being taken less seriously

I hold what seems to be an oddly uncommon position on this issue, in that I believe that some video games (and some books, and some films) are art. That is to say, it’s the individual work, not the medium, that determines the artistic status or merit of a given piece. A video game doesn’t have to be art to be good and to have value; some of my favorite games are unequivocally not art. On the other hand, some of my favorite works of art are video games, and they have added tremendous spiritual value to my life. I think it’s a little silly that many (if not most) non-gamers still hold the view that games can’t be art, but I understand why that view is prevalent, especially in the age of digital gambling and microtransactions.

Throughout the ’90s, graphics were hilariously bad by modern standards, and hardware capabilities were not all that impressive. In the original Silent Hill, the persistent fog in outdoor areas served primarily to conceal the PlayStation’s poor draw distance, but it made the place extra spooky as a bonus. In a very real sense, less powerful technology was a great thing for fifth- and sixth-generation games; unable to rely on photo-realistic graphics and lifelike physics to draw players in, developers had little choice but to focus on story, characters, and gameplay. Game makers were forced to get creative in order to evoke a particular atmosphere – as with the fog in Silent Hill – and, at least in some genres, to pen narratives with real literary merit.

In my experience, most people respond to great art even if they never consciously think about art or the criteria by which they judge it. That is, after all, the entire essence of the artist’s job – to condense complex abstract themes into tangible, concrete experiences directly perceivable by our physical senses and interpretable by our minds (a sculpture, a book, a video game). For example, among those who have played it, I know of almost nobody who dislikes Final Fantasy X. It is, in fact, my favorite game to this day. Its basic theme is one woefully underrepresented in the culture today: your life is yours, and you have a right to live it, no matter what anyone else says. You don’t need a doctoral degree in philosophy to get the message and be affected by it, because the artists (the developers) did a phenomenal job of communicating it through the words and actions of the characters as their decisions move the plot forward.

I think the overall trend in games today is regrettably drifting toward laziness on both sides of the equation: players and developers alike. Fewer gamers place a high value on a great story and interesting characters, content instead to fiddle with endless, convoluted leveling mechanics or slight variations on the same PvP battles over and over. Modern technology has made it relatively easy for nothing to appear to be something – that is, for a threadbare amount of content to be stretched into hundreds of hours of gameplay that looks and sounds fantastic. Developers of yore, technologically unable to replace substance with flash, often produced deeper and more interesting games than what we see today. Personally, I’d rather buy a game that lasts twelve hours but leaves a lasting impression than one I can forget entirely after playing it for a hundred hours.

Controversial point three: versatility is not always a good thing

In 1995, the PlayStation could do exactly two things: play PlayStation games and play music CDs. It still does both of those things very well; my original PS1 is going strong, as are those of everyone I know who still owns one. Contrast the lasting appeal of this family of consoles with that of the consoles that came after them (Xbox 360, PS3, Wii). Obviously personal opinions will vary, but I think the ’90s gave us games of far greater overall quality than the 2000s did. One reason I believe that is precisely the relative simplicity of fifth-generation consoles. The more distinct functions and capabilities you cram into the same piece of hardware, the more complex, expensive, and failure-prone it becomes. It’s hard to pin down exactly how much Microsoft and Sony spend on repairs and refunds these days, but I’d confidently bet that it’s orders of magnitude more than it was twenty years ago.

But hardware complexity is only one element to consider. On the software side of things, it’s now expected that one box should be able to run your entire living room. Your modern console of choice probably streams Netflix, manages your Spotify playlist, browses Facebook, stores your credit card number, and might even log you in with facial or voice recognition. This expanded functionality means the console has to download gigantic updates every few days, and with a longer chain of mutual dependencies, one broken app or program can cripple many others. Even game developers who don’t produce those third-party apps themselves have to account for them and integrate with them. Just about any game coming out in 2018 supports screenshots and live streaming, sharing to social media, in-game purchases, and many other complex features. All of these things take massive amounts of time and money to develop and implement – time and money that is effectively removed from the budget for the game itself. The more resources developers have to allocate to creating and maintaining support for all that other stuff, the fewer they have left to make the actual video game.

Even today, twenty-three years after I bought it, I can put a disc in my PlayStation and it just loads and works. No updates, no 2FA, no locked accounts, no being shut out of a single-player game because the internet is down. In my view, there’s a lot to be said for simplicity, for doing one task and doing it exceptionally well.

The Verdict

What do you think? Am I just a grumpy old man who hates the future? Or is it possible that, as cool as some modern games are, the old ones are still even cooler?

I fully admit that there is a lot about 2018 that is better than 1998, in terms of what video games can do. And yet, in some ways, the emphasis on the product – the video game – has been lost. I fear that games are moving away from being games and becoming something more akin to “multimedia experiences.” Only time will tell whether the net result turns out to be better or worse, but if you still have an original Xbox or PS2 buried in your closet somewhere, consider bringing it out and blowing the dust off. You may find more to love about the old days than you remember.