perfect silence

He's actually just pumped about a new Potbelly opening nearby.

Horizons are broadening. Nintendo’s newest and most original IP in a long time, Splatoon, has been looking quite impressive for something that failed to get Miyamoto’s attention the first time around. Impressions have been almost universally enthusiastic since its first public showing, and the gaming world can hardly contain itself long enough to wait for its release just over a month from now. It could well be instrumental in turning around the Wii U’s fortunes.

It’s not quite perfect, though. Splatoon is missing a feature that is key to games these days, something many people can’t imagine a game shipping without: voice chat. How can you play an online, team-based third-person shooter with no way to communicate live and directly with the rest of your team? The simple answer is that the internet sucks. But it’s more complicated than that.
Actually, it’s not. The internet is not known as a haven for rational, well-articulated debate, and the gaming community even less so. Playing online games is often an exercise in self-restraint, and more often than not involves the use of ignore lists and offline modes to maintain a buffer against the tide of verbal excrement that constantly flows in. In almost any given online match, the majority of participants either have no microphone active or have incoming voice chat muted–and often, if it’s one of those, it’s both. For the most part, people just do not want to interact verbally online.
And it’s not because gamers are antisocial. It’s because gamers are assholes to each other. If they’re not raging over their most recent death streak, they’re lording it over everyone else and putting all their extra character points into Hubris. Things are spoken (and screamed) over game chat channels that would otherwise be reserved for the most bigoted of rallies. And when people try to break the cycle, it only gets worse. Some games don’t have voice chat at all, and probably wouldn’t benefit from it. For many, talking to strangers on the internet is like opening a gate in a dam holding back an entire lake of fecal waste: you can do it, and you can probably even avoid getting covered in shit, but chances are you’ll be looking for a change of clothes regardless.
Nintendo is hoping to avoid the issue entirely by eschewing voice chat. In the past, they have replaced this with a simplistic interface meant to encourage positive interaction and keep focus on the game. I didn’t mind this, because I would rather play (and rage) in my own solitude than subject others to the siren call of my frustration, or subject myself to others’ frustration. I extensively played Black Ops 2 online via my Wii U and never once used voice chat. And I didn’t miss it.
Don’t misunderstand–this is also a result of some of the more inane limitations of the Wii U. Unlike its competitors, the system has no support whatsoever for a wireless headset. If one really wants to engage in this sort of thing, there are only two possible methods: use one of the elite few USB headsets that work, or use a 3.5mm headset wired directly into…the gamepad. And only the gamepad, since the Wii U Pro Controller has no 3.5mm connector. It’s not nearly as asinine as some past solutions, but it’s definitely not going to motivate any “core” gamers to switch to Nintendo’s console for their gaming needs.
Then again, does anyone out there really, seriously want to LISTEN TO THIS?

requiem for a rant

CompUSA and Circuit City are finally being put (back) in the ground, after three tortuous years of Systemax trying desperately to make their gutted husks not only look alive again, but alive enough to put the fear of competition in Best Buy.

As someone who worked for CompUSA (long before this whole fiasco), I didn’t get the feeling of watching a relative or parent exhumed and reanimated before my eyes. I don’t feel any nostalgia for that company whatsoever. But I would rather have seen it get just one quick death than linger on as it did. Circuit City didn’t really have a much better reputation; in the best of times, it was regarded as a wannabe Best Buy (albeit one that carried a better selection of stereo equipment), and at the worst of times, it was remembered primarily for inventing things no one wanted. The electronics retail crash dive of the late 00s was a shock that left the bodies of these companies intact, but their internal workings permanently and catastrophically damaged. By 2004, most any CompUSA or Circuit City was filled with staff best described as demoralized and unmotivated, still forced to fill quotas of signups for AOL dial-up internet in a period when far faster cable internet was becoming both easily available and very affordable. Most people were just waiting for the end, like strangers holding a candlelight vigil outside the hospital, eager to go home and continue their lives.

What concerns me more about this is the damage to their parent/sibling, TigerDirect.

2004 was probably the last heyday of the big box electronics superstores. Best Buy, CompUSA and Circuit City spent their days battling for the crown of Retail King, evoking memories of the European monarchs warring over control of the Continent in the First World War. The common people wearily endured the constant deluge of propaganda, while others served in the ranks, and all just hoped for the excitement to die down a bit so things could be normal for a while.

Meanwhile, off to one side, was TigerDirect. I fondly remember Tiger as a cross between these big box stores and RadioShack. One could go to Tiger for something older or more obscure, but also find plenty of newer hardware. They weren’t obsessed with selling overpriced TVs or games. In particular, I remember my local Tiger’s absurdly wide selection of system memory. Everything from the then-current DDR2, to archaic PC66 SDRAM, and even a few sticks of the near-mythical RDRAM, could be found with a fair degree of ease. One entire wall of the store was nothing but system memory. Similarly, older hard drives and motherboards were easily available, making working on older systems far easier. Smaller parts like molex splitters, 3-pin adapters and mounting kits to adapt components of different sizes were nearly as prevalent as the memory. I can’t even remember how many dozens of trips I made to that store when I suddenly discovered I needed a splitter to install a new fan or some such.

In 2007, CompUSA officially went under and was purchased by Systemax, the parent company of TigerDirect, with its last 16 stores rebranded as TigerDirects. The following year, Circuit City closed nearly all its stores, and the remainder were also acquired and eventually rebranded. Systemax had pulled these vegetables right off their hospital beds, given them a fresh change of clothing and started making them walk and talk as if they had never been in ill health at all. I had my doubts about this move, but if it meant even more access to niche hardware, I was all for it.

That didn’t happen.

Almost overnight, my local TigerDirect underwent huge changes. Within six months, fully half the store’s footprint was occupied by enormous HDTVs, with computer hardware pushed off to the side and the selection reduced to a mere sampler, rather than the imperial buffet it had previously been. Within another twelve months, the hardware was moved again, to the back corner where it had virtually no exposure. Only the cheapest and most extravagant of video cards and motherboards were on display, with the midrange underrepresented at best. For me, it was only barely a notch above Best Buy, but they still had the advantage with hard-to-find parts and kits. Eventually that dwindled to nothing, as well. After a while, I had very little reason to go to TigerDirect, and apparently most other people felt the same way.

In 2012, the CompUSA and Circuit City brands were officially dissolved and brought under the TigerDirect banner, finally putting the zombies in the ground. But TigerDirect shuffled on, apparently hoping to fulfill some obscure set of criteria to attain manhood. My local store was completely rearranged every 6-9 months, seemingly on a mission to make customers dependent on the nonexistent salesmen, since the products never stayed in one place long enough to find. They still had those obscure niche components, but my motivation to shop there continued to asymptotically approach zero. Even when I went there during the evening rush hour, I would see at most half a dozen other customers. It was nearly abandoned.

And here we are, another three years later. Systemax has announced that TigerDirect will effectively be no more, closing all but three stores nationwide, including a distribution center in Naperville that has been open since Ye Olde Dayes. Finally, the (tarnished) legacies of these three franchises are being put to rest.

I’ll miss TigerDirect, but only the TigerDirect I knew before 2008. I certainly won’t miss CompUSA or Circuit City. Their time was long ago, and long past.

laterally

Technology doesn’t always move forward. Sometimes you have a solar flare, or maybe a robot revolution, and in the end the collective decides that it might be better off living a simpler life.

Sometimes the decision is made for you, by people who claim to know better than you.

Apple decided to resurrect the Macbook a couple of weeks ago, to the usual media circus and fanboys (or fanpeople, to be more equal) collapsing in seizures of ecstasy. The laptop’s primary feature is that it is even thinner, as per Pope Steve’s papal bull issued back in time immemorial. All other features, apparently, are secondary. Coming in at a gaunt 13mm and just a hair over two pounds, the new Macbook continues to solidify Apple’s stance as the heroin chic of the computer industry. After seeing the absurdly-extended marathon of anorexia Jony Ive and his coworkers seem compelled to display, I’m surprised I haven’t yet seen a Macbook regurgitate some part of its mainboard live on stage.

Unsurprisingly, it has had to give up some of those pesky internal organs to maintain its girlish figure. While optical drives have been vanishing on thinner laptops for some years now, Apple decided motherboards were just too damn big. The more features you want, the more circuit board there needs to be, and who really wants either of those, right? Hard drive connections, memory slots…those are all just vestiges of previous generations that need to be evolved out. And that’s why we have Apple. To tell us what we need and don’t need.

Among the earth-shattering changes wrought upon humanity with this model is the improved runtime. The new Macbook includes a “stepped battery” design, to fit the laptop’s frame as closely as possible and pack in that ever-so-coveted battery life. Combined with the tiny mainboard, this means the laptop’s interior is around 90% battery. And none of it is removable. Everything is soldered, screwed and glued in, to make serviceability as impossible as possible.

Now, I’m not an engineer or anything, but it seems to me the Macbook would get even better battery life if it just had a slightly thicker battery with a few more layers. Or a user-replaceable battery that could be swapped for an extended model. Or even two batteries, allowing the user to swap between them on the road without powering down. Nope. Apparently the future is in paper-thin batteries you can’t touch or replace without destroying half of the fucking machine.

But one more thing…the new Macbook actually does possess one feature that everyone agrees is the future: a USB “C” port. This nifty new connector promises to eliminate long-standing frustrations, increase bandwidth to 10Gbps, and carry up to 100 watts, allowing phones and even laptops to charge over a USB connection. And guess what? The new Macbook will do exactly that. It’s a fairly brilliant move–future-proofing, if nothing else–and I can’t wait for a future where I don’t need to lug my laptop’s charger around, just a USB cable and a wall adapter. I can remember the last time a change like this had so much convenience potential. And it was glorious.

Problem is, apparently the future doesn’t include multiple ports. That’s right. The new Macbook includes exactly one USB connector. And that’s it. Oh, it also has a 3.5mm jack for headphones. But it has no HDMI-out (or any AV-out for that matter), no SD slot…nothing else. If you need to charge your laptop and use an external drive, you have to pick one. Need to read an SD card and use an ethernet adapter because your Macbook’s wifi crapped out? You’re SOL. The Mac world’s response basically amounts to “more ports bad, more accessories good!” Yes, because external add-ons are exactly the thing to complement portability. I mean, it’s not like this laptop is going to be on the road, right?

What makes this whole thing really entertaining (or depressing, whichever works for you) is that Asus beat them at their own game. The Zenbook UX305 is superior to the new Macbook in almost every imaginable category: while the processor is (possibly) not as powerful, the memory is user-upgradeable, as is the hard drive, and the screen is capable of up to 282 pixels per inch, versus Apple’s much-trumpeted 226ppi Retina display. And it even comes with a few free kicks in the nuts: Asus’ Zenbook has a bigger battery, has three USB ports (granted, not v3.1 “C” ports), is thinner than the Macbook, and will be almost half the price. It boggles the mind, it really does. It makes me want to buy the Zenbook just so I can stand outside an Apple store as people build their hoovervilles and wait desperately for a chance to breathe the same air the new Macbook used as intake and exhaust.

Personally I don’t see why laptops need to be thin enough to double as filet knives. I have no problem with a laptop in the 1-2cm range, and if manufacturers are going to impulsively shrink the electronics with each new generation, why not instead work on replacing that newly-empty space with more battery? That way everyone wins. Laptops are thin enough, damn it.

off the block

Earlier this week, one of the gaming industry’s biggest figures began his exit. Markus Persson, better known to fans as Notch, sold his baby off and has decided to move on. Reactions run the gamut, naturally, especially in this land of the internet where hyperbole is the only accepted form of communication. In less than six years, what started as a pet project for Notch grew into a community of tens of millions, worth $2.5 billion. If my math is right, that means Minecraft made him over a million dollars a day, not counting sales of the product itself. Account for those and you can add another billion or so to the total, raising the daily breakdown to roughly $1.5 million. Mind-boggling.
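For the curious, the back-of-the-envelope version of that math looks something like this. It’s a rough sketch only; the six-year window and the extra billion in game sales are the same loose estimates used above, not audited figures.

    # Back-of-the-envelope math for the figures above. Every number here is a
    # rough estimate pulled from the paragraph, not an audited figure.
    acquisition_price = 2.5e9        # reported Mojang sale price, in dollars
    estimated_game_sales = 1.0e9     # "another billion or so" from game sales
    days = 6 * 365                   # call it six years from first alpha to sale

    print(f"Sale alone:      ${acquisition_price / days:,.0f} per day")
    print(f"Sale plus sales: ${(acquisition_price + estimated_game_sales) / days:,.0f} per day")
    # Both land north of a million dollars a day, in line with the estimates above.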

It’s not about the money. It’s about my sanity.

In his personal blog post, Notch revealed that after the sale is finalized, he will leave Mojang, along with its other two founders, and go back to his own personal tinkerings. Many reacted with shock that he would abandon something so popular, so large and so profitable. In his closing line, “it’s not about the money. It’s about my sanity,” he exposes a larger problem that has surfaced in recent years in the gaming industry. It’s incredibly easy for someone to become famous, and sometimes the wrong people get put on the pedestal.
With the release of Minecraft, Notch’s little project almost instantly began to get attention. Just two and a half years after the game’s first public alpha release, Notch and 5,000 fans gathered at the Mandalay Bay in Vegas to celebrate it. Less than a month after that, Jens Bergensten was given complete creative control over Minecraft, and Notch more or less became a CEO in function. In his free time he pursued new projects like Cobalt, Scrolls and a very experimental concept called 0x10c. While showing promise, 0x10c eventually failed to pan out, and Notch cancelled it. Already he was finding his personal limitations; while he now had the time and money to pursue whatever project he desired, he didn’t always have the ability to make it work.
Of course, this isn’t the first time this sort of thing has happened. History is replete with instances of common individuals who do something otherwise innocuous and suddenly gain massive fame and wealth from it. In recent decades, people like George Lucas, John Carmack and Mark Zuckerberg have risen from obscurity to worldwide fame in months. But therein lies the rub–not everyone is cut out to be famous. Our most recent example of the wrong person can be found in Phil Fish, another game developer. Fish created Fez, a very complex game with very simple basics, which became another indie hit and propelled him into the spotlight. Fish turned out to be impulsive and vindictive, often doling out harsh personal insults to people who criticized him. In perhaps the most remembered moment of recent Internet history, Fish went on a rampage on Twitter and cancelled the development of his game’s sequel.
Notch saw Phil Fish, and worried that he might become that man. When Mojang announced changes to the Minecraft EULA and users across the internet instantly targeted Notch, even though he had not been involved in the changes, he realized he wasn’t meant for the spotlight he was in. He realized he had become something different to his fans and his players, and he wasn’t what they wanted him to be. He would rather sit in obscurity and toil on his own pet projects, which he is now free to do for possibly the rest of his life. Will he be remembered as the man who flew in from the night, changed indie gaming forever, and then vanished? Or will he be remembered as the man who started a phenomenon, and then gave his baby away to a company with a less-than-stellar track record with acquisitions? We’ll see.

one more time

In a Nintendo Direct yesterday, Satoru Iwata announced a new model of the 3DS to be released next year, featuring various improvements. These include a larger screen with a better parallax effect, a faster processor, more memory, a second analog nub (finally) and more shoulder buttons. The New 3DS’ screen can adjust its brightness automatically to compensate for ambient conditions, and track the player’s face to keep things looking just right.

The internet hasn’t taken this well, with complaints that the current hardware is now completely obsolete, that this move by Nintendo is an “insult” to its longtime supporters, and cries that Nintendo won’t support its hardware for any considerable length of time. One complaint that strikes me as particularly ignorant is the claim that “this is the first time Nintendo has ever split the userbase…depending on which re-release of the handheld they bought.” Most forum comments stop just short of outright claims of fraud. I’m left wondering how long it will be until lawsuits from jilted customers start popping up.

I really don’t see where any of this hate is coming from. It’s not much different from the overall progression of Nintendo handhelds over the past 15 years or so. While the original Game Boy persisted for a good while before finally needing a successor, the Game Boy Color was only on shelves for three years before the Advance model was rolled out. It was another three years before they changed gears entirely and released the DS in 2004. The original 3DS was released in 2011…and here we are today.

Notably, the linchpin of Nintendo’s strategy was that each new model included full backwards compatibility with the previous one, allowing customers to keep playing their older games on the new hardware and keep getting their money’s worth out of them. While the new 3DS has some new hardware (namely the second analog nub and second set of shoulder buttons) that will undoubtedly result in newer games unplayable on the “old” 3DS, the new model can still play the old games. An upgrade is in no way forced, and the old hardware is nowhere near being cut off from support. Even if that were a possibility of some kind, developers simply could not ignore the over 40 million 3DS, 3DS XL and 2DS units already out there. Games will still be made for these handhelds. They are not going away.

That being said, I’m not a mindless cheerleader blindly supporting the move. There are aspects I don’t like, first and foremost the name. With Nintendo still smarting from the confusion caused by the unimaginative name and bad marketing surrounding the Wii U, one would think they would at least try to come up with a name that distinguishes the new from the old. But no, apparently Nintendo is still taking notes from Apple and is just calling it the “new 3DS”. I wish them luck in helping customers tell the two apart, and I don’t envy the legions of Best Buy and GameStop employees whose job it will be to enlighten them.

Hardware-wise, I actually don’t think Nintendo went far enough with the changes. The second analog nub is a joke. I fail to see how it will be comparably useful to the existing circle pad on the left side. Personally I prefer eraser-head mice over touchpads and such, but this is a very different application. Besides the limited method of movement detection, it’s crammed into a tiny space that only leads my imagination to conjure scenarios involving my thumb slamming into the base of the hinge at high speed. I’m also less than enthusiastic about the extra set of shoulder buttons. More shoulders means more can be done, but their placement makes practical use seem awkward. But I could be wrong; you never really know until you actually hold it in your hands and use it.

Other than that, the beefier CPU and extra memory mean better performance and more detailed graphics in the future, and microSD support is a nice step forward, though I question the placement of the slot in a recessed well on the bottom. In all, it seems like a mixed bag. I don’t plan to upgrade anytime soon, as I’m quite happy with my 3DS XL. Maybe the next revision will have the true second circle pad I’ve been waiting for.

what the players want

A few days ago, news trickled down to the outlets that the Steam controller’s design had been tweaked once again, triggering waves of debate running the gamut from high praise to condemnation and everything in between. But if nothing else, the design history of the still-to-be-released Steam controller is just more proof that the gaming consumer base is fickle, and doesn’t always seem to know what it wants.

A cursory search will bring up a great deal of information (and even more opinion) regarding the state of innovation in the gaming industry, in all imaginable forms. Most of the opinion seems to demand more innovation, insisting that gaming has lost all creativity and become nothing more than an assembly line for money. And it seems to be true–each year’s Call of Duty game looks virtually indistinguishable from its predecessor, except in minor gameplay elements. Each new Mario game is seen as “just another Mario game” by many, only introducing the occasional new character and sometimes a third dimension. People just don’t see a whole lot going on other than the gaming equivalent of injection-molded mass production. Even though Microsoft reportedly spent “hundreds of millions” designing the Xbox One controller, it just doesn’t look very different to the average buyer.

But the facts tell a very different story. Over the years, many people and companies have had many different ideas about what a game controller should be. With the recent change announced by Valve, many are asking questions like “why does every game controller look like an Xbox gamepad?”, apparently forgetting that the Steam controller started out as a much more radical departure from the norm and has gradually been migrating toward a more traditional design, most notably dropping the expensive touchscreen in favor of a few fixed-use face buttons. Beyond this, the gaming consumer base has shown, to the shock of many, no real desire for innovation–games that are praised for being innovative and original just don’t sell. The adage just doesn’t die: you vote with your wallet, and when innovative titles fail to sell, it tells developers and publishers to go back to the established formulas and design ethos.

This applies in the world of controllers, too. Everyone compared the Wii U’s Pro Controller to its competitors’, but they didn’t seem to see that controllers these days have been more or less boiled down to their essence. Hardware designers have had over thirty years now to experiment, to come up with new layouts and concepts and see how they play out with consumers. Some succeed, many die out. These ideas are even somewhat cyclical: the Sega Genesis had a concept controller in its design phase that strongly resembled the Wiimote of today. The PlayStation controller originally had a much more radical design before it shifted back toward the DualShock design that is now a staple not just of the PlayStation, but of its parent company as a whole. Even when new concepts seem to take off, they don’t seem quite able to reach orbit. Microsoft’s Kinect seemed like a great idea at the time, but after a while the novelty wore off, and even putting it in every box and forcing its use couldn’t keep it afloat. People just wanted controllers in their hands, it seemed.

Even the company known for innovation, Nintendo, has been caught in this net many times, most notably with the N64 controller, which often lands on lists of the worst controllers of all time (either for Nintendo specifically, or for the gaming industry as a whole). Early in development, the PlayStation 3 used a totally new controller design, often referred to as the “boomerang”, that was lampooned by many as ridiculous, though it was rumored to be very comfortable to use. Why did Sony ditch the futuristic design? “…there are so many players who are used to the PlayStation controller; it’s like a car steering wheel and it’s not easy to change people’s habits.”

People like what’s familiar. You want both new customers and longtime fans to be able to pick up the controllers on your new console easily, without a long adjustment period. Controllers have reached the point where nearly all needs are covered: two joysticks for analog movement in most games, a d-pad for older games and menus, a four-button cluster on the face, two sets of shoulders, and an extra button under each analog stick, all arranged so that they are easy to reach with thumbs and fingers. There’s really not much room for improvement anymore, unless you’re a pro gamer in need of extra conveniences. Just as evolution has dead ends, so does hardware design. And until we find something that can surpass the utility and ease of use of current controllers, that’s what will continue to be made and sold.

some assembly required

Just when I think I’ve seen the complete progression of a particular branch of gaming evolution, another step appears just to surprise me. As if the industry hadn’t already been overly focused on DLC, in the past few years it seems determined to take the concept to its logical conclusion, stepping even beyond the sad reality we’ve grown accustomed to.

Earlier this year, Warner Bros Interactive went back on its promise of DLC support for Arkham Origins on Wii U and canned all the story DLC that had previously been planned. Apparently feeling that screwing over the maligned Wii U was not enough, they decided to shaft their other customers in a different way: bugs and performance issues that had been prevalent in Arkham Origins would be ignored in favor of the aforementioned story DLC. Not only is WB comfortable with releasing an incomplete game, they’re clearly very okay with leaving it that way. At least Bethesda (eventually) patches its games.

In another part of the realm, Ubisoft’s reputation as a top independent has been slipping rather quickly of late. With Rainbow Six: Patriots languishing before being leaked (and then cancelled), and Siege not even in alpha, it’s now been six years since the franchise last saw a major entry. Meanwhile, the Splinter Cell games–once a hallmark of stealth tactical action–have slowly been corrupted, with Blacklist being a fairly boilerplate shooter in which stealth is an afterthought. The Division has looked promising, but its release remains nebulous, and in fact this is mostly irrelevant after the issues with Ubisoft’s other major release: Watch Dogs. A very ambitious title, it showed a gorgeously detailed, living world for the player to explore in its early trailers; actual gameplay videos were considerably less impressive. More damning was when modders found a way to restore the game to its previous level of graphical quality, which didn’t stop Ubisoft reps from denying that such a downgrade had ever occurred. In video reviews, both TotalBiscuit and birgirpall have noted glaring issues with the game (to be fair, birgirpall’s whole draw basically revolves around the “I broke <insert game>” schtick). Ubisoft isn’t letting such trivialities stall its plans, however, and rolled out the first piece of Watch Dogs DLC just ten days after the game itself was released.

Even smaller developers are feeling the fever. Until recently, From Software held the (publicly stated) belief that games should be released as whole products and had denied any plans to release DLC for their games. As of two weeks ago, that position radically reversed, with not just DLC but a DLC trilogy announced for Dark Souls II. This could easily have been marketed as a form of episodic content, but these days “DLC” is the buzzword, so I guess that worked just as well for them. Except that “DLC” increasingly carries a negative connotation among gamers, who often feel defrauded or let down by such content.

No list of gaming sins would be complete without two-time national champion EA. At the height of the anti-Call of Duty movement, DICE boldly declared that they would “never charge for Battlefield map packs”. While the DLC released for Battlefield 3 was certainly more than just maps, the $15-a-pop price still left a bitter taste in my mouth. To date I have not purchased a single one; the two I do have were only acquired because they were given out as freebies. For the same reason, I chose not to throw my money at Battlefield 4, which was itself little more than a major patch for Battlefield 3. Nevertheless, they managed to thoroughly break the game, and eight months later EA and DICE are still cleaning up the mess. Even after this, and after stating as recently as last winter that Battlefield would never be an annual franchise, DICE and Visceral Games are preparing for the release of Battlefield: Hardline, which is regarded as little more than DLC packaged and marketed as a separate product.

At this point, it’s pretty clear that DLC is taking priority over actual products. If developers at least tried to steer closer to episodic content, it might be understandable. But this is a different trend, one I can’t see ending well for gaming in general. It seems like it’s up to the hyperconservatives to hold the line on this one…which is an odd feeling for a liberal.

Subscribe for exclusive early alpha access to upcoming content for this blog post.

the state of art

We are fast approaching a tipping point in the realm of video games. I like to think of it more as a singularity, because in the near future many people will have to come to terms with the evolution of games as a whole. In the past (and present), many have defined art as purely noninteractive–you look at a painting, and you interpret it, but you don’t add or subtract paint from it. Video games, on the other hand, are frequently defined as “interactive art”, a term that is itself controversial. But it’s this label of “art” that will have the most effect on their future.

The recent release of Wolfenstein: The New Order inevitably fell afoul of censorship laws in several countries. As such, the German version of the game is devoid of Nazi symbols such as swastikas, and it faced heavy resistance from the Australian Classification Board, known for its long list of banned games. This is nothing new: from their inception, games have faced higher standards than other forms of media due to their perception as toys and the likelihood of exposure to children. In the case of Germany, the ban stems from a much wider cultural mindset: all Nazi symbols are banned in media, except in cases of educational or artistic depictions. Portrayals of violence are similarly restricted, with some interesting workarounds being the result.

Therein lies the rub: if Wolfenstein is not art, what is it? Game developer art departments are now as large as those of big Hollywood movie studios. Millions are spent on AAA games such as this, with a not-insignificant fraction going to the people designing clothing, billboards, vehicles and buildings, all to create a living world that the player can immerse themselves in. We have games that range from hyper-realistic to highly stylized and everything in between. There are even games that feature quadrilaterals as main characters, and abstract stories based entirely on wandering through the desert. Games have made us question our morals, and have turned our worlds upside down.

But how “interactive” is it, really? You press buttons to make the protagonist move, shoot and operate objects, but in the vast majority of games there is a very explicit path with a fixed beginning and end. The developers clearly have a defined story they want to tell, and there is only one way to experience it. Some games go as far as to be little more than playable movies (not naming names here). These are the most forced form of the medium, in which players experience only exactly what the developers want them to experience. Even games like Fallout, which appear to present the player with limitless possibilities, have an ending that doesn’t change (or changes very little) regardless of the player’s decisions and actions. The player may influence the movement of the brush, but in the end the painting still looks more or less the same.

To me, there is no doubt that games are art. But the debate likely won’t end anytime soon, particularly considering the age gap between the average gamer and the average legislator. Just as their parents didn’t see television as an artistic medium, today’s legislators often don’t see video games that way. Will it take another 30 years, until our generation enters politics, for the issue to be seen in the same light? I hope not, but it definitely feels that way.

adaptations

As the seventh generation began winding down, everyone had high hopes. The Wii had smashed even its enthusiastic expectations, bolstered by a generous library of classics. Microsoft cemented its place as a major player, with successful franchises and the continuing expansion of Xbox Live. The PlayStation 3 managed to pick itself up and race to a close third. Overall, the three together managed to eclipse the sales of their combined predecessors, and as the eighth (and current) generation loomed on the horizon, everyone was riding high waves of success and victory (and income). Those waves have since crashed on the rocky shores of a new land, and those who thought they would stick a perfect landing have had something of a rude awakening.

Everyone has taken a step (or multiple steps) back from their originally ambitious offerings of a few years ago. Microsoft recently announced a Kinectless Xbox One package (as well as finally announcing free access to services like Netflix), doomsayers are predicting the demise of the Vita, and the Wii U…it really doesn’t need to be said. So far the only real successes are the PlayStation 4 and Nintendo 3DS, both of which seem to be enjoying incredible popularity and acclaim. This feels to me like the strongest example yet of Darwinian gaming: vicious competitors adapting to 1) keep themselves alive a bit longer, and 2) gain an advantage over their neighbors, to keep themselves alive even longer.
Almost all of this is attributable to marketing blunders or bad design decisions that went uncorrected. Nintendo’s Wii U seems to be loved by most of its owners, but horrible–even nonexistent in some cases–marketing left many prospective buyers confused about what it was. Even highly acclaimed, well designed games like Wonderful 101, Sonic Lost World, and Pikmin 3 haven’t done much to shore it up against the onslaught of its competitors. Others, like Deus Ex: Human Revolution and Batman: Arkham City, have demonstrated the strong possibilities of using the gamepad to augment the user experience, but it hasn’t caught on. More than ever before, Nintendo is relying on its first-party offerings to keep the life jacket inflated.
Meanwhile, Sony has had wild success with the PlayStation 4, but its little cousin the Vita has been struggling. This despite the handheld’s greatest strength: remote play. The appeal of playing your PS4 over LTE while on break at work is undeniably strong…yet it doesn’t seem to have motivated that many cross sales. Sony has at least seen this and put out a bundle for sale in Europe–but they have no plans to market it in North America. They’ve seen the light, but seem to be trying to look at it from an odd angle, or perhaps at the wrong wavelength.
Listening to Joystiq the other day, I heard an interesting idea posited: what if Nintendo radically changed their strategy and rethought the Wii U as we know it? What if they dropped the gamepad, made the Virtual Console a subscription service, and scaled down the hardware until it was essentially a “Nintendo Ouya”? Legions of fans would gladly fork over cash every month to play Nintendo classics, that much is beyond doubt. There are still dozens of games that could put huge momentum behind the Virtual Console, but Nintendo so far has failed to tap the torrent. Meanwhile, Sony puts out tons of games each year that are freely accessible, as long as you’re willing to pay a recurring fee, a model that seems to have been a resounding success. (Of course, PS+ has far more useful features than just that.) This is something Nintendo could adapt to, exploit, and probably carry right into the end zone.
But I feel like I may as well ask animals to stop trying to cross the highway.

wasteband

Storage is cheap these days. It’s not rare to find terabyte drives in low end desktops, and many people have several multi-terabyte drives to store oodles of data. In particular, many gamers have large drives to store games downloaded from Steam, GOG, Origin, Uplay, or any of the many other services out there. There’s no doubt about it–games are getting bigger. But is bigger better?

Recently Bethesda announced that the upcoming Wolfenstein: The New Order will require 47GB of hard drive space. It’s already spilling over onto dual-layer Blu-rays, and the Xbox 360 version will span four discs. This brings back some old memories, not all of them good.

It’s one thing if there is actually enough content to justify such a large download, but is there? Titanfall on PC is a 48GB download–of that, 35GB is every single language of the game, in uncompressed audio. That’s not “lossless compressed audio”, or even “high bitrate audio”. Uncompressed. Respawn claimed this was to accommodate lower-spec machines, but this reasoning is (to use a technical term) bullshit. We’re in the days of six- and eight-core computers, when even low end duals and quads have cores sitting idle with nothing to do, and when decompressing such files is a trivial matter even for a cheap entry-level cell phone. This isn’t even excusable; it’s just laziness.
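To put a rough number on that, here’s a quick back-of-the-envelope sketch. The sample rate, channel count, language count and hours of audio below are my own guesses, chosen only to land near that reported 35GB figure; nothing here comes from Respawn. The point is the ratio, not the exact totals.

    # Illustrative only: ballpark how much of that audio is simply the absence
    # of compression. All inputs are assumptions, not figures from Respawn.
    SAMPLE_RATE = 48_000      # samples per second
    BIT_DEPTH = 16            # bits per sample
    CHANNELS = 2              # stereo
    LANGUAGES = 8             # assumed number of localized audio sets
    HOURS_PER_LANGUAGE = 6    # assumed hours of audio per language

    seconds = LANGUAGES * HOURS_PER_LANGUAGE * 3600
    pcm_bytes_per_second = SAMPLE_RATE * (BIT_DEPTH // 8) * CHANNELS  # ~192 KB/s

    uncompressed_gb = pcm_bytes_per_second * seconds / 1e9
    lossy_gb = (160_000 / 8) * seconds / 1e9   # a 160 kbps Vorbis/AAC-class codec

    print(f"Uncompressed PCM: {uncompressed_gb:.1f} GB")   # roughly 33 GB
    print(f"160 kbps lossy:   {lossy_gb:.1f} GB")          # roughly 3.5 GB

Even with a generous bitrate, the compressed version is about a tenth the size, and the cost of decoding it at playback is negligible on anything built this decade.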

Rather than an isolated incident, this is on its way to becoming the norm. Max Payne 3 will cost you 29GB if you want it on PC. Battlefield 4 demands 24GB even before DLC is added in. For comparison, World of Warcraft was roughly 25GB at its worst, before Blizzard rolled out a patch that hugely optimized the game and pared it down to size. Skyrim is a diminutive 6GB, and still looks good (though that’s not to say it can’t get better). Meanwhile, Rocksteady recently commented that the Batmobile alone in Arkham Knight would take up half the available memory of an Xbox 360. I’m left wondering how much space a game like Grand Theft Auto V will consume. I have a 1.5TB hard drive that is mostly taken up by games, and I’m not keen on shoving another in there.

Storage limits aren’t the only concern, either. Most internet providers impose limits on users’ activity, namely through download caps. In some cases, downloading even a few games like Max Payne 3 or The New Order will put someone over their limit, resulting in their speed being throttled or huge overages on their bill. I had to download and install Titanfall three times before I could launch it, meaning I burned through nearly 150GB of data. While I (no longer) have a cap to worry about hitting, many users aren’t so lucky. And what about patches? Machinegames recently decided that 47GB isn’t enough, and will be applying a 5GB day-one patch to the monstrosity of a game.

Other than optimizing files, where does the future lie? My money is on procedural generation. While its engine is simplistic, Minecraft can generate vast worlds from a binary that is a mere 100MB. At 148MB, Daggerfall uses procedural generation to create a world that would cover most of England. Going off the deep end, you find .kkrieger, which is contained entirely within an executable 95 kilobytes in size.
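The trick is easy to sketch: a deterministic function of a seed and a coordinate can hand back as much “world” as you ask for, with nothing stored on disk but the seed. The toy below is plain seeded value noise in Python, nowhere near what the engines above actually do, and the height function and seed are made up purely for illustration; it just shows where the data “lives”.

    import hashlib
    import math

    def height(seed: int, x: int, z: int, scale: float = 16.0) -> float:
        """Deterministic terrain height in [0, 1) at (x, z), derived from the seed.

        Bilinear interpolation between hashed lattice values (value noise). The
        same seed always reproduces the same, effectively unlimited world, so
        only the seed ever needs to be saved.
        """
        def lattice(ix: int, iz: int) -> float:
            # Hash (seed, ix, iz) into a repeatable pseudo-random value in [0, 1).
            digest = hashlib.sha256(f"{seed},{ix},{iz}".encode()).digest()
            return int.from_bytes(digest[:8], "big") / 2**64

        fx, fz = x / scale, z / scale
        x0, z0 = math.floor(fx), math.floor(fz)
        tx, tz = fx - x0, fz - z0
        # Smoothstep the weights so the terrain isn't visibly faceted.
        sx, sz = tx * tx * (3 - 2 * tx), tz * tz * (3 - 2 * tz)

        top = lattice(x0, z0) * (1 - sx) + lattice(x0 + 1, z0) * sx
        bottom = lattice(x0, z0 + 1) * (1 - sx) + lattice(x0 + 1, z0 + 1) * sx
        return top * (1 - sz) + bottom * sz

    # Any region can be generated on demand; here, a crude ASCII "map" of one patch.
    if __name__ == "__main__":
        shades = " .:-=+*#@"
        for z in range(12):
            print("".join(shades[int(height(1337, x, z, scale=6.0) * len(shades))]
                          for x in range(64)))

Swap the hash for layered noise, feed the output into meshes, textures and loot tables, and you have the basic idea behind those absurdly small binaries.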

Even ignoring hard drive space, games are beginning to hog memory. Watch Dogs, The New Order, Titanfall, and Call of Duty: Ghosts all require at least 4GB of memory to run, largely to hold their enormous textures. There is a dire need for newer, more efficient engines that can deliver this level of quality without the bloat; hopefully they will be here soon.

It seems that procedural generation is the buzzword of the gaming industry’s future. It cuts down on file sizes, allows for streamlining and makes every experience unique. I know I wouldn’t mind my second playthrough of a game being a little different than the first; it certainly seems to have worked well (albeit through limited implementation) in Left 4 Dead.

It’s either that, or we’re looking at multi-Blu-ray titles hitting the Xbox One and PlayStation 4 before long. Time to start checking prices on hard drives.