laterally

Technology doesn’t always move forward. Sometimes you have a solar flare, or maybe a robot revolution, and in the end the collective decides that it might be better off living a simpler life.

Sometimes the decision is made for you, by people who claim to know better than you.

Apple decided to resurrect the Macbook a couple of weeks ago, to the usual media circus and fanboys (or fanpeople, to be more equal) collapsing in seizures of ecstasy. The laptop’s primary feature is that it is even thinner, as per Pope Steve’s papal bull issued in time immemorial. All other features, apparently, are secondary. Coming in at a gaunt 13mm and just a hair over two pounds, the new Macbook continues to solidify Apple’s position as the heroin chic of the computer industry. After seeing the absurdly extended marathon of anorexia Jony Ive and his coworkers seem compelled to display, I’m surprised I haven’t yet seen a Macbook regurgitate some part of its mainboard live on stage.

Unsurprisingly, it has had to give up some of those pesky internal organs to maintain its girlish figure. While optical drives have been vanishing on thinner laptops for some years now, Apple decided motherboards were just too damn big. The more features you want, the more circuit board there needs to be, and who really wants either of those, right? Hard drive connections, memory slots…those are all just vestiges of previous generations that need to be evolved out. And that’s why we have Apple. To tell us what we need and don’t need.

Among the earth-shattering changes wrought upon humanity with this model is the improved runtime. The new Macbook includes a “stepped battery” design, to fit the laptop’s frame as closely as possible and pack in that ever-so-coveted battery life. Combined with the tiny mainboard, this means the laptop’s footprint is around 90% battery. And none of it is removable. Everything is soldered, screwed and glued in, to make serviceability as impossible as possible.

Now, I’m not an engineer or anything, but it seems to me the Macbook would get even better battery life if it just had a slightly thicker battery with a few more layers. Or a user-replaceable battery that could be swapped for an extended model. Or even two batteries, allowing the user to swap between them on the road without powering down. Nope. Apparently the future is in paper-thin batteries you can’t touch or replace without destroying half of the fucking machine.

But one more thing…the new Macbook actually does possess one feature that everyone agrees is the future: a USB “C” port. This nifty new guy promises to eliminate long-standing frustrations, raises bandwidth to 10Gbps, and can carry up to 100 watts, allowing phones and even laptops to charge over a USB connection. And guess what? The new Macbook will do exactly that. It’s a fairly brilliant move–future-proofing, if nothing else–and I can’t wait for a future where I don’t need to lug my laptop’s charger around, just a USB cable and a wall outlet adapter. I can remember the last time a change like this had so much convenience potential. And it was glorious.

Problem is, apparently the future doesn’t include multiple ports. That’s right. The new Macbook includes exactly one USB connector. And that’s it. Oh, it also has a 3.5mm jack for headphones. But it has no HDMI-out (or any AV-out for that matter), no SD slot…nothing else. If you have to charge your laptop and use an external drive, you have to pick one. Need to read an SD card and use an ethernet adapter because your Macbook’s wifi crapped out? You’re SOL. The Mac world’s response basically amounts to “more ports bad, more accessories good!” Yes, because external add-ons are exactly the thing to complement portability. I mean, it’s not like this laptop is going to be on the road, right?

What makes this whole thing really entertaining (or depressing, whichever works for you) is that Asus beat them at their own game. The Zenbook UX305 is superior to the new Macbook in almost every imaginable category: while the processor is (possibly) not as powerful, the memory is user-upgradeable, as is the hard drive. The screen is capable of up to 282 pixels per inch, versus Apple’s much-trumpeted 226-ppi Retina display. And it even throws in a few free kicks in the nuts: Asus’ Zenbook has a bigger battery, has three USB ports (granted, not v3.1 “C” ports), is thinner than the Macbook, and will be almost half the price. It boggles the mind, it really does. It makes me want to buy the Zenbook just so I can stand outside an Apple store as people build their Hoovervilles and wait desperately for a chance to breathe the same air the new Macbook used as intake and exhaust.

Personally I don’t see why laptops need to be thin enough to double as filet knives. I have no problem with a laptop in the 1-2cm range, and if manufacturers are going to compulsively shrink the electronics with each new generation, why not fill the newly emptied space with more battery? That way everyone wins. Laptops are thin enough, damn it.

off the block

Earlier this week, one of the gaming industry’s biggest figures began his exit. Markus Persson, better known to fans as Notch, sold his baby off and has decided to move on. Reactions run the gamut, naturally, especially in this land of the internet where hyperbole is the only accepted form of communication. In less than six years, what started as a pet project for Notch grew into a community of tens of millions and a company worth $2.5 billion. If my math is right, that means Minecraft made him over a million dollars a day, not counting the sales of the game itself. Account for those and you add another billion or so to the total, pushing the daily take past $1.5 million, for something that started as a mere personal project for amusement. Mind-boggling.
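
For the curious, the arithmetic is easy to reproduce. A minimal sketch, using my own rounding of the dates and the figures above (not Mojang’s actual books):

```python
# Back-of-the-envelope math on the Mojang sale; dates and totals are rough.
DAYS = 5.5 * 365                 # "less than six years" since the first public release
sale_price = 2.5e9               # reported purchase price
game_sales = 1.0e9               # "another billion or so" from sales of the game itself

print(f"sale alone:       ${sale_price / DAYS:,.0f} per day")
print(f"sale plus sales:  ${(sale_price + game_sales) / DAYS:,.0f} per day")
```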

It’s not about the money. It’s about my sanity.

In his personal blog post, Notch revealed that after the sale is finalized, he will leave Mojang, along with its other two founders, and go back to his own personal tinkerings. Many reacted with shock that he would abandon something so popular, so large and so profitable. In his closing line, “it’s not about the money. It’s about my sanity,” he exposes a larger problem that has surfaced in recent years in the gaming industry. It’s incredibly easy for someone to become famous, and sometimes the wrong people get put on a pedestal.

With the release of Minecraft, Notch’s little project almost instantly began to get attention. Just two and a half years after the first public alpha release of the game, Notch and 5,000 fans gathered at the Mandalay Bay in Vegas to celebrate it. Less than a month after that, Jens Bergensten was given complete creative control over Minecraft, and Notch’s role became more or less that of a CEO. In his free time he pursued new projects like Cobalt, Scrolls and a very experimental concept called 0x10c. While showing promise, 0x10c eventually failed to pan out, and Notch cancelled it. Already he was finding his personal limitations; while he now had the time and money to pursue whatever project he desired, he didn’t always have the ability to make it work.

Of course, this isn’t the first time this sort of thing has happened. History is replete with instances of ordinary individuals who do something otherwise innocuous and suddenly gain massive fame and wealth from it. In recent decades, people like George Lucas, John Carmack and Mark Zuckerberg have risen from obscurity to worldwide fame within months. But therein lies the rub–not everyone is cut out to be famous. Our most recent example of the wrong person can be found in Phil Fish, another game developer. Fish created Fez, a very complex game with very simple basics, which became another indie hit and propelled him into the spotlight. Fish turned out to be impulsive and vindictive, often doling out harsh personal insults to people who criticized him. In one of the most remembered moments of recent Internet history, Fish went on a rampage on Twitter and cancelled the development of his game’s sequel.

Notch saw Phil Fish, and worried that he might become that man. He realized he wasn’t meant for the spotlight he was in when Mojang announced changes to the Minecraft EULA and users across the internet instantly targeted him, even though he had not been involved in the EULA changes. He realized he had become something different to his fans and his players, and he wasn’t what they wanted him to be. He would rather sit in obscurity and toil on his own pet projects, which he is now free to do for possibly the rest of his life. Will he be remembered as the man who flew in out of the night, changed indie gaming forever, and then vanished? Or will he be remembered as the man who started a phenomenon, and then gave his baby away to a company with a less-than-stellar track record with acquisitions? We’ll see.

one more time

In a Nintendo Direct yesterday, Satoru Iwata announced a new model of 3DS to be released next year, featuring various improvements. Among these are a larger screen with a better parallax effect, a faster processor, more memory, a second analog nub (finally) and more shoulder buttons. The New 3DS’ screen can adjust its brightness automatically to compensate for ambient conditions, and track the player’s face to keep things looking just right.

The internet hasn’t taken this well, with complaints that the current hardware is now completely obsolete, that this move by Nintendo is an “insult” to its longtime supporters, and cries of how Nintendo won’t support their hardware for any considerable length of time. One complaint that strikes me as particularly ignorant is that “this is the first time Nintendo has ever split the userbase…depending on which re-release of the handheld they bought.” Most forum comments stop just short of outright claims of fraud. I’m left wondering how long it will be until lawsuits from jilted customers start popping up.

I really don’t see where any of this hate is coming from. It’s not much different from the overall progression of Nintendo handhelds over the past 15 years or so. While the original Game Boy persisted for a good while before finally needing a successor, the Game Boy Color was only on shelves for three years before the Advance model was rolled out. It was another three years before they changed gears entirely and released the DS in 2004. The original 3DS was released in 2011…and here we are today.

Notably, the linchpin of Nintendo’s strategy was that each new model included full backwards compatibility with the previous model, allowing customers to continue playing their older games on the new hardware and continue to get their worth out of them. While the new 3DS will have some new hardware (namely the second analog nub and second set of shoulders) that will undoubtedly result in newer games that are unplayable on the “old” 3DS, the new model can still play the old games. An upgrade is in no way forced, and the old hardware is nowhere near being cut off from support. Even if that were a possibility, developers simply could not ignore the over 40 million 3DS, 3DS XL and 2DS units already out there. Games will still be made for these handhelds. They are not going away.

That being said, I’m not a mindless cheerleader blindly supporting the move. There are aspects I don’t like, first and foremost the name. With Nintendo still smarting from the confusion caused by the unimaginative name and bad marketing surrounding the Wii U, one would think they would at least try to come up with a name to distinguish the new from the old. But no, apparently Nintendo is still taking notes from Apple and is just calling it the “new 3DS”. I wish them luck in helping customers tell the two apart, and I don’t envy the legions of Best Buy and GameStop employees whose job it will be to enlighten them.

Hardware-wise, I actually don’t think Nintendo went far enough with the changes. The second analog nub is a joke. I fail to see how it will be anywhere near as useful as the existing circle pad on the left side. Personally I prefer eraser-head mice over touchpads and such, but this is a very different application. Besides the limited method of movement detection, it’s crammed into a tiny space that only leads my imagination to conjure scenarios involving my thumb slamming into the base of the hinge at high speed. I’m also less than enthusiastic about the extra set of shoulders. More shoulders means more can be done, but the placement of these buttons makes their practical use seem awkward. But I could be wrong; you never really know until you actually hold it in your hands and use it.

Other than that, the beefier CPU and extra memory mean better performance and more detailed graphics in the future, and microSD support is a nice step forward, though I question the placement of the slot in a recessed well in the bottom. In all, it seems like a mixed bag. I don’t plan to upgrade anytime soon, as I’m quite happy with my 3DS XL. Maybe the next revision will have the true second circle pad I’ve been waiting for.

what the players want

A few days ago, news trickled down to the outlets that the Steam controller’s design had been tweaked once again, triggering waves of debate running the gamut from high praise to condemnation and everything in between. But if nothing else, the design history of the still-to-be-released Steam controller is just more proof that the gaming consumer base is fickle, and doesn’t always seem to know what it wants.

A cursory search will bring up a great deal of information (and even more opinion) regarding the state of innovation in the gaming industry, in all imaginable forms. Most of the opinion seems to demand more innovation, insisting that gaming has lost all creativity and become nothing more than an assembly line for money. And it seems to be true–each year’s Call of Duty game looks virtually indistinguishable from its predecessor, except in minor gameplay elements. Each new Mario game is dismissed as “just another Mario game” by many, only introducing the occasional new character and sometimes a third dimension. People just don’t see a whole lot going on other than the gaming equivalent of injection-molded mass production. Even though Microsoft reportedly spent “hundreds of millions” designing the Xbox One controller, it just doesn’t look very different to the average buyer.

But the facts tell a very different story. Over the years, many people and companies have had many different ideas about what a game controller should be. With the recent change announced by Valve, many are asking questions like “why does every game controller look like an Xbox gamepad?”, apparently forgetting that the Steam controller started out as a much more radical departure from the norm and has gradually been migrating toward a more traditional design, most notably dropping the expensive touchscreen in favor of a few fixed-use face buttons. Well beyond this, the gaming consumer base has shown, to the shock of many, no real desire for innovation–games that are praised for being innovative and original just don’t sell. The adage just doesn’t die: you vote with your wallet, and when innovative titles fail to sell, it tells developers and publishers to go back to the established formulas and design ethos.

This applies in the world of controllers, too. Everyone compared the Wii U’s Pro Controller to that of its competitor, but they didn’t seem to see that controllers these days have been more or less boiled down to their essence. Hardware designers have had over thirty years now to experiment, to come up with new layouts and concepts and see how they play out with consumers. Some succeed, many die out. These ideas are even somewhat cyclical: the Sega Genesis had a concept controller in the design phase that strongly resembled the Wiimote of today. The PlayStation controller originally had a much more radical design before it shifted back toward the DualShock design that is now a staple not just of the PlayStation, but of its parent company as a whole. Even when new concepts seem to take off, they don’t seem quite able to reach orbit. Microsoft’s Kinect seemed like a great idea at the time, but after a while the novelty wore off, and even putting it in every box and forcing its use couldn’t keep it afloat. People just wanted controllers in their hands, it seemed.

Even the company known for innovation, Nintendo, has been caught in this net many times, most notably with the N64 controller, which is often rated on lists as the worst controller of all time (either for Nintendo specifically, or the gaming industry as a whole). Early in development, the PlayStation 3 used a totally new controller design, often referred to as the “boomerang”, that was lampooned by many as ridiculous, though it was rumored to be very comfortable to use. Why did Sony ditch the futuristic design? “…there are so many players who are used to the PlayStation controller; it’s like a car steering wheel and it’s not easy to change people’s habits.”

People like what’s familiar. You want both new customers and longtime fans to be able to use the controllers on your new console easily, without a long adjustment period. Controllers have reached the point where almost all needs are covered. Two joysticks for analog movement in most games, a d-pad for older games and menus, a four-button cluster on the face, two sets of shoulders, and an extra button under each analog stick. All arranged so that they are easy to reach with thumbs and fingers. There’s really not much room for improvement anymore, unless you’re a pro gamer in need of more convenience. Just as evolution has dead ends, so does hardware design. And until we find something that can surpass the utility and ease of use of current controllers, that’s what will continue to be made and sold.

some assembly required

Just when I think I’ve seen the complete progression of a particular branch of gaming evolution, another step appears just to surprise me. As if the industry hadn’t already been overly focused on DLC, in the past few years it seems determined to take the concept to its logical conclusion, stepping even beyond the sad reality we’ve grown accustomed to.

Earlier this year, Warner Bros Interactive went back on their plans to provide DLC support for Arkham Origins on Wii U and canned all story DLC that had previously been planned. Apparently feeling that screwing over the maligned Wii U was not enough, they decided to shaft their other customers in a different way: bugs and performance issues that had been prevalent in Arkham Origins would be ignored in favor of the aforementioned story DLC. Not only is WB comfortable with releasing an incomplete game, they’re clearly very okay with leaving it that way. At least Bethesda (eventually) patches its games.

In another part of the realm, Ubisoft’s reputation as a top independent publisher has been slipping rather quickly of late. With Rainbow Six: Patriots languishing before being leaked (and then cancelled), and Siege not even in alpha, it’s now been six years since the franchise last saw a major entry. Meanwhile, the Splinter Cell games–once a hallmark of stealth tactical action–have slowly been corrupted, with Blacklist being a fairly boilerplate shooter in which stealth is an afterthought. The Division has looked promising, but its release remains nebulous, and in fact this is mostly irrelevant after issues with Ubisoft’s other major release: Watch Dogs. A very ambitious title, its early trailers showed a gorgeously detailed, living world for the player to explore; actual gameplay videos were considerably less impressive. More damning was when modders found a way to restore the game to its previous level of graphical quality, which didn’t stop Ubisoft reps from denying that such a downgrade had ever occurred. In video reviews, both TotalBiscuit and birgirpall have noted glaring issues with the game (to be fair, birgirpall’s whole draw basically revolves around the “I broke <insert game>” schtick). Ubisoft isn’t letting such trivialities stall their plans, however, and rolled out the first piece of Watch Dogs DLC just ten days after the game itself was released.

Even smaller developers are feeling the fever. Until recently, From Software held the (publicly stated) belief that games should be released as whole products and had denied any plans to release DLC for their games. As of two weeks ago, that position radically reversed, with not just DLC but a DLC trilogy announced for Dark Souls II. This could easily have been marketed as a form of episodic content, but these days “DLC” is the buzzword, so I guess that worked just as well for them. Except that “DLC” increasingly carries a negative connotation among gamers, who often feel defrauded or let down by such content.

No list of gaming sins would be complete without two-time national champion EA included. At the height of the anti-Call of Duty movement, DICE boldly declared that they would “never charge for Battlefield map packs”. While the DLC released for Battlefield 3 was certainly more than just maps, it still left a bitter taste in my mouth to pay $15 a pop for each of them. To date I have not purchased a single one; the two I do have were only acquired because they were given out as freebies. For the same reason, I chose not to throw my money at Battlefield 4, which was itself little more than a major patch for Battlefield 3. Nevertheless, they managed to thoroughly break the game, and eight months later EA and DICE are still cleaning up the mess. Even after this, and after stating as recently as last winter that Battlefield would never be an annual franchise, DICE and Visceral Games are preparing for the release of Battlefield: Hardline, which many regard as little more than DLC packaged and marketed as a separate product.

At this point, it’s pretty clear that DLC is taking priority over actual products. If developers at least tried to steer closer to episodic content, it might be understandable. But this is a different trend, one I can’t see ending well for gaming in general. It seems like it’s up to the hyperconservatives to hold the line on this one…which is an odd feeling for a liberal.

Subscribe for exclusive early alpha access to upcoming content for this blog post.

the state of art

We are fast approaching a tipping point in the realm of video games. I like to think of it more as a singularity point, because in the near future many people will have to come to terms with the evolution of games as a whole. In the past (and present), many have defined art as purely noninteractive–you look at a painting, and you interpret it, but you don’t add or subtract paint from it. Video games, on the other hand, are frequently defined as “interactive art”, a term that is itself controversial. But it’s this label of “art” that will have the most effect on their future.

The recent release of Wolfenstein: The New Order inevitably fell afoul of censorship laws in several countries. As such, the German version of the game is devoid of Nazi symbols such as swastikas, and it faced heavy resistance from the Australian Classification Board, known for its long list of banned games. This is nothing new: from their inception, games have faced higher standards than other forms of media due to their perception as toys and the likelihood of exposure to children. In the case of Germany, the ban stems from a much wider cultural mindset: all Nazi symbols are banned in media, except in cases of educational or artistic depictions. Portrayals of violence are similarly restricted, with some interesting workarounds being the result.

Therein lies the rub: If Wolfenstein is not art, what is it? Game developer art departments are now as large as those of big Hollywood movie studios. Millions are spent on AAA games such as this, with a not-insignificant fraction going to the people designing clothing, billboards, vehicles and buildings, all to create a living world that the player can immerse themselves in. We have games that range from hyper-realistic to highly stylized and everything between. There are even games that feature quadrilaterals as main characters, and abstract stories based entirely on wandering through the desert. Games have made us question our morals, and have turned our worlds upside down.

But is it really “interactive media”? You press buttons to make the protagonist move, shoot and operate objects, but in the vast majority of games, there is a very explicit path with a fixed beginning and end. The developers clearly have a defined story they want to tell, and there is only one way to experience it. Some games go as far as to be little more than playable movies (not naming names here). These are the most forced form of the medium, in which players experience only exactly what the developers want them to experience. Even games like Fallout, which appear to present the player with limitless possibilities, have an ending that doesn’t change (or changes very little) regardless of the player’s decisions and actions. The player may influence the movement of the brush, but in the end the painting still more or less looks the same.

To me, there is no doubt that games are art. But the debate likely won’t end anytime soon, particularly when you consider the age gap between the average gamer and the average legislator. Just as their parents didn’t see television as an artistic medium, they often don’t see video games that way. Will it take another 30 years, until our generation enters politics, for the issue to be seen in the same light? I hope not, but it definitely feels that way.

adaptations

As the seventh generation began winding down, everyone had high hopes. The Wii had smashed even its enthusiastic expectations, bolstered by a generous library of classics. Microsoft cemented its place as a major player, with successful franchises and the continuing expansion of Xbox Live. The PlayStation 3 managed to pick itself up and race to a close third. Overall, the three together managed to eclipse the sales of their combined predecessors, and as the eighth (and current) generation loomed on the horizon, everyone was riding high waves of success and victory (and income). Those waves have since crashed on the rocky shores of a new land, and those who thought they would stick a perfect landing have had something of a rude awakening.

Everyone has taken a step (or multiple steps) back from their originally ambitious offerings of a few years ago. Microsoft recently announced a Kinectless Xbox One package (as well as finally announcing free access to services like Netflix), doomsayers are predicting the demise of the Vita, and the Wii U…it really doesn’t need to be said. So far the only real successes are the PlayStation 4 and Nintendo 3DS, both of which seem to be enjoying incredible popularity and acclaim. This feels to me like the strongest example yet of Darwinian gaming, vicious competitors adapting to 1) keep themselves alive a bit longer, and 2) try to gain an advantage over their neighbors, to keep themselves alive even longer.

Almost all of this is attributable to marketing blunders or bad design decisions that went uncorrected. Nintendo’s Wii U seems to be loved by most of its owners, but horrible–even nonexistent in some cases–marketing left many prospective buyers confused about what it was. Even highly acclaimed, well designed games like Wonderful 101, Sonic Lost World, and Pikmin 3 haven’t done much to shore it up against the onslaught of its competitors. Others, like Deus Ex: Human Revolution and Batman: Arkham City, have demonstrated the strong possibilities of using the gamepad to augment the user experience, but it hasn’t caught on. More than ever before, Nintendo is relying on its first-party offerings to keep the life jacket inflated.

Meanwhile, Sony has had wild success with the PlayStation 4, but its little cousin the Vita has been struggling. This despite the handheld’s greatest strength: Remote Play. The appeal of playing your PS4 over LTE while on break at work is undeniably strong…yet it doesn’t seem to have motivated that many cross sales. Sony has at least seen this and put out a bundle for sale in Europe–but they have no plans to market it in North America. They’ve seen the light, but seem to be trying to look at it from an odd angle, or perhaps at the wrong wavelength.

Listening to Joystiq the other day, I heard an interesting idea: What if Nintendo radically changed their strategy, and changed the Wii U as we know it? What if they dropped the gamepad, made the Virtual Console a subscription service, and scaled down the hardware until it was essentially a “Nintendo Ouya”? Legions of fans would gladly fork over cash every month to play Nintendo classics; that much is beyond doubt. There are still dozens of games that could put huge momentum behind the Virtual Console, but Nintendo so far has failed to tap the torrent. Meanwhile, Sony puts out tons of games each year that are freely accessible, as long as you’re willing to pay a recurring fee, a model that seems to have had resounding success. (Of course, PS+ has far more useful features than just that.) This is something Nintendo could adapt themselves to, and exploit, and probably carry right into the endzone.

But I feel like I may as well ask animals to stop trying to cross the highway.

wasteband

Storage is cheap these days. It’s not rare to find terabyte drives in low end desktops, and many people have several multi-terabyte drives to store oodles of data. In particular, many gamers have large drives to store games downloaded from Steam, GOG, Origin, Uplay, or any of the many services out there. There’s no doubt about it–games are getting bigger. But is bigger better?

Recently Bethesda announced that the upcoming Wolfenstein: The New Order will require 47GB of hard drive space to store the game. It’s already spilling over into dual-layer Blu-rays, and the Xbox 360 version will span four discs. This brings back some old memories, not all of them good ones.

It’s one thing if there is actually enough content to justify such a large download size, but is there? Titanfall on PC is a 48GB download–of that, 35GB comprises every single language of the game, in uncompressed audio. That’s not “lossless compressed audio”, or even “high bitrate audio”. Uncompressed. Respawn claimed this was to accommodate lower-spec machines, but this reasoning is (to use a technical term) bullshit. We’re in the days of six- and eight-core computers, when even low end duals and quads have cores sitting idle with nothing to do, and when decompressing such files is a trivial matter even for a cheap entry-level cell phone. This isn’t even excusable; it’s just laziness.
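
For a sense of scale, here’s a rough back-of-the-envelope sketch. The sample rate, bit depth and bitrate are my own generic assumptions, not Respawn’s actual audio pipeline:

```python
# How much raw stereo PCM fits in 35GB, and what a common lossy bitrate
# would shrink that same material down to. All numbers are approximations.
BYTES_PER_GB = 1024**3

def pcm_bytes_per_hour(sample_rate=48_000, bit_depth=16, channels=2):
    """Size of one hour of uncompressed PCM audio."""
    return sample_rate * (bit_depth // 8) * channels * 3600

hours = 35 * BYTES_PER_GB / pcm_bytes_per_hour()
print(f"~{hours:.0f} hours of 48kHz/16-bit stereo PCM fit in 35GB")

# The same number of hours encoded at 192 kbps (a typical lossy bitrate).
compressed_gb = hours * (192_000 / 8) * 3600 / BYTES_PER_GB
print(f"the same audio at 192kbps: ~{compressed_gb:.1f}GB")
```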

Rather than an isolated incident, this is on its way to becoming the norm. Max Payne 3 will cost you 29GB if you want it on PC. Battlefield 4 demands 24GB even before DLC is added in. For comparison, World of Warcraft was roughly 25GB at its worst, before Blizzard rolled out a patch that hugely optimized the game and pared it down to size. Skyrim is a diminutive 6GB in size, and still looked good (but that’s not to say it can’t get better). Meanwhile, Rocksteady recently commented that the Batmobile alone in Arkham Knight would take up half the available memory of an Xbox 360. I’m left wondering how much space a game like Grand Theft Auto V will consume. I have a 1.5TB hard drive that is mostly taken up by games, and I’m not keen on shoving another in there.

Storage limits aren’t the only concern, either. Most internet providers impose limits on users’ activity, namely through download caps. In some cases, downloading even a few games like Max Payne 3 or The New Order will put someone over their limit, resulting in their speed being throttled or huge overages on their bill. I had to download and install Titanfall three times before it would launch, meaning I burned through nearly 150GB of data. While I (no longer) have a cap to worry about hitting, many users aren’t so lucky. And what about patches? Machinegames recently decided that 47GB isn’t enough space, and will be applying a 5GB day-one patch to the monstrosity of a game.

Other than optimization of files, where is the future? My money is on procedural generation. While its engine is simplistic, Minecraft can generate vast worlds using a binary that is a mere 100MB. At 148MB, the binary for Daggerfall makes use of procedural generation to create a world that would cover most of England. Going off the deep end, you find .kkrieger, which is contained entirely within an executable 95 kilobytes in size.
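
The trick is that the world is computed from a seed on demand instead of being stored on disk. Here’s a toy sketch of the idea; the chunking, hash constants and sine-wave “noise” are my own stand-ins, not how Minecraft, Daggerfall or .kkrieger actually do it:

```python
# Deterministic terrain from a seed: nothing below is read from disk.
import math
import random

def height_at(seed: int, x: int, z: int) -> int:
    """Terrain height for world column (x, z), derived purely from the seed."""
    cx, cz = x // 16, z // 16                              # 16x16 "chunk" coordinates
    chunk_seed = seed ^ (cx * 73856093) ^ (cz * 19349663)  # simple spatial hash
    base = random.Random(chunk_seed).randint(48, 72)       # chunk's base elevation
    ripple = 4 * math.sin(x * 0.3) + 3 * math.cos(z * 0.2) # local variation
    return base + int(ripple)

SEED = 1337
# The same seed and coordinates always yield the same world, so an "infinite"
# map costs a few lines of code rather than gigabytes of stored geometry.
print([height_at(SEED, x, 0) for x in range(8)])
```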

Even ignoring hard drive space, games are beginning to hog memory. Watch Dogs, The New Order, Titanfall, and Call of Duty: Ghosts all require at least 4GB of memory to run, largely to hold their enormous textures. There is a dire need for new, more efficient engines that can deliver this level of quality without the bloat; hopefully one will be here soon.

It seems that procedural generation is the buzzword of the gaming industry’s future. It cuts down on file sizes, allows for streamlining and makes every experience unique. I know I wouldn’t mind my second playthrough of a game being a little different from the first; it certainly seems to have worked well (albeit in a limited form) in Left 4 Dead.

It’s either that, or we’re looking at multi-Blu-ray titles hitting the Xbox One and PlayStation 4 before long. Time to start checking prices on hard drives.

virtual consolation

Nintendo is having a very odd month so far. I have to wonder if they woke up on the wrong side of the bed, or perhaps misread their horoscope. Maybe the Earth’s magnetic field is just out of alignment. I don’t know, I’m not a fortune teller.

So far this generation their modus operandi seems to be that they don’t quite know what they’re doing. They’re out of their element, trying to catch up with the times and not quite succeeding. So like any desperate geriatric, they seem to be doing anything they can think of to get the attention of the hip youngsters. Problem is, they’re not thinking of the right things.

The failure to market the Wii U properly is just now reaching the synapses of Nintendo’s upper management. To his credit, rather than shifting blame and making meaningless promises, Satoru Iwata imposed on himself a 50% pay cut, which will no doubt motivate him and his management to make greater strides in improving the appeal of their product.

At the top of the list right now is the Virtual Console. It’s an abysmal failure. What could have been a flood of titles has instead been a woefully thin trickle. Even when the service has been utilized, it’s been in the most backward way possible. Currently there are a total of 66 titles available on the Wii U Virtual Console. This pales in comparison to its predecessor, which had 186 titles available at this point in its life cycle. Nintendo seemed to notice this vacuum a couple of months ago, but so far has made very little progress, despite (or maybe because of) some lofty promises. Only now are they getting around to the release of A Link to the Past on Wii U, a full two months after its sequel hit 3DS. This would have been a perfect cross-promotion for both products and both platforms, but I guess it wasn’t too high on Nintendo’s list.

It’s understandable that licensing is a major issue in these matters (something that kept EarthBound in the vault for so long), but in the case of first-party titles, said licensing issues are far less significant, or even nonexistent. Across both Wii U and 3DS, there are a grand total of 11 Mario games. They could easily have launched with all the NES, SNES, Game Boy and GBC games, ready to blow the doors off. But they didn’t. Super Mario Bros. 3, one of the most highly acclaimed entries in the series, is still in the “TBA” category. That’s just sad.

It’s also understood that Nintendo is obsessively meticulous about the titles that are ported to the Virtual Console. They strive for perfection, and many titles go through a rigorous process to ensure there are as few issues as possible for the people who end up playing them. But a great many titles were successfully released on the Wii VC; that much is beyond debate. Wii software also runs on the Wii U, through its separate Wii mode. But to play older VC games on the Wii U, it is necessary to switch the console into that mode manually. If the games are already running in a shell of some sort, why is it not possible to have them switch over automatically, and switch back when you’re done? (This is a serious question; if there is a real reason, I would like to know it.)

More than this, it is not possible to use any Wii U hardware to control these games, meaning that to play SNES games one must resort to the somewhat awkward arrangement of connecting a controller to a controller. What makes this particularly ludicrous is that it’s now possible to play in Wii mode through the gamepad, using it as a primary screen. What does this mean? It means if you do this with a game that requires the classic controller, you must sit there staring at a 6″ screen, with a controller plugged into a Wiimote that does nothing but sit next to you. It’s ridiculous. If games are going to be playable on the gamepad screen, why not at the very least allow the use of the gamepad’s controls?

Of course, if this were actually done it would remove most of the incentive to buy Virtual Console games on the Wii U. And we all know where Nintendo cashes its checks.

Another market they seem to have forgotten about is the 3DS Virtual Console. A total of 131 games have been pushed to the platform thus far, but the catalog is long on quantity and short on variety. So far only Game Boy, Game Boy Color, NES, Game Gear and (for a select few) Game Boy Advance titles are available. Where are the SNES games? I would be happy just to be able to play Super Metroid, EarthBound, or Link to the Past on the go. For that matter, where is a cross-buy option? It’s entirely possible to buy a PS Vita, PS3 or PS4 game from any computer and have it installed while you’re out, yet one cannot buy a 3DS game through a Wii U, or vice versa–this despite the rather glaring fact that games from both platforms are visible in each eShop. I can see Wonderful 101 in my 3DS eShop, but I can’t look at its details or choose to buy it, let alone have it installed and ready when I get home. This alone would be an incredibly convenient feature. So of course Nintendo hasn’t done it.

Oddly, their most recent announcement has been that of DS games on the Virtual Console. This is an interesting move; I’m curious to know what the market is for playing DS games on a home console, but I’m willing to wager it isn’t as significant as the market for N64 and GameCube games would be. We’ll see how this year pans out for Iwata.

birth complications

Tomorrow, the anticipated PlayStation 4 will finally make its way off shelves and into waiting buyers’ hands (in America, at least). Players will head home, plug in their new consoles, and get ready to exchange intellectual debates via PlayStation Network. Next week, the event will repeat with the Xbox One.

Or, maybe not. Turns out there’s a snag with the two consoles’ launch processes. Just a minor snag.

The PlayStation 4 will require a day-one patch to be applied before the console can be used. Note the choice of words here–it’s not that it can’t go online, or make use of certain features like Remote Play, or share screenshots and videos (although those are also covered). It can’t be used. The day-one patch, which updates the operating system to version 1.5–implying that it encompasses a number of semimajor updates–must be applied to enable the Blu-ray drive. Repeat: The Blu-ray drive is not a functioning feature out of the box.

In the same vein, Microsoft has announced that the Xbox One will require a similar patch to be applied on launch day. They haven’t gone into detail, but according to senior director Albert Penello, without it the unit will be capable of literally “nothing”; the patch is “required for your Xbox One to function.” If what Penello said is accurate, the console will simply not work at all without this update.

No two ways about it, we live in the era of the day one patch. There is some justification for it–if release day is looming and features are still missing, it’s a perfectly valid tactic to keep working on the missing content and push it out on launch day. But this is getting ridiculous. Neither system functions at all without its update; the PS4 specifically mentions that its patch enables the Blu-ray drive, which is what the entire system is built around. It’s akin to selling a car without a transmission, and telling potential customers to have their dealer install one when the car is purchased.

Part of me wonders if this isn’t some twisted anti-piracy scheme. A measure like this effectively prevents anyone from making use of their system before launch date (although Sony has made the 1.5 patch available for download now, to be installed from a flash drive, saving the user the trouble of downloading it through the PS4). With a number of recent incidents of retailers breaking street dates on both hardware and software, I can see why Microsoft and Sony would be concerned about it. The practice of banning users who log on early doesn’t always go over well, and this provides a method to keep people off. But it’s still a load of shit.

Or, I don’t know. Maybe it’s related to other issues.