games for windows, unplugged

One of the defining elements of the success of the Xbox and Xbox 360 has been its online service, Xbox Live. At launch it was bare-bones and hardly more than functional, even by the standards of the time. Nevertheless, it ushered in the modern era of online console gaming, and is widely praised for its ease of use, robustness, and large user base. Naturally, Microsoft wanted to capitalize on this by carrying it over to Windows, and thus was born Games for Windows Live.

Unfortunately, the people in charge of GFWL apparently took the long list of XBL’s successes and felt motivated to contradict, if not outright invalidate, every one of them. The GFWL software was clunky, difficult to use, and often redundant. The only thing worse than the standalone executable was the in-game overlay, which would often fail to appear, or demand an update that essentially locked up the computer it was running on. To make things worse, the overlay and the standalone program ran on separate codebases; often each required its own update before games could be played online (or sometimes at all). But GFWL’s greatest magic trick is making save files disappear into thin air. I had this happen myself in Grand Theft Auto IV, when it lost a save with 36 hours’ progress.

At any rate, by the beginning of this year, the writing was on the wall, and publishers and developers were beginning to read it. Arkham Asylum and Arkham City have already dropped GFWL entirely, as has BioShock 2; Arkham Origins ditched it in mid-development. Capcom has just hopped on the trolley, announcing that it will begin removing the software from its games as well.

Now, it looks like the great experiment is over. With the decision to integrate Xbox Live into Windows 8, things already looked ominous. Two months ago Microsoft announced that the points system would be discontinued. Information leaked on an update page for Age of Empires Online stated that the GFWL service itself would be shut down by July 2014 (the page was quickly updated to omit this information). The next stage was to close the GFWL Marketplace, ending purchases of existing titles on the service and effectively putting it on life support until the coup de grâce can be administered sometime next year. Now it’s just a matter of time.

This is all something of a modern Shakespearean tragedy–or a comedy, I can’t decide. It’s both saddening and hilarious how Microsoft took what could have been the next revolution in online gaming and marched it into a boondoggle of Market Garden proportions, and now they’re beating a tactical withdrawal to the fortress of Xbox Live, hoping to reform and organize for another charge. But with Valve’s offensive rolling across the terrain like a Soviet tank division, and competing armies gathering on the fringes of the battlefield, I don’t foresee another major assault by Microsoft anytime soon.

But things change. With a new general, the tide may yet turn.

beta moderne

I feel it’s time for a PSA.

Prior to the current century, the software development cycle was well understood. A product went through a few major phases–primarily pre-alpha, alpha, beta, release candidate, and final release–during which bugs and other issues would be progressively weeded out and new features added. Alpha and beta releases were conducted purely under test conditions; interested parties would fill out applications with their system specs, and testers would be picked from among those applicants. The purpose was clear: those chosen were, in effect, software testers, and would provide detailed feedback on their experience, in particular anything that didn’t work as planned. The developers would take this information, make the appropriate changes to the code, push out patches, and await more feedback.

This understanding seems to have been lost at some point in the past several years.

These days, “betas” are more like previews. They are sold as sneak peeks at upcoming games, bundled with preorders (Battlefield 4 and Bad Company 2 did this) or with an entirely unrelated game (access to the Halo 3 multiplayer beta came with copies of Crackdown). It’s become marketing.

To me, it’s absurd. It’s like selling tickets to a feature composed entirely of a movie’s dailies, with no editing or post-processing applied. Would people have paid as much to see the raw footage of Lord of the Rings? I don’t think they would. Every product needs polish, and no one wants to buy an unfinished product (unless they plan on finishing it themselves).

It’s an unfinished game; that’s all there is to it. The people playing it are expected by developers and programmers to test it and report issues, but they are expected by publishers and retailers to simply buy it. Many of those getting into these betas do not go in with the mindset that the game is incomplete. Forums become jammed with complaints that the game fails to launch, or textures pop too much, or certain skills don’t function correctly. Proper bug reports aren’t filed, even when an in-game interface for reporting bugs is readily available and its use actively encouraged.

One memorable beta I took part in was the one for Wrath of the Lich King. What made it memorable was the constant complaining in-game. Chat didn’t go more than a few minutes without someone whining about a mob, encounter, or effect not triggering as advertised, or textures not loading properly, or something else not working as it should. Did these people file a bug report, or contact a mod? No, they just bitched in chat or on forums that were rarely (if ever) monitored by developers.

Times like this, I wish all betas were closed. But then, the primary advantage of an open beta is a much larger sample size, covering as many different hardware configurations as possible. It strikes me as something of a conundrum. Publishers aren’t abandoning the marketing of betas anytime soon, and as long as they sell them like products or previews, many of the players involved will fail to treat a beta for what it is–a goddamned beta.

power to the people

During this recent (and most impressive) string of PR failures on Microsoft’s part, all sorts of people came out of the woodwork on both sides of the argument. Among them was an engineer at Microsoft, who released a statement on Pastebin explaining and defending some of the company’s less popular policies. But Microsoft’s policies aren’t what bothered me (their failure to competently explain their position was rather annoying, though); what got to me was a particular phrase in the text:

Microsoft is trying to balance between consumer delight, and publisher wishes. If we cave to far in either direction you have a non-starting product. WiiU goes too far to consumer, you have no 3rd party support to shake a stick at.

To me, this statement is indicative of the ongoing conflict between publishers and consumers, not just in the gaming industry, but virtually any market. Until about a generation ago, the gaming industry seemed to be more consumer-friendly than others. I shouldn’t be surprised that didn’t stick.

The phrase “too far to consumer” in particular irks me. There should be no such thing as “too far to consumer”. The consumers have the money. They decide what they want to buy. A publisher cannot demand I do business on their terms. They are asking for my business. If they lay down demands and prerequisites, they don’t get my business.

Consumers need to stand up and let the publishers know this. They need to stop lying down and saying, “oh well, that’s how it is, may as well get used to it.” I am far from a principled person, and I have caved to some things, like Mass Effect 3, even though it meant having to put up with Origin. But other things, like Blu-ray, I refuse to embrace because of their oppressive DRM. But this notion that publishers need to wage war against consumers, and wrap it in a cloak of “cloud computing” and “all-digital platform” half-truths, is absurd. At least Steam was open and up-front about its strategy to become an exclusively digital distribution service. Rather than make the same claim, Microsoft attempted to say that they would somehow be superior to Steam while levying restrictions that were more burdensome, with few advantages.

On a wider scale, publishers and developers have adopted a “divide and conquer” strategy for their products over the past several years. This represents a more subtle, but equally effective, approach to belittling their consumer bases. Often it’s compounded with a pseudo-currency system designed to force people to buy in bulk and end up with leftover credits that can’t be spent anywhere else.

Publishers and distributors should be begging for customers’ business. They should be asking nicely for those crisp dollars, and if they want me to give them more dollars, they should be providing me with content that is worth those dollars. When a developer finishes work on a chunk of DLC they should be stopping and asking themselves, “is this worth the money we want them to pay for it?”

But the sad reality of economics is that things aren’t worth a value determined by the cost of the raw goods to produce them, or the man-hours spent, or the weight of the Brazilian royal family in 1871 divided by the rolling average of the number of car accidents in the developing world. They’re worth what people will pay for them. If people demonstrate they are willing to pay $90 for a used video game, publishers and retailers will charge them that. It’s up to the developers and publishers to put out content worth the price tag, but it’s up to consumers to stand up and say “no” when it’s not worth it.

wii woes, part 3

Nintendo has managed to correct many of its wrongs in recent years, but the list of things it has failed to do right is still considerable. With the Wii U, many old mistakes were learned from…and many new ones are in the process of being downplayed, if not ignored.

Part Tri Ultimate: Nintendo Network
Online gaming has been something of a mystery to Nintendo. In the 90s, a small but thriving community existed in the form of Satellaview. While the service’s user base never exceeded 120,000, it had a loyal core that helped keep it alive well into 2001, just 18 months before the debut of Xbox Live. In 1999, Nintendo launched RANDnet as a successor service to support the 64DD; unfortunately both failed.
Perhaps feeling burned by the winding down of Satellaview and the downfall of RANDnet, Nintendo refused to even consider the possibility of online gaming going into the sixth generation. While a broadband adapter was released for the Gamecube, only seven games supported it, and only four of those supported online play. The Gamecube’s online community–if it could be called that–scraped by for about six years before Nintendo delivered the coup de grâce in anticipation of the Wi-Fi Connection service.
But WFC was just another blundering stepping stone for Nintendo. The service wasn’t conceived until after the DS and Wii had reached the market, and the software was difficult to deploy to both platforms. Nintendo’s solution was to build it into the games themselves, which only created more problems. With no centralized profile to rely on, it was necessary to resort to friend codes.
Oh yes, friend codes. Their legacy is so damning and tainted I won’t even go into it here.

With this generation, Nintendo has made its first real attempt at creating an online service to compete with Microsoft and Sony. Behold, Nintendo Network. Finally, a service with a centralized profile, a messaging system, and the ability to join online games in a manner similar to that on competing platforms.

But it’s still not quite enough. The Network lacks a real method of mass interactivity; the Miiverse seems to want to emulate environments like Sony’s Home, but is really just a visual representation of a message board. The board itself lacks many features that have been long integrated into even the most basic forums. Direct responses are not an option; one can only respond to the main post in a thread, and hope that anyone else addressed will see the message. The one function that is both unique to NN and useful is the ability to post a screenshot of a game to the forums. This is actually something I would love to see in other services.

The system is also heavily fragmented; Nintendo leaves virtually every aspect of it up to the publishers of each game. While this is great for publisher freedom, it means the user has a very inconsistent experience. Some games may support parties, some may support voice chat. There are no cross-game parties or chats. These are things that need to change for this service to compete.

Even headset support itself leaves much to be desired. There is no Bluetooth support; only 3.5mm headsets will work, and even then coverage is spotty. Really, the only good choices are Nintendo’s first-party headset or one made by Turtle Beach specifically for the Wii U. Even then, headsets can only be used with the gamepad, as the Pro Controller lacks a 3.5mm port. This all adds up to a distinct impression of a colossal lack of planning. At the very least, a connector port on the Pro Controller would be greatly appreciated; Bluetooth headset support would be ideal, however unlikely.

From Nintendo’s point of view, the Network is a huge leap forward, bringing them closer to their competitors’ online gaming and social webs. From outside, though, it’s less significant. I would really call it a Planck step, personally. But it’s a step. Now if they can just take a few more…

wii woes, part 2

I feel it necessary to point out (constantly) that I am, and always have been, a Nintendo fan. Ever since I got my SNES over 12 years ago, I’ve always preferred their consoles. I also happen to be curiously conservative on the issue of consoles; I believe that gaming consoles should focus primarily on gaming, with other functions being secondary. While I love that my Wii U can play Netflix, I don’t care for systems that can access Twitter, Facebook or Internet Explorer during gameplay. That is the realm of desktop computers, and compromises the console’s ability to run gaming software.

That being said, I’m not blind to Nintendo’s mistakes…past or present. And the Wii U has issues. And I’m not done ranting.
Part Deux: The Gamepad
The gamepad is a great idea. It provides a second screen with which to display extra information. It can provide a sense of immersion, like serving as an inventory manager or as a Batcomputer. It can also allow unique involvement of players, such as in New Super Mario Bros U. It has even sparked a new movement, motivating both Sony and Microsoft to come up with their own second screens for their consoles.
It’s overplayed.
Its use is enforced in far too many circumstances. System settings, Miiverse, and the eShop all require it. It is actually impossible to navigate any of these subsystems without the gamepad. The worst part? It’s completely unnecessary. In the case of the system settings, the TV screen is wasted on a message telling the user to look at the gamepad. In the Miiverse and eShop, it’s entirely redundant–the content on the two displays is mirrored, and it’s possible to navigate using only the gamepad’s buttons, meaning these sections could just as easily be driven with a pro controller or wiimote. But it’s not an option.
These issues also persist in some games and third-party apps. The Netflix app requires the use of the gamepad, once again with near-complete redundancy. Nano Assault Neo can make use of the pro controller, but only for a second player–the first must use the gamepad, even though there are no integral functions assigned to it.
At the same time, its use isn’t standardized enough. One of its most popular features is Off TV Play. This moves the game’s main display to the gamepad, allowing a game to be enjoyed without the TV being set to the Wii U input, or even turned on at all. It’s a great feature. But it’s purely up to developers to implement. Often its implementation is unintuitive–switching to gamepad mode may require navigating through several layers of clunky menus. Other times it’s literally as simple as a button in the corner of the screen. But it’s really something Nintendo should have worked out on their own beforehand, and placed a button on the gamepad dedicated to its use.
On top of all of this is the ultimate issue…battery life. The gamepad can manage about 2-3 hours on a full charge, depending on use, because it comes equipped with a woefully undersized 1500mAh battery. Nyko sells a 4000mAh pack that fits inside the same compartment, and a larger unit that attaches to the back and doubles as a stand. But Nintendo should have seen this one coming. Even just watching Netflix, with the screen off, drains the battery in less than 3 hours. I would say there should be a way to actually turn the gamepad off, but certain apps require its use, so it would be a moot point. But that just brings me back to my earlier rant, thus completing the circle of bitching.
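The back-of-the-envelope math is simple (a sketch, assuming average current draw stays roughly constant, so runtime scales linearly with capacity):

```python
# Rough gamepad battery math: assumes average current draw is constant,
# so runtime scales linearly with capacity. Real draw varies with use.
STOCK_MAH = 1500     # Nintendo's stock battery
STOCK_HOURS = 2.5    # middle of the observed 2-3 hour range

draw_ma = STOCK_MAH / STOCK_HOURS          # implied average draw: ~600 mA
for capacity in (1500, 4000):              # stock vs. Nyko's drop-in pack
    print(f"{capacity} mAh -> ~{capacity / draw_ma:.1f} hours")
# 1500 mAh -> ~2.5 hours
# 4000 mAh -> ~6.7 hours
```

By that estimate, Nyko’s drop-in pack alone should push the gamepad into respectable territory.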
Nintendo initially announced support for only a single gamepad per base station, but later stated that two was a possibility. Games supporting this have yet to be seen, but the point is moot, because gamepads still cannot be purchased separately. There are also technical issues with the concept, chief among them frame rates. The gamepad runs at a maximum of 60 frames per second, with each of those frames delivered from the base station to the screen. Two gamepads would mean halving this to 30 at most, and often lower depending on how busy the screens are. This is all a consequence of the fact that the gamepad is literally just a wireless screen. It’s not an independent piece of hardware. But you know what is? The 3DS.
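To spell out that frame-budget arithmetic (a sketch, assuming the wireless link carries a fixed total budget split evenly between pads):

```python
# Sketch: the base station streams a fixed frame budget over one
# wireless link; each added gamepad divides that budget.
LINK_FPS = 60  # maximum frames per second the link can deliver

for pads in (1, 2):
    print(f"{pads} gamepad(s): up to {LINK_FPS // pads} fps each")
# 1 gamepad(s): up to 60 fps each
# 2 gamepad(s): up to 30 fps each
```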

wii woes, part 1

Let’s be honest. The Wii U is not doing well. There are a lot of reasons for this. Some are Nintendo’s fault, some aren’t. More importantly, some of these reasons can be compensated for. Some can’t.

Perhaps Nintendo’s single biggest error with the Wii U has been in marketing. The name “Wii U” was a terrible choice. It carries the implication that the product is either an add-on to, or an upgraded version of, the Wii. Many people are still under the impression that it is nothing more than a tablet that works with the Wii. The direct result is that many people don’t feel inclined to buy it. Nintendo hasn’t done enough to differentiate the new from the old.

While the name can’t be changed (at least, not without causing even more confusion), Nintendo can always retool their marketing, and make customers more aware that this is a new product. Meanwhile, there are far bigger issues that need to be confronted by Iwata and company.

Part The First: Third Party Support

This is where Nintendo has traditionally trailed far behind its competitors. Ever since the Nintendo 64, they have struggled to maintain connections with other publishers and developers while Microsoft, Sony and others shovel dozens of games with long-running consumer bases onto their consoles.

At this point the Wii U is stuck in a vicious feedback loop. Currently, the Wii U version of Black Ops 2 draws roughly 2000-4000 online players on a daily basis; Xbox Live tallies about 200,000 on an average day. As a result, Activision feels little incentive to provide further support, including releasing DLC on the system, and with no DLC on offer, no DLC can be sold. So far none of the Black Ops 2 DLC has been released on Wii U.

In a similar boat, the Wii U release of Injustice has received significant content support, but still little in comparison to its brethren, and what DLC has come to the platform has arrived considerably late. On top of this, the Wii U version lacks a very particular feature: the ability to play with friends online. The only available option is to play against random opponents (or not-so-random, in the case of ladder games). One cannot simply pick a friend off a list and play them. That can only be done in local multiplayer.

In September 2012, the Mass Effect trilogy was released as a bundle for Xbox 360 and Playstation 3. While it wasn’t much more than a convenient package for 360 customers, it allowed PS3 owners to experience the first game for the first time. The trilogy was not released on Wii U, and there are currently no plans to do so. A reworked version of Mass Effect 3 was released for the system, making use of the gamepad. It received good reviews; however, it only includes DLC that was already on the market beforehand, and EA does not plan on releasing any of the paid content that came afterward.

Speaking of local multiplayer, some games omit online play entirely, even when it seems like an obvious inclusion. Tank! Tank! Tank! is one of these games. Despite having broad appeal and a variety of game modes, the best that can be done is four-player local. At this point, the upcoming Arkham Origins is not planned to have any multiplayer at all on the Wii U. While I’m not particularly interested in multiplayer with regard to the Arkham games, no doubt it comes as a slap in the face to the millions of Wii U owners who plan (or were planning) to buy the game on that system.

While the user base is lacking compared to competing platforms, the fact remains that a product never placed on the shelf can never be sold. A publisher certainly isn’t going to build consumer confidence when its customers feel punished for buying its product. The community has been practically begging publishers to release DLC, with responses that can be generously described as indifferent and ambiguous. Then those same publishers turn around and state that upcoming games will not have comparable feature sets because of the lack of sales, seemingly baffled as to the cause.

Someone needs to break this cycle. While the Wii U and Nintendo Network aren’t what everyone wanted, on the whole it’s been a step forward for them. Nintendo finally has a system and a network that can sustain the functionality its predecessors long lacked. It’s time for the publishers to take the risk. Put the content out, and people will buy it. They’ve been begging for the privilege to do just that.

I for one will likely be buying the upcoming Call of Duty: Ghosts on Wii U. The PC version will likely receive more content, have a far larger player base, and let me do things like listen to music while playing–not to mention the natural advantages of shooters on PC–but my desire to see this system move forward trumps all of that. If the publisher is going to take the risk of putting content on the system, I as the consumer will take the risk of buying that content, hoping they will see that it’s worth their time to invest further. Ultimately, the antidote to these vicious cycles comes down to hope and trust.

xboned

Just a few hours ago a momentous decision was announced by Microsoft: The online DRM scheme previously announced for the Xbox One at E3 will be dropped. No more 24-hour check-ins, no more locked-down used games, no more region locking. It was a huge victory for the consumers who desperately want to throw their money at Microsoft for what may be the most competitive holiday season yet in the gaming industry.

It was a decision that shouldn’t have been forced. It shouldn’t have been a question at all.

The Xbox One didn’t exactly get off on the right foot. From the start, its various DRM schemes were oppressive, bordering on draconian. Even the concept of lending games was in danger of extinction: games bought on physical media would be tied to someone’s account, with the possibility of only a single transfer, ever. It’s a ludicrous restriction–the two major incentives left for buying physical media are avoiding huge downloads and avoiding exactly that sort of DRM. There would have been no reason to buy discs on Xbox One: the disc would simply register to your Xbox Live account and install itself onto the hard drive, rendering it 100% redundant from that moment on.

It’s part of a whole campaign publishers are waging; a war against consumers. Publishers want to control what end users do with their products, and every single aspect they can possibly control, they are at least exploring. On PC, the obvious option is to launch their own marketplaces and sell directly to customers, bypassing competitors and retail fronts. Things are a bit more complicated on console, where they are still obligated to go through at least one company–be it Microsoft, Sony, or Nintendo–to push their product to market. EA is already attempting to carve out its own channel on these platforms; rumor has it their cuts to Wii U releases are the result of Nintendo refusing to allow Origin on the system.

With the Xbox One, it seemed they had found their ultimate answer: make the producer of the system bow to their wishes. According to at least one inside source, Microsoft had two long-term goals with the XB1: transition to an all-digital-download platform, and tip the balance toward publishers. Microsoft seemed more concerned with increasing the freedom of the publishers than with maintaining the freedom of the consumers. As if the lords needed help limiting the freedoms of their dirty peasants, who don’t know well enough to enjoy them.

With their reversal, things look a bit brighter. Honestly, the only way Microsoft could dig themselves out of the grave dug for them by their competitors at E3 was to make exactly this kind of complete reversal. There shouldn’t be any such thing as “too far to consumer”; the consumers should have the right to decide what to do, and the publishers should be the ones asking for their business, not demanding that it be done only on their terms.

Another fact made apparent by this whole ordeal: Microsoft’s PR team is horrible at its job. Statements ranging from Don Mattrick’s “we have a product for people who can’t go online, it’s called Xbox 360” to “are you on the development team? No?” smack of people who are, at best, socially inept and, at worst, professional narcissists. Time after time they effectively told the world “you’re getting screwed, learn to love it”, and acted shocked when people reacted negatively. Another tidbit of their PR logic relates to the lockdowns on used copies. Do I want to get screwed by GameStop and get $5 for the game I just paid $50 for, knowing they will turn around and sell it for $30? No. But what will I get for a used game that can’t be transferred to another account? Nothing. How is this better?

Major Nelson has also demonstrated he can’t function as a mouthpiece–his technique for answering questions is on par with a high-level politician’s. When pressured for actual, direct answers, he doesn’t react well. (He later defended his behavior in the AJ interview by claiming he was being “screamed at”.) He continues to draw false comparisons between Xbox and Steam (“can you give [a friend] a game on Steam?”), and can’t seem to break from the script even when he knows he needs to.

The real shot to the family jewels came when Reggie Fils-Aimé heard that the entire operation had been devised as a way to combat the used game market. His knee-jerk reaction was to simply state, “make better games”.

Maybe Microsoft just needs a new PR department.

vision problems

Another milestone in computer technology has been reached. Dropped upon the human race like a monolith of Kubrickian proportions, the new Macbook Pro was released by Apple a month ago to the usual pomp and circumstance. The new model is incredibly slim at less than 2cm in thickness and, more importantly, features a breathtaking Retina display: an LED-backlit panel clocking in at an astonishing five megapixels.

One question that arises is whether this incredible resolution is actually necessary. Recently there have been calls in some (small) communities for higher-resolution displays, but are there actual, verifiable advantages? Another question is that of the hardware’s ability to support it. The Macbook Pro’s screen has more than double the pixel count of a 1080p display, and the more pixels that need to be driven, the greater the load on the hardware driving them. The laptop features a GeForce GT 650M, which is more than enough to handle this sort of work, but this chip is only active when necessary; under less demanding circumstances the machine relies on an Intel HD 4000, which, while perfectly fine for multimedia use, isn’t well known for driving the equivalent of two and a half 1080p displays. That is a lot of pixels for one chip.
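The arithmetic checks out (a quick sanity check, using 2880×1800, the published panel resolution of the 15″ Retina model):

```python
# Pixel-count comparison: Retina MacBook Pro panel vs. a 1080p display.
retina = 2880 * 1800   # 5,184,000 px (~5.2 MP)
full_hd = 1920 * 1080  # 2,073,600 px (~2.1 MP)

print(f"Retina: {retina:,} px, 1080p: {full_hd:,} px")
print(f"Ratio: {retina / full_hd:.2f}x")  # 2.50 -> "two and a half 1080p displays"
```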

Another feature of note is the new Macbook’s incredible thinness. At less than 2cm, the laptop is slimmer than most keyboard keys are tall. This feat of engineering was achieved primarily by soldering everything to the motherboard. Everything. Even the memory is no longer user-replaceable, and while the SSD is not, in fact, soldered, it does save space by connecting through a proprietary daughterboard. The casing requires the typical pentalobe screwdriver to open, and the usual level of destruction to actually access the innards. All this has resulted in iFixit declaring it the least repairable laptop it has ever taken apart.

There is some defense for this new design methodology (emphasis on some). At such extremes of thinness, the limiting factor becomes the connectors commonly used for memory and other peripherals, so soldering these components saves valuable millimeters of clearance. But therein lies the rub–is it really necessary to make a full-size laptop this thin? The approach makes more sense with smaller, lighter models (see: Macbook Air), but this is a general-use machine; portability is low on the list of priorities. Previous versions of the Pro model came in at 2.5cm thick, which is still respectable–my personal laptop is 2.46cm, and I find it more than satisfactory. Moreover, the Macbook Pro’s battery has been increased to 95Wh (and is, of course, not replaceable by any means available to the consumer) and gets a healthy seven hours per charge. By contrast, the Asus UL30 has an 84Wh battery that can easily hit the ten-hour mark; this I can attest to personally. While the MBP still gets admirable battery life considering the hardware, it is clearly considerably less efficient than other designs.
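The efficiency gap is easy to put a number on (a rough average-draw comparison, taking both quoted capacities and runtimes at face value):

```python
# Average power draw = battery capacity (Wh) / runtime (hours).
laptops = {
    "Retina Macbook Pro": (95, 7),   # 95 Wh, ~7 hours
    "Asus UL30":          (84, 10),  # 84 Wh, ~10 hours
}
for name, (wh, hours) in laptops.items():
    print(f"{name}: ~{wh / hours:.1f} W average draw")
# Retina Macbook Pro: ~13.6 W
# Asus UL30: ~8.4 W  (the MBP draws roughly 60% more power)
```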

At the same time as the announcement, the 17″ Macbook Pro was taken out of production by Apple.  About a year ago, the baseline Macbook was similarly (though more quietly) retired, though it was still available for institutional purchases until February of this year.  With the removal of the 17″ model, there are now only three laptops in total available from Apple: the 11.6″ Macbook Air, the 13″ Macbook Air, and the 15″ Macbook Pro.  I personally won’t be surprised if one of the two Air models is eliminated, or if the two model lines are merged into one, within a year or so.

Consumers now have to choose between a subnotebook with an SSD and soldered memory that is 1.7cm thick and features an aluminum body, or a notebook with an SSD and soldered memory that is 1.8cm thick and features an aluminum body. As usual, Apple is dictating to its customers what they may buy–and buy they will, by the caseload. No matter that owners might eventually want to upgrade their memory, replace the hard drive, install a larger battery, or simply might not want an aluminum body. This is what Apple is selling you, and you will buy it.

What’s even worse is the way this will affect the secondary market. When I say “affect,” it would probably be more accurate to say “destroy,” because that’s what is going to happen. Where in the past at least something could be salvaged from a dead laptop and used for future repairs, this won’t happen with the new MBP. Getting the casing open alone is a pain, and the only part that could possibly be salvaged with any amount of ease is the drive. There is no incentive to keep a secondary market going; no one will be motivated to take the time and effort to actually get these parts out of a dead MBP, let alone solder them into a new one. This is already happening with the iPad–the device is so difficult to repair that, from most reports I hear, users with a problem are simply handed a new one at the Apple Store, and chances are the old one goes right in the trash.

In regards to Apple itself, I have no cares. Apple will do what it does, and if its customers are being inconvenienced, that is their business. But in the last few years Apple has set a dangerous precedent. Many other tech companies now look up to the guys in Cupertino. When Apple does something, it (almost) invariably succeeds, and when it succeeds, other companies see that success and want to replicate it.

I for one enjoy having a great array of choices when I go shopping for a new laptop. And I will not be pleased if, in the future, every manufacturer is attempting to force me to buy an ultralight with soldered components and no choice of hardware.

iRantalot

Along with about half the developed world, I watched the iPad 3 announcement with keen interest.  I’m not an Apple fan, but neither am I one to simply dismiss a new product without knowing anything about it.  Nevertheless, after watching video and a live blog feed, I felt the same way I felt after the iPad 2’s debut: disappointed.

I feel it necessary at this point to note that I do not simply hate Apple for the sake of hating Apple (or hipsters, turtlenecks, or those newfangled MP3 players, for that matter). Shortly after the first time I used a PowerBook G4 running OS X, I was very nearly converted wholesale to the brand. I’ve spent most of the past five years doing my damnedest to defend Apple in every way. And I have to be honest: there is no defense left. There is no rationalization to justify their business practices, either in the marketing realm or in the way they treat app developers. That being said, I still go into every new product launch with some hope that they will redeem themselves. And so far, every new product launch of the past three years has left me jaded and frustrated. The same pattern has now played out twice with the iPad. The iPad 3, or the “new iPad” as Apple is simply calling it, has followed its predecessor with glam, flash and pizzazz in spades. But when it comes to the actual meat and potatoes of the whole affair, this is a rather slim meal.

The new iPad is light on new features. Though Tim Cook and company were able to pack a 90-minute presentation with content and show dozens of new things available on this iPad, much of it was fluff, not unlike a 10-page essay written by a high school sophomore that contains only about 2 pages of actual content. Most of the excitement is focused on the tablet’s screen, a so-called Retina Display. While the iPhone’s true Retina Display clocks in at 326 pixels per inch, the iPad’s is only 264, about 80% that of its little brother (a true Retina-armed iPad would need roughly a 2500×1900 display, a very expensive proposition considering the vehicle). But Cook is more than happy to say this is “good enough to call it a Retina Display.” This tells me the Retina name is becoming little more than branding, used to create hype and agitation amongst its legions of sworn defenders. Moreover, the actual merits of the Retina Display are at best debatable, as the visual acuity of a human eye can still resolve pixels at the intended distance. But Apple expertly used this, and a few other features, to make the whole thing seem much more than it really was.
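The density math is easy to verify (a quick check, using the iPad’s published 2048×1536 resolution and 9.7″ screen diagonal):

```python
import math

# Pixel density (ppi) = diagonal resolution in pixels / diagonal size in inches.
def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    return math.hypot(width_px, height_px) / diagonal_in

ipad_3 = ppi(2048, 1536, 9.7)
print(f"iPad 3: {ipad_3:.0f} ppi")          # ~264 ppi
print(f"vs. iPhone 4: {ipad_3 / 326:.0%}")  # ~81% of the iPhone's 326 ppi
```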

Almost every other new aspect of the iPad is new only to Apple. The quad-core graphics have been in use by competitors for over a year now. The 1080p camera, while always an enjoyable feature, has also been done before, although to their credit the software supporting it is nothing to scoff at, in particular the image stabilization. The camera, however, lacks an LED flash, essential for taking pictures when you don’t have that convenient star on your side of the Earth. 4G is also a worthy advancement, although whether it pays off has yet to be seen, as Apple’s history with antennae is questionable. Cook also claimed that the new 4G model maintains the same battery life with only a minute increase in thickness and weight–another assertion that will need to stand the test of the real world in a month or two.

And, once again, the iPad still lacks two things the entire remaining computer world doesn’t: SD card support and USB support. This one still baffles me. No tablet should be without either of these. Memory cards make the transfer of large data files simple (and, in some cases, more secure) and allow for expansion of existing storage. Without this, iPad users are stuck with whatever the iPad comes with–64GB may seem like a lot for a tablet now (assuming you can afford it), but if you plan on watching lots of movies you will quickly find yourself wanting more. And that’s when Apple will probably release a 128GB version, no doubt at an exorbitant premium.

Phil Schiller was eager to point out that Android apps on a tablet “[look] like smartphone apps blown up.”  Lots of wasted space and too-large text were displayed on the gigantic screen behind him as he picked apart the Android designs.  Notably, he cherry-picked only a couple apps to compare, and did not run any comparisons against Windows Phone or Blackberry apps.  Badly designed iOS apps do exist, however–not even Apple’s perfect, flawless operating system is immune to the mistakes of developers.  This is clearly another shot at Android, the OS that “ripped off Apple,” even after Apple ripped off Android.

Possibly the most absurd part of the presentation related to gaming. Opening the segment, Cook stated that in internal polls the iPad 2 was the preferred gaming device of the majority of households, even being “preferred over home consoles.” To put it plainly, this is laughable. Tablet gaming has its uses, but it has just as many limitations, namely that it is difficult to come up with a comfortable control scheme without the user’s hands blocking significant parts of the screen–and then there is the prospect of holding a pound-and-a-half object aloft while attempting to tap away with reasonable accuracy for any considerable amount of time. No household with real gamers would consider the iPad superior to the PS3, Wii, or even the 3DS. Epic Games’ Mike Capps appeared onstage and demoed a couple of games for the new iPad, throwing his unconditional support behind it and even going so far as to tell the audience that it is a superior platform to the Xbox 360. Considering Apple’s history of stabbing developers in the back, it would be a supremely satisfying irony if this fate were to befall Epic.

Once again, Apple has turned a lot of nothing into something revolutionary. No doubt millions will mob Apple stores throughout the world to get their own iPad 3, stepping over their own mothers and taking out loans against their twice-mortgaged homes just to own one. What concerns me most is the thought that this model might catch on. While many products are already being called iPad clones, and many claims are thrown about over who is copying Apple’s design methods, the fact remains that almost all of these competing products have more features and are capable of more functions than any iPad on the market. When other manufacturers start realizing that they can add marginal improvements and bill them as revolutionary progress on par with the invention of the transistor…I shudder to think where the market will go after that.