I don’t have a problem with a model where I pay more money and get more content. And I do think that there are certain things that can only really be done with live service, and that some people will really enjoy those things – I’m not saying that live service shouldn’t exist. But I generally prefer the DLC model to the live service model.
-
Live service games probably won’t be playable after some point. That sucks if you get invested in them… and live service games do aim at people who are really invested in playing them.
-
I have increasingly shifted away from multiplayer games over the years. Yeah, there are neat things you can do with multiplayer games. Humans make for a sophisticated alternative to AI. But they bring a lot of baggage. Humans mean griefing. Humans mean needing to have their own incentives taken care of – they want to win a certain percentage of the time, and aren’t just there to amuse other humans. Most real-time multiplayer games aren’t pausable, which is especially a pain for people with kids, who may need to deal with random kid-induced emergencies at unexpected times. Humans optimize to win in competitive games, and what they do to win might not be fun for other players. Humans may not want to stay in character (“xXxPussySlayer69xXx”), which isn’t fantastic for immersion – and even in roleplay-enforced environments, that places load on other players. Multiplayer games generally require always-on Internet connectivity, and service disruption – even an increase in latency, for real-time games – can be really irritating. Humans cheat, and in a multiplayer game, cheating can impact the experience of other players, so you either wind up dealing with cheating or with anti-cheat software that creates its own host of irritations (especially on Linux, as it’s often low-level and one of the major remaining sources of compatibility issues).
-
If there are server problems, you can’t play.
-
My one foray into live service games was Fallout 76; Fallout 5 wasn’t coming out any time soon, and Fallout 76 was the closest thing that was going to be an option. One major drawback for me was that the requirements of making grindable (i.e. inexpensive to develop relative to amount of playtime) multiplayer gameplay were also immersion-breaking – instead of running around in a world where I can lose myself, I’m being notified that some random player has initiated an event, which kind of breaks the suspension of disbelief. It also places constraints on the plot. In prior entries in the Fallout series, you could significantly change the world, and doing so was a signature of the series. In Fallout 76, you’ve got a shared world, so that’s pretty hard to do, other than in some limited, instanced ways. Not an issue for every type of game out there, but it was annoying for that game. Elite: Dangerous has a solo mode that’s still effectively online under the hood – again, the game design constraints from being multiplayer kind of limit my immersion.
Live service games do provide a way to do DRM – if part of the game that you need in order to play lives on the publisher’s servers, then absent reimplementing it, pirates can’t play it. And I get that that’s appealing for a publisher. But it just comes with a mess of disadvantages.
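As a rough illustration of the idea – this is a minimal sketch, not how any particular game does it, and the server URL, endpoint, and function names are all made up – the client can be built so that some essential rule, like damage resolution, only exists server-side:

```python
# Hypothetical sketch of server-side game logic as DRM: the damage
# formula never ships with the client, so a pirated copy has nothing
# to resolve combat with. All names here are invented for illustration.

import json
import urllib.request

SERVER = "https://game.example.com"  # hypothetical publisher server

def resolve_attack(attacker_id: str, target_id: str) -> dict:
    """Ask the publisher's server to resolve an attack.

    The client only knows how to *ask*; the actual rules live on the
    server, so without it (or a reimplementation), you can't play.
    """
    payload = json.dumps({"attacker": attacker_id,
                          "target": target_id}).encode()
    req = urllib.request.Request(
        f"{SERVER}/resolve-attack",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # no server, no game
        return json.load(resp)
```

Which is exactly why server problems, or the servers going away entirely, translate directly into “you can’t play.”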
So, I’ve seen this phenomenon discussed before, though I don’t think it was from the Crysis guys. They’ve got a legit point, but I don’t think that this article does a very clear job of describing the problem.
Basically, the problem is this: as a developer, you want your game to be able to take advantage of computing advances over the next N years in ways beyond just running faster. Okay, that’s legit, right? You want people to be able to jack up the draw distance, use higher-res textures further out, whatever. You’re trying to make life good for the players. You know what the game can do on current hardware, but you don’t want to restrict players to just that, so you let the sliders enable draw distances or shadow resolutions that current hardware can’t reasonably handle.
The problem is that the UI doesn’t typically indicate this in very helpful ways. What happens is that a lot of players who have just gotten themselves a fancy gaming machine, immediately upon getting a game, go to the settings and turn everything up to maximum so that they can take advantage of their new hardware. If the game doesn’t run smoothly at those settings, they complain that the game is badly written. “I got a top-of-the-line GeForce RTX 4090, and it still can’t run Game X at a reasonable framerate. Don’t the developers know how to do game development?”
To some extent, developers have tried to deal with this by using labels that sound unreasonable, like “Extreme” or “Insane” instead of “High”, to hint to players that they shouldn’t expect to just run at those settings on current hardware. I am not sure that they have succeeded.
I think that this is really a UI problem. That is, the idea should be to clearly communicate to the user that some settings are really intended for future computers. Maybe “Future computers”, or “Try this in the year 2028”, or something. I suppose that games could just hide some settings and push an update down the line that unlocks them, though I think that that’s a little obnoxious, and I would rather not have that happen on games that I buy – and if a game company goes under, those settings might never get unlocked. Maybe if games consistently had some kind of really reliable auto-profiling mechanism that could run various “stress test” scenes at a variety of settings to find reasonable settings for given hardware – something like the sketch below – players wouldn’t head straight for all-maximum settings. That requires that pretty much all games do a good job of implementing it, or I expect that players won’t trust the feature to take advantage of their hardware. And if mods enter the picture, then it’s hard for developers to create a reliable stress-test scene to render, since they don’t know what mods will do.
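Here’s a minimal sketch of what I mean by auto-profiling, assuming a hypothetical engine hook render_stress_scene(preset) that renders a fixed worst-case scene and reports the average frame time – the function name and preset tiers are made up for illustration:

```python
# Hypothetical auto-profiling pass: walk the preset ladder and keep
# the most expensive preset that still hits the target frame time.

TARGET_FRAME_MS = 1000.0 / 60.0  # aim for 60 fps

# Ordered cheapest to most expensive; the tiers past "Ultra" are the
# "future hardware" settings discussed above.
PRESETS = ["Low", "Medium", "High", "Ultra", "Extreme", "Future"]

def pick_preset(render_stress_scene) -> str:
    """render_stress_scene(preset) -> average frame time in ms."""
    best = PRESETS[0]
    for preset in PRESETS:
        if render_stress_scene(preset) <= TARGET_FRAME_MS:
            best = preset  # still fast enough; try the next tier up
        else:
            break  # presets are ordered by cost, so stop climbing
    return best
```

The point isn’t the specific heuristic – it’s that the defaults would come from measurement rather than from the player’s guess about what their hardware can handle.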
Console games tend to solve the problem by just taking the controls out of the player’s hands. The developers decide where the quality settings sit, since players have – mostly – one set of hardware, and players then don’t get to touch them. The issue is really on the PC, where the question is “should the player be permitted to push the levers past what current hardware can reasonably do?”