Way back in the day, every game had its logic tied to its framerate, as anyone who has ever tried to run an eighties PC game on a nineties PC, only to watch it run at 20x speed and become completely unplayable, can tell you.

But in the modern day this is less common. Generally the game keeps chugging along at the same pace, no matter how fast or slow the frames are being presented (unless, of course, everything is bogged down so hard that even the game logic is struggling).

And yet, you’ll still find a few. Any Dark Souls fan who played back when the Prepare to Die Edition first came to PC will remember how unlocking the framerate could cause collision bugs and send you into the void. And I recently watched a video of a gent who massively overclocked a Nintendo Switch OLED and got Tears of the Kingdom to run at 60 FPS… except everything was, indeed, running in fast-forward rather than just being smoother.

This makes me wonder: is there some unseen advantage to keeping game logic and framerate tied together? Perhaps something that only really shows on weaker hardware? Or is it just devs going “well, the hardware we’re targeting won’t really go over this speed, and we don’t really give a fuck about anything else” and not bothering to decouple them?

  • neidu2@feddit.nl

    Easier to code. You don’t need to worry about render interval and tick interval, as they’re both part of the game loop, resulting in a lot less code.

    Once computers became too fast for this to be a practical approach, tick intervals became more common. That way, the game would run at the same speed on any computer that was fast enough.

    After a while, as graphics generally became more complex, it became apparent that the game logic was rarely what was taking up most of the calculation. So, to avoid graphics slowing down the entire game, render intervals became a thing. Basically, this allowed the game to run at normal speed and then just let the graphics update as fast as they could if the renderer couldn’t keep up.
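
    For the curious, here’s a minimal sketch of that decoupled loop in C++ (all names and numbers invented for illustration): logic advances in fixed ticks, and rendering simply runs as often as the loop spins.

    ```cpp
    #include <chrono>
    #include <cstdio>

    static int ticks = 0;
    static void update() { ++ticks; }                          // one fixed logic tick
    static void render() { std::printf("tick %d\n", ticks); }  // draw the current state

    int main()
    {
        using clock = std::chrono::steady_clock;
        constexpr std::chrono::duration<double> tick{1.0 / 60.0}; // tick interval: 60/s
        auto previous = clock::now();
        std::chrono::duration<double> lag{0.0};

        while (ticks < 180) { // ~3 seconds, just so the demo terminates
            auto now = clock::now();
            lag += now - previous;
            previous = now;

            // Run as many fixed ticks as real time demands; if rendering
            // is slow, logic catches up here instead of slowing the game.
            while (lag >= tick) {
                update();
                lag -= tick;
            }
            render(); // the render interval is however fast this loop spins
        }
    }
    ```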

    • Ephera@lemmy.ml

      To make this maybe a bit more concrete: without decoupling them, your game logic will look basically like this (sketched in code after the list):

      1. Update all the values (e.g. if something has a velocity, then move it forward).
      2. Read the values to render graphics into the framebuffer, from where your screen can grab it.
      3. Repeat from step 1.
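
      A minimal runnable sketch of that coupled loop, with stand-in functions (everything here is invented for illustration):

      ```cpp
      #include <cstdio>

      static int frame = 0;
      static bool gameIsRunning() { return frame < 3; }               // demo cutoff
      static void update() { ++frame; /* move things by velocity */ } // step 1
      static void render() { std::printf("frame %d\n", frame); }      // step 2

      int main()
      {
          while (gameIsRunning()) { // step 3: repeat from step 1
              update();
              render();
          }
      }
      ```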

      For these things to be decoupled, you need them to run at the same time, which means you suddenly have to deal with multiple threads, inconsistencies in the state, and whatnot.
      If you’re using a proper game engine, these things are typically largely solved for you, but especially in the 80s, you wouldn’t have game engines that you could just use.

    • kinsnik@lemmy.world

      Also, if a game relies on a complex physics engine, the physics simulation has to run at a stable interval, which means it has to be decoupled from the rendering (which is not stable and depends on what you are drawing).

  • Björn Tantau@swg-empire.de

    In Leisure Suit Larry this was deliberate. There was a puzzle where you needed to get Larry strong enough to open a grate or something. You could do that by doing a certain number of repetitions on a weight lifting thingy in a gym. The number of repetitions depended on the speed of your CPU. Of course the time for the animation per repetition stayed the same.

    Al Lowe wanted to punish those posh people with their fancy 386 machines. Funny joke back then. Nowadays it’s practically impossible. I wonder what they did with this puzzle in ScummVM.

  • HatchetHaro@lemmy.blahaj.zone

    Games are still very much all functionally tied to framerates; it’s all just a matter of making that logic function the same across different hardware and framerates, generally calculated from frame times (we call it “delta time”).

    Freya Holmér has a fantastic mathematical breakdown of one pretty common (and deceptively complicated) framerate-independent function: lerp smoothing.
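
    For instance, the framerate-dependent trap she covers is smoothing by a constant factor every frame, e.g. value = lerp(value, target, 0.1): at higher framerates the value converges faster. A minimal sketch of the exponential-decay fix (the names and decay constant here are invented):

    ```cpp
    #include <cmath>

    // Framerate-independent smoothing: converting the smoothing strength
    // into a per-second decay rate makes the result (nearly) identical
    // whether you run at 30 FPS or 300 FPS.
    float smoothTowards(float current, float target, float decay, float dt)
    {
        float t = 1.0f - std::exp(-decay * dt); // blend factor scaled by frame time
        return current + (target - current) * t;
    }

    // e.g. cameraX = smoothTowards(cameraX, playerX, 8.0f, deltaTime);
    ```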

  • calcopiritus@lemmy.world

    If it is tied to frame rate, then a set of inputs results in a predictable set of outputs.

    If not tied to frame rate, those same inputs have to be reproduced with the exact same time delay, which is almost impossible to do.

    Sure, sub-millisecond time differences might not always lead to a different output. But they might.

    Now, when is this determinism useful?

    TAS (tool-assisted speedrun). Without determinism, you can’t tell the game “on frame 83740, press the A button”. With it, a given list of inputs with their exact frames will always lead to the same speedrun.

    Testing. You can use methods just like TAS to test your game.

    Reproducing bugs. If you record the game state and inputs of a player before the game crashes, you can reproduce the bug, which means that it will be a lot easier to find the cause and fix it.

    Replays. Games like LoL, StarCraft, and Clash of Clans have a way to see replays of gameplay moments. If you saved a video for each one of those, the storage costs would be prohibitive. What they do instead is record every single action and save that, and when replaying, they run a simulation of the game with those recorded inputs. If the replaying is not deterministic, bugs may appear in the replay. For example, if an attack that missed by one pixel in the game was input a millisecond earlier in the replay, it may hit instead, so it would not be a faithful replay. This is also why you can’t just “jump to minute 12 of the replay”; you can only run the simulation really fast until you get to minute 12.
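
    In code, that replay scheme is just re-running the simulation with the recorded inputs; a minimal sketch (the event format and functions are invented for illustration):

    ```cpp
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct InputEvent {
        uint64_t tick;   // which logic tick the input happened on
        int      button; // made-up input encoding for this sketch
    };

    struct GameState {
        void applyInput(int button) { /* ... */ } // feed one input to the logic
        void update()               { /* ... */ } // one deterministic tick
    };

    // Re-running the same ticks with the same inputs reproduces the whole
    // session; no video needed. "Jump to minute 12" = simulate all ticks
    // up to minute 12 as fast as the CPU allows.
    GameState replay(const std::vector<InputEvent>& events, uint64_t totalTicks)
    {
        GameState state{};
        std::size_t next = 0;
        for (uint64_t tick = 0; tick < totalTicks; ++tick) {
            while (next < events.size() && events[next].tick == tick)
                state.applyInput(events[next++].button);
            state.update();
        }
        return state;
    }
    ```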

    I’m not a game developer so I don’t know if it is used for testing or reproducing bugs or replays. But I know it is used in TAS.

    Of course, for this to be possible you also need your RNG function to be deterministic (in TAS). In the other scenarios, you can just record what results the RNG gave and reproduce them.
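
    A deterministic RNG usually just means a seeded pseudo-random generator; a tiny sketch (seed and usage invented for illustration):

    ```cpp
    #include <random>

    // The same seed always produces the same sequence, so a TAS or replay
    // only needs to store the seed, not every individual roll.
    std::mt19937 rng(12345);                       // fixed seed for the sketch
    std::uniform_int_distribution<int> d20(1, 20); // e.g. a damage roll

    int roll() { return d20(rng); } // identical sequence on every run
    ```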

    • Jadey@feddit.nl

      You can still absolutely have deterministic inputs without tying them to the frame rendering. Just have a separate thread for the game logic that runs at regular intervals.

      Just look at Minecraft. 20 game ticks per second during which all the game logic runs. But the game absolutely ain’t limited to 20fps.
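
      A minimal sketch of such a tick thread (names and demo timing are invented; a real engine would also synchronize state between the logic and render threads):

      ```cpp
      #include <atomic>
      #include <chrono>
      #include <thread>

      std::atomic<bool> running{true};

      void gameTick() { /* one deterministic tick of game logic */ }

      void tickLoop()
      {
          using clock = std::chrono::steady_clock;
          constexpr auto interval = std::chrono::milliseconds(50); // 20 TPS
          auto next = clock::now();
          while (running) {
              gameTick();
              next += interval;
              std::this_thread::sleep_until(next); // keeps the long-term rate stable
          }
      }

      int main()
      {
          std::thread logic(tickLoop);
          // ... the render loop would run here, as fast as it likes ...
          std::this_thread::sleep_for(std::chrono::seconds(1));
          running = false;
          logic.join();
      }
      ```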

      • calcopiritus@lemmy.world

        Yeah, you can still have a “tick rate” that is different from the rendering rate. Factorio also does this.

        I don’t think OP is asking about that case though.

  • Match!!@pawb.social

    The developers are considering the ramifications of slowdown and are not concerned about speed-ups. If the game slowed down and the game logic continued at normal speed, the player would get fewer frames to respond in: a spider is charging at you and will hit you in 5 realtime seconds, but at 1 frame per second you only get 5 frames to figure it out.

  • I Cast Fist@programming.dev

    One interesting case is Command & Conquer: Generals. You could set the FPS limit on skirmish matches, or leave it uncapped. On current hardware, that plays “a medium AI will swarm you in under a minute” fast.

  • BlackLaZoR@kbin.run

    “Is there some unseen advantage to keeping game logic and framerate tied together?”

    Simplicity. AFAIK all first-gen Game Boy games were framerate-tied: overclocking the CPU causes all games to run faster, but you could squeeze a lot more into a very limited device.

  • doggle@lemmy.dbzer0.com

    The game “logic” you’re talking about is tied to the frame rate by default. Or perhaps it is more accurate to say that a game’s frame rate is a function of available compute power and the complexity of the logic that must be solved every update (every frame).

    Accounting for this usually isn’t difficult, but developers are only human, and it’s easy to make mistakes when your game is being developed for very few hardware variants. The original Dark Souls was PS3/Xbox 360 only. Since the game performed the same on all its target systems, the oversight wasn’t obvious until PTDE released for PC much later, which was apparently ported by a skeleton crew with very little time and budget.

    This doesn’t happen as much these days because developers are much more aware of PC (and its limitless hardware variations) during development. Also, there are more console variations to account for, e.g. the Xbox Series S vs. X. Devs just can’t get away with being lazy about framerate variations anymore, as it will become a problem very early in quality assurance and may even cause compliance violations, which can bar your game from launching on consoles.

  • pe1uca@lemmy.pe1uca.dev

    Well, the thing is, if the physics or render steps (not necessarily the logic) don’t advance, there’s no change in the world or in the screen buffer for the PC to show you. That’s what those frame counters are showing: not how many frames the screen displays, but how many frames the game can output after it finishes its calculations. You can also have a game running at 200 frames per second while your screen shows 60.

    So, when someone unlocks the framerate, they probably just increased the physics steps per second, which has the unintended consequences you described, because the forces are not adjusted for how many times they’re now being applied.

    And a bit of yes: if you know your target is 30 FPS and you don’t plan on increasing it, that simplifies the development of the physics engine a lot, since you don’t have to test at different speeds and you avoid the extra calculations to adjust the forces.
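
    To make the force-adjustment point concrete, a small sketch (numbers invented): a force applied as a fixed amount per physics step doubles in effect if the step rate doubles, while scaling it by the step duration keeps behaviour consistent.

    ```cpp
    struct Body { float velocity = 0.0f; };

    // Framerate-dependent: at 60 steps/s this adds 30 units/s of velocity
    // each second; at 120 steps/s it adds 60. Same code, different physics.
    void applyThrustPerStep(Body& b)    { b.velocity += 0.5f; }

    // Framerate-independent: 30 units/s^2 no matter how often it runs.
    void applyThrust(Body& b, float dt) { b.velocity += 30.0f * dt; }
    ```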

    • Valmond@lemmy.world

      Finally a correct answer!

      Back in the day, game logic was calculated each render-frame because the hardware was too weak (say, the C64).

      Not tying the framerate to the logic “frames” makes for a smoother user experience: when the world gets hard to draw and your screen framerate goes down, your logic “frames” are still ticking along at the same speed.

      Locking those two together today has no point, with the possible exception of small handhelds with simple CPUs (the Nintendo DS, for example).

  • Blóðbók@slrpnk.net

    It may be of critical importance in some games that, no matter how low the framerate is, the player never misses an event due to skipped frames.

    There are also games that are not real-time even in their animations, so there may be no benefit to skipping frames rather than just letting them run at whatever framerate. A slowed tick rate mostly feels weird if one has certain expectations about the passage of time.

  • haywire@lemmy.world

    Someone else may be able to offer a better explanation, but it’s mostly to do with events being tied to the draw of a frame.

    The game updates positions of objects and checks where things are each time a frame is drawn. Speed up the rate frames are drawn at, and the logic does those checks much, much faster, causing issues.

  • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 ℹ️@yiffit.net

    Nearly every game I know of that ties things to the framerate requires frame-perfect inputs for many mechanics, so I assume it’s just a lot easier to ensure the timings are predictable and achievable across a wide range of hardware than it would be to base everything on the internal clock or some other time measurement that can vary depending on the machine.

    My only experience with programming time-based things is in mission creation and scripting for ARMA, but it has very much the same kind of issue: if you want everyone playing on the same server to have the same timings for events and such, you can’t just use the game timer. It would be several milliseconds off between every client and the server, making it unpredictable and desynchronized from all other players.