In video gaming, if you press and release a button within a 1-frame window, there are two possibilities.
- The press and release happen within the same frame.
- The press happens in one frame, and the release happens in the next frame.
Unless your framerate is extremely low, to the point where someone can distinctly tell when frames start and end, it's generally not possible to control which of the two outcomes you get. If the press happens near the start of a frame, the first possibility is more likely. If the press happens near the end of a frame, the second possibility is more likely. It effectively becomes luck which one you get.
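The luck involved can be sketched with a little arithmetic: the outcome depends only on whether the hold crosses a frame boundary. A minimal sketch, with times measured in frame lengths (the function name and numbers are illustrative, not from any engine):

```python
import math

def outcome(press_time, hold, frame_len=1.0):
    # Which of the two possibilities happens for a sub-frame hold?
    # Same frame iff press and release fall between the same boundaries.
    same_frame = (math.floor(press_time / frame_len)
                  == math.floor((press_time + hold) / frame_len))
    return "same frame" if same_frame else "next frame"

print(outcome(0.10, 0.5))  # press near frame start -> "same frame"
print(outcome(0.95, 0.5))  # press near frame end   -> "next frame"
```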
So if your game has different outcomes for these two possibilities, then you won't be able to trigger the outcome you want consistently.
You can make it so a press and a release can't register in the same frame: if your game detects both in one frame, buffer the release so it takes effect one frame later. Now only the second outcome is possible, and any input within a 1-frame window will trigger it consistently.
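A minimal sketch of this release buffering, assuming a per-frame input step that receives raw press/release flags (the function and the `state` dict are hypothetical, not from any particular engine):

```python
def read_button(raw_pressed, raw_released, state):
    # `state` is a dict that carries a buffered release across frames.
    pressed = raw_pressed
    released = raw_released or state.pop("buffered_release", False)
    if pressed and released:
        # Press and release landed in the same frame: defer the
        # release to next frame so only one ordering is possible.
        state["buffered_release"] = True
        released = False
    return pressed, released
```

With this in place, a same-frame press-and-release reports the press this frame and the release the next frame, exactly like the second possibility above.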
Now let's say your press and release are within a 2-frame window. There are two possibilities again:
- The press happens in one frame, and the release happens in the next frame.
- The press happens in one frame, and the release happens two frames later.
The first possibility is the same as if your input is in a 1-frame window, and you buffer same-frame releases. The outcome will be the same as for a 1-frame input.
The second possibility is different. In platformers with variable-height jumping, normally a press triggers a jump, and a release cuts the jump short in some way. The second possibility would lead to a higher jump than the first possibility, even though both can happen with inputs of the same length.
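For reference, one common way variable-height jumping is implemented is to damp the remaining upward velocity on release; a later release therefore leaves a higher arc. A sketch with hypothetical tuning constants:

```python
JUMP_SPEED = 10.0   # hypothetical initial jump velocity
CUT_FACTOR = 0.4    # hypothetical damping applied on early release

def on_jump_press():
    # Pressing the button launches the jump at full speed.
    return JUMP_SPEED

def on_jump_release(vel_y):
    # Releasing early cuts the jump short by damping any remaining
    # upward velocity; if already falling, the release does nothing.
    return vel_y * CUT_FACTOR if vel_y > 0 else vel_y
```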
This leads to the idea of double resolution frame error correction to improve your game's feel.
The idea is to run input reading code at 2x the framerate of your game logic code. This means for every "logic frame", there are two "input frames". Input status is updated on each input frame, but cleared only after each logic frame. Design your input system so that when a button is released, you can check (during the following logic frame) how many input frames it was pressed for.
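One possible shape for such an input system, assuming the platform layer can hand you the raw button state twice per logic frame (all names here are invented for illustration):

```python
class DoubleResInput:
    """Tracks one button at double resolution: poll() runs every
    input frame (2x the logic rate), consume_release() runs once
    per logic frame."""

    def __init__(self):
        self.held_input_frames = 0     # how long the button has been down
        self.released_duration = None  # set when a release is observed

    def poll(self, button_down):
        # Called once per input frame with the raw button state.
        if button_down:
            self.held_input_frames += 1
        elif self.held_input_frames > 0:
            self.released_duration = self.held_input_frames
            self.held_input_frames = 0

    def consume_release(self):
        # Called once per logic frame; returns the press length in
        # input frames if a release happened, then clears it.
        duration, self.released_duration = self.released_duration, None
        return duration
```

During each logic frame, the game calls `poll()` twice and then checks `consume_release()` to learn how many input frames the last press lasted.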
Now you can error-correct as follows:
- Inputs of duration less than or equal to 3 input frames are treated as 1-logic-frame inputs.
- Inputs of duration greater than 3 input frames and less than or equal to 5 input frames are treated as 2-logic-frame inputs.
- Inputs of duration greater than 5 input frames and less than or equal to 7 input frames are treated as 3-logic-frame inputs.
And so on. Now, this correction might be tricky, because for example, an input whose duration is 2 input frames could either span 1 logic frame or 2 logic frames. We need to treat both cases as if there was only 1 logic frame. But in the 2-logic-frame case, the first logic frame has already happened. For the example of variable jumping, this means the character has already completed the first frame of their jump. Cutting their arc short too harshly might look or feel strange. But the idea is to massage the behaviour of both the 1-logic-frame and 2-logic-frame cases to look approximately the same. Similarly, if the duration is 4 input frames, this could take place over 2 logic frames or 3 logic frames, and you would have to make it look or feel the same either way.

In the above GIF animation, the bunny does two jumps with inputs that span 4 input frames. When the bunny faces left, the input spans 2 logic frames, and when the bunny faces right, the input spans 3 logic frames. Can you tell the difference? The jumps are the same total height, but the arc is slightly different. The different arc might be a problem in extremely technical situations, but for jumping challenges that require hitting certain heights consistently, this system is suitable.
Anyways, suppose the player has a margin of error that's less than one input frame, or half a logic frame. So if the player tries to press the button for 4 input frames (2 logic frames) they'll actually land somewhere above 3 input frames and below 5 input frames (1.5 to 2.5 logic frames). Wow, that's exactly the window for the game to error-correct the input and behave as if the player pressed the button for 2 logic frames.
Let's compare it to the original scenario. Suppose you have a super robot that can perfectly do inputs that are exactly 1.79 logic frames long. Well, sometimes this input will fit nicely into 2 logic frames, but other times it will overlap 3 logic frames. So even this robot is pathetic unless it can additionally time its inputs to start near the beginning of a frame.
Now imagine a kitten who can try their best to do inputs that are exactly 2 logic frames long, but they're actually sloppy and always fall between 1.6 logic frames and 2.4 logic frames. This kitten might have an even worse time, because their longer inputs might span up to 4 logic frames. If the kitten does a 2.2 length input right at the end of a logic frame, it could end up like 0.1 - 1.0 - 1.0 - 0.1, starting at the end of frame 1, and ending at the start of frame 4. But with double resolution frame error correction, both the kitten and the robot can consistently jump the same height, because the number of input frames will be either 4 or 5.
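The robot's situation can be checked numerically. A sketch, assuming input frames are half the length of logic frames and counting how many frames a press overlaps (the helper names are invented for the demo):

```python
import math

def frames_touched(start, duration):
    # How many consecutive frames a press starting at `start` and
    # held for `duration` overlaps (both measured in frame lengths).
    return math.floor(start + duration) - math.floor(start) + 1

def corrected(input_frames):
    # <= 3 input frames -> 1 logic frame, 4..5 -> 2, 6..7 -> 3, ...
    return max(1, input_frames // 2)

raw, fixed = set(), set()
for i in range(1000):
    phase = i / 1000                      # where in a frame the press starts
    raw.add(frames_touched(phase, 1.79))  # logic frames the hold spans
    # Same hold measured in input frames: 1.79 logic = 3.58 input frames.
    fixed.add(corrected(frames_touched(phase, 3.58)))

print(sorted(raw))    # phase decides between spanning 2 or 3 logic frames
print(sorted(fixed))  # error correction always lands on 2
```

Without correction, the robot's identical holds span sometimes 2 and sometimes 3 logic frames depending on phase; with correction, every phase maps to 4 or 5 input frames and therefore to the same 2-logic-frame result.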
You may be wondering if you should implement this system in your game. The answer is no. One thousand million video games have already been made that don't use systems like this and they are fine. You only need to implement this in your game if you already decided that it's useful for your game while reading the post. If you got to the end and you're wondering whether it's worth the trouble, then it's not. If I ever see someone complaining that a game doesn't have double resolution frame error correction, I will scream.
