> If that's correct, I was wondering how id Software decided on that particular formula to maintain the "feel".

They didn't, actually. The algorithm used in Q3 was different and somewhat inferior. You can still use that algorithm by setting cg_zoomSensitivity to 0. It is:
k = arctan(tan(zoomfov * (pi/360)) * height/width ) * (360/pi) / 75
Note: Do not convert zoomfov to radians yourself in this case; it stays in degrees, and the (pi/360) factor handles the conversion.
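The formula above drops straight into Python; a minimal sketch (the function name and the default 4:3 aspect are mine):

```python
import math

def q3_zoom_sensitivity(zoomfov, width=4, height=3):
    """Old Q3 zoom sensitivity scale (what you get with cg_zoomSensitivity 0).

    zoomfov is in degrees; the (pi/360) factor converts the half-angle
    to radians, so don't pre-convert it yourself.
    """
    return math.atan(math.tan(zoomfov * math.pi / 360) * height / width) * (360 / math.pi) / 75

# e.g. the QL default zoom fov:
print(q3_zoom_sensitivity(22.5))
```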
As far as I know, it was Injx who suggested the current, correct version.
This is my theory. It might take some programming knowledge to follow, though.
To maintain the same "feel", the idea is to make the pixels in the center move at the same speed across the two fovs.
If you were zooming into an image the way you do in a photo editor, then maintaining the same feel would be as simple as sens * k, where k = zoomfov/fov. If you see half the screen, use half the sens.
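That naive, no-distortion scaling is just the following (naive_zoom_scale is a hypothetical helper for illustration, not anything from the game):

```python
def naive_zoom_scale(sens, fov, zoomfov):
    # Straight 2D ("photo editor") scaling with no projection distortion:
    # half the fov -> half the sens.
    return sens * zoomfov / fov

print(naive_zoom_scale(2.0, 100, 50))  # -> 1.0
```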
However, Quake and various other 3D programs don't work this way. There is a distortion involved in fitting the 3D view onto the 2D screen, and it takes the form of a pincushion distortion.
To look around the 3d world, you have a view frustum (4 sided pyramid), the "point" of the pyramid being the player's eye. As fov increases, the angle at the "point" increases.
You can kind of see this effect at the bottom of the page here:
One side of a frustum is an isosceles triangle, and half of that is a right triangle. So imagine another triangle mirrored under the one in the diagram, such that the 45-degree right triangle is actually half of a 90-degree isosceles triangle.
You see the distance to the curve's center (the curve on the unit circle) increasing relative to the "base" of the isosceles triangle, and I believe that this is the image undergoing a pincushion distortion as fov increases.
However, even so, sens * zoomfov/fov still takes care of this, since the center remains stable. The trouble comes from the aspect ratio not being 1:1, which means there is more distortion on one axis than the other. That makes for slower-moving center pixels on that axis: they move slower relative to the pixels toward the edges, which move faster to compensate, so that a given change in degrees (sens) still takes the same amount of time.
So when I zoom in on, say, 4:3, the center pixels go from slower-moving to faster-moving, and since higher sensitivity moves the center pixels (all pixels, really) faster, it stands to reason that the zoomed-in sensitivity should be lower than it would be if there were no pincushion.
This coincides with the current zoom calculation. For example, using fov 100, zoomfov 50:
This makes k = 0.461257, whereas with no pincushion we could just do 50/100 = 0.5
sens * 0.46 < sens * 0.5
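A Python sketch that reproduces this k (the function name, the "aspect_correction" parameter, and the exact form are my assumptions about what the engine computes, namely the ratio of the two half-angles after the 0.75 correction):

```python
import math

def ql_zoom_scale(fov, zoomfov, aspect_correction=0.75):
    # Assumed form of the current calculation: apply the aspect
    # correction inside arctan(tan(...)) for both fovs, then take
    # the ratio of the corrected half-angles.
    def corrected_half_angle(f):
        return math.atan(math.tan(f * math.pi / 360) * aspect_correction)
    return corrected_half_angle(zoomfov) / corrected_half_angle(fov)

k = ql_zoom_scale(100, 50)
print(k)  # close to the 0.461257 quoted above, and below the naive 0.5
```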
I'm pretty sure that the 0.75 is the aspect ratio (3/4). Since the game also has to handle aspect ratios other than 4:3, I think what the code does is multiply the result by a correction factor, so that everyone can use the same formula without having to account for their own aspect ratio.
Note that if we had a 1:1 aspect ratio, 0.75 would be replaced with a 1, and since arctan(tan(x)) = x, we would have our zoomfov/fov algorithm again, where half the fov would mean half the sens. So, this 0.75 is the correction for the horizontal being wider than the vertical.
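This reduction can be checked numerically; zoom_scale here is my own sketch of the corrected-ratio calculation, not engine code:

```python
import math

def zoom_scale(fov, zoomfov, aspect_correction):
    # Sketch (not engine code): ratio of the two half-angles after
    # applying the aspect correction inside arctan(tan(...)).
    def half(f):
        return math.atan(math.tan(f * math.pi / 360) * aspect_correction)
    return half(zoomfov) / half(fov)

# With the correction set to 1, arctan(tan(x)) = x and the ratio
# collapses to plain zoomfov/fov:
print(zoom_scale(100, 50, 1.0))   # ~0.5, i.e. 50/100
# With 0.75 the result drops below 0.5, as argued above:
print(zoom_scale(100, 50, 0.75))
```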
The "x/2" part of tan(x/2) is referring to the isosceles triangle of the frustum being cut in half so that right triangle math applies to it.
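As a quick numeric illustration of that half-angle (placing the projection plane at distance 1 is my own simplification):

```python
import math

# Put the projection plane at distance 1 in front of the eye. Half of the
# frustum's isosceles triangle is a right triangle, so half the screen
# width spans tan(fov/2); the full width is twice that.
fov_deg = 90
half_width = math.tan(math.radians(fov_deg / 2))  # tan(45 deg) = 1
full_width = 2 * half_width
print(half_width, full_width)
```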
One of the great things about QL is its tweakability! I wanted to play other FPS games that have sniper scopes, and was hoping that if they didn't have a similar way of handling zoom FOV/sens, I could try to emulate it by setting my zoom FOV to 22.5 (the QL default) and a zoomed-in sensitivity similar to QL's. But not all games will let me tweak those parameters, so I might just have to get used to their engines.