Abstract

Lens distortion in computer games is nothing new, but it's typically only used for effects like looking through a weapon's scope or binoculars. This article discusses in detail why and how to apply a certain (barrel) distortion effect to any virtual wide-angle camera (e.g. a first or third person camera) using a post process effect. The proposed effect is meant to reduce the stretch artifacts in perspective-projected virtual scenes as perceived when the display is too small or is viewed from too far away. The implementation requires only 2 additional instruction slots in a post effect fragment shader, making it extremely efficient. To see the effect in real time, open this WebGL sample in your browser or download the more extensive Ogre3D sample from the Downloads section at the bottom of this page.

Pushing the field of view

Figure 1. Physical versus virtual FOV.

Rendering a game using a wide FOV may be desirable because it allows a gamer to be more aware of the virtual surroundings or may aid in achieving a particular cinematic look. However, it can cause noticeable distortion, depending on where the gamer sits relative to the display. For example, a viewer sitting at A as depicted in Figure 1 wouldn't notice any distortion, while a viewer sitting at B would. The observed distortion causes on-screen objects to look too small near the screen's center and too large and stretched near the screen's edges. This distortion becomes even more objectionable when the virtual camera is rotating, making objects not only rotate around the camera's position, but also scale and stretch unnaturally along their 2D path over the screen, as if they were projected onto a hollow screen of some sort. As a result, distances between on-screen objects become harder to judge, and the unnatural movement may even trigger feelings of nausea in some people.

Reducing stretch

This article suggests applying a small amount of barrel distortion in order to reduce the negative effects of rendering with a higher-than-ideal FOV. Adding barrel distortion reduces stretch, but it will also bend previously straight lines. Luckily, humans don't seem to find this side effect too objectionable as long as it's kept subtle.

Figure 2. Pincushion and barrel distortion.

Figure 3. A 140° FOV perspective camera.

Maybe that's because we're used to looking at pictures created with physical lenses, which almost always add some barrel distortion themselves. Or maybe it's because experiments suggest that the human visual system already produces some barrel distortion on its own, even though most people aren't consciously aware of this effect. In any case, more extreme FOV cameras will require more stretch compensation, and will therefore also bend straight lines further. Hence, adding barrel distortion to compensate for perspective stretching can only be pushed so far before it becomes objectionable in its own way.

Figure 4. Adding barrel distortion to Figure 3.

Additionally, some resolution will effectively be lost when the barrel distortion effect is used to warp an already rendered input image as part of some post effect pipeline. That's because the center of the image is slightly zoomed in, which causes the pixels near the center to spread over a larger area while pushing the thin off-diagonal outer regions near the image's edges out of the screen rectangle. Consequently, the diagonal FOV will be exactly the same before and after the distortion, while the horizontal and vertical FOV will be slightly smaller afterwards. Compare the perspective-projected Figure 5 to the barrel-distorted Figure 6 for an example. The loss of horizontal and vertical FOV can be compensated for by rendering the original image with a slightly larger FOV to begin with. Compare Figures 6 and 7 to see the difference. How much FOV compensation is needed will be discussed in a later section.

So in short, adding enough barrel distortion to remove the surplus of stretching in wider-than-ideal FOV cameras is by no means a silver bullet, especially when applied as a post effect. But when the effect is kept subtle enough, the benefits can definitely outweigh the downsides, and the FOV can easily be pushed a bit further with little objection.

Please note that I am not actually the first to suggest using a reprojection to remove certain types of visual distortion, as some of the articles and papers mentioned in the Further Reading section show. This article does, however, introduce a novel, more parameterized and generalized stretch-removing reprojection method and implementation.

Introducing the math

Now that the concepts have been explained, it's time to cover the math involved. For more detailed explanations of FOV and different projection types, see the Further Reading section. Starting with the physical display and the viewer in front of it, the half height i is half the display height divided by the distance between the display and the viewer. So, when the display has the aspect ratio a, a diagonal of d inches and a distance to the viewer of x meters, then i is equal to 0.0254\ d\ /\ (2 \sqrt{1 + a^2}\ x). The physical vertical FOV, which is the angle subtended vertically by the display as seen from the viewer's perspective, is therefore equal to 2\ atan(i).

The virtual camera will have a vertical FOV as well, from which the half height h can be calculated (i.e. h = tan(FOV_Y/2) ). As the physical and virtual FOV need to match to get a perspective-projected image that is perceived by the viewer to be free of stretch, and a half height value is directly related to a FOV value, the half heights h and i will need to match as well. However, when the virtual FOV becomes larger than the physical FOV, or equivalently, when h is pushed beyond i, the resulting imagery will be perceived as being overly stretched unless sufficient barrel distortion is added to compensate for this effect.
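
To make this relationship concrete, the following minimal C++ sketch computes both half heights for a hypothetical setup. It is not part of the article's sample code, and the display diagonal, viewing distance and virtual FOV used here are made-up example values.

#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;
    const double d = 27.0;                   // display diagonal in inches (example value)
    const double a = 16.0 / 9.0;             // display aspect ratio
    const double x = 0.7;                    // viewing distance in meters (example value)
    const double fovY = 100.0 * pi / 180.0;  // virtual vertical FOV in radians

    // Physical half height: i = 0.0254 d / (2 sqrt(1 + a^2) x)
    const double i = 0.0254 * d / (2.0 * std::sqrt(1.0 + a * a) * x);

    // Virtual half height: h = tan(FOV_Y / 2)
    const double h = std::tan(fovY / 2.0);

    // Wherever h exceeds i, the image will be perceived as overly stretched
    std::printf("i = %.3f (physical vertical FOV = %.1f deg), h = %.3f\n",
                i, 2.0 * std::atan(i) * 180.0 / pi, h);
    return 0;
}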

Radial distortion types including barrel distortion are often implemented using Brown’s model, scaling each individual 2D pixel position vector by a weighted sum of powers of this vector’s length. Any type of radial distortion, including any variant of radial barrel distortion, can be realized this way by choosing the right set of weights. This, however, is not per se the most efficient method when a precise distortion profile is required. The barrel distortion function presented here is based on the direct transform from perspective projection to stereographic projection instead. Stereographic projection has the nice property of being conformal, which causes objects to be projected without any stretch at the cost of introducing radial bending. Furthermore, the math involved in this remapping is relatively efficient and easy to invert.
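
For reference, the radial part of Brown's model mentioned above scales each 2D position \mathbf{p} by an even-powered polynomial in its distance r = \|\mathbf{p}\| to the distortion center:

\quad \mathbf{p}_{distorted} = \mathbf{p}\ (1 + k_1 r^2 + k_2 r^4 + k_3 r^6 + \dots),

where the weights k_1, k_2, \dots define the distortion profile; under the usual sign convention, negative leading weights produce barrel distortion and positive ones pincushion distortion.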

For the barrel distortion function it is assumed that the perspective projection has already been applied and that the resulting 2D screen coordinates lie in or on the rectangle [-1, -1]\ -\ [1,\ 1]. These perspective-projected screen coordinates will be represented by the vector \mathbf{p}, while the vector \mathbf{b} will be used to represent screen coordinates that are barrel distorted. Distorted coordinates can be converted to perspective screen coordinates using the function \mathbf{p}(\mathbf{b}), which is defined as follows:

(1)\quad \mathbf{p}(\mathbf{b}) = \mathbf{b}\ /\ (z\ -\ n_x\ b_x^2\ -\ n_y\ b_y^2) ,

where z = \frac{1}{2} + \frac{1}{2}\sqrt{1 + h^2 s^2 (1 + a^2)}, n_x = a^2 c^2\ n_y  and n_y = (z-1)\ /\ (1 + a^2 c^2). The function \mathbf{p}(\mathbf{b}) can be used to implement a distorting post effect by sampling the scene rasterized into an intermediate buffer (i.e. the render target) at \mathbf{p}(\mathbf{b}) for each final screen pixel at some position \mathbf{b}. The actual derivation of \mathbf{p}(\mathbf{b}) is omitted here for brevity, but can be found in the notebook available in the Downloads section.

The helper constant z defines the zoom factor that is needed to map each corner coordinate to itself, so that the corners will lie on the [-1, -1]\ -\ [1,\ 1] rectangle both before and after the barrel distortion. The constants n_x and n_y are related to the rate at which b_x and b_y change the output \mathbf{p} non-linearly, respectively. As before, the values a and h represent the aspect ratio and virtual half height, respectively. c is the cylindrical ratio, which can be assumed to be equal to 1 for now and will be covered in more detail in a later section. Lastly, s is the strength of the distortion effect, where 0 means no added distortion, and 1 means full stereographic-to-perspective re-projection.

The above formula can easily be implemented as an OpenGL post effect shader pair, as the following GLSL snippet shows. This snippet is used as-is in the WebGL sample. See the Ogre sample for an equivalent implementation using the Cg shader language.

//////////////////////////////// vertex shader //////////////////////////////////

uniform float strength;           // s: 0 = perspective, 1 = stereographic
uniform float height;             // h: tan(verticalFOVInRadians / 2)
uniform float aspectRatio;        // a: screenWidth / screenHeight
uniform float cylindricalRatio;   // c: cylindrical distortion ratio. 1 = spherical

varying vec3 vUV;                 // output to interpolate over screen
varying vec2 vUVDot;              // output to interpolate over screen

void main() {
    gl_Position = projectionMatrix * (modelViewMatrix * vec4(position, 1.0));

    float scaledHeight = strength * height;
    float cylAspectRatio = aspectRatio * cylindricalRatio;
    float aspectDiagSq = aspectRatio * aspectRatio + 1.0;
    float diagSq = scaledHeight * scaledHeight * aspectDiagSq;
    vec2 signedUV = (2.0 * uv + vec2(-1.0, -1.0));

    float z = 0.5 * sqrt(diagSq + 1.0) + 0.5;
    float ny = (z - 1.0) / (cylAspectRatio * cylAspectRatio + 1.0);

    vUVDot = sqrt(ny) * vec2(cylAspectRatio, 1.0) * signedUV;
    vUV = vec3(0.5, 0.5, 1.0) * z + vec3(-0.5, -0.5, 0.0);
    vUV.xy += uv;
}

/////////////////////////////// fragment shader ////////////////////////////////

uniform sampler2D tDiffuse;     // sampler of rendered scene’s render target
varying vec3 vUV;               // interpolated vertex output data
varying vec2 vUVDot;            // interpolated vertex output data

void main() {
    vec3 uv = dot(vUVDot, vUVDot) * vec3(-0.5, -0.5, -1.0) + vUV;
    gl_FragColor = texture2DProj(tDiffuse, uv);
}

The vertex shader is used here to prepare the inputs to the fragment shader, in which the actual result of \mathbf{p}(\mathbf{b}) is calculated and used as the position to sample the render target tDiffuse at. Because of these precalculations, the fragment shader itself is extremely efficient. In fact, it requires only one additional interpolation register (that is, one extra varying) and two additional instruction slots (that is, one dot product and one multiply-and-add) more than the simplest possible pass-through post effect. texture2DProj() should be available on most hardware and perform as fast as a regular texture2D() operation, but it may be replaced by the equivalent texture2D(tDiffuse, uv.xy / uv.z) if unavailable.

Reversing the distortion

The distortion function \mathbf{p}(\mathbf{b}) undistorts a barrel-distorted perspective projection. In other words, when the input \mathbf{b} represents the final barrel-distorted screen position then this function’s output represents the corresponding position \mathbf{p} on the perspective-projected rendered image without barrel distortion. This matches what is needed for a post effect and for mouse picking, among other things. It is also possible to calculate the inverse mapping using the following formula, thus converting a perspective-projected position \mathbf{p} to the corresponding distorted position \mathbf{b}.

(2)\quad \mathbf{b}(\mathbf{p}) = z\ \mathbf{p}\ /\ (\frac{1}{2} + \sqrt{\frac{1}{4} + z\ (n_x p_x^2\ +\ n_y p_y^2) } ).

This particular formula may be used to calculate where some 3D object will be visible on the final barrel-distorted screen, for example. To do so, the object’s 3D position first has to be projected using the standard model-view-projection matrix to a 2D coordinate between [-1, -1]\ -\ [1,\ 1], which can then be mapped to \mathbf{b} using the formula above.
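
For CPU-side uses such as the mouse picking and object placement cases mentioned above, both equations translate directly into a few lines of code. The following C++ sketch is illustrative only; the struct and function names are made up here and are not taken from the article's sample library.

#include <cmath>

struct Vec2 { double x, y; };

struct BarrelDistortion {
    double z, nx, ny;  // helper constants shared by Eq. 1 and Eq. 2

    BarrelDistortion(double h, double a, double s, double c) {
        z  = 0.5 + 0.5 * std::sqrt(1.0 + h * h * s * s * (1.0 + a * a));
        ny = (z - 1.0) / (1.0 + a * a * c * c);
        nx = a * a * c * c * ny;
    }

    // Eq. 1: distorted -> perspective (as used by the post effect and picking)
    Vec2 undistort(const Vec2& b) const {
        const double k = z - nx * b.x * b.x - ny * b.y * b.y;
        return { b.x / k, b.y / k };
    }

    // Eq. 2: perspective -> distorted (e.g. to place markers over 3D objects)
    Vec2 distort(const Vec2& p) const {
        const double k = 0.5 + std::sqrt(0.25 + z * (nx * p.x * p.x + ny * p.y * p.y));
        return { z * p.x / k, z * p.y / k };
    }
};

Note that undistort(distort(p)) returns p again (up to floating point precision), which makes for a quick sanity check of an implementation.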

Controlling the strength

The formulas \mathbf{b}(\mathbf{p}) and \mathbf{p}(\mathbf{b}) are based on the transform from perspective to stereographic projection and back, respectively, but they have been generalized to (among other things) allow control over the distortion strength through the parameter s. When s is 1, perspective stretch is completely removed. This setting has been used to create the figures so far. However, this is probably too extreme for most actual use cases, as some perspective stretch is still desirable when the image is to be displayed on a flat screen from a finite distance. It is possible to calculate the ideal amount of stretch for a particular viewer and reduce the stretch caused by an overly wide-angled virtual camera back to that same ideal amount. The s that exactly leads to that ideal stretch factor can be calculated as follows:

(3)\quad s = \sqrt{(h^2 - i^2)\ /\ \big(h^2\ (1 + i^2 (1 + a^2))\big)}

For the derivation of this formula, see the notebook under Downloads. The formula doesn't take into account any of the downsides of adding barrel distortion, like the introduced radial bending and the loss of some effective resolution, which perhaps makes this calculation more of an upper bound for s than an ideal value. Consequently, for some applications it might be preferable to scale down the output of this formula by some tweaked value, use a value for i closer to h, or perhaps even not use this formula at all and set s directly to some tweaked constant like s = 0.5. Nevertheless, directly or indirectly controlling s based on the individual viewer might be an interesting idea, which could be achieved automatically in real time using the distance estimation capabilities of a Kinect device, for example.
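
A minimal C++ sketch of Eq. 3 might look as follows; the function name is hypothetical, and the early return assumes that for h ≤ i no compensating distortion is needed at all.

#include <algorithm>
#include <cmath>

// Upper bound for the distortion strength s, given the virtual half height h,
// the physical half height i and the aspect ratio a (Eq. 3).
double strengthUpperBound(double h, double i, double a) {
    if (h <= i) return 0.0;  // no surplus stretch to remove
    const double s2 = (h * h - i * i) / (h * h * (1.0 + i * i * (1.0 + a * a)));
    return std::sqrt(std::max(0.0, s2));
}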

Controlling cylindricity

Another parameter in the distortion function is the cylindrical ratio c. This parameter doesn't change the overall amount of distortion, but affects its main direction. To be more exact, it defines the amount of applied horizontal versus vertical distortion without altering the amount of distortion in the image's corners. Setting c to 1 results in a perfectly spherical distortion, values higher than 1 result in vertical lines being bent less, and values closer to zero result in straighter horizontal lines. Compare Figures 11, 12 and 13, which show the results of applying a full-strength distortion to a 140° FOV perspective-projected image using three different values of c.

Setting c to a value larger than one may improve the visual quality in many cases. That might be because humans seem to be ‘better’ at interpreting bent horizontal lines as straight than at interpreting bent vertical lines as straight, especially when the virtual camera is rotating. For this or other aesthetic reasons, some movie makers have been known to select certain anamorphic lenses for their films, which possess a type of semi-cylindrical distortion that can be approximated by c = 2. In the Ogre3D sample, it’s also possible to make this value depend smoothly on the virtual camera’s pitch, reverting to a spherical projection when looking up or down, while approaching a tweakable value when gazing closer to the horizon.

Pinning the horizontal FOV

The loss in horizontal and vertical FOV after distortion (as a result of the original outer areas being pushed off the screen) can be compensated for by rendering the scene with a slightly higher FOV in the first place. It's possible to calculate the half height h that the scene needs to be rendered with in order to end up with a specified horizontal FOV at some height y on the screen, which will be denoted here as FOV_{X|y}.

The chosen value for y may lie anywhere between 0 and 1. For example, FOV_{X|y=0} represents the horizontal FOV through the center of the screen, which effectively specifies the minimum horizontal FOV for the whole screen. FOV_{X|y=1} can be used to specify the horizontal FOV between the left and right corners of the screen, effectively specifying the maximum visible horizontal FOV on the screen. FOV_{X|y=1/2} is interesting as well, as it can be shown that y=1/2 is the largest y for which the average horizontal FOV over the whole screen will be at least as large as the horizontal FOV specified. Compare Figures 14, 15 and 16 to see the difference between using h based on c = s = 1 together with FOV_{X|y=0} = 140^{\circ}, FOV_{X|y=1/2} = 140^{\circ} and FOV_{X|y=1} = 140^{\circ}, respectively. Also compare these to the original 140° FOV perspective-projected image in Figure 3, which would be equal to the output for all three FOV calculations if s had been 0 instead of 1.

The half height h to be rendered with in order to end up with the desired FOV_{X|y} after distortion can be computed as follows:

(4)\quad h = \mathbf{p_u}\bigg(\ \mathbf{b_u}\Big(\begin{bmatrix}w \\ w\ y / a\end{bmatrix} \Big)_x \cdot \begin{bmatrix}1 \\ 1 / a\end{bmatrix}\ \bigg)_y

Here, the value w is the desired horizontal half width, which is equal to tan(FOV_{X|y}\ /\ 2). The functions \mathbf{b_u}(\mathbf{p_u}) and \mathbf{p_u}(\mathbf{b_u}) are distort and undistort functions that are equivalent to \mathbf{b}(\mathbf{p}) and \mathbf{p}(\mathbf{b}), respectively, but work on non-normalized coordinates instead. This means that any position \mathbf{p_u} will lie in or on the rectangle [-a\ h, -h]\ -\ [a\ h,\ h], and that these functions distort and undistort, respectively, without doing any uniform scaling to guarantee matching corner coordinates. Consequently, the functions \mathbf{b_u}(\mathbf{p_u}) and \mathbf{p_u}(\mathbf{b_u}) may be defined as being identical to Eq. 2 and Eq. 1, respectively, but require the values z = 1, n_x = c^2 n_y and n_y = (s^2 (1 + a^2))\ /\ (4 (1 + a^2 c^2)) instead.
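
As a sketch of how Eq. 4 might be evaluated in code (the function name is hypothetical, and only the required components of \mathbf{b_u} and \mathbf{p_u} are computed):

#include <cmath>

// Half height h to render with so that, after distortion with strength s and
// cylindrical ratio c, the image spans fovX radians horizontally at height y.
double renderHalfHeight(double fovX, double y, double a, double s, double c) {
    const double w = std::tan(fovX / 2.0);  // desired horizontal half width

    // Unnormalized constants: z = 1, n_x = c^2 n_y, n_y = s^2 (1+a^2) / (4 (1+a^2 c^2))
    const double ny = s * s * (1.0 + a * a) / (4.0 * (1.0 + a * a * c * c));
    const double nx = c * c * ny;
    auto N = [&](double px, double py) { return nx * px * px + ny * py * py; };

    // x component of b_u([w, w y / a]), i.e. Eq. 2 with z = 1
    const double bx = w / (0.5 + std::sqrt(0.25 + N(w, w * y / a)));

    // y component of p_u([bx, bx / a]), i.e. Eq. 1 with z = 1
    return (bx / a) / (1.0 - N(bx, bx / a));
}

As a sanity check, setting s = 0 reduces this to h = w / a, the familiar perspective relation between the horizontal and vertical half extents.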

It’s worth noting that smaller cylindrical ratios (i.e. smaller values for c) will need a wider camera FOV to fully compensate for the loss of horizontal FOV after barrel distortion, as there is less horizontal loss for higher values of c. Compare Figure 11 and 13. Consequently, when pinning the horizontal FOV, higher values for c cause less image resolution degradation during the resampling of an original perspective-projected image, as more ‘pixels per degree’ will be available to sample from when distorting it into the final visible number of degrees.

Properties in more detail

The previous sections contain all the information needed to implement and use the distortion effect. This section is about better understanding how different input parameters affect different aspects of the output. The math itself has been omitted here for brevity, but is available in the downloadable notebook file. All graphs are plotted assuming that c = 1 and are presented as functions of the diagonal FOV angle, which is equal to 2\ arctan(h\ \sqrt{1 + a^2}). By using the diagonal FOV, the data presented here is made independent of the aspect ratio.

Figure 17. Maximum object stretch.

The proposed barrel distortion effect is meant to (partially) remove the stretch visible in perspective-projected images. Stretch is defined here as the ratio between the amount of scaling in the direction towards the image's center and the amount of scaling perpendicular to that direction. This stretch ratio is most extreme near the screen's corners, which is where the stretch has been measured for the graph in Figure 17. The curve for s = 0 clearly shows that stretch as a result of perspective projection quickly grows as the FOV increases. In fact, it will have grown to infinity at 180^{\circ}. Note that using the barrel distortion with s = 1 removes all stretching artifacts, as can be expected from the resulting stereographic projection.

Figure 18. Maximum object scale.

The distortion also affects the maximum ratio in apparent scale between an object seen in the center of the screen and the same object seen at the same 3D distance near the screen's corner. This ratio is plotted in Figure 18 for a number of values of s. Note that scaling differences get smaller for larger values of s, but they are never completely removed, as the presented barrel distortion formula (or any stereographic-based reprojection, for that matter) isn't capable of perfectly mapping a perspective projection onto a so-called equi-solid projection.

Figure 19. Maximum object bending.

The amount of bending of straight lines caused by the distortion effect can also be quantified. Bending is most severe near the screen's corners, which is why the data presented in Figure 19 has been computed there. The output is given as the ratio between the screen diagonal and the 'line diameter'. This line diameter is the diameter of a circle that has the same curvature as the most bent on-screen straight line possible. So, for example, a value of 1/3 means that the most bent on-screen line is as curved as a circle three times the diameter of the circle circumscribing the screen.

Figure 20. Minimum resolution.

Adding barrel distortion will cause the image to become somewhat zoomed in near its center. Consequently, when the effect is used to resample an already rasterized image, the center of the image will suffer the most from loss of resolution. Figure 20 shows how the pixel resolution at the center is affected as a function of the diagonal FOV.

Conclusions

The presented distortion method allows for the exact calculation and compensation of the surplus of stretch that results from pushing a virtual perspective camera beyond its ideal FOV for a particular viewer. As it uses a straightforward generalization of the conversion from perspective projection to stereographic projection and back, it's both exact and efficient. Furthermore, the introduced stepless strength and cylindricity parameters allow for fine-grained tweaking and adaptive control in real time. Although the transform can somewhat degrade resolution and introduce radial bending of features and lines, these downsides only become noticeable when both the FOV and strength are set to particularly high values. Consequently, compared to standard perspective projection, the presented technique allows the FOV to be pushed further before any form of distortion becomes objectionable or uncomfortable. The benefits and drawbacks of the technique have been described in detail through the use of example images and graphs, and two downloadable sample applications have been made available below to experiment with the effect in real time.

Further Reading

  • Field of view in video games. A brief explanation of horizontal and vertical FOV and the effects of different aspect ratios on these numbers.
    http://en.wikipedia.org/wiki/Field_of_view_in_video_games
  • About the various projections of the photographic objective lenses. Michel Thoby, 2012. A visual comparison of five different fish eye lens projections by example.
    http://michel.thoby.free.fr/Fisheye_history_short/Projections/Various_lens_projection.html.
  • Perspective Projection: The Wrong Imaging Model. Margaret M. Fleck, Technical Report 95-01, Department of Computer Science, University of Iowa, 1995. An extensive technical comparison and discussion of five different projection types, including perspective and stereographic projection.
    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.52.8827&rep=rep1&type=pdf.
  • Distortion of binoculars revisited: Does the sweet spot exist? Holger Merlitz, Department of Physics and ITPA, Xiamen University. Discusses how narrow FOV applications can benefit from adding pincushion distortion as a way to compensate for the apparent barrel distortion inherent to the human eye.
    http://holgermerlitz.de/globe/distortion_final.pdf.
  • Correction of Geometric Perceptual Distortions in Pictures. Denis Zorin and Alan H. Barr. In Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '95), pp. 257-264, ACM, 1995. Proposes a (less efficient) linear mix of perspective and stereographic projection of which the mix weight may vary spatially, allowing for a more localized minimization of the perceptual distortion in a particular image.
    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.151.2303&rep=rep1&type=pdf.
  • Optimizing Content-Preserving Projections for Wide-Angle Images. Robert Carroll et al. In ACM Transactions on Graphics (TOG) vol. 28 no. 3, ACM, 2009. Proposes an offline distortion energy minimization technique that calculates a locally varying reprojection based on user-specified salient features.
    http://vis.berkeley.edu/papers/capp/projection-sig09.pdf.
  • Decentering Distortion of Lenses. Duane C. Brown, in Photogrammetric Engineering 32, no. 3, pp. 444-462, 1966. Introduces a commonly used general-purpose analytical model of (decentered) lens distortion.
    https://eserv.asprs.org/PERS/1966journal/may/1966_may_444-462.pdf

Downloads

  • WebGL sample. This sample in three.js should work in any WebGL-enabled browser and demonstrates the basic barrel distortion post effect. It allows the camera angle, FOV, strength and cylindrical ratio to be controlled in real time. All code is licensed under BSD, which means that it can be used both for commercial and non-commercial purposes as long as credit is given appropriately. Open the HTML page's source in your browser to view the JavaScript source. [HTML] [Source ZIP]
  • Ogre3D sample project. All formulas discussed in this article have been implemented in this Ogre3D demo, and all screenshots in this article have been made using it. It includes a Cg version of the distortion post effect, and a compact and reusable C++ distortion math library. In addition to allowing control over the FOV, strength and cylindrical ratio, it shows the current values of the properties discussed in the 'Properties in more detail' section in real time. All code is licensed under BSD, which means that it can be used both for commercial and non-commercial purposes as long as credit is given appropriately. [Windows binary ZIP] [Source code ZIP]
  • Mathematica notebook. The derivations of all presented formulas and properties are available in this Mathematica 9 notebook. [NB] [PDF]
