Variance Shadow Maps


The shadow mapping algorithm creates shadows using a depth image rendered from the light's point of view. The variance shadow map extends this idea by also storing the variance of depths, which is used to reduce aliasing. Along with introducing it, several related algorithms are tested together for comparison.

1. Percentage-Closer Filtering (PCF)


To reduce aliasing, PCF averages the results of multiple depth comparisons against the reference depth [1]. It behaves like a box blur, so its cost grows with the filter size. The filter size can therefore be selected adaptively, and [2] suggests using the derivatives of the depth-map texture coordinates. In short, the filter size decreases as the camera moves closer and increases as it moves farther away. In addition, the depth value should be biased, since PCF still suffers from shadow acne.
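The averaging step can be sketched on the CPU as follows. This is a minimal illustration, not the paper's implementation: the row-major depth-map layout, the clamp-to-edge addressing, and all function names are assumptions for the sketch.

```c
/* Clamp-to-edge lookup into a square depth map (illustrative helper). */
static float sample_depth(const float *shadow_map, int size, int x, int y)
{
    if (x < 0) x = 0;
    if (x >= size) x = size - 1;
    if (y < 0) y = 0;
    if (y >= size) y = size - 1;
    return shadow_map[y * size + x];
}

/* PCF: average the *comparison results* over a (2*radius+1)^2 kernel,
   not the depths themselves. Returns the lit fraction (1.0 = fully lit). */
float pcf_shadow(const float *shadow_map, int size,
                 int x, int y, float receiver_depth, float bias, int radius)
{
    float lit = 0.0f;
    int count = 0;
    for (int dy = -radius; dy <= radius; ++dy) {
        for (int dx = -radius; dx <= radius; ++dx) {
            float d = sample_depth(shadow_map, size, x + dx, y + dy);
            /* Biased binary comparison, as described above. */
            if (receiver_depth - bias <= d) lit += 1.0f;
            ++count;
        }
    }
    return lit / (float)count;
}
```

With a 3×3 kernel, a single occluded tap among nine yields a soft value of 8/9 instead of a hard 0/1 edge.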

2. Variance Shadow Map (VSM)


VSM works similarly to the standard shadow map, but it stores the depth $M_1$ and the squared depth $M_2$, called moments, in the light-view texture. The key point is to assume that the stored depth values represent a distribution whose expectations are these moments.

$$\begin{aligned} M_1 &= E[x] = \int x\,p(x)\,dx \\ M_2 &= E[x^2] = \int x^2\,p(x)\,dx \end{aligned}$$

With these moments, the mean $\mu = M_1$ and the variance $\sigma^2 = M_2 - M_1^2$ can be derived. Given the mean and variance, an upper bound on the probability that the current surface is lit can be obtained by applying Chebyshev's inequality. To be specific, what must be computed is $P(X \geq t)$: the probability that a depth sample $X$ from the shadow map is at least the current depth $t$, the distance from the current surface to the light source. Only a one-sided bound is required here, so Cantelli's inequality [3] is more suitable. For any $u \geq 0$,

$$P(X \geq t) = P(X - \mu + u \geq t - \mu + u).$$

Let $\lambda = t - \mu$ and $Y = X - \mu$; then $E[Y] = 0$ and $\mathrm{Var}[Y] = E[(Y - E[Y])^2] = E[(X - \mu)^2] = \sigma^2$. From this fact,

$$\begin{aligned} P(X \geq t) &= P(X - \mu + u \geq t - \mu + u) = P(Y + u \geq \lambda + u) \\ &\leq P(Y + u \geq \lambda + u) + P(Y + u \leq -(\lambda + u)) \\ &= P((Y + u)^2 \geq (\lambda + u)^2). \end{aligned}$$

Meanwhile, Markov’s inequality [4] gives the following upper bound. If $X$ is a nonnegative random variable and $a > 0$,

$$\begin{aligned} E[X] &= \int_{-\infty}^{\infty} x\,p(x)\,dx = \int_{0}^{\infty} x\,p(x)\,dx \\ &= \int_{0}^{a} x\,p(x)\,dx + \int_{a}^{\infty} x\,p(x)\,dx \geq \int_{a}^{\infty} x\,p(x)\,dx \\ &\geq \int_{a}^{\infty} a\,p(x)\,dx = a \int_{a}^{\infty} p(x)\,dx = a\,P(X \geq a), \end{aligned}$$

so $P(X \geq a) \leq E[X]/a$.

Going back to Cantelli’s inequality, applying Markov’s inequality to the nonnegative variable $(Y + u)^2$ with $a = (\lambda + u)^2$ gives

$$\begin{aligned} P(X \geq t) &\leq P((Y + u)^2 \geq (\lambda + u)^2) \leq \frac{E[(Y + u)^2]}{(\lambda + u)^2} \\ &= \frac{E[Y^2] + 2uE[Y] + u^2}{(\lambda + u)^2} = \frac{\sigma^2 + u^2}{(\lambda + u)^2} = g(u). \end{aligned}$$

By differentiating $g(u)$, the extremum $u_*$ can be found.

$$\begin{aligned} g'(u) &= \frac{2u(\lambda + u)^2 - 2(\sigma^2 + u^2)(\lambda + u)}{(\lambda + u)^4} = \frac{2(u\lambda - \sigma^2)}{(\lambda + u)^3} = 0 \\ u_* &= \frac{\sigma^2}{\lambda} \end{aligned}$$

Consequently, Cantelli’s inequality is derived.

$$\begin{aligned} P(X \geq t) &\leq g(u_*) = \frac{\sigma^2 + u_*^2}{(\lambda + u_*)^2} = \frac{\sigma^2 + \sigma^4/\lambda^2}{(\lambda + \sigma^2/\lambda)^2} = \frac{\lambda^2\sigma^2 + \sigma^4}{(\lambda^2 + \sigma^2)^2} = \frac{\sigma^2(\lambda^2 + \sigma^2)}{(\lambda^2 + \sigma^2)^2} = \frac{\sigma^2}{\lambda^2 + \sigma^2} \\ &= \frac{\sigma^2}{(t - \mu)^2 + \sigma^2} \end{aligned}$$

This inequality gives a valid upper bound only when $t > \mu$. If $t \leq \mu$, the upper bound should simply be set to $1$ because the surface is fully lit. While calculating the variance, it is better to clamp it to a small minimum value such as $10^{-6}$, as suggested in [2]; this clamping avoids numeric issues when the variance is near zero.
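The complete visibility test then combines the fully-lit case, the variance clamp, and the Cantelli bound. Below is a minimal CPU sketch; the function name is an assumption, while the moment definitions and the $10^{-6}$ floor follow the text.

```c
/* VSM visibility: upper bound on P(X >= t) from the two stored moments.
   m1 = E[x], m2 = E[x^2], t = receiver depth in light space. */
float chebyshev_upper_bound(float m1, float m2, float t)
{
    /* Fully lit when the receiver is not beyond the mean occluder depth. */
    if (t <= m1) return 1.0f;

    /* sigma^2 = M2 - M1^2, clamped to avoid numeric issues (see [2]). */
    float variance = m2 - m1 * m1;
    if (variance < 1e-6f) variance = 1e-6f;

    /* One-sided Chebyshev (Cantelli): sigma^2 / ((t - mu)^2 + sigma^2). */
    float d = t - m1;
    return variance / (d * d + variance);
}
```

For example, with $\mu = 0.5$, $\sigma^2 = 0.01$, and $t = 0.6$, the bound evaluates to $0.01/(0.01 + 0.01) = 0.5$, i.e. a half-shadowed result instead of a binary one.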

3. Parallel-Split Variance Shadow Map (PSVSM)


This algorithm is based on parallel-split shadow mapping, as discussed before, but combined with variance shadow maps: the shadow value in each split is computed with the upper bound from Cantelli's inequality.
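The split selection can be sketched as below. This is only an illustration of the parallel-split idea the text refers to; the linear search and the cascade far-plane distances in the example are assumptions, not values from the source.

```c
/* Pick the first cascade whose far plane covers the view-space depth.
   split_far holds the far distance of each split, in increasing order. */
int select_split(const float *split_far, int num_splits, float view_depth)
{
    for (int i = 0; i < num_splits; ++i) {
        if (view_depth <= split_far[i]) return i;
    }
    return num_splits - 1; /* Fall back to the last (coarsest) cascade. */
}
```

The chosen index then selects which moment texture the VSM bound is evaluated against.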

4. Summed-Area Table Variance Shadow Map (SATVSM)


A summed-area table makes it fast to compute the sum over an arbitrary rectangular region of a texture. If this table is generated from the texture in which the moments are stored, shadow calculation over arbitrary areas can be accelerated. The table can be created with the recursive-doubling method from [5]. Like PCF, a filter size is required when calculating shadows, but this time a different policy is applied: the filter size is increased as the camera gets closer to the shadowed area, to hide artifacts. In addition, bilinear interpolation is taken into account while averaging the moments in an area. Combined with summed-area tables, shadow filtering runs in constant time per sample and produces plausible soft shadows.
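The table and its constant-time range query can be sketched as follows. Note the construction here is a simple two-pass CPU prefix sum standing in for the GPU recursive-doubling scheme of [5]; only the query arithmetic is the same, and the function names are assumptions.

```c
/* Build an inclusive summed-area table over one moment channel (row-major,
   w x h). sat[y][x] = sum of src over the rectangle (0,0)..(x,y). */
void sat_build(const float *src, float *sat, int w, int h)
{
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float s = src[y * w + x];
            if (x > 0) s += sat[y * w + (x - 1)];
            if (y > 0) s += sat[(y - 1) * w + x];
            if (x > 0 && y > 0) s -= sat[(y - 1) * w + (x - 1)];
            sat[y * w + x] = s;
        }
    }
}

/* Average over the inclusive rectangle [x0,x1] x [y0,y1]: four lookups,
   independent of the filter size. */
float sat_average(const float *sat, int w, int x0, int y0, int x1, int y1)
{
    float s = sat[y1 * w + x1];
    if (x0 > 0) s -= sat[y1 * w + (x0 - 1)];
    if (y0 > 0) s -= sat[(y0 - 1) * w + x1];
    if (x0 > 0 && y0 > 0) s += sat[(y0 - 1) * w + (x0 - 1)];
    return s / (float)((x1 - x0 + 1) * (y1 - y0 + 1));
}
```

Averaging $M_1$ and $M_2$ this way and then evaluating the Cantelli bound gives the filtered shadow value at a cost that does not grow with the kernel.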

Light Bleeding


One flaw of variance shadow maps is light bleeding, which lights a surface that a shadow should be cast on. As shown in the figure above, a large depth gap causes the variance $\sigma^2$ to increase, so the upper bound from Cantelli's inequality also gets close to $1$. In fact, this artifact gets worse as $\frac{d_1}{d_2}$ grows. One simple solution is to rescale the shadow value with an empirically chosen constant.

  float reduceLightBleeding(in float shadow)
  {
      // Cut off values below the threshold and rescale the rest to [0, 1].
      const float light_bleeding_reduction_amount = 0.18f;
      return clamp( (shadow - light_bleeding_reduction_amount) / (1.0f - light_bleeding_reduction_amount), 0.0f, 1.0f );
  }






[5] Justin Hensley, Thorsten Scheuermann, Greg Coombe, Montek Singh, and Anselmo Lastra, "Fast Summed-Area Table Generation and Its Applications," Computer Graphics Forum 24(3), 2005.


© 2023. All rights reserved.