Physically-Based
Rendering

Lights and Rays ...

 



Chapter 12: Monte Carlo Integration


Monte Carlo integration is crucial in rendering: it uses random sampling to approximate complex lighting effects such as global illumination and soft shadows by tracing randomly chosen light paths. Because its convergence rate does not depend on the dimensionality of the integrand, it makes the high-dimensional, intricate integrals of light transport tractable.


Probability Theory Basics


Probability Theory is the branch of mathematics that deals with randomness and uncertainty. In ray tracing, we utilize random sampling to estimate lighting, color, and material properties.

Probability Density Function (PDF)


A Probability Density Function (PDF) is a function that describes the relative likelihood of a random variable taking on a particular value. For a continuous random variable \(X\), the PDF \(p(x)\) must satisfy:

\[
\int_{-\infty}^{\infty} p(x) \, dx = 1
\]

This means that the total area under the PDF curve equals 1.

Example: PDF of Uniform Distribution



For a uniform distribution defined over the interval \([a, b]\):

\[
p(x) = \frac{1}{b - a} \quad \text{for } a \leq x \leq b
\]

This means that every value in this range has an equal probability of being sampled.
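
As a small illustrative sketch (the function names here are our own, not from the text above), the uniform PDF and a numerical check of its normalization look like this in JavaScript:

// PDF of the uniform distribution on [a, b]
function uniformPDF(x, a, b) {
    return (x >= a && x <= b) ? 1 / (b - a) : 0;
}

// Numerical check: the area under the PDF should equal 1 (midpoint Riemann sum)
function integratePDF(a, b, steps) {
    const dx = (b - a) / steps;
    let area = 0;
    for (let i = 0; i < steps; i++) {
        area += uniformPDF(a + (i + 0.5) * dx, a, b) * dx;
    }
    return area;
}

console.log("Area under PDF:", integratePDF(2, 5, 1000)); // Should be ~1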

Cumulative Distribution Function (CDF)


The Cumulative Distribution Function (CDF), \(F(x)\), gives the probability that the random variable \(X\) is less than or equal to \(x\):

\[
F(x) = \int_{-\infty}^{x} p(t) \, dt
\]

Example: CDF for Uniform Distribution



For the uniform distribution defined above, the CDF can be expressed as:

\[
F(x) = \begin{cases}
0 & x < a \\
\frac{x-a}{b-a} & a \leq x < b \\
1 & x \geq b
\end{cases}
\]

This CDF shows that for any value of \(x\) less than \(a\), the probability is 0; for \(x\) in the range \([a, b]\), the probability increases linearly, and for \(x\) greater than \(b\), the probability is 1.
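
Translated directly into code (a sketch; uniformCDF is our own name), the piecewise definition becomes:

// CDF of the uniform distribution on [a, b]
function uniformCDF(x, a, b) {
    if (x < a) return 0;      // No probability mass below a
    if (x >= b) return 1;     // All mass lies at or below b
    return (x - a) / (b - a); // Linear ramp on [a, b)
}

console.log(uniformCDF(3.5, 2, 5)); // 0.5: halfway through [2, 5]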

The Monte Carlo Estimator


The Monte Carlo estimator is a statistical method for estimating the value of an integral. It relies on random sampling to approximate integrals of functions over a domain \(D\).

Monte Carlo Estimator Equation


The integral \(I\) of a function \(f(x)\) over a domain \(D\) can be estimated using \(N\) random samples \(x_i\) drawn from a probability density function \(p(x)\):

\[
I \approx \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i)}{p(x_i)}
\]

This equation computes the integral by weighting each function value \(f(x_i)\) by the inverse of the probability density \(p(x_i)\) at that point, which compensates for how likely each sample was to be drawn.

Example Code Snippet


Here's how to implement a Monte Carlo estimator in JavaScript:

function monteCarloEstimator(f, p, numSamples) {
    let sum = 0;

    for (let i = 0; i < numSamples; i++) {
        const x = sampleFromDistribution(p); // Sample from the distribution
        sum += f(x) / p(x); // Weight the function value by the inverse PDF
    }

    return sum / numSamples; // Average over all samples
}

// Sample from uniform distribution
function sampleFromDistribution(p) {
    // Assuming p(x) is uniform over [0, 1]
    return Math.random();
}

// Example function to integrate: f(x) = x^2
function f(x) {
    return x * x; // The integral over [0, 1] is 1/3
}

// Run the estimator
const numSamples = 10000;
const result = monteCarloEstimator(f, x => 1, numSamples); // p(x) = 1
console.log("Estimated integral:", result); // Should approximate 1/3


In this example, we define a function \(f(x) = x^2\) and use the Monte Carlo estimator to approximate the integral over the interval \([0, 1]\).
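
The estimator's standard error shrinks like \(1/\sqrt{N}\): each hundredfold increase in the sample count buys roughly one more decimal digit of accuracy. A quick sketch (reusing monteCarloEstimator and f from above) makes this visible:

// Error vs. sample count for the x^2 example (true value 1/3)
const trueIntegral = 1 / 3;
for (const n of [100, 1000, 10000, 100000]) {
    const estimate = monteCarloEstimator(f, x => 1, n);
    console.log("N = " + n + ": error = " + Math.abs(estimate - trueIntegral));
}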

Sampling Random Variables


Sampling Techniques


Sampling is the process of selecting points from a probability distribution. Various techniques can be employed to sample random variables effectively.

Inverse Transform Sampling


Inverse Transform Sampling is a widely used method where we first sample from a uniform distribution and then transform that sample using the inverse of the CDF.

If \(u\) is drawn from \(U(0, 1)\), then the corresponding sample from the distribution with CDF \(F\) can be computed as:

\[
x = F^{-1}(u)
\]

Example of Inverse Transform Sampling


For a uniform distribution over \([a, b]\):

1. Generate a random value \(u \sim U(0, 1)\).
2. Compute \(x = a + (b - a) \cdot u\).

function inverseTransformUniform(u, a, b) {
    return a + (b - a) * u; // Generate a sample from U(a, b)
}

// Usage
const a = 0, b = 1;
const u = Math.random(); // Sample from U(0, 1)
const sample = inverseTransformUniform(u, a, b);
console.log("Sampled value:", sample);


This method is efficient for simple distributions like the uniform distribution.
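
The same recipe applies to any distribution with an invertible CDF. As a further sketch (not part of the original text), sampling an exponential distribution with rate \(\lambda\) uses \(F^{-1}(u) = -\ln(1 - u)/\lambda\):

// Inverse transform sampling for an exponential distribution with rate lambda:
// CDF F(x) = 1 - exp(-lambda * x), so F^{-1}(u) = -ln(1 - u) / lambda
function sampleExponential(lambda) {
    const uRand = Math.random();          // uRand ~ U(0, 1)
    return -Math.log(1 - uRand) / lambda; // x ~ Exp(lambda)
}

console.log("Exponential sample:", sampleExponential(2.0)); // Mean over many samples ~ 1/lambda = 0.5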

Metropolis Sampling


The Metropolis algorithm is a method for generating samples from a probability distribution when direct sampling is difficult. It is particularly useful for high-dimensional spaces.

Metropolis Algorithm Steps


1. Initialize: Start with an initial sample \(x_0\).
2. Proposal: Generate a new candidate sample \(x'\) based on the current sample.
3. Acceptance Probability: Calculate the acceptance probability:

\[
A(x, x') = \min\left(1, \frac{p(x')}{p(x)}\right)
\]

4. Accept or Reject: Accept \(x'\) with probability \(A\). If rejected, the current sample \(x\) is retained.

Example Code Snippet


function metropolisSampling(p, x0, numSamples) {
    let samples = [x0]; // Store the samples
    let current = x0; // Current sample

    for (let i = 1; i < numSamples; i++) {
        const xPrime = proposalDistribution(current); // Generate a proposal sample
        const acceptanceProbability = Math.min(1, p(xPrime) / p(current)); // Calculate acceptance probability

        // Accept or reject the new sample
        if (Math.random() < acceptanceProbability) {
            current = xPrime; // Accept the new sample
        }
        samples.push(current); // Store the sample
    }

    return samples;
}

// Example proposal distribution (symmetric random walk)
function proposalDistribution(current) {
    const stepSize = 0.1; // Step size for random walk
    return current + (Math.random() - 0.5) * stepSize; // Random walk
}

// Example probability density function
function p(x) {
    return Math.exp(-x * x / 2) / Math.sqrt(2 * Math.PI); // Standard normal distribution
}

// Generate samples
const samples = metropolisSampling(p, 0, 1000); // Start from x0 = 0
console.log("Metropolis samples:", samples);


In this code, we use a symmetric random-walk proposal, for which the simple ratio \(p(x')/p(x)\) is the correct acceptance probability. The resulting chain explores the distribution without ever sampling from \(p\) directly; in practice, the earliest samples (the "burn-in") are usually discarded before the chain is used.

Transforming between Distributions


When transforming samples from one distribution to another, we can use the change of variables method. This method is useful when you have a transformation \(X = g(Y)\), with the PDF of \(Y\) known, and want to derive the PDF of \(X\).

The relationship between the PDFs can be expressed as:

\[
p_X(x) = p_Y(g^{-1}(x)) \left| \frac{dg^{-1}}{dx} \right|
\]

where \(\frac{dg^{-1}}{dx}\) is the Jacobian of the inverse mapping, and \(g\) must be monotonic (invertible) for the formula to apply.

Example


If \(Y\) follows a known distribution, we can push its samples through \(g\) to obtain samples of \(X = g(Y)\):

function transformDistribution(y) {
    // Example transformation: Y to X = Y^2
    return y * y; // Squaring the value
}


This function demonstrates a simple transformation: if \(Y \sim U(0, 1)\), the formula above with \(g^{-1}(x) = \sqrt{x}\) gives \(p_X(x) = \frac{1}{2\sqrt{x}}\) on \((0, 1]\).
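
As a quick sanity check (an illustrative sketch, not part of the original example), we can sample \(Y \sim U(0, 1)\), square it, and compare the empirical mean of \(X\) against the analytic value \(\mathbb{E}[X] = \int_0^1 x \cdot \frac{1}{2\sqrt{x}} \, dx = \frac{1}{3}\):

// Sanity check: X = Y^2 with Y ~ U(0, 1) should have mean E[X] = 1/3
function checkTransform(numSamples) {
    let sum = 0;
    for (let i = 0; i < numSamples; i++) {
        const y = Math.random();         // Y ~ U(0, 1)
        sum += transformDistribution(y); // X = Y^2
    }
    return sum / numSamples;             // Empirical mean of X
}

console.log("Mean of X:", checkTransform(100000)); // Should approximate 1/3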

2D Sampling with Multidimensional Transformations


Multidimensional Integrals


In ray tracing, we often deal with multidimensional integrals, such as when estimating the color contribution from light sources in a 3D scene. The general form of a 2D integral is:

\[
I = \int_D f(x, y) \, dx \, dy
\]

Using Monte Carlo estimation, we can approximate this integral as follows:

\[
I \approx \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i, y_i)}{p(x_i, y_i)}
\]

where \( (x_i, y_i) \) are random samples drawn from a joint PDF \( p(x, y) \).

Example Code Snippet


function monteCarlo2D(f, p, numSamples) {
    let sum = 0;

    for (let i = 0; i < numSamples; i++) {
        const x = sampleFromDistribution(p); // Sample x
        const y = sampleFromDistribution(p); // Sample y
        sum += f(x, y) / (p(x) * p(y)); // Weight the function value
    }

    return sum / numSamples; // Average over all samples
}

// Example function to integrate in 2D
function f2D(x, y) {
    return x * y; // Integral over [0, 1] x [0, 1] is 1/4
}

// Run the estimator
const numSamples2D = 10000;
const result2D = monteCarlo2D(f2D, () => 1, numSamples2D); // p(x, y) = 1
console.log("Estimated 2D integral:", result2D); // Should approximate 1/4


This code approximates the integral of the function \(f(x, y) = xy\) over the unit square.

Russian Roulette and Splitting


Russian Roulette is an efficiency technique used in Monte Carlo rendering to terminate rays that contribute little to the final image, saving computational resources without biasing the result when the surviving rays are weighted correctly. When a ray is traced and reaches a point, we decide probabilistically whether to continue tracing or terminate it.

Survival Probability


Given a continuation (survival) probability \(p\), we generate a random number \(r \sim U(0, 1)\):

- If \(r < p\): continue tracing the ray, weighting its contribution by \(1/p\) so the estimate remains unbiased.
- If \(r \geq p\): terminate the ray and assign a value of zero.

Example Code Snippet


function russianRoulette(ray, probability) {
    const r = Math.random(); // Random number for roulette
    if (r < probability) {
        // Continue tracing; divide by the survival probability so the
        // expected contribution is unchanged (keeps the estimate unbiased)
        return traceRay(ray) / probability; // Implement your ray tracing logic here
    }
    return 0; // Terminate the ray
}


This function illustrates how Russian Roulette can be implemented in ray tracing; the division by the survival probability compensates, on average, for the rays that were terminated.

Splitting


Splitting is another variance reduction technique where rays are divided into multiple parts that are traced independently. This technique is particularly effective in complex scenes where certain directions contribute significantly to lighting.

function splitRay(ray, numSplits, stepSize) {
    const splitRays = [];
    for (let i = 0; i < numSplits; i++) {
        // Create new rays based on splitting logic; here each copy is
        // offset by a multiple of stepSize as a simple placeholder rule
        splitRays.push(ray.clone().offset(i * stepSize));
    }
    return splitRays;
}


This function takes a ray and generates multiple split rays that can be traced independently, potentially improving the quality of the final image.

Bias


Bias refers to systematic error: a biased estimator's expected value differs from the true value, so the error does not vanish no matter how many independent runs are averaged. In Monte Carlo rendering, bias typically enters through shortcuts such as terminating rays without the compensating \(1/p\) weight, and it leads to consistently incorrect estimates.

Reducing Bias


To keep the error of the estimate small, we can:

1. Use more samples: for an unbiased (or at least consistent) estimator, increasing the number of samples converges toward the true value.
2. Apply importance sampling: concentrating samples where the integrand is large reduces variance without introducing bias.

Example Code Snippet


function biasedEstimator(samples, trueValue) {
    const estimate = samples.reduce((sum, s) => sum + s, 0) / samples.length;
    const bias = estimate - trueValue; // Calculate bias
    console.log("Bias:", bias);
}


This code measures the error of a single estimate against a known true value; the bias proper is the average of this error over many independent runs, so a single run only gives a noisy indication of it.

Importance Sampling


Importance Sampling is a technique used to reduce variance by sampling more frequently from important regions of the domain. This is particularly useful in ray tracing for sampling light sources or materials that contribute significantly to the final image.

Importance Sampling Equation


The integral can be rewritten so that samples are drawn from a chosen density \(p(x)\):

\[
I = \int_D f(x) \, dx = \int_D \frac{f(x)}{p(x)} \, p(x) \, dx
\]

where \(p(x)\) is the importance sampling distribution, ideally chosen to be roughly proportional to \(f(x)\). We then estimate this as:

\[
I \approx \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i)}{p(x_i)}
\]

Example Code Snippet


function importanceSampling(f, p, numSamples) {
    let sum = 0;

    for (let i = 0; i < numSamples; i++) {
        const x = sampleFromImportanceDistribution(); // Sample from importance distribution
        sum += f(x) / p(x); // Weight the function value
    }

    return sum / numSamples; // Average over all samples
}

// Example importance distribution: p(x) = 2x on [0, 1], a good match for f(x) = x^2
// (sampled by inverse transform: x = sqrt(u) for u ~ U(0, 1))
function sampleFromImportanceDistribution() {
    return Math.sqrt(Math.random());
}

// Run importance sampling with f(x) = x^2 and p(x) = 2x
const resultImportance = importanceSampling(x => x * x, x => 2 * x, 10000);
console.log("Estimated integral with importance sampling:", resultImportance);


In this code, samples are drawn from \(p(x) = 2x\), which closely matches the shape of \(f(x) = x^2\) on \([0, 1]\); the weighted samples \(f(x)/p(x) = x/2\) vary far less than under uniform sampling, so the estimate converges with lower variance.






