Physically-Based Rendering
Lights and Rays ...
Chapter 2: Mathematics and Transforms


Mathematics and transforms are essential as they help us define how objects and light interact in a 3D scene. Coordinate systems and vectors are used to position objects and describe directions. Transforms such as translation, rotation, and scaling position and orient objects in the scene. Rays, which represent paths of light, are traced to find where they hit objects, while bounding boxes make these calculations faster by narrowing down what needs to be checked. These tools work together to simulate real-world lighting and materials accurately and efficiently.

Coordinate Systems


A coordinate system is a method for uniquely determining the position of points in a space.

They are fundamental for representing positions in space, and their handedness (left vs. right) has significant implications.
Understanding handedness is essential for ensuring consistent behavior in simulations, animations, and any application involving spatial calculations.

The most common coordinate systems are:

- Cartesian coordinates: defined by orthogonal axes (X, Y, Z).
- Polar coordinates: defined by a distance from a reference point and an angle.
- Spherical coordinates: defined by a distance from a reference point and two angles.
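
As a small illustration of how these systems relate, the sketch below converts polar and spherical coordinates into Cartesian coordinates (a minimal sketch; the function names are just for illustration):

// Convert 2D polar coordinates (radius r, angle theta in radians) to Cartesian (x, y)
function polarToCartesian(r, theta) {
    return [r * Math.cos(theta), r * Math.sin(theta)];
}

// Convert spherical coordinates (radius r, polar angle theta, azimuth phi) to Cartesian (x, y, z)
function sphericalToCartesian(r, theta, phi) {
    return [
        r * Math.sin(theta) * Math.cos(phi),
        r * Math.sin(theta) * Math.sin(phi),
        r * Math.cos(theta)
    ];
}

console.log(polarToCartesian(1, Math.PI / 2));        // Output: approximately [0, 1]
console.log(sphericalToCartesian(1, Math.PI / 2, 0)); // Output: approximately [1, 0, 0]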

Handedness (Left vs Right)


Handedness refers to the orientation of the coordinate system in three-dimensional space. It helps in determining the relative position and rotation of objects within that space. There are two primary types of handedness:

1. Right-Handed Coordinate System (RHS)
2. Left-Handed Coordinate System (LHS)

Right-Handed Coordinate System (RHS)



In a right-handed coordinate system:

• Imagine you are holding the three axes of the coordinate system with your right hand.
• Your thumb points in the direction of the positive Z-axis, your index finger points in the direction of the positive X-axis, and your middle finger points in the direction of the positive Y-axis.

This arrangement can be visualized as follows:


Right-Handed Coordinate System (RHS).


In this system, if you curl your fingers from the X-axis toward the Y-axis, your thumb points in the direction of the Z-axis.

Left-Handed Coordinate System (LHS)



In a left-handed coordinate system:

• You hold the axes with your left hand instead.
• Your thumb points in the direction of the positive Z-axis, your index finger points in the direction of the positive X-axis, and your middle finger points in the direction of the positive Y-axis (the mirror image of the right-handed arrangement).

This arrangement looks like this:


Left-Handed Coordinate System (LHS).


In this case, curling the fingers of your right hand from the X-axis toward the Y-axis points your thumb in the direction of the negative Z-axis.


Importance of Handedness



1. Consistency in Graphics and Physics: Handedness is critical in 3D graphics and physics simulations. A right-handed system is commonly used in most computer graphics applications (e.g., OpenGL), while a left-handed system might be used in others (e.g., Direct3D).

2. Vector Operations: Operations such as the cross product can yield different results based on the handedness of the coordinate system, affecting object orientation and transformations.

3. Geometric Interpretations: Handedness can affect how shapes are drawn and manipulated in a 3D space. If you switch from one handedness to another, the interpretation of the angles, rotations, and object orientations may change.


Comparing Coordinate Systems.


Mathematical Representation of Handedness



Let's look at how the right-hand and left-hand systems can be mathematically represented. Suppose you have two vectors \( \mathbf{a} \) and \( \mathbf{b} \) in 3D space.

In a right-handed system, the cross product \( \mathbf{c} = \mathbf{a} \times \mathbf{b} \) gives a vector \( \mathbf{c} \) that points in the positive Z direction.

In a left-handed system, the same cross product yields a vector \( \mathbf{c} \) that points in the negative Z direction.

Right-Handed Cross Product Example



Given vectors:

\[
\mathbf{a} = (1, 0, 0) \quad \text{(X-axis)}
\]
\[
\mathbf{b} = (0, 1, 0) \quad \text{(Y-axis)}
\]

The cross product \( \mathbf{c} = \mathbf{a} \times \mathbf{b} \) is calculated as:

\[
\mathbf{c} = (a_y \cdot b_z - a_z \cdot b_y, a_z \cdot b_x - a_x \cdot b_z, a_x \cdot b_y - a_y \cdot b_x)
\]
\[
\mathbf{c} = (0 \cdot 0 - 0 \cdot 1, 0 \cdot 1 - 1 \cdot 0, 1 \cdot 0 - 0 \cdot 0)
\]
\[
\mathbf{c} = (0, 0, 1) \quad \text{(Z-axis)}
\]

Left-Handed Cross Product Example


Using the same vectors in a left-handed system, the interpretation of the cross product is different:

\[
\mathbf{c}_{LHS} = (1, 0, 0) \times (0, 1, 0) = (0, 0, -1) \quad \text{(Negative Z-axis)}
\]

JavaScript Example: Handedness and Cross Product



Let's write a JavaScript function that calculates the cross product of two vectors, so we can observe how the handedness affects the interpretation of the result.

function crossProduct(a, b) {
    return [
        a[1] * b[2] - a[2] * b[1], // X component
        a[2] * b[0] - a[0] * b[2], // Y component
        a[0] * b[1] - a[1] * b[0]  // Z component
    ];
}

// Example vectors for right-handed system
const vectorA = [1, 0, 0]; // X-axis
const vectorB = [0, 1, 0]; // Y-axis

const rightHandedCrossProduct = crossProduct(vectorA, vectorB);
console.log("Cross Product (Right-Handed):", rightHandedCrossProduct); // Output: [0, 0, 1]

// Example vectors for left-handed system (manually flipping the Y-axis)
const leftHandedCrossProduct = crossProduct(vectorA, [0, -1, 0]); // flipping Y to negative
console.log("Cross Product (Left-Handed):", leftHandedCrossProduct); // Output: [0, 0, -1]



Vectors


A vector is a mathematical object that has both magnitude (length) and direction. Vectors are often represented in a Cartesian coordinate system with components, e.g., a 3D vector can be represented as \( \mathbf{v} = (x, y, z) \).


Dot Product


The dot product of two vectors \( \mathbf{a} = (a_x, a_y, a_z) \) and \( \mathbf{b} = (b_x, b_y, b_z) \) is calculated as:

\[
\mathbf{a} \cdot \mathbf{b} = a_x \cdot b_x + a_y \cdot b_y + a_z \cdot b_z
\]

Geometric Interpretation: The dot product gives a scalar value that reflects the cosine of the angle \( \theta \) between the two vectors:

\[
\mathbf{a} \cdot \mathbf{b} = |\mathbf{a}| |\mathbf{b}| \cos(\theta)
\]

This means that the dot product can be used to determine whether two vectors are perpendicular (if the dot product is zero).

JavaScript Example for Dot Product



function dotProduct(a, b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Example usage
const vectorA = [1, 2, 3];
const vectorB = [4, 5, 6];
const result = dotProduct(vectorA, vectorB);
console.log("Dot Product:", result); // Output: Dot Product: 32


Cross Product


The cross product of two vectors \( \mathbf{a} \) and \( \mathbf{b} \) results in a vector that is perpendicular to both \( \mathbf{a} \) and \( \mathbf{b} \). It is calculated as:

\[
\mathbf{a} \times \mathbf{b} = \left( a_y b_z - a_z b_y, a_z b_x - a_x b_z, a_x b_y - a_y b_x \right)
\]

Geometric Interpretation: The magnitude of the resulting vector is given by:

\[
|\mathbf{a} \times \mathbf{b}| = |\mathbf{a}| |\mathbf{b}| \sin(\theta)
\]

This represents the area of the parallelogram formed by the two vectors.

JavaScript Example for Cross Product



function crossProduct(a, b) {
    return [
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0]
    ];
}

// Example usage
const vectorA = [1, 2, 3];
const vectorB = [4, 5, 6];
const crossResult = crossProduct(vectorA, vectorB);
console.log("Cross Product:", crossResult); // Output: Cross Product: [-3, 6, -3]


Normalization


Normalization of a vector involves scaling the vector so that it has a magnitude of 1, which is known as a unit vector. The formula for normalizing a vector \( \mathbf{v} = (v_x, v_y, v_z) \) is:

\[
\text{Normalized } \mathbf{v} = \left( \frac{v_x}{|\mathbf{v}|}, \frac{v_y}{|\mathbf{v}|}, \frac{v_z}{|\mathbf{v}|} \right)
\]

where \( |\mathbf{v}| = \sqrt{v_x^2 + v_y^2 + v_z^2} \).

JavaScript Example for Normalization



function normalize(v) {
    const length = Math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2);
    return [v[0] / length, v[1] / length, v[2] / length];
}

// Example usage
const vector = [3, 4, 0];
const normalizedVector = normalize(vector);
console.log("Normalized Vector:", normalizedVector); // Output: Normalized Vector: [0.6, 0.8, 0]



Coordinate System from a Vector


We often need to construct a local coordinate system when provided with a single 3D vector. Using the cross product of two vectors, we can create a set of three orthogonal vectors that define a local coordinate system. Since the cross product between two vectors is perpendicular to both, applying the cross product twice yields three orthogonal vectors. However, the second and third vectors are unique only up to a rotation around the given vector.

Steps to Construct the Coordinate System



1. Start with a normalized vector \( \mathbf{v_1} \).

The given vector \( \mathbf{v_1} \) is assumed to be normalized, meaning \( \mathbf{v_1} \cdot \mathbf{v_1} = 1 \).

2. Generate a second vector \( \mathbf{v_2} \) that is perpendicular to \( \mathbf{v_1} \).

One simple way to do this is by zeroing one component of \( \mathbf{v_1} \), swapping the remaining two, and negating one of them.

3. Cross product of \( \mathbf{v_1} \) and \( \mathbf{v_2} \) to get a third vector \( \mathbf{v_3} \) that is perpendicular to both.

The resulting vector \( \mathbf{v_3} \) will complete the orthogonal set of vectors.

Example of Vector Construction



Let \( \mathbf{v_1} = (x, y, z) \) be the normalized input vector.

1. Construct a Perpendicular Vector \( \mathbf{v_2} \):

If \( x \) is non-zero, we can choose \( \mathbf{v_2} = (-y, x, 0) \).
This ensures that \( \mathbf{v_1} \cdot \mathbf{v_2} = 0 \), meaning the two vectors are perpendicular.

2. Calculate the Third Vector \( \mathbf{v_3} \):

Use the cross product:
\[
\mathbf{v_3} = \mathbf{v_1} \times \mathbf{v_2}
\]

Here we branch on the first sufficiently large component; for better numerical stability, you could instead modify the algorithm to pick the component with the largest magnitude.

JavaScript Example



Here's a simple JavaScript implementation that constructs an orthogonal coordinate system from a given normalized vector:

function crossProduct(v1, v2) {
    return [
        v1[1] * v2[2] - v1[2] * v2[1],
        v1[2] * v2[0] - v1[0] * v2[2],
        v1[0] * v2[1] - v1[1] * v2[0]
    ];
}

function normalize(v) {
    const length = Math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2);
    return [v[0] / length, v[1] / length, v[2] / length];
}

function constructCoordinateSystem(v1) {
    // Assume v1 is already normalized
    let v2;

    // Choose a simple perpendicular vector
    if (Math.abs(v1[0]) > 0.001) {
        v2 = [-v1[1], v1[0], 0]; // swap x and y, and negate one
    } else {
        v2 = [0, -v1[2], v1[1]]; // swap y and z, and negate one
    }

    // Normalize v2
    v2 = normalize(v2);

    // Compute v3 as the cross product of v1 and v2
    const v3 = crossProduct(v1, v2);

    return {
        v1: v1,
        v2: v2,
        v3: v3
    };
}

// Example Usage
const v1 = normalize([1, 2, 3]); // Input vector (normalized)
const coordinateSystem = constructCoordinateSystem(v1);

console.log("v1:", coordinateSystem.v1);
console.log("v2:", coordinateSystem.v2);
console.log("v3:", coordinateSystem.v3);


Points


A point is defined as a specific location in space, characterized by its coordinates in a given coordinate system. For instance, in a 2D Cartesian coordinate system, a point can be represented as \( P(x, y) \), indicating its position along the X and Y axes. Similarly, in a 3D space, a point is represented as \( P(x, y, z) \), indicating its location along the X, Y, and Z axes. Points are static entities; they do not possess direction or magnitude, but rather, they denote a precise location in space.

Differences Between Points and Vectors


The primary difference between points and vectors lies in their characteristics and operations:

1. Representation:

A point is represented solely by its coordinates, while a vector is represented by its components and includes a direction.

2. Nature:

A point is a static location, whereas a vector is a dynamic entity that describes a direction and magnitude.

3. Operations:

Points can be manipulated using vector operations, such as addition and subtraction. For example, if you have a point \( P_1(x_1, y_1) \) and a vector \( \mathbf{v}(dx, dy) \), you can find a new point \( P_2 \) by adding the vector to the point:
\[
P_2 = P_1 + \mathbf{v} = (x_1 + dx, y_1 + dy).
\]

This operation effectively "moves" the point in the direction specified by the vector.
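
A minimal sketch of this point-plus-vector operation in 2D (the helper name addVectorToPoint is just for illustration):

// Move a 2D point by a displacement vector: P2 = P1 + v
function addVectorToPoint(point, vector) {
    return [point[0] + vector[0], point[1] + vector[1]];
}

const P1 = [2, 3];
const v = [1, -1];
console.log(addVectorToPoint(P1, v)); // Output: [3, 2]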

Common Operations


1. Translation: As mentioned above, translating a point involves adding a vector to the point's coordinates. This operation shifts the point to a new location.

2. Distance Calculation: The distance between two points can be calculated using the Euclidean distance formula. For points \( P_1(x_1, y_1) \) and \( P_2(x_2, y_2) \) in 2D space, the distance \( d \) is given by:
\[
d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}.
\]
In 3D space, the formula extends to:
\[
d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2}.
\]
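
These distance formulas translate directly into code; a small sketch for the 3D case (the function name is illustrative):

// Euclidean distance between two 3D points
function distance3D(p1, p2) {
    const dx = p2[0] - p1[0];
    const dy = p2[1] - p1[1];
    const dz = p2[2] - p1[2];
    return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

console.log(distance3D([0, 0, 0], [1, 2, 2])); // Output: 3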

Normals


A normal vector is perpendicular to a surface at a given point and plays a crucial role in lighting calculations, surface shading, and collision detection.

How Normals Are Computed


Normals can be calculated for various geometrical primitives:

1. Flat Surfaces: For a flat polygon (like a triangle), the normal can be computed using the cross product of two of its edges. For a triangle defined by three points \( P_1, P_2, P_3 \):

Let:
\[
\mathbf{v_1} = P_2 - P_1
\]
\[
\mathbf{v_2} = P_3 - P_1
\]

The normal \( \mathbf{N} \) is computed as:
\[
\mathbf{N} = \mathbf{v_1} \times \mathbf{v_2}
\]

Normalize \( \mathbf{N} \) to ensure it has a unit length.

2. Smooth Surfaces: For a smooth surface, normals can be averaged from the normals of the surrounding polygons that share a vertex. This helps create a smoother appearance on curved surfaces.

Example: Calculating Normals in JavaScript


Here's an example of how to compute the normal vector for a triangle defined by three vertices in JavaScript:

// Function to calculate the normal of a triangle
function calculateNormal(p1, p2, p3) {
    // Create vectors v1 and v2
    const v1 = [p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]]; // Vector from p1 to p2
    const v2 = [p3[0] - p1[0], p3[1] - p1[1], p3[2] - p1[2]]; // Vector from p1 to p3

    // Calculate the cross product to find the normal
    const normal = [
        v1[1] * v2[2] - v1[2] * v2[1], // x component
        v1[2] * v2[0] - v1[0] * v2[2], // y component
        v1[0] * v2[1] - v1[1] * v2[0]  // z component
    ];

    // Normalize the normal vector
    const length = Math.sqrt(normal[0] ** 2 + normal[1] ** 2 + normal[2] ** 2);
    return [normal[0] / length, normal[1] / length, normal[2] / length]; // Return normalized normal
}

// Example usage
const p1 = [0, 0, 0];
const p2 = [1, 0, 0];
const p3 = [0, 1, 0];

const normal = calculateNormal(p1, p2, p3);
console.log("Normal Vector:", normal); // Output: [0, 0, 1] (normal points in the positive Z direction)




Rays


Rays are fundamental to rendering techniques such as ray tracing, where they are used to simulate the interaction of light with surfaces to produce realistic images. A ray is typically defined by its origin point and a direction vector, allowing it to describe a path in three-dimensional space. Mathematically, a ray can be represented as:

\[
\mathbf{R}(t) = \mathbf{O} + t \cdot \mathbf{D}
\]

where:

- \( \mathbf{R}(t) \) is the point along the ray at parameter \( t \),
- \( \mathbf{O} \) is the ray's origin,
- \( \mathbf{D} \) is the direction vector,
- \( t \) is a scalar that determines how far along the ray we are.

Rays are instrumental in various rendering processes, including shadow calculations, reflection, and refraction, making them crucial for achieving photorealistic imagery.
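
As a small sketch of the ray equation above, the function below evaluates \( \mathbf{R}(t) = \mathbf{O} + t \cdot \mathbf{D} \) for a given \( t \) (names are illustrative):

// Evaluate a point along a ray: R(t) = O + t * D
function pointAlongRay(origin, direction, t) {
    return [
        origin[0] + t * direction[0],
        origin[1] + t * direction[1],
        origin[2] + t * direction[2]
    ];
}

const O = [0, 0, 0];
const D = [0, 0, -1];
console.log(pointAlongRay(O, D, 2.5)); // Output: [0, 0, -2.5]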

Ray Differentials


Ray differentials enhance the basic ray concept by considering variations in rays that are closely associated with a primary ray. In rendering scenarios, particularly when dealing with anti-aliasing and depth of field, understanding how rays deviate from their original paths is essential for producing high-quality images.

Ray differentials help quantify how a primary ray, originating from a pixel on the image plane, can be perturbed to create additional rays that sample the scene more effectively. These variations are especially important in scenarios where light interacts with complex geometries, as they can significantly improve image quality by reducing artifacts such as jagged edges and blurry backgrounds.

Importance of Ray Differentials


1. Anti-Aliasing: By generating multiple rays for a single pixel, ray differentials enable better sampling of a scene, allowing for smoother edges and reducing visual artifacts. This is particularly useful in scenes with high-frequency details.

2. Depth of Field: In realistic rendering, depth of field effects blur objects that are out of focus. Ray differentials allow for simulating the various positions from which light can enter a lens, thereby producing more natural transitions between sharp and blurred areas.

3. Global Illumination: Ray differentials contribute to more accurate global illumination calculations by allowing for a more comprehensive sampling of light interactions within a scene, improving realism in lighting.

Calculating Ray Differentials


Ray differentials can be calculated by perturbing the origin of the primary ray and its direction. For instance, if \( \mathbf{R}_0 \) is the primary ray, you can generate ray differentials \( \mathbf{R}_{x} \) and \( \mathbf{R}_{y} \) to sample neighboring rays around the pixel:

Perturbing the Origin:
\[
\mathbf{O}_{x} = \mathbf{O} + \delta_x \cdot \mathbf{D}_{u}
\]
\[
\mathbf{O}_{y} = \mathbf{O} + \delta_y \cdot \mathbf{D}_{v}
\]

Here, \( \delta_x \) and \( \delta_y \) are small perturbations, and \( \mathbf{D}_{u} \) and \( \mathbf{D}_{v} \) are direction vectors along the image plane.

Perturbing the Direction:
\[
\mathbf{D}_{x} = \mathbf{D} + \epsilon_x
\]
\[
\mathbf{D}_{y} = \mathbf{D} + \epsilon_y
\]

Here, \( \epsilon_x \) and \( \epsilon_y \) are small perturbations that adjust the ray's direction slightly.

Example: Implementing Ray Differentials in JavaScript


Here is an example that demonstrates how to compute ray differentials in a JavaScript context:

class Ray {
    constructor(origin, direction) {
        this.origin = origin;       // [x, y, z]
        this.direction = direction; // [dx, dy, dz]
    }

    // Method to create ray differentials
    createDifferentials(deltaX, deltaY) {
        // Define small perturbations for the origin
        const perturbationX = [deltaX, 0, 0]; // X perturbation
        const perturbationY = [0, deltaY, 0]; // Y perturbation

        const rayX = new Ray(
            [this.origin[0] + perturbationX[0], this.origin[1] + perturbationX[1], this.origin[2] + perturbationX[2]],
            this.direction
        );

        const rayY = new Ray(
            [this.origin[0] + perturbationY[0], this.origin[1] + perturbationY[1], this.origin[2] + perturbationY[2]],
            this.direction
        );

        return { rayX, rayY };
    }
}

// Example usage
const primaryRay = new Ray([0, 0, 0], [0, 0, -1]);
const { rayX, rayY } = primaryRay.createDifferentials(0.01, 0.01);

console.log("Primary Ray:", primaryRay);
console.log("Ray Differential X:", rayX);
console.log("Ray Differential Y:", rayY);



Bounding Boxes


Bounding boxes are essential geometric structures used to simplify collision detection, visibility testing, and spatial partitioning. They enclose a geometric shape or object in a defined volume, providing a quick way to check for intersections or proximity without needing to consider the object's detailed geometry.

Why Bounding Boxes?


Bounding boxes serve several crucial purposes:

1. Collision Detection: They simplify the process of checking for collisions between complex objects. If the bounding boxes of two objects do not intersect, it can be inferred that the objects themselves do not intersect.

2. Efficiency: Calculating intersections between bounding boxes is computationally cheaper than testing intersections between complex shapes. This efficiency is vital in real-time applications, such as video games.

3. Culling: In rendering, bounding boxes can help determine whether objects are within the view frustum. If a bounding box is outside the frustum, the object can be culled (not rendered), improving performance.

4. Spatial Partitioning: Bounding boxes can be used to organize and index objects in a scene, facilitating more efficient rendering and collision detection through structures like quad-trees or BSP trees.

Axis-Aligned Bounding Boxes (AABBs)


Axis-aligned bounding boxes (AABBs) are bounding boxes whose faces are aligned with the coordinate axes. An AABB is defined by two opposite corners, usually represented as the minimum and maximum points:

\[
\text{AABB} = [\text{min}(x, y, z), \text{max}(x, y, z)]
\]

Characteristics of AABBs



- Simplicity: AABBs are easy to calculate and manipulate, as they only require the minimum and maximum coordinates of the enclosed geometry.
- Fast Intersection Tests: Checking if two AABBs intersect is straightforward and efficient, requiring only comparisons of the min and max coordinates along each axis.

Example: Implementing AABBs in JavaScript



Here's a basic implementation of an AABB in JavaScript, including a method to check for intersection:

class AABB {
    constructor(min, max) {
        this.min = min; // [xMin, yMin, zMin]
        this.max = max; // [xMax, yMax, zMax]
    }

    // Method to check intersection with another AABB
    intersects(other) {
        return (
            this.min[0] <= other.max[0] && this.max[0] >= other.min[0] && // X-axis
            this.min[1] <= other.max[1] && this.max[1] >= other.min[1] && // Y-axis
            this.min[2] <= other.max[2] && this.max[2] >= other.min[2]    // Z-axis
        );
    }
}

// Example usage
const aabb1 = new AABB([0, 0, 0], [1, 1, 1]);
const aabb2 = new AABB([0.5, 0.5, 0.5], [1.5, 1.5, 1.5]);
const aabb3 = new AABB([2, 2, 2], [3, 3, 3]);

console.log("AABB1 intersects AABB2:", aabb1.intersects(aabb2)); // true
console.log("AABB1 intersects AABB3:", aabb1.intersects(aabb3)); // false


Oriented Bounding Boxes (OBBs)


Oriented bounding boxes (OBBs) are bounding boxes that can rotate and align with the object's orientation, providing a tighter fit around complex shapes than AABBs. An OBB is defined by a center point, an orientation (usually represented by a rotation matrix), and half-extents along each axis.

Characteristics of OBBs



- Better Fit: OBBs can conform more closely to the shape of the object, reducing empty space and improving collision detection accuracy.
- Complex Intersection Tests: Checking for intersection between OBBs is more complex than AABBs, often requiring computational geometry techniques like the Separating Axis Theorem (SAT).

Example: Implementing OBBs in JavaScript



Here's a basic implementation of an OBB in JavaScript, including a method to check for intersection using the Separating Axis Theorem:

class OBB {
    constructor(center, halfExtents, rotationMatrix) {
        this.center = center;                 // [xCenter, yCenter, zCenter]
        this.halfExtents = halfExtents;       // [halfX, halfY, halfZ]
        this.rotationMatrix = rotationMatrix; // 3x3 rotation matrix
    }

    // Method to check intersection with another OBB using the Separating Axis Theorem (SAT)
    intersects(other) {
        // Vector from the center of this OBB to the center of the other OBB
        const toOtherCenter = [
            other.center[0] - this.center[0],
            other.center[1] - this.center[1],
            other.center[2] - this.center[2]
        ];

        // Project the half extents of both OBBs onto the candidate separating axes
        const axes = this.getAxes();
        for (const axis of axes) {
            const [projection1, projection2] = this.projectOntoAxis(axis, other, toOtherCenter);
            if (this.overlap(projection1, projection2) === 0) {
                return false; // No overlap on this axis, so the OBBs do not intersect
            }
        }
        return true; // Overlap on all tested axes, so the OBBs intersect
    }

    // Helper methods (getAxes, projectOntoAxis, overlap) would need to be implemented for full functionality
}

// Example usage
const obb1 = new OBB([0, 0, 0], [1, 1, 1], /* rotation matrix */);
const obb2 = new OBB([0.5, 0.5, 0.5], [1, 1, 1], /* rotation matrix */);
const obb3 = new OBB([3, 3, 3], [1, 1, 1], /* rotation matrix */);

console.log("OBB1 intersects OBB2:", obb1.intersects(obb2)); // Depends on actual implementation
console.log("OBB1 intersects OBB3:", obb1.intersects(obb3)); // Depends on actual implementation




Transformations


Transformations are essential for manipulating geometric data clearly and efficiently. They include operations such as translation, scaling, and rotation. To apply these transformations consistently, homogeneous coordinates are used, which allow them to be expressed as linear operations in projective geometry.


Homogeneous Coordinates


Homogeneous coordinates extend traditional Cartesian coordinates by adding an extra coordinate, \(w\). For a point \((x, y, z)\) in 3D space, the homogeneous coordinates are represented as \((x, y, z, w)\). The conversion from Cartesian to homogeneous coordinates is given by:

\[
(x, y, z) \rightarrow (x, y, z, 1)
\]

When \(w \neq 0\), the original coordinates can be retrieved by dividing by \(w\):

\[
\text{Cartesian} = \left( \frac{x}{w}, \frac{y}{w}, \frac{z}{w} \right)
\]

The use of homogeneous coordinates allows transformations to be represented as matrix multiplications, which can simplify the computation of multiple transformations (e.g., rotation followed by translation).

Example: The homogeneous coordinates for the point \((2, 3, 4)\) would be \((2, 3, 4, 1)\).
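
A minimal sketch of these conversions (the function names are illustrative):

// Cartesian (x, y, z) -> homogeneous (x, y, z, 1)
function toHomogeneous(p) {
    return [p[0], p[1], p[2], 1];
}

// Homogeneous (x, y, z, w) -> Cartesian (x/w, y/w, z/w), assuming w != 0
function toCartesian(h) {
    const [x, y, z, w] = h;
    return [x / w, y / w, z / w];
}

console.log(toHomogeneous([2, 3, 4]));  // Output: [2, 3, 4, 1]
console.log(toCartesian([4, 6, 8, 2])); // Output: [2, 3, 4]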


Identity Transformation


The identity transformation is a transformation that leaves objects unchanged. In matrix form, the identity matrix \(I\) for 3D transformations is:

\[
I = \begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]

Multiplying any vector in homogeneous coordinates by the identity matrix yields the same vector:

\[
\mathbf{v'} = I \cdot \mathbf{v} = \mathbf{v}
\]

Example: If \(\mathbf{v} = \begin{bmatrix} 2 \\ 3 \\ 4 \\ 1 \end{bmatrix}\), then:

\[
I \cdot \mathbf{v} = \begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix} \cdot \begin{bmatrix} 2 \\ 3 \\ 4 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 3 \\ 4 \\ 1 \end{bmatrix}
\]
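
For completeness, a small sketch that builds this 4x4 identity matrix in code (the identityMatrix helper is just for illustration); multiplying any homogeneous vector by it leaves the vector unchanged:

// Build a 4x4 identity matrix
function identityMatrix() {
    const I = [];
    for (let i = 0; i < 4; i++) {
        I[i] = [];
        for (let j = 0; j < 4; j++) {
            I[i][j] = (i === j) ? 1 : 0;
        }
    }
    return I;
}

console.log(identityMatrix());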


Translations



Translation moves an object in space by adding a vector \((tx, ty, tz)\) to each of its coordinates. The translation matrix \(T\) for 3D transformations is:

\[
T = \begin{bmatrix}
1 & 0 & 0 & tx \\
0 & 1 & 0 & ty \\
0 & 0 & 1 & tz \\
0 & 0 & 0 & 1
\end{bmatrix}
\]

To translate a point \(\mathbf{v} = (x, y, z, 1)\):

\[
\mathbf{v'} = T \cdot \mathbf{v}
\]

Example: Translating the point \((2, 3, 4)\) by \((1, 1, 1)\):

\[
T = \begin{bmatrix}
1 & 0 & 0 & 1 \\
0 & 1 & 0 & 1 \\
0 & 0 & 1 & 1 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]

\[
\mathbf{v} = \begin{bmatrix} 2 \\ 3 \\ 4 \\ 1 \end{bmatrix}, \quad \mathbf{v'} = T \cdot \mathbf{v} = \begin{bmatrix} 3 \\ 4 \\ 5 \\ 1 \end{bmatrix}
\]

JavaScript Code:
function translate(point, tx, ty, tz) {
    const T = [
        [1, 0, 0, tx],
        [0, 1, 0, ty],
        [0, 0, 1, tz],
        [0, 0, 0, 1]
    ];

    const v = [point[0], point[1], point[2], 1];

    const result = [];
    for (let i = 0; i < 4; i++) {
        result[i] = 0;
        for (let j = 0; j < 4; j++) {
            result[i] += T[i][j] * v[j];
        }
    }

    return result.slice(0, 3); // return (x, y, z)
}

// Example usage
const translatedPoint = translate([2, 3, 4], 1, 1, 1);
console.log(translatedPoint); // Output: [3, 4, 5]



Scaling


Scaling modifies the size of an object. The scaling matrix \(S\) for 3D transformations is:

\[
S = \begin{bmatrix}
sx & 0 & 0 & 0 \\
0 & sy & 0 & 0 \\
0 & 0 & sz & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]

where \(sx\), \(sy\), and \(sz\) are the scaling factors along the x, y, and z axes, respectively.

To scale a point \(\mathbf{v}\):

\[
\mathbf{v'} = S \cdot \mathbf{v}
\]

Example: Scaling the point \((2, 3, 4)\) by factors of \(2, 3, 4\):

\[
S = \begin{bmatrix}
2 & 0 & 0 & 0 \\
0 & 3 & 0 & 0 \\
0 & 0 & 4 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]

\[
\mathbf{v'} = S \cdot \begin{bmatrix} 2 \\ 3 \\ 4 \\ 1 \end{bmatrix} = \begin{bmatrix} 4 \\ 9 \\ 16 \\ 1 \end{bmatrix}
\]

JavaScript Code:
function scale(point, sx, sy, sz) {
    const S = [
        [sx, 0, 0, 0],
        [0, sy, 0, 0],
        [0, 0, sz, 0],
        [0, 0, 0, 1]
    ];

    const v = [point[0], point[1], point[2], 1];

    const result = [];
    for (let i = 0; i < 4; i++) {
        result[i] = 0;
        for (let j = 0; j < 4; j++) {
            result[i] += S[i][j] * v[j];
        }
    }

    return result.slice(0, 3); // return (x, y, z)
}

// Example usage
const scaledPoint = scale([2, 3, 4], 2, 3, 4);
console.log(scaledPoint); // Output: [4, 9, 16]



Euler Rotations


Euler rotations are a method of representing 3D rotations using three angles, typically around the x, y, and z axes. The rotation matrices for each axis are:

Rotation about the X-axis by angle \(\theta_x\):

\[
R_x(\theta_x) = \begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & \cos(\theta_x) & -\sin(\theta_x) & 0 \\
0 & \sin(\theta_x) & \cos(\theta_x) & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]

Rotation about the Y-axis by angle \(\theta_y\):

\[
R_y(\theta_y) = \begin{bmatrix}
\cos(\theta_y) & 0 & \sin(\theta_y) & 0 \\
0 & 1 & 0 & 0 \\
-\sin(\theta_y) & 0 & \cos(\theta_y) & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]

Rotation about the Z-axis by angle \(\theta_z\):

\[
R_z(\theta_z) = \begin{bmatrix}
\cos(\theta_z) & -\sin(\theta_z) & 0 & 0 \\
\sin(\theta_z) & \cos(\theta_z) & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]

The total rotation can be achieved by multiplying these matrices together:

\[
R = R_z(\theta_z) \cdot R_y(\theta_y) \cdot R_x(\theta_x)
\]

Example: Rotate a point \((1, 0, 0)\) by \(90^\circ\) around the z-axis.

Using the z-rotation matrix:

\[
\theta_z = 90^\circ = \frac{\pi}{2} \text{ radians}
\]
\[
R_z\left(\frac{\pi}{2}\right) = \begin{bmatrix}
0 & -1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]

\[
\mathbf{v'} = R_z \cdot \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 1 \end{bmatrix}
\]

JavaScript Code:
function rotateZ(point, angle) {
    const theta = angle * (Math.PI / 180); // Convert degrees to radians
    const R_z = [
        [Math.cos(theta), -Math.sin(theta), 0, 0],
        [Math.sin(theta), Math.cos(theta), 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1]
    ];

    const v = [point[0], point[1], point[2], 1];

    const result = [];
    for (let i = 0; i < 4; i++) {
        result[i] = 0;
        for (let j = 0; j < 4; j++) {
            result[i] += R_z[i][j] * v[j];
        }
    }

    return result.slice(0, 3); // return (x, y, z)
}

// Example usage
const rotatedPoint = rotateZ([1, 0, 0], 90);
console.log(rotatedPoint); // Output: [0, 1, 0] (within floating-point precision)
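
The full Euler rotation \( R = R_z \cdot R_y \cdot R_x \) can be built by multiplying the three axis matrices together. A sketch under that assumption (the helper functions here are illustrative; a similar multiplyMatrices helper appears later in the Composition of Transformations section):

// Build the individual 4x4 axis rotation matrices (angles in radians)
function rotationX(t) {
    return [
        [1, 0, 0, 0],
        [0, Math.cos(t), -Math.sin(t), 0],
        [0, Math.sin(t), Math.cos(t), 0],
        [0, 0, 0, 1]
    ];
}

function rotationY(t) {
    return [
        [Math.cos(t), 0, Math.sin(t), 0],
        [0, 1, 0, 0],
        [-Math.sin(t), 0, Math.cos(t), 0],
        [0, 0, 0, 1]
    ];
}

function rotationZ(t) {
    return [
        [Math.cos(t), -Math.sin(t), 0, 0],
        [Math.sin(t), Math.cos(t), 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1]
    ];
}

// 4x4 matrix multiply
function multiplyMatrices(A, B) {
    const result = [];
    for (let i = 0; i < 4; i++) {
        result[i] = [];
        for (let j = 0; j < 4; j++) {
            result[i][j] = 0;
            for (let k = 0; k < 4; k++) {
                result[i][j] += A[i][k] * B[k][j];
            }
        }
    }
    return result;
}

// R = Rz * Ry * Rx (applied to a column vector, the x rotation acts first)
function eulerRotationMatrix(thetaX, thetaY, thetaZ) {
    return multiplyMatrices(rotationZ(thetaZ), multiplyMatrices(rotationY(thetaY), rotationX(thetaX)));
}

console.log(eulerRotationMatrix(0, 0, Math.PI / 2)); // Same matrix as R_z(90 degrees) above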


Rotation around an Arbitrary Axis


To rotate around an arbitrary axis defined by a unit vector \(\mathbf{u} = (u_x, u_y, u_z)\) by an angle \(\theta\), we can use Rodrigues' rotation formula. The rotation matrix \(R\) is given by:

\[
R = I + \sin(\theta) K + (1 - \cos(\theta)) K^2
\]

where \(K\) is the skew-symmetric matrix of \(\mathbf{u}\):

\[
K = \begin{bmatrix}
0 & -u_z & u_y \\
u_z & 0 & -u_x \\
-u_y & u_x & 0
\end{bmatrix}
\]

Example: Rotate a point \((1, 0, 0)\) around the axis defined by the vector \((0, 0, 1)\) by \(90^\circ\).

1. Compute \(K\):

\[
K = \begin{bmatrix}
0 & -1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}
\]

2. Compute \(K^2\):

\[
K^2 = K \cdot K = \begin{bmatrix}
-1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & 0
\end{bmatrix}
\]

3. Compute \(R\) using \(\theta = 90^\circ\):

\[
R = I + \sin\left(\frac{\pi}{2}\right) K + (1 - \cos\left(\frac{\pi}{2}\right)) K^2
\]

4. The final rotation can be applied to the point \((1, 0, 0)\).

JavaScript Code:
function rotateAroundAxis(point, axis, angle) {
    const theta = angle * (Math.PI / 180); // Convert degrees to radians
    const [u_x, u_y, u_z] = axis;

    const cosTheta = Math.cos(theta);
    const sinTheta = Math.sin(theta);

    // Rotation matrix (Rodrigues' rotation formula, expanded)
    const R = [
        [cosTheta + u_x * u_x * (1 - cosTheta), u_x * u_y * (1 - cosTheta) - u_z * sinTheta, u_x * u_z * (1 - cosTheta) + u_y * sinTheta, 0],
        [u_y * u_x * (1 - cosTheta) + u_z * sinTheta, cosTheta + u_y * u_y * (1 - cosTheta), u_y * u_z * (1 - cosTheta) - u_x * sinTheta, 0],
        [u_z * u_x * (1 - cosTheta) - u_y * sinTheta, u_z * u_y * (1 - cosTheta) + u_x * sinTheta, cosTheta + u_z * u_z * (1 - cosTheta), 0],
        [0, 0, 0, 1]
    ];

    const v = [point[0], point[1], point[2], 1];

    const result = [];
    for (let i = 0; i < 4; i++) {
        result[i] = 0;
        for (let j = 0; j < 4; j++) {
            result[i] += R[i][j] * v[j];
        }
    }

    return result.slice(0, 3); // return (x, y, z)
}

// Example usage
const rotatedPointAroundAxis = rotateAroundAxis([1, 0, 0], [0, 0, 1], 90);
console.log(rotatedPointAroundAxis); // Output: [0, 1, 0] (within floating-point precision)





Look-At Transformation


The look-at transformation creates a viewing matrix that positions the camera at a specific point in 3D space and directs it towards a target point. The transformation typically requires the following inputs:

- Eye (camera position): \(\mathbf{E} = (e_x, e_y, e_z)\)
- Target (point of interest): \(\mathbf{T} = (t_x, t_y, t_z)\)
- Up vector (defines the "up" direction): \(\mathbf{U} = (u_x, u_y, u_z)\)

1. Compute the forward vector:

\[
\mathbf{F} = \mathbf{T} - \mathbf{E}
\]

2. Normalize the forward vector:

\[
\mathbf{f} = \frac{\mathbf{F}}{|\mathbf{F}|}
\]

3. Compute the right vector:

\[
\mathbf{R} = \mathbf{f} \times \mathbf{U}
\]

4. Normalize the right vector:

\[
\mathbf{r} = \frac{\mathbf{R}}{|\mathbf{R}|}
\]

5. Recompute the up vector:

\[
\mathbf{U'} = \mathbf{r} \times \mathbf{f}
\]

6. Create the look-at matrix \(M\):

\[
M = \begin{bmatrix}
r_x & r_y & r_z & -\mathbf{r} \cdot \mathbf{E} \\
u'_x & u'_y & u'_z & -\mathbf{u'} \cdot \mathbf{E} \\
-f_x & -f_y & -f_z & \mathbf{f} \cdot \mathbf{E} \\
0 & 0 & 0 & 1
\end{bmatrix}
\]

JavaScript Code:
function lookAt(eye, target, up) {
    const F = [target[0] - eye[0], target[1] - eye[1], target[2] - eye[2]];
    const f = normalize(F);

    const R = cross(f, up);
    const r = normalize(R);

    const U = cross(r, f);

    return [
        [r[0], r[1], r[2], -dot(r, eye)],
        [U[0], U[1], U[2], -dot(U, eye)],
        [-f[0], -f[1], -f[2], dot(f, eye)],
        [0, 0, 0, 1]
    ];
}

// Helper functions
function normalize(v) {
    const length = Math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2);
    return [v[0] / length, v[1] / length, v[2] / length];
}

function cross(v1, v2) {
    return [
        v1[1] * v2[2] - v1[2] * v2[1],
        v1[2] * v2[0] - v1[0] * v2[2],
        v1[0] * v2[1] - v1[1] * v2[0]
    ];
}

function dot(v1, v2) {
    return v1[0] * v2[0] + v1[1] * v2[1] + v1[2] * v2[2];
}

// Example usage
const viewMatrix = lookAt([0, 0, 5], [0, 0, 0], [0, 1, 0]);
console.log(viewMatrix);



Applying Transformations


Transformations are essential for manipulating points, vectors, normals, rays, and bounding boxes. Using homogeneous coordinates and matrix multiplication, complex sequences of transformations like translation, scaling, and rotation can be applied efficiently. In this section, we will cover how transformations are applied to different entities, their mathematical representations, and code implementations in JavaScript.


Points


When applying a transformation to a point, we typically work with homogeneous coordinates \((x, y, z, 1)\). This allows us to use a 4x4 transformation matrix to perform operations like translation and scaling.

Mathematical Representation



Given a point \(P = (x, y, z, 1)\) and a transformation matrix \(T\), the transformed point \(P'\) is calculated as:

\[
P' = T \cdot P
\]

Example: Translation



Translating the point \(P = (2, 3, 4)\) by \(t_x = 1\), \(t_y = 1\), and \(t_z = 1\):

\[
T = \begin{bmatrix}
1 & 0 & 0 & 1 \\
0 & 1 & 0 & 1 \\
0 & 0 & 1 & 1 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]

\[
P' = T \cdot \begin{bmatrix} 2 \\ 3 \\ 4 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 4 \\ 5 \\ 1 \end{bmatrix}
\]

JavaScript Code



function applyTransformationToPoint(point, matrix) {
    const [x, y, z] = point;
    const v = [x, y, z, 1]; // Convert point to homogeneous coordinates

    const transformed = [];
    for (let i = 0; i < 4; i++) {
        transformed[i] = 0;
        for (let j = 0; j < 4; j++) {
            transformed[i] += matrix[i][j] * v[j];
        }
    }

    return transformed.slice(0, 3); // return (x', y', z')
}

// Example: Translation matrix
const translationMatrix = [
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 1]
];

const point = [2, 3, 4];
const translatedPoint = applyTransformationToPoint(point, translationMatrix);
console.log(translatedPoint); // Output: [3, 4, 5]



Vectors


Unlike points, when applying transformations to vectors, we ignore the translation component of the matrix (i.e., the last column). For scaling and rotation, we apply the same matrix as with points, except the vector's homogeneous coordinate is \((x, y, z, 0)\).

Mathematical Representation



Given a vector \(V = (x, y, z, 0)\) and a transformation matrix \(T\), the transformed vector \(V'\) is calculated as:

\[
V' = T \cdot V
\]

Example: Scaling



Scaling a vector \(V = (1, 2, 3)\) by factors of \(2\) along all axes:

\[
S = \begin{bmatrix}
2 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 \\
0 & 0 & 2 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]

\[
V' = S \cdot \begin{bmatrix} 1 \\ 2 \\ 3 \\ 0 \end{bmatrix} = \begin{bmatrix} 2 \\ 4 \\ 6 \\ 0 \end{bmatrix}
\]

JavaScript Code



function applyTransformationToVector(vector, matrix) {
    const [x, y, z] = vector;
    const v = [x, y, z, 0]; // Homogeneous coordinate for vectors is (x, y, z, 0)

    const transformed = [];
    for (let i = 0; i < 4; i++) {
        transformed[i] = 0;
        for (let j = 0; j < 4; j++) {
            transformed[i] += matrix[i][j] * v[j];
        }
    }

    return transformed.slice(0, 3); // return (x', y', z')
}

// Example: Scaling matrix
const scalingMatrix = [
    [2, 0, 0, 0],
    [0, 2, 0, 0],
    [0, 0, 2, 0],
    [0, 0, 0, 1]
];

const vector = [1, 2, 3];
const scaledVector = applyTransformationToVector(vector, scalingMatrix);
console.log(scaledVector); // Output: [2, 4, 6]



Normals


Transforming normals is different from transforming vectors because scaling and shearing can distort the direction of the normal. To correctly transform a normal, we use the inverse transpose of the transformation matrix.

Mathematical Representation



Given a normal \(N = (x, y, z, 0)\) and a transformation matrix \(T\), the transformed normal \(N'\) is calculated using the inverse transpose of \(T\):

\[
N' = (T^{-1})^T \cdot N
\]

Example: Scaling with a Matrix



For a scaling matrix \(S\), its inverse is:

\[
S^{-1} = \begin{bmatrix}
\frac{1}{sx} & 0 & 0 & 0 \\
0 & \frac{1}{sy} & 0 & 0 \\
0 & 0 & \frac{1}{sz} & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]

Then we transpose this matrix and apply it to the normal.

JavaScript Code



function applyTransformationToNormal(normal, matrix) {
    const inverseMatrix = inverse(matrix);
    const transposeMatrix = transpose(inverseMatrix);
    return applyTransformationToVector(normal, transposeMatrix);
}

// Helper functions: Matrix inverse and transpose (assuming 4x4 matrices)
function inverse(matrix) {
    // Compute the inverse of the 4x4 matrix (implementation depends on the library)
    // ...
}

function transpose(matrix) {
    const result = [];
    for (let i = 0; i < 4; i++) {
        result[i] = [];
        for (let j = 0; j < 4; j++) {
            result[i][j] = matrix[j][i];
        }
    }
    return result;
}




Rays


To apply a transformation to a ray, both the origin and the direction must be transformed independently. The origin is treated as a point \((x, y, z, 1)\), and the direction is treated as a vector \((x, y, z, 0)\).

Mathematical Representation



Given a ray \(R = (O, D)\) where \(O\) is the origin and \(D\) is the direction, the transformed ray \(R'\) is:

\[
O' = T \cdot O, \quad D' = T \cdot D
\]

Example



Apply a translation to a ray with origin \((0, 0, 0)\) and direction \((1, 0, 0)\):

\[
T = \begin{bmatrix}
1 & 0 & 0 & 1 \\
0 & 1 & 0 & 1 \\
0 & 0 & 1 & 1 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]

Transformed ray:

\[
O' = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}, \quad D' = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}
\]

JavaScript Code



function applyTransformationToRay(ray, matrix) {
    const transformedOrigin = applyTransformationToPoint(ray.origin, matrix);
    const transformedDirection = applyTransformationToVector(ray.direction, matrix);
    return { origin: transformedOrigin, direction: transformedDirection };
}

const ray = { origin: [0, 0, 0], direction: [1, 0, 0] };
const transformedRay = applyTransformationToRay(ray, translationMatrix);
console.log(transformedRay);
// Output: { origin: [1, 1, 1], direction: [1, 0, 0] }



Bounding Boxes


Applying a transformation to an AABB is done by transforming all of its corners and then computing a new AABB that bounds the transformed points.

Example: Transform an AABB



For an AABB defined by its minimum and maximum corners \((x_{min}, y_{min}, z_{min})\) and \((x_{max}, y_{max}, z_{max})\), transform all 8 corners using the transformation matrix.

JavaScript Code



function applyTransformationToAABB(aabb, matrix) {
    const corners = [
        [aabb.min[0], aabb.min[1], aabb.min[2]],
        [aabb.max[0], aabb.min[1], aabb.min[2]],
        [aabb.min[0], aabb.max[1], aabb.min[2]],
        [aabb.max[0], aabb.max[1], aabb.min[2]],
        [aabb.min[0], aabb.min[1], aabb.max[2]],
        [aabb.max[0], aabb.min[1], aabb.max[2]],
        [aabb.min[0], aabb.max[1], aabb.max[2]],
        [aabb.max[0], aabb.max[1], aabb.max[2]],
    ];

    const transformedCorners = corners.map(corner => applyTransformationToPoint(corner, matrix));

    const newMin = transformedCorners.reduce((min, p) => [Math.min(min[0], p[0]), Math.min(min[1], p[1]), Math.min(min[2], p[2])], transformedCorners[0]);
    const newMax = transformedCorners.reduce((max, p) => [Math.max(max[0], p[0]), Math.max(max[1], p[1]), Math.max(max[2], p[2])], transformedCorners[0]);

    return { min: newMin, max: newMax };
}

const aabb = { min: [0, 0, 0], max: [1, 1, 1] };
const transformedAABB = applyTransformationToAABB(aabb, translationMatrix);
console.log(transformedAABB);
// Output: { min: [1, 1, 1], max: [2, 2, 2] }



Composition of Transformations


Transformations can be combined by multiplying their corresponding matrices. This allows multiple transformations to be applied in sequence. For example, combining translation, scaling, and rotation can be done by multiplying the matrices for each transformation.

Mathematical Representation



Given transformations \(T_1, T_2, \ldots, T_n\), the final transformation \(T\) is:

\[
T = T_n \cdot \ldots \cdot T_2 \cdot T_1
\]

Example: Combining Translation and Rotation



Combine translation by \((1, 1, 1)\) and rotation around the z-axis by \(90^\circ\):

\[
T = R_z(90^\circ) \cdot T_{translate}
\]

JavaScript Code



function multiplyMatrices(A, B) {
    const result = [];
    for (let i = 0; i < 4; i++) {
        result[i] = [];
        for (let j = 0; j < 4; j++) {
            result[i][j] = 0;
            for (let k = 0; k < 4; k++) {
                result[i][j] += A[i][k] * B[k][j];
            }
        }
    }
    return result;
}

// rotationMatrixZ and translationMatrix are 4x4 matrices built as in the earlier examples
const combinedMatrix = multiplyMatrices(rotationMatrixZ, translationMatrix);




Transformations and Coordinate System Handedness


When applying transformations, the choice of handedness can affect how rotations and cross-products behave.

- Right-handed systems: Positive rotations follow the right-hand rule.
- Left-handed systems: Positive rotations follow the left-hand rule.

When switching between handedness, the sign of the z-coordinate (or y-coordinate depending on the system) must be inverted.

JavaScript Code



To switch between coordinate systems:

function switchHandedness(point) {
    return [point[0], point[1], -point[2]]; // Switch from right-handed to left-handed by inverting the z-axis
}




Animating Transformations


Animations are often achieved by smoothly transitioning between different transformations. This involves methods like quaternion interpolation for smooth rotations and managing transformations applied to objects like bounding boxes. In this section, we will cover quaternions, quaternion interpolation, and animating bounding boxes in detail, with equations and JavaScript code snippets.


Quaternions


Quaternions are an efficient way to represent and compute rotations in 3D space. Unlike Euler angles, quaternions avoid issues like gimbal lock, providing smoother and more reliable rotation interpolation.

A quaternion \(q\) is represented as:

\[
q = w + xi + yj + zk
\]

where \(w\) is the scalar part, and \((x, y, z)\) is the vector part. In practice, quaternions are stored as a four-component vector \((w, x, y, z)\).
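
A rotation quaternion is typically built from an axis and an angle: for a unit axis \( \mathbf{u} \) and angle \( \theta \), \( q = (\cos(\theta/2), \, \mathbf{u} \sin(\theta/2)) \). A minimal sketch under that convention (the function name is illustrative):

// Build a rotation quaternion (w, x, y, z) from a unit axis and an angle in degrees
function quaternionFromAxisAngle(axis, angleDegrees) {
    const half = (angleDegrees * Math.PI / 180) / 2;
    const s = Math.sin(half);
    return [Math.cos(half), axis[0] * s, axis[1] * s, axis[2] * s];
}

console.log(quaternionFromAxisAngle([0, 1, 0], 90)); // Output: approximately [0.707, 0, 0.707, 0] (used in the example below)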

Quaternion Rotation



To rotate a vector \(v = (x, y, z)\) by a quaternion \(q = (w, x_q, y_q, z_q)\), the vector is converted to a quaternion \(v_q = (0, x, y, z)\). The rotated vector \(v'\) is given by:

\[
v' = q \cdot v_q \cdot q^{-1}
\]

where \(q^{-1}\) is the inverse of the quaternion \(q\), and \(\cdot\) represents quaternion multiplication.

Quaternion Multiplication



The multiplication of two quaternions \(q_1 = (w_1, x_1, y_1, z_1)\) and \(q_2 = (w_2, x_2, y_2, z_2)\) is given by:

\[
q_1 \cdot q_2 = \left( w_1w_2 - x_1x_2 - y_1y_2 - z_1z_2, \, w_1x_2 + x_1w_2 + y_1z_2 - z_1y_2, \, w_1y_2 + y_1w_2 + z_1x_2 - x_1z_2, \, w_1z_2 + z_1w_2 + x_1y_2 - y_1x_2 \right)
\]

JavaScript Code for Quaternion Rotation



function quaternionMultiply(q1, q2) {
    const [w1, x1, y1, z1] = q1;
    const [w2, x2, y2, z2] = q2;

    return [
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 + y1 * w2 + z1 * x2 - x1 * z2,
        w1 * z2 + z1 * w2 + x1 * y2 - y1 * x2
    ];
}

function rotateVectorByQuaternion(v, q) {
    const [x, y, z] = v;
    const vQuat = [0, x, y, z];
    const qInverse = inverseQuaternion(q);

    const rotatedQuat = quaternionMultiply(quaternionMultiply(q, vQuat), qInverse);
    return rotatedQuat.slice(1); // Extract (x', y', z')
}

function inverseQuaternion(q) {
    const [w, x, y, z] = q;
    const norm = w * w + x * x + y * y + z * z;
    return [w / norm, -x / norm, -y / norm, -z / norm]; // Inverse is the conjugate divided by the squared norm
}

// Example usage:
const quaternion = [0.707, 0, 0.707, 0]; // 90-degree rotation around y-axis
const vector = [1, 0, 0]; // Vector pointing along x-axis

const rotatedVector = rotateVectorByQuaternion(vector, quaternion);
console.log(rotatedVector); // Output: approximately [0, 0, -1] (rotated 90 degrees to point along the -z axis)


Quaternion Interpolation


Quaternion Interpolation (Slerp: Spherical Linear Interpolation) is a technique used to smoothly interpolate between two quaternions, which represent rotations. Unlike linear interpolation, slerp provides a constant angular velocity, producing smoother rotational transitions.

Slerp Equation



Given two quaternions \(q_1\) and \(q_2\), and a parameter \(t\) between 0 and 1, the Slerp function computes the intermediate quaternion \(q(t)\):

\[
q(t) = \frac{\sin((1-t)\theta)}{\sin(\theta)} q_1 + \frac{\sin(t\theta)}{\sin(\theta)} q_2
\]

Where \(\theta = \cos^{-1}(q_1 \cdot q_2)\) is the angle between the quaternions.

JavaScript Code for Quaternion Interpolation (Slerp)



function slerp(q1, q2, t) {
    let dot = q1[0] * q2[0] + q1[1] * q2[1] + q1[2] * q2[2] + q1[3] * q2[3];

    // If the dot product is negative, slerp won't take the shorter path.
    // In that case, invert one quaternion.
    if (dot < 0.0) {
        q2 = q2.map(v => -v);
        dot = -dot;
    }

    if (dot > 0.9995) {
        // If the quaternions are very close, fall back to linear interpolation.
        const result = q1.map((v, i) => v + t * (q2[i] - v));
        const len = Math.sqrt(result.reduce((sum, v) => sum + v * v, 0));
        return result.map(v => v / len); // Normalize the result
    }

    const theta = Math.acos(dot);
    const sinTheta = Math.sqrt(1.0 - dot * dot);

    const a = Math.sin((1 - t) * theta) / sinTheta;
    const b = Math.sin(t * theta) / sinTheta;

    return q1.map((v, i) => a * v + b * q2[i]);
}

// Example usage:
const q1 = [0.707, 0, 0.707, 0]; // 90-degree rotation around y-axis
const q2 = [0, 0, 0, 1]; // 180-degree rotation around z-axis
const t = 0.5; // Halfway between the two quaternions

const interpolatedQuaternion = slerp(q1, q2, t);
console.log(interpolatedQuaternion);



Animating Bounding Boxes


When animating objects in 3D space, their bounding boxes also need to be updated as the object moves, scales, or rotates. A bounding box can either be Axis-Aligned (AABB) or Oriented (OBB).

Axis-Aligned Bounding Box (AABB)



An AABB can be animated by transforming the 8 corners of the box according to the object's transformation. However, since AABBs are axis-aligned, even a slight rotation can change the dimensions of the AABB. To animate AABBs, we recalculate the minimum and maximum coordinates after applying transformations.

JavaScript Code: Animating an AABB



function applyTransformationToAABB(aabb, matrix) {
    const corners = [
        [aabb.min[0], aabb.min[1], aabb.min[2]],
        [aabb.max[0], aabb.min[1], aabb.min[2]],
        [aabb.min[0], aabb.max[1], aabb.min[2]],
        [aabb.max[0], aabb.max[1], aabb.min[2]],
        [aabb.min[0], aabb.min[1], aabb.max[2]],
        [aabb.max[0], aabb.min[1], aabb.max[2]],
        [aabb.min[0], aabb.max[1], aabb.max[2]],
        [aabb.max[0], aabb.max[1], aabb.max[2]],
    ];

    const transformedCorners = corners.map(corner => applyTransformationToPoint(corner, matrix));

    const newMin = transformedCorners.reduce((min, p) => [Math.min(min[0], p[0]), Math.min(min[1], p[1]), Math.min(min[2], p[2])], transformedCorners[0]);
    const newMax = transformedCorners.reduce((max, p) => [Math.max(max[0], p[0]), Math.max(max[1], p[1]), Math.max(max[2], p[2])], transformedCorners[0]);

    return { min: newMin, max: newMax };
}

const aabb = { min: [0, 0, 0], max: [1, 1, 1] };
const translationMatrix = [
    [1, 0, 0, 2],
    [0, 1, 0, 3],
    [0, 0, 1, 4],
    [0, 0, 0, 1]
];

const animatedAABB = applyTransformationToAABB(aabb, translationMatrix);
console.log(animatedAABB); // Updated AABB after translation









