🚀 Zenith | Modern C++20 Path Tracing Engine

A high-performance, physically-based path tracing engine built with C++20. This project features a robust real-time diagnostic UI, sophisticated environmental lighting, and a professional post-processing pipeline.

[Image: render_main]

🌟 Key Project Highlights:

  • Global Illumination Out-of-the-box: Full Monte Carlo Path Tracing that naturally handles indirect lighting, soft shadows, and color bleeding.
  • Physically Based Camera & Optics: Thin-lens simulation providing cinematic Depth of Field (Bokeh) and real-time focus pulling.
  • Production-Ready Post-Processing: Complete HDR pipeline featuring ACES Tone Mapping, physically-based Bloom, and Auto-Exposure.
  • High-Performance Scalability: O(log N) ray traversal via custom BVH structures and 100% CPU utilization via OpenMP.
  • Interactive Diagnostic Suite: Real-time G-Buffer visualization (Normals, Albedo, Depth) and a live Luminance Histogram.
  • Intelligent State Synchronization: Decoupled UI and Render states using a Dirty Flag system, allowing for fluid parameter manipulation with smart accumulator management.

🛠 Core Technical Features

    Engine Specifications

    • Language: C++20 (utilizing modern standards: std::clamp, std::shared_ptr, and advanced lambdas).
    • Rendering Model: Progressive Path Tracing (real-time sample accumulation).
    • Integration Method: Monte Carlo (stochastic sampling of light paths).
      //ACES tone mapping to map HDR radiance into the [0.0, 1.0] display range
      inline color apply_aces(color x) {
      	const double a = 2.51;
      	const double b = 0.03;
      	const double c = 2.43;
      	const double d = 0.59;
      	const double e = 0.14;
      
      	//same rational curve applied to each channel
      	return color(
      		std::clamp((x.x() * (a * x.x() + b)) / (x.x() * (c * x.x() + d) + e), 0.0, 1.0),
      		std::clamp((x.y() * (a * x.y() + b)) / (x.y() * (c * x.y() + d) + e), 0.0, 1.0),
      		std::clamp((x.z() * (a * x.z() + b)) / (x.z() * (c * x.z() + d) + e), 0.0, 1.0)
      	);
      }
    • Hardware Acceleration (CPU): OpenMP (parallelized computation across all available processor threads).
      #pragma omp parallel for schedule(dynamic)
      for (int j = 0; j < image_height; ++j) {
      	//pixel processing logic...
      }
    • Data Structures: BVH (Bounding Volume Hierarchy) – optimizes ray-object intersection tests from O(N) to O(log N).

      [Image: aabb_bvh_diagram]

      A spatial bounding volume hierarchy (BVH) reduces the complexity of intersection tests from O(N) to O(log N) through recursive subdivision of the scene into AABB containers.
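      The traversal saving comes from cheap box tests: a ray that misses a node's AABB can skip that node's entire subtree. A minimal slab-test sketch (standalone, simplified types for illustration, not the engine's actual classes):

```cpp
#include <algorithm>
#include <cassert>

// Axis-aligned bounding box with the classic "slab" intersection test.
struct aabb {
    double min[3], max[3];

    // Returns true if the ray origin + t*dir crosses the box within [t_min, t_max].
    bool hit(const double origin[3], const double dir[3],
             double t_min, double t_max) const {
        for (int axis = 0; axis < 3; ++axis) {
            const double inv_d = 1.0 / dir[axis];
            double t0 = (min[axis] - origin[axis]) * inv_d;
            double t1 = (max[axis] - origin[axis]) * inv_d;
            if (inv_d < 0.0) std::swap(t0, t1);
            t_min = std::max(t_min, t0);
            t_max = std::min(t_max, t1);
            if (t_max <= t_min) return false; //slab intervals don't overlap -> miss
        }
        return true;
    }
};
```

      A BVH node calls this before descending: if the box test fails, none of the children are visited.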


    • Memory Management: Dirty Flag System (needs_update, needs_ui_sync) – intelligent buffer reloading triggered only upon parameter changes.
      //example of smart state synchronization
      if (ImGui::SliderFloat("Aperture", &cam.aperture, 0.0f, 0.5f)) {
      	my_post.needs_update = true; //trigger post-process recalculation
      }
            
      if (ImGui::IsItemDeactivatedAfterEdit()) {
      	reset_accumulator(); //only reset samples when user finishes interaction
      }
    Camera & Optics System

      Feature | Description | Key Parameters
      Thin Lens Model | Simulation of a physical lens for realistic bokeh effects | Aperture (f-stop), Focus Distance
      Dynamic FOV | Full perspective control for architectural or macro shots | Vertical FOV (degrees)
      Interactive Navigation | Smooth 3D space movement and orientation | LookAt, LookFrom, Up Vector
      Anti-aliasing (AA) | Sub-pixel jittering eliminates jagged edges by slightly offsetting rays within each pixel footprint, resulting in smooth, film-like edges | Stratified Sampling (per pixel)

      [Image: camera optics at aperture 3.0, focus distance 3.0 / 9.0 / 18.0]

      Aperture 3.0, Focus Distance 3.0 | Aperture 3.0, Focus Distance 9.0 | Aperture 3.0, Focus Distance 18.0
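      The thin-lens behavior above can be sketched in a few lines: the ray origin is jittered across an aperture disk while the target point stays pinned to the focal plane, so only geometry at the focus distance stays sharp. Names below are illustrative, not the engine's actual camera API:

```cpp
#include <cassert>
#include <cmath>
#include <random>

struct vec3 { double x, y, z; };

//rejection-sample a point inside a disk of the given radius (lens aperture)
inline vec3 sample_disk(double radius, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(-1.0, 1.0);
    double px, py;
    do { px = u(rng); py = u(rng); } while (px * px + py * py > 1.0);
    return {px * radius, py * radius, 0.0};
}

//direction from the jittered lens point toward the in-focus target:
//a larger aperture spreads the origins, blurring out-of-focus geometry
inline vec3 thin_lens_direction(const vec3& pixel_dir, double focus_distance,
                                double aperture, std::mt19937& rng) {
    const vec3 offset = sample_disk(aperture / 2.0, rng);
    //point on the focal plane that the central (pinhole) ray would hit
    const vec3 focal = {pixel_dir.x * focus_distance,
                        pixel_dir.y * focus_distance,
                        pixel_dir.z * focus_distance};
    return {focal.x - offset.x, focal.y - offset.y, focal.z - offset.z};
}
```

      With aperture 0 this degenerates to a pinhole camera, which is why a small f-stop produces an all-sharp image.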

    Environment & Lighting Systems

    The engine provides a versatile lighting suite, allowing users to switch between physical sky simulations, image-based lighting, and studio backgrounds.

    • HDRI Maps (IBL): For photorealistic reflections and complex ambient lighting, the engine supports industry-standard High Dynamic Range images.

        [Image: HDRI Maps (IBL)]

        Comparison of the same scene under 3 different HDR maps included in the engine library

      • Technology: Native support for 32-bit .hdr files (IBL) providing a massive range of luminance data.
      • 3-Axis Transformation: Full spherical orientation control using Yaw, Pitch, and Roll to align the environment with your scene geometry perfectly.
      • Asset Management: Integrated file observer allows for dynamic refreshing of the HDRI library. Add new maps to the directory and select them in-app without a restart.

    • Astronomical Daylight System: This module simulates the sun's position and color based on real-world celestial mechanics. It offers two distinct modes of operation:

        [Image: astronomical daylight system at 6am, 12pm, 6pm]

        Comparison of the same scene at 3 different hours of the day (from left: 6am / 12pm / 6pm)

      • Astronomical Mode: Calculates the sun's position automatically using geographical and temporal data.
        • Parameters: Latitude (coordinates), Day of the Year (1-365), and Time of Day (0-24h).
        • Celestial Math: Implementation of solar declination, hour angle, azimuth, and elevation algorithms to ensure pinpoint accuracy.

      • Manual Mode: Provides direct control over the sun's direction vector for artistic lighting setups.
      • Auto-Sun Color: Dynamically adjusts the solar spectrum based on altitude.
        • Atmospheric Simulation: Implements a simplified Rayleigh Scattering model; as the sun nears the horizon, the increased optical path length through the atmosphere shifts the light toward warmer, reddish wavelengths, while high-altitude sun positions produce a crisp, cooler white.
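      The celestial math above can be sketched with the standard declination and hour-angle formulas; this is a simplified standalone version (Cooper's declination approximation, illustrative function names), not necessarily the engine's exact implementation:

```cpp
#include <cassert>
#include <cmath>

constexpr double kPi = 3.14159265358979323846;
inline double deg2rad(double d) { return d * kPi / 180.0; }

//solar declination (degrees) from day of year, Cooper's approximation
inline double solar_declination(int day_of_year) {
    return 23.45 * std::sin(deg2rad(360.0 * (284 + day_of_year) / 365.0));
}

//sun elevation (degrees) from latitude, day of year, and local solar time;
//the hour angle advances 15 degrees per hour away from solar noon
inline double sun_elevation(double latitude_deg, int day_of_year, double hour) {
    const double decl = deg2rad(solar_declination(day_of_year));
    const double lat  = deg2rad(latitude_deg);
    const double hour_angle = deg2rad(15.0 * (hour - 12.0));
    const double sin_el = std::sin(lat) * std::sin(decl) +
                          std::cos(lat) * std::cos(decl) * std::cos(hour_angle);
    return std::asin(sin_el) * 180.0 / kPi;
}
```

      Azimuth follows from the same quantities; together they give the sun direction vector that drives the Auto-Sun Color logic.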

    • Solid Background: Designed for product photography and clean architectural presentations.

        [Image: solid background]

        The same scene with a solid background

      • Control: Full RGB spectrum selection via a precision color picker.
      • Intensity: Adjustable background radiance, allowing for neutral studio setups that don't overpower the scene's primary light sources.
    Material Library (PBR)

      Material | Physical Property | Key Features | Textures (Albedo) | Bump Map
      Lambertian | Ideal Diffuse | Simulates matte surfaces with perfect light scattering. | ✅ Yes | ✅ Yes
      Metal | Specular Reflection | Includes a Fuzz parameter to control surface roughness/blurriness of reflections. | ✅ Yes | ✅ Yes
      Dielectric | Refraction | Handles transparent materials like glass or water with IOR (Index of Refraction) and Total Internal Reflection. | ✅ Color | ✅ Yes
      Emissive | Light Emission | Turns any geometry into a physical light source (Area Light) with adjustable radiance. | ✅ Color/Power | ❌ No

      [Image: Material Library (PBR)]

      Material comparison – diffuse Lambertian, bumped metal, glass (dielectric), and emissive.

    • Technical Highlights & Material System: The engine's material system is built on physical principles, ensuring that every interaction between light and geometry behaves as it would in the real world.

      • Energy Conservation: Every material is mathematically constrained to ensure reflected light never exceeds incoming energy, preserving physical consistency and preventing "unrealistic glowing" artifacts.
      • Stochastic Importance Sampling: Reflection and refraction directions are calculated using Monte Carlo importance sampling. This enables the simulation of complex optical effects like soft reflections and frosted glass with high efficiency.
      • Ray-Material Interaction: Each material implements a unique scattering function. Based on physical constants (like IOR or Fuzz), the engine decides whether a ray is absorbed, reflected, or refracted.
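      As one concrete example of such a scattering decision, a dielectric commonly reflects either when refraction is geometrically impossible (total internal reflection) or by a stochastic Fresnel test using Schlick's approximation. A hedged sketch with illustrative names, not the engine's exact code:

```cpp
#include <cassert>
#include <cmath>
#include <random>

//Schlick's approximation to Fresnel reflectance for a dielectric interface
inline double schlick_reflectance(double cosine, double ior) {
    double r0 = (1.0 - ior) / (1.0 + ior);
    r0 *= r0;
    return r0 + (1.0 - r0) * std::pow(1.0 - cosine, 5.0);
}

//returns true when the ray should reflect: either total internal reflection
//(no real refraction solution exists) or a stochastic Fresnel coin flip
inline bool should_reflect(double cosine, double refraction_ratio,
                           std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    const double sin_theta = std::sqrt(1.0 - cosine * cosine);
    const bool cannot_refract = refraction_ratio * sin_theta > 1.0; //Snell's law has no solution
    return cannot_refract ||
           schlick_reflectance(cosine, refraction_ratio) > u(rng);
}
```

      Averaged over many samples, the reflect/refract split converges to the physical Fresnel ratio, which is what keeps the material energy-conserving.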

    • Texture & Surface Mapping
      • Texture-Material Integration: The engine supports mapping textures to any geometric primitive. You can blend procedural or image-based textures with any PBR material (e.g., a textured metal or a patterned emissive surface).
      • Bump Mapping (Beta): Preliminary support for Bump Mapping is available for basic primitives (cube and sphere), allowing for fine-grained surface detail without increasing polygon count.
        • Note: Currently, Bump Mapping is not supported for .obj triangle meshes; this is planned for a future update.

    Cinematic Post-Process Pipeline

    Beyond path tracing, the engine includes a high-performance post-processing stack to achieve a production-ready look.

    [Image: post-process pipeline, before/after]

    Raw image | Image with post-processing (ACES + Bloom, etc.)

    • ACES Tone Mapping: Implementation of the Academy Color Encoding System to transform High Dynamic Range (HDR) data into cinematic Low Dynamic Range (LDR) output.
    • Bloom Engine: A physically-inspired glow effect that extracts highlights and bleeds them into surrounding pixels using a configurable threshold and blur radius.
    • Exposure Control:
      • Auto-Exposure: Dynamically calculates scene luminance to adjust brightness.

      • Luminance Histogram: Auto-exposure is adjusted dynamically based on real image data.

      • EV Compensation: Photographic control allowing for Β±5.0 stops of brightness adjustment.
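      EV compensation reduces to a power-of-two scale on scene radiance before tone mapping: each stop doubles or halves the brightness. A minimal sketch, using the standard Rec. 709 luminance weights (function names are illustrative):

```cpp
#include <cassert>
#include <cmath>

//perceptual luminance of a linear RGB pixel (standard Rec. 709 weights)
inline double rec709_luminance(double r, double g, double b) {
    return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

//photographic exposure compensation: +1 EV = twice as bright, -1 EV = half
inline double apply_ev(double radiance, double ev) {
    return radiance * std::pow(2.0, ev);
}
```

      Auto-exposure can then pick an EV that steers the average (or histogram-weighted) luminance toward a mid-gray target before the ACES curve is applied.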

🎬 Scene Configuration & Workflow

    The entire scene configuration is centralized in scene_management.hpp. You don't need to recompile the core engine to swap assets or materials.
    Assets Loader

    A dedicated sceneAssetsLoader struct pre-loads heavy .obj models (like the included teapot.obj or bowl.obj) into memory once and stores them as shared pointers to optimize RAM usage. Models can be placed in /assets/models/.

    struct sceneAssetsLoader {
    	shared_ptr<model> teapot;
    
    	sceneAssetsLoader() {
    		teapot = make_shared<model>("assets/models/teapot.obj", nullptr, 0.4);
    		//add more .obj models here...
    	}
    };
    Material Library & Instancing

    Zenith Engine features a high-performance Material Library system that separates surface properties from geometric data. This allows for massive memory savings through the Flyweight pattern.

    • Centralized Registry: Define all your materials once in a global library (load_materials). It supports lambertian, metal, dielectric (glass), and emissive.
    • Texture & Bump Mapping: Enhance surface detail using high-resolution texture and bump maps. Pass an image_texture to the material constructor to add tactile depth. Place your own textures directly into /assets/textures/ and bump maps into /assets/bump_maps/.
      void load_materials(MaterialLibrary& mat_lib) {
      	//bump map textures
      	auto wood_bump = make_shared<image_texture>("assets/bump_maps/wood_bump_map.jpg");
      	//... more bump maps as needed
      
      	//add some predefined materials to the library
      	mat_lib.add("wood_bumpy_texture",
      		make_shared<lambertian>(make_shared<image_texture>("assets/textures/fine-wood.jpg"), wood_bump, 2.0));
      	//... add more materials as needed
      }

      Note: Currently, Bump Mapping is not supported for .obj triangle meshes; this is planned for a future update.

    • Zero-Copy Instancing: Instead of duplicating heavy geometry, use a material_instance class to wrap a shared mesh with a specific material. This allows you to render hundreds of unique-looking objects while keeping only one copy of the mesh in RAM.
      //in build_geometry: reuse the same sphere geometry with different materials
      auto sphere_geom = make_shared<sphere>(point3(0,0,0), 1.0, nullptr);
      world.add(make_shared<material_instance>(sphere_geom, mat_lib.get("wood_bumpy_texture")));
      world.add(make_shared<material_instance>(sphere_geom, mat_lib.get("gold_mat")));
    Scene Geometry

    Place geometry inside build_geometry() to compose your world. Use material_instance to apply shared materials to different shapes.

    • Transformations: Easily wrap objects in translate, rotate_x/y/z, and scale instances.
      //cube
      auto big_cube_geom = make_shared<cube>(point3(0.0, 0.0, 0.0), nullptr);
      auto big_cube_instance = make_shared<material_instance>(big_cube_geom, mat_lib.get("foggy_glass"));
      world.add(make_shared<translate>(big_cube_instance, point3(0.0, 1.0, 2.5)));
    • Volumetric Fog: Enable global environmental fog by setting use_fog to true and adjusting fog_density and fog_color.
      // - 5. environmental fog
      if (use_fog) {
      	//set radius and center of the fog volume (can be adjusted to fit the scene better)
      	auto fog_boundary = make_shared<sphere>(point3(0.0, 0.0, 0.0), 50.0, nullptr);
      	//a fog density of 0.1 is extremely high (an impenetrable wall);
      	//values of 0.001 - 0.02 give the best visual results
      	world.add(make_shared<constant_medium>(fog_boundary, fog_density, fog_color));
      }

🕹 Interactive UI Overview

    The engine features a custom-built, real-time diagnostic interface powered by Dear ImGui, providing deep insights into the path-tracing process and color pipeline. It supports Windows, Linux, and macOS.
    Diagnostic G-Buffer Visualizer

    Real-time inspection of internal engine data to verify scene integrity and material properties. These modes are directly linked to the debug_mode flags in the post-processing stage.

      Mode | Description | Technical Use
      Albedo Pass | Raw material colors | Verify texture mapping and base reflectivity
      Normal Pass | Surface orientation | Check for smoothing groups and geometry errors
      Z-Depth Pass | Spatial distance | Debug focus distance for the Depth of Field (DoF) system
      Luminance | Brightness map | Analyze the input for the Auto-Exposure algorithm
      BVH Mode | Spatial hierarchy | Audit tree health and node culling directly from the UI

      [Image: diagnostic G-Buffer modes]

      Render modes: RGB / Albedo / Normals / Z-depth / Luminance / BVH Wireframe

    Real-Time Analytics & Control

    Professional tools for lighting and exposure management.


    Monitor the brightness distribution in real-time. This allows you to prevent highlight clipping and ensure that the ACES Tone Mapping has enough dynamic range to work with.

    • Luminance Histogram: Visualize the impact of scene lights on the final exposure.
    • Channel Isolation: Toggle R, G, and B views to perform precise noise analysis before the Intel OIDN pass.
      • Technical Detail: Controlled via debug_mode flags in the post-processing shader.
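    A luminance histogram of this kind is typically binned in log2 space so that dark tones are not crushed into a single bucket. A hedged sketch (the bin count and range here are assumptions for illustration, not the engine's exact parameters):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

constexpr int kBins = 64;
constexpr double kMinLog2 = -10.0; //~0.001 luminance, near-black floor
constexpr double kMaxLog2 = 4.0;   //~16.0 luminance, bright highlights

//map a linear luminance value to a histogram bin index in log2 space
inline int histogram_bin(double luminance) {
    if (luminance <= 0.0) return 0; //pure black goes in the first bucket
    const double t = (std::log2(luminance) - kMinLog2) / (kMaxLog2 - kMinLog2);
    return std::clamp(static_cast<int>(t * (kBins - 1)), 0, kBins - 1);
}
```

    Counting every pixel into these bins per frame gives both the on-screen histogram and the distribution that auto-exposure can average over.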

    Fluid Interaction System

    The engine features a deeply integrated communication layer between the Dear ImGui interface and the rendering core, focusing on a seamless user experience.


    Smooth changes between render passes

    • Smart Accumulator Sync: To prevent constant frame flickering during UI interaction, the path-tracing accumulator only resets when a change is "finalized" (utilizing IsItemDeactivatedAfterEdit). This allows you to explore parameters smoothly, with the engine only committing to a full re-render once you release a slider.
    • Non-Blocking Real-Time Updates: Key parametersβ€”including Sun Position, Light Intensity, and Focus Distanceβ€”provide immediate visual feedback, allowing for rapid look-dev and lighting adjustments without breaking the creative flow.
    • Architectural Console & Logging: A custom-built engine console provides categorized, filtered feedback directly within the viewport.
      • Smart Categorization: Logs are tagged as [System], [Config], or [Debug] for easy navigation and troubleshooting.
      • Anti-Spam Logic: Prevents log flooding during rapid parameter changes, ensuring that critical engine messages remain visible and the UI stays responsive even under heavy interaction.

πŸ— Build & Requirements

    This project is built using C++20 and utilizes CMake with vcpkg (as a submodule) for seamless, cross-platform dependency management. It supports Windows, Linux, and macOS.
    Hardware & System Requirements

    • OS: Windows 10/11, macOS (Intel/Apple Silicon), or Linux.
    • CPU: Multi-core processor with OpenMP support. Note: Intel OIDN requires a CPU with SSE4.1 instructions or Apple Silicon (M1/M2/M3).
    • GPU: OpenGL 3.3 compatible (used for hardware-accelerated viewport display).
    Software Prerequisites

    • Compiler: A C++20 compliant compiler (MSVC 2022, GCC 11+, or Clang 12+).
    • Build System: CMake 3.15+
    • Package Manager: vcpkg (included as a git submodule).
    • Version Control: Git (required to manage vcpkg submodules and dependencies).
    Dependencies

    The project relies on the following libraries. Ensure they are installed or available in your environment:

    • Managed via vcpkg: SDL3, Dear ImGui, Glad.
    • External (Manual): Intel Open Image Denoise (OIDN).
    • System: OpenMP (Multi-threading), OpenGL (Graphics API).
    Build the Project

      1. Clone the Repository

      Clone the project along with the vcpkg submodule:

      git clone --recursive https://github.com/jarek1992/-Zenith.git
      cd .\-Zenith\ 

      Note: If you downloaded the repo without --recursive, run git submodule update --init --recursive inside the project folder to fetch vcpkg.

      2. Install Dependencies (System Specific)

      Since OIDN binaries are platform-specific and large, they are not included in the repository.

      • Windows (Manual OIDN)
        • a. Download oidn-2.3.0.x64.vc14.windows.zip from Intel OIDN Releases
          b. Extract the contents so the structure looks like this:
          • libs/oidn/bin/ (contains .dll files)
          • libs/oidn/include/ (contains headers)
          • libs/oidn/lib/ (contains .lib files)

      • macOS / Linux (System Package)
        On these systems, OIDN is handled globally. Run:
        • macOS: brew install openimagedenoise sdl3
        • Linux (Ubuntu/Debian): sudo apt install libopenimagedenoise-dev libsdl3-dev

      3. Bootstrap vcpkg:
      Initialize the package manager (required for ImGui, Glad, and SDL3 on Windows):
      • Windows:
        .\vcpkg\bootstrap-vcpkg.bat
        
      • macOS/Linux:
        ./vcpkg/bootstrap-vcpkg.sh
        

      4. Configure & Install Dependencies:

      Run CMake to trigger vcpkg and configure the build system. This will automatically download and build SDL3, ImGui, and Glad.

      cmake -B build -S . "-DCMAKE_TOOLCHAIN_FILE=vcpkg/scripts/buildsystems/vcpkg.cmake"

      5. Build the Project:

      Compile the executable in Release mode for maximum performance:

      cmake --build build --config Release

      6. Run the Engine:
      The build system automatically copies the necessary OIDN and SDL3 DLLs to the output folder. Launch the application:
      • Windows:
        .\build\Release\zenith_path_tracer.exe
        
      • macOS/Linux:
        ./build/zenith_path_tracer
        

      Note on Image Quality: The engine features a built-in ACES Tone Mapping curve (see common.hpp) and Auto-Exposure logic. When running in debug_mode::RED or GREEN, you can observe the raw output of specific channels, while the main render utilizes Intel's AI Denoising for a noise-free experience.

📈 Performance & Optimization

    The engine is engineered for high-performance path tracing, balancing raw computational power with intelligent image reconstruction.
    Multi-Threaded Rendering Core

    The rendering engine is built to scale with your hardware, ensuring that every CPU cycle is utilized.

    • OpenMP-Powered Scanline Parallelism: The engine partitions the image into horizontal blocks of rows, distributed across all logical CPU cores. This ensures maximum hardware utilization and linear performance scaling.
    • Adaptive Auxiliary Sampling: To optimize bandwidth and compute cycles, the engine uses a decoupled sampling strategy. While the Beauty Pass uses full samples_per_pixel, the auxiliary buffers (Albedo, Normals, Z-Depth) are computed using a clamped subset of samples, drastically reducing overhead without sacrificing denoising quality.
    • Live Progress Feedback: Real-time synchronization between the rendering threads and the UI layer provides immediate visual feedback on the render's progress via atomic line-counting.

    • Progressive refinement with scanline rendering and progress bar

    AI-Accelerated Denoising (Intel OIDN)

    Instead of relying on brute-force sampling (which can take hours), the engine uses industry-standard AI to "clean" the image.

    • Sample Reduction: Achieving noise-free results with 10x–50x fewer samples per pixel compared to traditional path tracing.
    • Pre-filter Pass: The engine feeds Albedo and Normal buffers into the OIDN neural network, preserving sharp geometric edges while removing Monte Carlo variance.
    • Performance Gain: High-quality 1080p renders that would normally take minutes are usable in seconds.

    • RGB Denoiser before/after

    Smart Memory & Resource Management

    • vcpkg Manifest Mode: All dependencies (SDL3, Glad, ImGui) are compiled with optimized release flags (-O3 / /Ox), ensuring zero overhead from third-party libraries.
    • Accumulation Caching: The engine detects when the camera or scene is static and accumulates light energy over time, avoiding redundant full-frame re-draws during idle periods.
    • Fast Post-Processing: The ACES Tone Mapping and Auto-Exposure passes are implemented as highly optimized per-pixel operations, adding negligible overhead to the final frame delivery.
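    Accumulation caching of this kind reduces to an incremental mean: each new frame nudges the stored average toward the latest sample, so the image refines without re-rendering from scratch. A minimal sketch (illustrative names, not the engine's actual buffer code):

```cpp
#include <cassert>

//fold one more sample into a running per-pixel average:
//after N frames the buffer holds the mean of N samples, and adding
//one more shifts the mean by (sample - mean) / (N + 1)
inline double accumulate_sample(double average, double new_sample,
                                int frame_count) {
    return average + (new_sample - average) / (frame_count + 1);
}
```

    Resetting simply means zeroing the buffer and the frame counter, which is exactly what the dirty-flag system triggers when the camera or scene changes.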
    Bounding Volume Hierarchy (BVH)

    Traditional ray tracing checks every ray against every object (O(N) complexity), which is unsustainable for complex scenes with high primitive counts.

    • Logarithmic Scaling: By using an O(log N) traversal algorithm, the engine can handle scenes with thousands of primitives while maintaining high frame rates.
    • Intersection Culling: Rays that do not intersect a parent node's bounding box are immediately discarded, skipping all child nodes and primitives within.
    • Balanced Tree Construction: Objects are sorted along the longest axis to ensure a balanced hierarchy, minimizing box overlap and maximizing traversal efficiency.
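    The balanced construction described above can be sketched as a recursive median split along the longest axis; the standalone helper below (simplified to centroids only, illustrative types rather than the engine's API) shows why the resulting tree depth stays near log2(N):

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <vector>

using point = std::array<double, 3>;

//index of the axis along which the centroid bounds are widest
inline int longest_axis(const std::vector<point>& pts) {
    point lo = pts.front(), hi = pts.front();
    for (const auto& p : pts)
        for (int a = 0; a < 3; ++a) {
            lo[a] = std::min(lo[a], p[a]);
            hi[a] = std::max(hi[a], p[a]);
        }
    const double ext[3] = {hi[0] - lo[0], hi[1] - lo[1], hi[2] - lo[2]};
    return static_cast<int>(std::max_element(ext, ext + 3) - ext);
}

//median split along the longest axis; returns the depth of the resulting
//tree, which stays near log2(N) because each split halves the set
inline int build_depth(std::vector<point> pts) {
    if (pts.size() <= 1) return 1; //leaf wrapping a single primitive
    const int axis = longest_axis(pts);
    std::nth_element(pts.begin(), pts.begin() + pts.size() / 2, pts.end(),
        [axis](const point& a, const point& b) { return a[axis] < b[axis]; });
    std::vector<point> left(pts.begin(), pts.begin() + pts.size() / 2);
    std::vector<point> right(pts.begin() + pts.size() / 2, pts.end());
    return 1 + std::max(build_depth(std::move(left)),
                        build_depth(std::move(right)));
}
```

    Splitting on the longest axis keeps child boxes compact and minimizes their overlap, which is what makes the traversal culling effective.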

    Integrated BVH Diagnostic Suite

    The engine features a custom real-time visualizer to audit the health of the BVH tree directly from the Engine Control Panel.

      Feature | Description
      BVH-Mode Toggle | Activates neon wireframe rendering for the active BVH layer
      Level Sliders | Scans the hierarchy depth-by-depth to verify spatial partitioning
      Leaf Isolation | Special mode (Level: -1) that highlights only the final leaf nodes wrapping individual primitives

      [Image: BVH tree levels 1-6]
      BVH tree levels example on the "sphere funnel" scene

    Dirty Flags & State Management

    To avoid redundant calculations and ensure a fluid UI, the engine utilizes a Dirty Flag System to track changes in the scene.

    • Selective Re-renders: Instead of blindly resetting the path-tracing accumulator every frame, the engine only triggers a reset when a "Dirty Flag" is raised (e.g., when a light parameter changes, the display mode is switched between BVH and RGB, or a camera parameter changes).
    • UI Synchronization: Integrating with Dear ImGui, the engine uses IsItemDeactivatedAfterEdit to mark data as "dirty" only when the user finishes an interaction. This allows you to slide values smoothly.
    • Resource Efficiency: If no flags are dirty, the engine focuses 100% of its power on accumulating samples for the current frame, leading to rapid noise disappearance.

    • IsItemDeactivatedAfterEdit resets the accumulator when a slider is released

    Performance Metrics (Example)

    Tested on: Intel Core i7-12700F (12C/20T) / 64GB RAM / Windows 11

      Feature | Status | Impact
      Multi-threading | Enabled (OpenMP) | ~95% CPU utilization across 20 threads
      SIMD Instructions | Enabled | Accelerated ray-sphere intersection math
      Denoising | Intel OIDN 2.3 | Clean images at 10-20 samples per pixel
      UI Overhead | Minimal | Zero-copy frame buffer updates via Glad/OpenGL


      As soon as the render starts, all the logical processors hit their maximum utilization to accelerate render speed.

📊 Real-time Analytics

  • Luminance Histogram: A real-time distribution of pixel brightness, used to calibrate the Auto-Exposure and prevent highlight clipping.
    [Image: real-time luminance histogram]
  • G-Buffer Suite: Toggle between Albedo, Normals, and Z-Depth passes to inspect the scene's geometric health.

  • Frame Profiler: Engineered for maximum throughput. The engine utilizes all logical CPU cores via OpenMP, providing real-time progress tracking via atomic line counters for smooth UI updates.
    //every 10 lines, flush the local counter to limit atomic contention
    if (local_lines_done % 10 == 0 || j == end_y - 1) {
    	//fetch_add ensures thread-safe updates across all CPU cores
    	this->lines_rendered.fetch_add(local_lines_done);
    	local_lines_done = 0; //reset local thread counter
    }

📸 Gallery & Showcase

[Image: astronomical real-time sun movement]
Astronomical real-time sun movement

[Image: render debug views]
Render debug channels

[Image: exterior sunlight / sunset]
Exterior sunlight/sunset views

[Image: indoor scene]
Indoor render

[Image: macro photography]
Macro photography

🗺 Future Roadmap

Where the project is headed. Contributions and suggestions are welcome!

  • Advanced Geometry Support: Extending Bump Mapping to .obj triangle meshes (currently limited to primitives like Spheres and Boxes).
  • ImGui UI: Improve the default Dear ImGui look and feel for users.
  • GPU Acceleration: Porting the core kernels to CUDA or OptiX for 100x performance gains.
  • Advanced Denoising: Adding support for NVIDIA DLSS 3.5 (Ray Reconstruction).
  • Material Extensions: Implementation of Subsurface Scattering (SSS) for skin and wax, and Clear-Coat for car paints.
  • Architecture: Full Wavefront Path Tracing architecture to better handle divergent rays.
  • Animation: Integrated timeline for keyframing camera paths and light transitions.

Updates:

to be continued ...
