Watch

Blender 2.76 and Mitsuba Renderer 2

[Images: Watch rendered in Mitsuba 2 (top) and in LuxRender 1.6 (bottom)]
      In this scene, highly specular metal materials, glass, and a mirror create a difficult-to-resolve lighting scenario due to an abundance of high-contribution light paths. Such paths often result in hot pixels or "fireflies" which detract from the rendered image. In many cases, the resolution to such a problem lies in more complex rendering techniques such as Stochastic Progressive Photon Mapping (SPPM) or Metropolis Light Transport (MLT); however, another brute-force solution is simply to render the image to higher sample counts.
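
      As a rough sketch of why brute force eventually works but is costly (this is the standard Monte Carlo error argument, not anything specific to Mitsuba or this scene), the visible pixel noise only shrinks with the square root of the sample count:

```latex
% Standard Monte Carlo error scaling: with N independent samples per pixel,
% the standard deviation of the pixel estimate (the visible noise) falls as
% 1/sqrt(N), so halving the noise requires roughly four times the samples.
\sigma_N \approx \frac{\sigma_1}{\sqrt{N}}
```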

      The tradeoff of using higher sample counts to achieve convergence is generally one of time: exhaustive random sampling of the scene tends to miss difficult light paths while expending significant computational resources. However, this can be significantly accelerated by using a GPU or another wide vector processor. Mitsuba Renderer 2 exploits Intel's AVX-512 instruction set and the 16-wide single-precision floating-point registers, mask registers, and special instructions of certain Skylake and later processors to significantly speed up its path tracer. Moreover, Mitsuba 2 applies Coherent Pseudo-Marginal MLT (CPMMLT) to improve ray coherence, and thereby cache coherence, when processing many rays in parallel. The result is a significantly higher sample rate, and therefore faster convergence, on supported hardware.
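
      The sketch below is not Mitsuba's actual code (Mitsuba 2 generates its vectorized kernels through its Enoki library rather than hand-written intrinsics), and the RayPacket16/intersect_sphere16 names are hypothetical; it only illustrates what 16-wide packet processing with AVX-512 looks like: 16 rays share one set of 512-bit registers, and a __mmask16 mask register records which lanes hit.

```cpp
// Hypothetical 16-wide ray/sphere intersection using AVX-512 intrinsics.
// Illustrative only; requires AVX-512F (compile with e.g. -mavx512f).
#include <immintrin.h>

struct RayPacket16 {
    // Structure-of-arrays layout: 16 ray origins and 16 ray directions.
    alignas(64) float ox[16], oy[16], oz[16];
    alignas(64) float dx[16], dy[16], dz[16];
};

// Intersect 16 rays against a sphere centred at the origin with the given
// radius. Returns a 16-bit mask of the lanes that hit; for those lanes the
// hit distance is written to t_out.
__mmask16 intersect_sphere16(const RayPacket16 &r, float radius, float *t_out) {
    __m512 ox = _mm512_load_ps(r.ox), oy = _mm512_load_ps(r.oy), oz = _mm512_load_ps(r.oz);
    __m512 dx = _mm512_load_ps(r.dx), dy = _mm512_load_ps(r.dy), dz = _mm512_load_ps(r.dz);

    // b = dot(o, d) and c = dot(o, o) - radius^2, computed for all 16 lanes at once.
    __m512 b = _mm512_fmadd_ps(ox, dx, _mm512_fmadd_ps(oy, dy, _mm512_mul_ps(oz, dz)));
    __m512 c = _mm512_fmadd_ps(ox, ox, _mm512_fmadd_ps(oy, oy, _mm512_mul_ps(oz, oz)));
    c = _mm512_sub_ps(c, _mm512_set1_ps(radius * radius));

    // Lanes whose discriminant is negative miss the sphere entirely.
    __m512 disc = _mm512_fmsub_ps(b, b, c);
    __mmask16 hit = _mm512_cmp_ps_mask(disc, _mm512_setzero_ps(), _CMP_GE_OQ);

    // Nearest intersection t = -b - sqrt(disc); also require t > 0 (in front of the ray).
    __m512 t = _mm512_sub_ps(_mm512_sub_ps(_mm512_setzero_ps(), b), _mm512_sqrt_ps(disc));
    hit &= _mm512_cmp_ps_mask(t, _mm512_setzero_ps(), _CMP_GT_OQ);

    // Store t only for hitting lanes; the other lanes of t_out are left untouched.
    _mm512_mask_storeu_ps(t_out, hit, t);
    return hit;
}
```

      Packet processing of this kind only pays off when the 16 lanes stay coherent, which is precisely what the coherent MLT variant described above is designed to encourage.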

      The images at right present an equal-time comparison of the same scene rendered in Mitsuba 2 (top) and LuxRender 1.6 (bottom). While LuxRender reached 10,000 samples per pixel (spp), Mitsuba exceeded 30,000 spp in the same amount of time, resulting in noticeably better convergence on the watch band. An additional speedup in Mitsuba 2 was achieved by compiling it with Intel's oneAPI DPC++ compiler under Linux, rather than the recommended Clang, which required minor modifications to some makefiles.
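
      Applying the square-root scaling sketched above (a generic estimate that assumes independent samples, so it understates the benefit for heavy-tailed firefly noise, and is not a measurement of these renders), tripling the sample count cuts the residual noise by roughly a factor of 1.7:

```latex
% Expected noise ratio when going from 10,000 to 30,000 spp under the
% 1/sqrt(N) model (an assumption, not a measured result for this scene).
\frac{\sigma_{30000}}{\sigma_{10000}} \approx \sqrt{\frac{10000}{30000}}
  = \frac{1}{\sqrt{3}} \approx 0.58
```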