Thinking with Portals

Companies like Pixar are surprisingly open about their craft. Indeed, they publish several papers each year laying out the technologies they’ve developed for their feature films as part of their RenderMan renderer. One such paper recently caught my eye: Into the Voyd: Teleportation of Light Transport in Incredibles 2.

Anyone who’s watched Incredibles 2 will remember Voyd, an up-and-coming superhero who can open portals connecting different areas of space – just like in the Portal game franchise, if you’ve ever played those.

A portal pair is made of two parts: a ‘here’ opening and a ‘there’ opening. If you were to place them back to back, you would see nothing happen – no teleportation occurs. Place them on either side of a plane and you get a window. But place them in completely different places and you get a CCTV-style view of the other location. These portals allowed for interesting effects in Incredibles 2, where you could look into one portal and see out of the other – as if your eye were teleported across the scene.

Which, it turns out, is pretty much how Pixar, and now Gaia, manages it.

Most of Pixar’s paper is focused on how to make this technology usable by animators, allowing them to visualise the effect inside modelling programs like Maya. This is useful in a production environment, but is ultimately useless for Gaia. Gaia can’t connect to such programs – scenes are coded in by hand – and so all we care about from the paper is how to make the effect work at all.

My first guess at how to approach this was simply to teleport any ray intersecting one portal to its relative position on the other – which, upon further reading, turned out to be exactly how Pixar approached the problem as well.

When a ray intersects a ‘here’ opening, Gaia calculates the point of intersection and the angle of incidence in the local coordinate system of the opening. On a textured object, these are referred to as UV-coordinates, or UVs. Using UVs allows us to ensure that any light ray entering ‘here’ will leave ‘there’ in the same relative position.

The same is true of the angle of the ray. By default, Gaia calculates ray paths in a global coordinate system using two three-vector variables: the origin and the direction. Both are simply positions in 3D space, with Gaia drawing a line between them – a vector – and looking for any intersections along that line.

Gaia looks for ray-primitive intersections anywhere along this line, although intersections behind the origin are ignored.
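As a minimal sketch (the names here are illustrative rather than Gaia’s actual classes), that representation looks something like this:

// Sketch of the ray representation described above
struct vec3 { float x, y, z; };

struct ray {
    vec3 origin;    // a position in space
    vec3 direction; // which way the ray points
    // Any point along the ray is origin + t * direction; intersections
    // with t < 0 lie behind the origin and are ignored
    vec3 at(float t) const {
        return { origin.x + t * direction.x,
                 origin.y + t * direction.y,
                 origin.z + t * direction.z };
    }
};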

By describing these points in the local coordinate system of the opening, we can move the vector to the other opening, convert back to the global coordinate system and continue the ray in the correct direction, as if there were nothing even there.

To do this, we need a local coordinate system for each opening. Gaia creates one using the surface normal of the shape, generating two more perpendicular vectors to act as axes for our new coordinate system – a set of orthonormal basis vectors. Converting between the two systems is then just a matrix multiplication: for a local position (x,y,z) in the coordinate system defined by basis vectors {A, B, C}, the corresponding global position is given by:

\begin{pmatrix} A_1 & B_1 & C_1 \\ A_2 & B_2 & C_2 \\ A_3 & B_3 & C_3 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \vec{A} x + \vec{B}y + \vec{C}z

Since the basis vectors are orthonormal, the inverse of this matrix is simply its transpose, and multiplying by that transforms global coordinates into local ones.
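As a concrete sketch of this machinery – the tangent construction below (crossing the normal with whichever global axis it is least aligned with) is a standard trick, not necessarily Gaia’s exact code:

#include <cmath>

// Building an orthonormal basis from a surface normal, then converting
// positions and directions between local and global coordinates
struct vec3 { float x, y, z; };

float dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
vec3 cross(vec3 a, vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
vec3 normalize(vec3 v) {
    float inv = 1.0f / std::sqrt(dot(v, v));
    return { v.x * inv, v.y * inv, v.z * inv };
}

struct onb {
    vec3 A, B, C; // C is the surface normal

    explicit onb(vec3 normal) {
        C = normalize(normal);
        // Seed the tangent with whichever axis the normal is least aligned with
        vec3 axis = std::fabs(C.x) > 0.9f ? vec3{0, 1, 0} : vec3{1, 0, 0};
        A = normalize(cross(axis, C));
        B = cross(C, A); // already unit length
    }

    // Local -> global: the matrix product above, i.e. Ax + By + Cz
    vec3 toGlobal(vec3 v) const {
        return { v.x * A.x + v.y * B.x + v.z * C.x,
                 v.x * A.y + v.y * B.y + v.z * C.y,
                 v.x * A.z + v.y * B.z + v.z * C.z };
    }

    // Global -> local: the inverse of an orthonormal matrix is its
    // transpose, so this is just a dot product with each basis vector
    vec3 toLocal(vec3 v) const {
        return { dot(v, A), dot(v, B), dot(v, C) };
    }
};

Teleporting a ray then amounts to expressing the hit point (relative to the ‘here’ opening’s origin) and direction via toLocal(), and re-expanding both at the other opening via the ‘there’ basis’s toGlobal().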

All this occurs inside the portal material class, meaning a portal can (in theory) be applied to any primitive, so long as there are two of them linked together properly. The final effect is pretty cool, although in the future I would like to try and implement edge effects similar to those used in Incredibles 2 and Portal.

Portals also have an albedo property, allowing them to tint the scene seen through them: you really can now see the world through rose-tinted glasses.

The rear portal here has a pink albedo applied, which visibly alters the caustic cast by the portal. Each portal can have a different albedo applied.

This isn’t to say there aren’t problems, though. The portals cause havoc with the shadow rays, so copies of the main light are added to the light list, shifted so as to be in the correct location after rays have passed through each portal (as if there were duplicate rooms beyond the portal). Even then there are still some lighting artefacts; more work is needed here for sure.

On the whole, implementing these portals was fun – it was fairly easy to do and provided impressive, easy-to-see results. It also showed how useful the heat map can be for debugging: several bugs were solved simply by looking at it.

Rays hitting the left portal here were being emitted behind the other portal, resulting in each ray hitting the ray-bounce limit (50 bounces), as shown by the red here.

Next stop I think is textures – now that Gaia can handle UV transformations this should be fairly easy to do.

Version 0.2.1: Monte Carlo Rendering and Shadow Rays

That’s quite a long title there…

When I first learned that you can find the answer to certain problems by throwing random numbers at them, I was understandably rather shocked. If I could just throw a random number generator at everything, had all the maths I’d been learning up to this point been pointless?

Fortunately, no. No it wasn’t.

Monte Carlo methods are exactly this: a way of statistically estimating the answer to a question using random numbers.

The easiest (and most common, sorry) example is calculating pi using a square and a circle:

Scaling the fraction of points that landed inside this circle gives 3.12 (n=100), a close estimate of pi! Increasing n results in a more accurate estimate, to a point.

By selecting random points inside the square and counting how many land inside the circle, you can estimate the value of pi: the circle covers pi/4 of the square’s area, so pi is roughly four times the fraction of points that fall inside it. The accuracy improves with the number of points used, although the law of diminishing returns does come into play – the error only falls with the square root of n.
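The whole experiment fits in a few lines of C++; a quick sketch:

#include <iostream>
#include <random>

// Monte Carlo estimate of pi: throw n random points at the unit square
// and count how many land inside the quarter circle of radius 1.
// The circle covers pi/4 of the square, so pi ~= 4 * inside / n.
int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uniform(0.0, 1.0);

    const int n = 100000;
    int inside = 0;
    for (int i = 0; i < n; ++i) {
        double x = uniform(rng);
        double y = uniform(rng);
        if (x * x + y * y <= 1.0) ++inside;
    }
    std::cout << "pi ~= " << 4.0 * inside / n << "\n";
}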

Thankfully, we can improve our estimate by using a probability density function (pdf).

A pdf may be thought of as a continuous function that, for any point x, defines the relative probability that a randomly generated number would equal that value. The more closely the pdf follows the function f(x) being sampled, the better it describes where the important contributions lie, and the faster our Monte Carlo estimate will converge on the correct answer.

Unfortunately a pdf, and importance sampling in general, can only be used when you already know something about where the important contributions lie, which prevents its use in fields such as particle physics. In computer graphics, though, we know plenty: each type of material and object has a pdf that defines the distribution of the ray bounce directions according to the properties of the material, resulting in less noise in the final images:

montecarlocornell.png
The first (correct) Monte-Carlo render.
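As an example of what such a pdf looks like in practice, here is a sketch of the cosine-weighted hemisphere sampling commonly used for Lambertian surfaces (a standard construction, not necessarily Gaia’s exact implementation):

#include <cmath>
#include <random>

struct vec3 { float x, y, z; };

// Cosine-weighted sample on the unit hemisphere around +z, in local
// coordinates. A Lambertian surface scatters light in proportion to
// cos(theta), so drawing directions from pdf(dir) = cos(theta) / pi
// matches the integrand and converges far faster than uniform sampling.
vec3 sampleCosineHemisphere(std::mt19937& rng) {
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
    float r1 = uniform(rng);
    float r2 = uniform(rng);
    float phi = 2.0f * 3.14159265f * r1;
    float sqrtR2 = std::sqrt(r2);
    return { std::cos(phi) * sqrtR2,
             std::sin(phi) * sqrtR2,
             std::sqrt(1.0f - r2) }; // z = cos(theta)
}

// The matching pdf value for a sampled direction
float cosineHemispherePdf(vec3 dir) {
    return dir.z / 3.14159265f; // cos(theta) / pi
}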

To further improve the quality of the renders, Gaia takes into account both primary and secondary light sources (e.g. reflections) through the use of shadow rays. Shadow rays are additional rays that, at any randomly chosen ray bounce, are sent directly towards a randomly selected light source in the scene rather than being left to bounce around it. This ‘sampling’ of the light source results in the image converging far faster. The light sources must be explicitly defined, however, otherwise the image will be incorrectly exposed.
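A self-contained sketch of the occlusion half of this – testing whether anything sits between the shading point and the chosen point on the light – might look like the following, using sphere occluders for brevity (the names are illustrative, not Gaia’s interface):

#include <cmath>
#include <vector>

struct vec3 { float x, y, z; };
vec3 operator-(vec3 a, vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
float dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct sphere { vec3 centre; float radius; };

// Does anything block the segment from the shading point to the light?
bool occluded(vec3 point, vec3 lightPoint, const std::vector<sphere>& occluders) {
    vec3 d = lightPoint - point;
    float tMax = std::sqrt(dot(d, d));                 // distance to the light
    vec3 dir = { d.x / tMax, d.y / tMax, d.z / tMax }; // unit direction
    for (const sphere& s : occluders) {
        // Ray-sphere quadratic with a = 1 (unit direction)
        vec3 oc = point - s.centre;
        float b = dot(oc, dir);
        float c = dot(oc, oc) - s.radius * s.radius;
        float disc = b * b - c;
        if (disc < 0.0f) continue;              // misses this sphere
        float t = -b - std::sqrt(disc);
        if (t > 1e-4f && t < tMax) return true; // blocked before the light
    }
    return false; // clear line of sight - count the light's contribution
}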

To make the most of this advancement, I need to make some improvements to how the scene is described: currently scenes must be described around any meshes that are being imported, which isn’t ideal and results in a lot of trial-and-error when setting them up.

Release 0.2.1

Rendering methods:

  • Monte Carlo path tracing
  • Edge Line pass
  • Z-Depth pass

Objects:

  • Spheres
  • Quads (buggy)
  • Triangles
  • Triangle meshes from .obj files (with subdivision)

Material BRDFs:

  • Ideal Lambertian, dielectric, metallic
  • Oren-Nayar reflectance model (buggy)
  • Blinn-Phong shading model (in-progress)
  • Diffuse area lights
  • Gooch shading

Acceleration structures:

  • In-core multithreading support
  • Bounding volume hierarchies
  • Axis-aligned bounding boxes

Output file types:

  • .hdr
  • .ppm

Version 0.2: OpenSubdiv

OpenSubdiv is an open source library developed by Pixar (yes, that Pixar) to efficiently subdivide meshes. It’s a really cool piece of software that greatly simplified my quest to pretty up meshes in Gaia.

mini OSD.png
The same scene as last time (with better lighting) after subdividing the mesh twice. Rendered at 1000×500 with 500 samples per pixel. The patterning in the glass is, as far as I can tell, some kind of interference artefact; it disappears if the mesh is subdivided three times.

Subdivision is, in a nutshell, the process of taking a mesh and splitting each quad or triangle it’s made of into many more, which makes the mesh look much smoother when rendered. OpenSubdiv provides two algorithms for this: Catmull-Clark (“Catmark”) and Loop subdivision. Catmark is the more commonly used (many CAD programs use it), but for triangle-based meshes Loop subdivision is preferable. Gaia triangulates all meshes when they are loaded to prevent any errors anyway, so it made sense to pick Loop.
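Setting this up is fairly compact. A condensed sketch along the lines of OpenSubdiv’s far_tutorial_0, with the mesh arrays standing in for whatever the loader produced:

#include <opensubdiv/far/topologyDescriptor.h>

using namespace OpenSubdiv;

Far::TopologyRefiner* createLoopRefiner(int numVertices, int numFaces,
                                        const int* numVertsPerFace, // all 3s
                                        const int* vertIndicesPerFace,
                                        int maxLevel) {
    // Loop subdivision requires a purely triangular mesh - Gaia
    // triangulates everything on load, so this always holds
    Sdc::SchemeType scheme = Sdc::SCHEME_LOOP;
    Sdc::Options options;
    options.SetVtxBoundaryInterpolation(Sdc::Options::VTX_BOUNDARY_EDGE_ONLY);

    Far::TopologyDescriptor desc;
    desc.numVertices = numVertices;
    desc.numFaces = numFaces;
    desc.numVertsPerFace = numVertsPerFace;
    desc.vertIndicesPerFace = vertIndicesPerFace;

    using Factory = Far::TopologyRefinerFactory<Far::TopologyDescriptor>;
    Far::TopologyRefiner* refiner =
        Factory::Create(desc, Factory::Options(scheme, options));

    // Each uniform level splits every triangle into four, so memory (and
    // BVH size) grows fast - hence the level limit discussed below
    refiner->RefineUniform(Far::TopologyRefiner::UniformOptions(maxLevel));
    return refiner;
}

Vertex positions are then pushed through the refinement levels with Far::PrimvarRefiner::Interpolate().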

Subdivision is done recursively, and you can set a limit on how many times the scene should be subdivided. In an ideal world we’d go for as many subdivisions as possible, but each subdivision greatly increases the memory required by the scene and massively slows down the render due to the increased complexity of the BVH.

After subdivision, the triangles are assigned the material of their ‘parent’ object – something I’m very glad OpenSubdiv keeps track of, as it would be a pain to deal with otherwise. This allows me to easily propagate the existing material assignments down to the new triangles.

Due to the performance cost, I’m probably going to keep subdivision off in test scenes going forward, but for beauty shots I’ll probably stick to 2-3 iterations, depending on the mesh being used. Four subdivisions of the Mini mesh take up over 4GB of memory, making it infeasible to go above this on my laptop with its paltry 8GB of RAM. Given how long it would take to render at that point, I’m not too upset about that – I like being able to use my laptop for other things as well.

For now though I’m pretty happy with where the mesh system in Gaia has got to – it’s a bit clunky for the most part but does the job nicely. Next I want to try and tackle Monte Carlo rendering again; cracking this should allow for far less noisy images in the future.


Release 0.2

Rendering methods:

  • Distributed (stochastic) path tracing
  • Monte Carlo path tracing (in-progress)
  • Edge Line pass
  • Z-Depth pass

Objects:

  • Spheres
  • Quads (buggy)
  • Triangles
  • Triangle meshes from .obj files (with subdivision)

Material BRDFs:

  • Ideal Lambertian, dielectric, metallic
  • Oren-Nayar reflectance model (buggy)
  • Blinn-Phong shading model (in-progress)
  • Diffuse area lights
  • Gooch shading

Acceleration structures:

  • In-core multithreading support
  • Bounding volume hierarchies
  • Axis-aligned bounding boxes

Output file types:

  • .hdr
  • .ppm

Version 0.1.2

This is actually a big update to Gaia, not in the amount of code introduced, but in what it allows: multi-material meshes.


multicolour_mini_test.png
First (successful) test at applying multiple materials to the same mesh, using the Mini model from last time. Rendered at 500×1000 with 1000 samples per pixel.

Unfortunately it’s not a particularly elegant solution. The mesh must be prepared in a program like Autodesk Maya by assigning different materials to the model, and then exported with those materials attached.

The materials used don’t actually matter, as Gaia can’t import from .MTL files yet, but the .obj file stores a material identifier that can be used to differentiate between the parts of the mesh we want to colour separately. When importing a mesh, if I pass it a list of materials, it can then assign them according to this identifier.
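A sketch of how that looks with tinyobjloader, whose per-face material_ids array carries the identifier (the material type and fallback are illustrative stand-ins for Gaia’s classes):

#include <vector>
#include "tiny_obj_loader.h"

struct material; // stand-in for Gaia's material base class

// Assign one of our hand-built materials to every face of the mesh,
// using the per-face material id recorded in the .obj file.
// materialList must be ordered to match the identifiers in the file -
// the trial-and-error part mentioned below.
std::vector<material*> assignMaterials(
        const std::vector<tinyobj::shape_t>& shapes,
        const std::vector<material*>& materialList,
        material* fallback) {
    std::vector<material*> perFace;
    for (const tinyobj::shape_t& shape : shapes) {
        for (size_t f = 0; f < shape.mesh.num_face_vertices.size(); ++f) {
            int id = shape.mesh.material_ids[f]; // -1 if nothing attached
            bool valid = id >= 0 && id < static_cast<int>(materialList.size());
            perFace.push_back(valid ? materialList[id] : fallback);
        }
    }
    return perFace;
}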

Unfortunately, getting the materials to align with the correct parts of the mesh is pretty much trial-and-error…

I’d love to be able to create textures as defined in the .MTL files, but Gaia’s shaders aren’t versatile enough to implement this yet. Maybe after I can get Phong to work…

The biggest problem with the meshes now is their definition: blocky meshes look really poor in certain scenarios, especially when reflective materials are involved:

mini_dark_10k_5k_supersampled
This was rendered at 10000×5000 and then reduced to 1000×500 to reduce noise, especially on the interior of the car. Still noisy in there despite this. The bonnet looks pretty poor with the blocky mesh.

so the next plan is definitely to implement some form of subdivision.


Release 0.1.2

Rendering methods:

  • Distributed (stochastic) path tracing
  • Monte Carlo path tracing (in-progress)
  • Edge Line pass
  • Z-Depth pass

Objects:

  • Spheres
  • Quads (buggy)
  • Triangles
  • Triangle meshes from .obj files (improved) 

Material BRDFs:

  • Ideal Lambertian, dielectric, metallic
  • Oren-Nayar reflectance model (buggy)
  • Blinn-Phong shading model (in-progress)
  • Diffuse area lights
  • Gooch shading

Acceleration structures:

  • In-core multithreading support
  • Bounding volume hierarchies
  • Axis-aligned bounding boxes

Output file types:

  • .hdr
  • .ppm

Version 0.1

Version 0.1 adds support for a number of features, as well as enabling some that have been sitting in the code dormant for quite a while.

The first of these is support for outputting a Z-Depth pass, useful for adding a depth of field effect to the scene in post, as opposed to rendering the effect as part of the beauty pass. This allows the depth of field of the scene to be changed far more easily, as a Z-Depth pass takes a fraction of the time of a beauty pass to render. An example of one of these passes is below:

v 0.1 z depth example
The Z-Depth pass of the random sphere scene shown in previous entries.

In a Z-Depth pass, objects are shaded according to how far they are from the camera. Two distances are defined prior to rendering: a minimum and a maximum. If an object is closer to the camera than the minimum distance, it is shaded pure black, whereas any object further away than the maximum distance is shaded pure white. Objects between the two are shaded in proportion to their distance: the further from the camera, the whiter the object. This results in nice gradients that, when composited with a beauty render, allow objects to blur and fade out in the distance – depth of field.
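The mapping itself is tiny; something along these lines:

#include <algorithm>

// Map a hit distance to a Z-Depth shade: 0 (black) at or before the
// minimum distance, 1 (white) at or beyond the maximum, linear between
float zDepthShade(float distance, float minDist, float maxDist) {
    float t = (distance - minDist) / (maxDist - minDist);
    return std::clamp(t, 0.0f, 1.0f);
}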

output
The above Z-Depth pass composited with the edge line composition shown in Version 0.0.4

Next is the addition of axis-aligned bounding boxes (AABB) and bounding volume hierarchies (BVH). The implementation of these largely follows that of Peter Shirley in his minibook Ray Tracing: The Next Week. This is for one simple reason:

His implementation works.

In time I will expand and likely replace a large amount of this code, adding support for other entities such as directional lights, but for now it is a simple and relatively elegant solution that allows Gaia to really begin to stretch her legs as a path tracer.

For a while now, the code for importing a .obj mesh into Gaia has been sitting dormant and unincluded. This is because, for anything but the simplest models, the time taken to perform intersection tests between each ray and every primitive in the scene made it utterly useless.

However, by introducing support for bounding volume hierarchies, a complex mesh can be included in the scene without much of a performance decrease – compared to before at least.

.obj files are imported using the Tiny Obj Loader single-header library, available at https://github.com/syoyo/tinyobjloader. All meshes are triangulated when they are loaded so that Gaia doesn’t get confused by any strange n-sided shapes.
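Loading then boils down to a single call; a sketch against the library’s v1 API (the filename is a placeholder):

#define TINYOBJLOADER_IMPLEMENTATION // define in exactly one .cpp file
#include "tiny_obj_loader.h"

#include <iostream>
#include <vector>

int main() {
    tinyobj::attrib_t attrib;             // flat vertex data
    std::vector<tinyobj::shape_t> shapes; // per-shape index lists
    std::vector<tinyobj::material_t> materials;
    std::string warn, err;

    // The final 'true' asks the loader to triangulate any n-gons for us
    bool ok = tinyobj::LoadObj(&attrib, &shapes, &materials, &warn, &err,
                               "mini.obj", nullptr, true);
    if (!ok) { std::cerr << err; return 1; }

    // attrib.vertices is a flat xyz array; faces index into it
    std::cout << attrib.vertices.size() / 3 << " vertices, "
              << shapes.size() << " shapes\n";
}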

glasscar-e1548012210562.png
A glass Mini Cooper imported from a .obj file.

There is still a long way to go with meshes. Currently they only support a single material for the whole mesh, as with the Mini above, and importing materials from the file is not supported either. The imported mesh also cannot be manipulated in any way, so translation, rotation and scaling of meshes will be high on the to-do list. Their inclusion does, however, allow for far more complex scenes than Gaia has been able to render up till now, and as such is quite a milestone.


Release 0.1

Rendering methods:

  • Distributed (stochastic) path tracing
  • Monte Carlo path tracing (in-progress)
  • Edge Line pass
  • Z-Depth pass (new)

Objects:

  • Spheres
  • Quads (buggy)
  • Triangles
  • Triangle meshes from .obj files (new) 

Material BRDFs:

  • Ideal Lambertian, dielectric, metallic
  • Oren-Nayar reflectance model (buggy)
  • Blinn-Phong shading model (in-progress)
  • Diffuse area lights
  • Gooch shading

Acceleration structures:

  • In-core multithreading support
  • Bounding volume hierarchies (new)
  • Axis-aligned bounding boxes (new)

Output file types:

  • .hdr
  • .ppm

Version 0.0.4

Edge lines are often used in non-photorealistic art styles such as cel shading, and are typically generated in post by detecting sharp gradients in the colour image.

random edge
The edge line pass of a scene described in Peter Shirley’s Ray Tracing in One Weekend containing randomly generated spheres.

Gaia generates its edge lines slightly differently. Using the algorithm outlined in this paper, presented at the 2009 International Symposium on Non-Photorealistic Animation and Rendering, gradients in the geometry are detected and drawn to a black and white edge line pass, as seen above, which can then be composited with an RGB render to produce a final image.

random colour temp
A composite image of the edge line pass above with an RGB render.

The edge lines are produced using a ‘ray stencil’, where additional rays are sent in concentric rings around a sample ray, each returning the Object ID of the geometry it strikes. If any stencil ray returns an Object ID different to that of the central sample ray, there is an edge present.

Increasing the number of additional rays improves the quality of the edge lines, whereas increasing the radius of the ray stencil results in thicker edge lines being rendered.
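A sketch of the stencil test, where castObjectId stands in for however the renderer fires a ray through a film-plane position (u, v) and reports what it hit:

#include <cmath>
#include <functional>

// Returns true if any stencil ray sees a different object to the
// central sample ray. castObjectId is a placeholder for the renderer's
// actual ray-casting entry point.
bool isEdge(float u, float v, float radius, int raysPerRing, int rings,
            const std::function<int(float, float)>& castObjectId) {
    const float pi = 3.14159265f;
    int centreId = castObjectId(u, v);
    for (int ring = 1; ring <= rings; ++ring) {
        float r = radius * ring / rings; // concentric rings around the sample
        for (int i = 0; i < raysPerRing; ++i) {
            float angle = 2.0f * pi * i / raysPerRing;
            // Any stencil ray seeing a different Object ID means an edge
            if (castObjectId(u + r * std::cos(angle),
                             v + r * std::sin(angle)) != centreId)
                return true;
        }
    }
    return false;
}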

Gaia currently does not support edge lines in reflections, as seen in the composite image above, although this could be something I improve on in the future.

Release 0.0.4

Rendering methods:

  • Distributed (stochastic) path tracing
  • Monte Carlo path tracing (in-progress)
  • Edge Line pass (new)

Objects:

  • Spheres
  • Quads
  • Triangles (unstable)

Material BRDFs:

  • Ideal Lambertian, dielectric, metallic
  • Oren-Nayar reflectance model (buggy)
  • Blinn-Phong shading model (in-progress)
  • Diffuse area lights
  • Gooch shading

Acceleration structures:

  • In-core multithreading support

Output file types:

  • .hdr
  • .ppm

Version 0.0.3

Monte Carlo rendering brings many problems to the table, and will be something that I continue to develop over a long period of time. Version 0.0.3 begins this process, but for now the implementation is non-functional for all but the most basic scenes.

v003 gooch render.png
A Gooch-shaded sphere, using a red Lambertian base material. The blue side of the sphere points towards the ‘light’, although constant luminance is used in this scene.

Version 0.0.3 also implements the Gooch shading algorithm, as proposed in Amy Gooch’s paper here. This shading algorithm produces a cool-warm gradient across the object, orientated towards a specified light. The specifics of how this works will be outlined in a future post. The result is a more cartoonish visual, which could be combined with other techniques such as edge lines to create a non-photorealistic appearance.
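As a preview of that future post, here is a sketch of the cool-warm interpolation at the heart of the algorithm; the tones and blend weights are typical example values rather than Gaia’s:

#include <cmath>

struct vec3 { float x, y, z; };
float dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
vec3 operator+(vec3 a, vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
vec3 operator*(float t, vec3 v) { return { t * v.x, t * v.y, t * v.z }; }

// Gooch shading: remap n.l from [-1, 1] to [0, 1] and use it to blend
// between a cool and a warm tone, each tinted by the base colour
vec3 goochShade(vec3 normal, vec3 toLight, vec3 baseColour) {
    float t = (1.0f + dot(normal, toLight)) / 2.0f;
    vec3 kCool = vec3{0.0f, 0.0f, 0.4f} + 0.25f * baseColour;
    vec3 kWarm = vec3{0.4f, 0.4f, 0.0f} + 0.5f * baseColour;
    // Written as in the paper's equation, which weights the cool tone
    // towards the light - matching the blue-towards-light sphere above
    return t * kCool + (1.0f - t) * kWarm;
}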


Release 0.0.3

Rendering methods:

  • Distributed (stochastic) path tracing
  • Monte Carlo path tracing (in-progress) (new)

Objects:

  • Spheres
  • Quads
  • Triangles (unstable)

Material BRDFs:

  • Ideal Lambertian, dielectric, metallic
  • Oren-Nayar reflectance model (buggy)
  • Blinn-Phong shading model (in-progress)
  • Diffuse area lights
  • Gooch shading (new)

Acceleration structures:

  • In-core multithreading support

Output file types:

  • .hdr
  • .ppm

Next release: edge detection and edge line pass

Version 0.0.2

Gaia now supports the rendering of quads, allowing me to render the first of many Cornell boxes. Expect to see a lot more of these in the coming updates.

v_002_cornell_box.png
The first Cornell box. 500×500 at 1000 spp.

The current implementation of quads isn’t perfect. It takes three points as input, which is neither the minimum needed to define simple squares and rectangles, nor enough to support the full range of possible quads. Axis-aligned rectangles would be more ideal for use in bounding boxes later on, but require defining a separate rectangle for each pair of axes, making the code messier.

Improving this is on my to-do list, but the current solution works well enough for now.


Release 0.0.2

Rendering methods:

  • Distributed (stochastic) path tracing

Objects:

  • Spheres
  • Quads (new)
  • Triangles (unstable)

Material BRDFs:

  • Ideal Lambertian, dielectric, metallic
  • Oren-Nayar reflectance model (buggy)
  • Blinn-Phong shading model (in-progress)
  • Diffuse area lights

Acceleration structures:

  • In-core multithreading support

Output file types:

  • .hdr
  • .ppm

Plan for next release: Monte Carlo methods


Version 0.0.1

Gaia is working!

v_001_first_render
One of the first successful tests of Gaia’s materials system. 1000×500 at 1000 samples per pixel. Left: hollow glass sphere. Centre: ideal Lambertian. Right: metallic.

Well, the basics are at least. I’ve implemented Kajiya’s rendering equation well enough to get physical-looking results, but there’s still a long way to go. Monte Carlo integration and non-spherical objects, for example…
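For reference, the equation in question – outgoing radiance is whatever the surface emits, plus the integral over the hemisphere of the BRDF times the incoming radiance:

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot \vec{n}) \, d\omega_i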

Version 0.0.1 allows the rendering of simple scenes built from spheres, using idealised material BRDFs and basic lighting. The Oren-Nayar reflectance model was also implemented as a first attempt at a true BRDF, but the current implementation can leave unwanted artefacts.

The code is available on github: https://github.com/tim0901/Gaia

Release 0.0.1

Rendering methods:

  • Distributed (stochastic) path tracing

Objects:

  • Spheres
  • Triangles (unstable)

Material BRDFs:

  • Ideal Lambertian, dielectric, metallic
  • Oren-Nayar reflectance model (buggy)
  • Blinn-Phong shading model (in-progress)
  • Diffuse area lights

Acceleration structures:

  • In-core multithreading support

Output file types:

  • .hdr
  • .ppm

Plan for 0.0.2: quads and a Cornell box.

The Plan

Before truly getting started on the development of Gaia, there were some important design choices that had to be made. Some were easier than others, some I might come to regret, but they have been made nonetheless.

My language of choice had to be C++, simply because it is the language I am most comfortable with. It also works well with external libraries such as OpenGL, which will continue to be used to monitor render progress.

Gaia will be, at its core, a bidirectional path tracer, meaning rays are traced both from the camera and from the light source, to be connected in the middle. This is more computationally expensive per ray but significantly reduces noise. I also hope to implement the Metropolis light transport algorithm, although this will come later.

Design-wise, Gaia is going to be built as a movie renderer, and as such will be a biased renderer. Eventually I hope to use techniques such as ray sorting to accelerate rendering – speed matters more to a movie studio than strict physical accuracy.

On top of this, Gaia will eventually support a variety of primitives – spheres, triangles, planes and curves – as well as volumes, all of which will support motion blur when desired.

Of course this is all subject to change, but for now it gives me quite a few areas to work on!
