Following on my work from Assignment [1], my goal in this midterm assignment is to simulate parallax between apparent objects based only on data contained in 3D signed distance fields (SDFs). I go beyond the Assignment [1] demos by adding color, writing shader code in JavaScript using three.js, and using more key points within the respective 3D SDFs as bases for simulating parallax.
This work follows the precedent of the extensive use of "parallax scrolling" in the earlier days of computer games, but seeks ultracompact methods of replicating similar behavior starting only from static data contained in a 3D SDF. The goal is a standardized format for simulating 3D scenes through a 2D viewport based solely on the contents of static data files, used in conjunction with a standardized media player whose only other input is an xyz camera location in 3D space. This standardization would greatly simplify the use of a screen as an artificial set extension that simulates a scene with depth during filmmaking, and is something I am pursuing in parallel with more complex methods.
The ultimate goal of this research is to see whether, by programming for a standard media player, scenes with complex parallax behavior can be compactly stored and later replayed in a way that shows far more detail than what has been stored. This is essentially a form of steganography.
In this assignment, I simulate parallax-like effects using a basic 3D SDF structure as a starting point.
As previously discussed in Assignment [1], the signed distance field of a sphere, for example, is simply the distance of an arbitrary point in 3D space from the sphere's center, minus the sphere's radius. By adopting an alternate convention for that distance, such as the magnitude of the cross product of two vectors associated with three points including the sphere's center, information can be extracted from the same SDF formula in a different way.
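For concreteness, the standard sphere SDF can be written as a one-line GLSL function of the kind used in these shaders (the function and parameter names here are my own):

```glsl
// Standard signed distance field for a sphere: negative inside the
// surface, zero on it, positive outside.
float sdSphere(vec3 p, vec3 center, float radius) {
    return length(p - center) - radius;
}
```

Substituting a different vector for p - center, and taking its length, is exactly the kind of alternate convention described above.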
As one example, imagine that the 2D screen displaying a shader lies in the z = 0 plane, so that a point S on the screen has coordinates (x, y, 0). A virtual camera is located at point C = (xc, yc, zc), with zc > 0, in the space in front of the screen. Define another point P behind the screen at coordinates (xp, yp, zp), with zp < 0. Then define vector SC as (x - xc, y - yc, zc) and vector SP as (xp - x, yp - y, zp), noting that the sign conventions for these vectors' components are intentional. Finally, take the cross product of SC and SP, and use the resulting vector as the input to the 3D SDF calculations.
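A minimal GLSL sketch of this construction, keeping the sign conventions above (the function and variable names are mine, not from the original shaders):

```glsl
// Build the cross-product vector described above for a screen point
// s = (x, y, 0), a camera c = (xc, yc, zc) with zc > 0, and a key
// point p = (xp, yp, zp) with zp < 0. The component signs follow the
// text's intentional convention.
vec3 parallaxVector(vec2 s, vec3 c, vec3 p) {
    vec3 sc = vec3(s.x - c.x, s.y - c.y, c.z); // vector SC
    vec3 sp = vec3(p.x - s.x, p.y - s.y, p.z); // vector SP
    return cross(sc, sp); // fed into the SDF in place of an ordinary position
}
```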
In the first example, I define 6 key points "behind" the 2D screen that displays the shader, which lies in the z = 0 plane. I then call a 3D sphere SDF formula for each of these points, using its respectively modified vectorial input and a separately defined radius for each point. Finally, I use an algorithm to determine how to prioritize the display of the points when they are shown simultaneously.
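The text above does not specify the prioritization algorithm; a common choice in SDF rendering is a union in which the point whose SDF value is smallest at a given fragment wins. The sketch below assumes that scheme, along with hypothetical uniform names for the camera, key points, radii, and colors, and reuses the parallaxVector sketch above:

```glsl
const int NUM_POINTS = 6;
uniform vec3 uCamera;              // camera position C, zc > 0 (assumed uniform name)
uniform vec3 uPoints[NUM_POINTS];  // key points behind the screen, zp < 0
uniform float uRadii[NUM_POINTS];  // per-point radii
uniform vec3 uColors[NUM_POINTS];  // per-point display colors

// Shade one fragment at screen position s = (x, y) on the z = 0 plane.
vec3 shade(vec2 s) {
    float best = 1.0e9;
    vec3 color = vec3(0.0); // background color
    for (int i = 0; i < NUM_POINTS; i++) {
        vec3 v = parallaxVector(s, uCamera, uPoints[i]);
        float d = length(v) - uRadii[i]; // sphere SDF applied to the modified vector
        if (d < 0.0 && d < best) {       // inside this sphere and closer than any prior hit
            best = d;
            color = uColors[i];
        }
    }
    return color;
}
```

Note that because the value being compared is the length of a cross product minus a radius, it is not a Euclidean distance to a surface; the comparison serves only as a ranking heuristic, which matches the framing of prioritizing display rather than ray marching.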
The second example has the same structure: 6 key points behind the z = 0 screen plane, a sphere SDF call per point with a modified vectorial input and its own radius, and the same display-prioritization step. As can be seen, however, the method used to modify the vectorial inputs is more complex here than in the first example.
The third example again uses 6 key points behind the z = 0 screen plane and the same display-prioritization step, but calls a 3D box SDF formula for each point with its respectively modified vectorial input.
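For reference, the widely used exact box SDF (popularized by Inigo Quilez) is the formula a "3D box SDF" typically refers to; the original shaders may use a variant:

```glsl
// Exact signed distance to an axis-aligned box with half-extents b,
// centered at the origin; translate p by the box center before calling.
float sdBox(vec3 p, vec3 b) {
    vec3 q = abs(p) - b;
    return length(max(q, vec3(0.0))) + min(max(q.x, max(q.y, q.z)), 0.0);
}
```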
The fourth example likewise uses 6 key points behind the z = 0 screen plane and a 3D sphere SDF formula for each, but the approach used to modify the vectorial inputs is significantly more complex than in the examples above, simulating a gemstone-like or window-like effect as the camera moves.
I briefly discussed caustics in my Assignment [1] writeup, and I am interested in the potential for generating caustics from double-sided laser etching on acrylic. One potential precedent is an MIT innovation in which algorithmically calculated dot patterns printed on two sides of a transparent substrate generated apparent 3D images for viewers over a limited range of angles. It seems it would be simpler to estimate caustics from double-sided etching on acrylic by assuming incident light perpendicular to the original surface of the acrylic, but I will probably have to leave any such research for another time.