/r/computergraphics

Welcome to r/computergraphics.

This subreddit is open to discussion of anything related to computer graphics or digital art.

Computer Graphics FAQ

Submission Guidelines

Anything related to CG is welcome.

  • If you submit your own work, please be ready to receive critiques.

  • If you submit other people's work, please give credit.

For more information about posting to our subreddit, read the r/computergraphics FAQ

Technical Questions?

Here are subreddits dedicated to specific fields:

r/vfx
r/GraphicsProgramming
r/MotionDesign
r/Programming
r/gamedev
r/Low_Poly
r/archviz
r/3Dmodeling
r/DigitalArtTutorials

Software Questions?

We'd love to help with questions about specific software, so please feel free to ask, but these software-specific communities may be able to provide more in-depth answers.

r/Maya
r/3dsMax
r/Cinema4D
r/Blender

/r/computergraphics

53,429 Subscribers

2

Ray vs KD-Tree intersection test is not working

Quick context

I am writing a basic CPU-based path tracer, and I am currently in the middle of implementing a KD-Tree acceleration structure in order to speed up Ray vs Object intersections.

The problem

The Ray vs KD-Tree intersection test is not working. I believe the KD-Tree construction itself is working, though I am not entirely sure about that. Below I walk through a short test that seems to suggest the tree has been built correctly.

What I am NOT looking for

I know that my current KD-Tree implementation is not optimized, which is fine because naïve is good enough for now. It is always possible to optimize once it works.


Part 1: KD-Tree implementation.

As I understand it, the following is the general idea behind building a KD-Tree.

Starting point: A set of triangles that make up a mesh, and an empty KdNode.

  1. Compute an AABB that contains the current set of triangles. In the very beginning, this will be the AABB of the full mesh.

  2. Choose a splitting axis. In a KD-Tree the splitting order cycles in a fixed pattern; I have chosen Y -> Z -> X. Then compute the midpoint of the bounding box on that axis, which is used to decide which side of the split a triangle falls on.

  3. Split the current set of triangles into two subsets, according to which side of the split each triangle's centroid lies on.

  4. Recursively build the child nodes using the two new subsets of triangles.

In order to avoid building huge trees I have set a depth limit for the tree, as well as a minimum size for the triangle set. If we are too deep in the tree, recursion stops and the remaining triangles become a leaf. The same happens when the current set of triangles is too small, since at that point we do not gain much from e.g. splitting 8 triangles into two subsets; we would likely be better off just iterating over the remaining few triangles and running a Ray vs Triangle intersection test against each of them.

Furthermore, note that I am working with ONLY static scenes. The KD-Tree is being built before the path-tracer runs.

KdNode class

// KdNode.h
class KdNode {
public:
	KdNode() = default;
        ~KdNode();
	
	KdNode* Build(std::vector<Triangle>& triangles, const int& depth);
	bool IntersectsRay(const Ray& ray, HitPayload& hp) const;

	bool IsLeaf{ false };
	
	KdNode* left{ nullptr };
	KdNode* right{ nullptr };
	
	std::vector<Triangle> tris{};
	AABB bbox{};
};

The building of the tree

// KdNode.cpp
KdNode* KdNode::Build(std::vector<Triangle>& triangles, const int& depth)
{
	KdNode * const parent = new KdNode(); // TODO: Use smart pointers.

	if (triangles.size() == 0)
		return parent; // <-- empty node

	// Don't want too big of a tree. Arbitrarily chosen numbers for now.
	if (depth >= 20 || triangles.size() <= 10)
	{
		parent->IsLeaf = true;
		parent->tris = triangles;
		return parent;
	}

	// Splitting time.
	
    // Compute the bounding box of the current set of triangles
	for (const Triangle& tri : triangles) {
		parent->bbox.Min = glm::min(parent->bbox.Min, tri.GetMin());
		parent->bbox.Max = glm::max(parent->bbox.Max, tri.GetMax());
	}

	const int axis = (depth + 1) % 3; // Alternating XYZ
	const glm::vec3 midPoint = 0.5f * (parent->bbox.Min + parent->bbox.Max);

	// Partition the triangles into left and right sets

	std::vector<Triangle> left{};
	std::copy_if(triangles.begin(), triangles.end(), std::back_inserter(left), [&](const Triangle& tri) {
		const glm::vec3 centroid = tri.GetCentroid();
		return centroid[axis] <= midPoint[axis];
	});

	std::vector<Triangle> right{};
	std::copy_if(triangles.begin(), triangles.end(), std::back_inserter(right), [&](const Triangle& tri) {
		const glm::vec3 centroid = tri.GetCentroid();
		return centroid[axis] > midPoint[axis];
	});

	// Recursively build the left and right child nodes
	parent->left = Build(left, depth + 1);
	parent->right = Build(right, depth + 1);

	return parent;

}

Part 2: Does the KdNode::Build(...) function seem to be working?

I do not know how to robustly test whether it works or not, but I have still tried to do so.

I have created a "test mesh" using the 3D modelling program "Blender" I modelled a UV-Sphere with radius=3. The sphere is centered at the origin in it's local coordinate system. I am using a right-handed coordinate system in my application, as such I have taken care to export the sphere using the correct settings. The forward axis is -Z and the up direction is +Y. I load this .obj model into my application using the tinyobjloader C++ library.

After loading the model I observe the resulting KD-Tree; the image below shows the AABBs of some of the tree's nodes. Note how the root AABB is correctly [-3, 3] on all axes. Furthermore, note how each right child we follow essentially cuts off everything on one side of the split. For example, the green AABB representing the right child of the KD-Tree root has x, z in [-3, 3] but y in [0, 3]. Looking over the other AABBs also seems to suggest that the KD-Tree building is correct.

Debug-view of the sphere mesh's Kd-Tree
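
Beyond eyeballing the AABBs, one way to test the construction more systematically is to walk the finished tree and assert a couple of structural invariants. The sketch below is a minimal example of that idea written against the KdNode interface above; CountLeafTriangles is my own hypothetical helper, not part of the original code.

// Hypothetical sanity check (not part of the original code): with this centroid-based
// split no triangle is duplicated, so the leaves should together contain exactly the
// triangles that went into Build().
#include <cassert>
#include <cstddef>

static std::size_t CountLeafTriangles(const KdNode* node)
{
	if (node == nullptr)
		return 0;

	if (node->IsLeaf)
	{
		// A leaf carries triangles and should have no children.
		assert(node->left == nullptr && node->right == nullptr);
		return node->tris.size();
	}

	// An inner node should not carry triangles of its own.
	assert(node->tris.empty());
	return CountLeafTriangles(node->left) + CountLeafTriangles(node->right);
}

// Usage: assert(CountLeafTriangles(root) == originalTriangles.size());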


Part 3: Performing a Ray vs Sphere intersection test (yields incorrect result!)

I simply load in the mentioned sphere, create a ray through screen-space pixel (900, 500) (I'll explain why in a second), and execute the Ray vs Mesh intersection test.

Note that the sphere center is at the origin and the radius is 3. Therefore, if I place the camera at e.g. (0, 0, 3.5) and look at (0, 0, 0), the sphere is essentially very close to the camera, and assuming a reasonable FOV (45 deg in my case) it also takes up most of the pixels. So, if I shoot a ray through "the middle of the screen" (the image resolution is 1920x1080), then surely the ray will hit the sphere.

Furthermore, note that I don't think I need to do any model-to-world-space transformation for the loaded sphere mesh. The local (model) coordinates of the sphere are already what I want the world-space coordinates to be.
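
For completeness, camera.GetRayThrough is not shown in this post. The helper below is a hypothetical sketch of how such a pinhole-camera ray is commonly generated for a right-handed setup; the name, signature and the Ray{origin, direction} construction are my assumptions, not the actual camera code.

// Hypothetical pinhole-camera ray generation (an assumption; the real
// camera.GetRayThrough is not shown in the post).
#include <glm/glm.hpp>
#include <cmath>

Ray MakePinholeRay(float px, float py,
                   const glm::vec3& camPos, const glm::vec3& camForward,
                   const glm::vec3& camUp, const glm::vec3& camRight,
                   float vfovDegrees, float imageWidth, float imageHeight)
{
    // Map the pixel centre to NDC in [-1, 1], with +Y pointing up.
    const float ndcX = 2.0f * ((px + 0.5f) / imageWidth) - 1.0f;
    const float ndcY = 1.0f - 2.0f * ((py + 0.5f) / imageHeight);

    const float aspect  = imageWidth / imageHeight;
    const float tanHalf = std::tan(glm::radians(vfovDegrees) * 0.5f);

    // Direction toward the corresponding point on a virtual image plane one unit
    // in front of the camera.
    const glm::vec3 dir = glm::normalize(camForward
                                       + camRight * (ndcX * aspect * tanHalf)
                                       + camUp    * (ndcY * tanHalf));

    return Ray{ camPos, dir }; // assumes Ray can be built from origin and direction
}

With the camera at (0, 0, 3.5) looking at the origin, camForward is (0, 0, -1), and pixel (900, 500) is close to the centre of a 1920x1080 image, so the resulting ray points almost straight down -Z toward the sphere.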

The below test yields no hit.

// main.cpp
int main(void) {

    // World-Space coordinates.
    camera.SetPosition({ 0.0f, 0.0f, 3.5f });
    camera.LookAt({0.0f, 0.0f, 0.0f});
    camera.SetVerticalFov(45); // Degrees

    // Image is 1920x1080.

    std::unique_ptr<Mesh> sphere = std::make_unique<Mesh>("sphere.obj");
    
    // World-Space ray.
    const Ray ray = camera.GetRayThrough(900.0f, 500.0f);

    HitPayload hp{};
    const bool hit = sphere->IntersectsRay(ray, hp);
    if (hit) {
        std::cout << "Hit!\n";
        return 0;
    }
    
    std::cout << "No hit!\n";
    return -1;

};

Part 4: Ok, so let's see how I am performing the Ray vs KD-Tree intersection test.

Here is the relevant function for that.

// KdNode.cpp
bool KdNode::IntersectsRay(const Ray& ray, HitPayload& hp) const
{

	// If the ray misses the bounding box, return false
	if (!bbox.IntersectsRay(ray))
		return false;

	// If we are a leaf node, check each triangle
	if (IsLeaf)
	{
		bool hit = false;
		float minDist = std::numeric_limits<float>::max();
		for (const Triangle& tri : tris)
		{
			const float t = tri.IntersectsRay(ray);
			if (t > 0.0f && t < minDist)
			{
				minDist = t;
				hp.t = minDist;
				hp.Normal = tri.GetNormal();
				hit = true;
			}
		}
		return hit;
	}

        // If not, then try to recursively intersect with children
	bool hitLeft = left->IntersectsRay(ray, hp);
	bool hitRight = right->IntersectsRay(ray, hp);

	return hitLeft || hitRight;

}

As far as I can see, there are two places this could go wrong: either my Ray vs AABB intersection test is incorrect, or my Ray vs Triangle intersection test is.

Ray vs AABB

float AABB::IntersectsRay(const Ray& ray) const
{

	// Ray vs AABB using 'Slab method'.
	// Source: https://tavianator.com/fast-branchless-raybounding-box-intersections/

	const float rxinv = 1.0f / ray.Dir().x;
	const float ryinv = 1.0f / ray.Dir().y;
	const float rzinv = 1.0f / ray.Dir().z;

	const float tx1 = (Min.x - ray.Origin().x) * rxinv;
	const float tx2 = (Max.x - ray.Origin().x) * rxinv;

	float tmin = std::min(tx1, tx2);
	float tmax = std::max(tx1, tx2);

	float ty1 = (Min.y - ray.Origin().y) * ryinv;
	float ty2 = (Max.y - ray.Origin().y) * ryinv;

	tmin = std::max(tmin, std::min(ty1, ty2));
	tmax = std::min(tmax, std::max(ty1, ty2));

	float tz1 = (Min.z - ray.Origin().z) * rzinv;
	float tz2 = (Max.z - ray.Origin().z) * rzinv;

	tmin = std::max(tmin, std::min(tz1, tz2));
	tmax = std::min(tmax, std::max(tz1, tz2));
	
	// No intersection
	if (tmax < tmin)
		return -1.0f;

	// Ray origin is inside the AABB
	if (tmin < Utils::Constants::ALMOST_ZERO)
		return tmax;

	return tmin;

}
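
For what it's worth, the slab test can be exercised in isolation against the known scene: a ray starting at the test camera position and pointing straight down -Z should hit the sphere's root AABB at roughly t = 0.5. A minimal check along those lines (assuming Ray can be constructed from an origin and a direction, which is not shown in the post) might look like this:

// Hypothetical standalone check of AABB::IntersectsRay against the sphere's root box.
#include <glm/glm.hpp>
#include <cassert>

void TestAabbSlab()
{
	AABB box;
	box.Min = glm::vec3(-3.0f);
	box.Max = glm::vec3( 3.0f);

	// Ray from the test camera position straight toward the sphere.
	const Ray ray(glm::vec3(0.0f, 0.0f, 3.5f), glm::vec3(0.0f, 0.0f, -1.0f));

	// The near z-slab is at z = 3, i.e. t = (3 - 3.5) / -1 = 0.5 along this ray,
	// so the returned distance should be positive (and roughly 0.5).
	const float t = box.IntersectsRay(ray);
	assert(t > 0.0f);
}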

Ray vs Triangle

float Triangle::IntersectsRay(const Ray& ray) const
{

	// Möller–Trumbore intersection algorithm
	// Source: https://en.wikipedia.org/wiki/M%C3%B6ller%E2%80%93Trumbore_intersection_algorithm#Check_if_the_ray-plane_intersection_lies_outside_the_triangle
    static constexpr float epsilon = std::numeric_limits<float>::epsilon();

    const glm::vec3 ray_cross_e2 = glm::cross(ray.Dir(), e1);
    const float det = glm::dot(e0, ray_cross_e2);

    if (det > -epsilon && det < epsilon)
        return -1.0f; // This ray is parallel to this triangle.

    const float inv_det = 1.0f / det;
    const glm::vec3 s = ray.Origin() - v0;
    const float u = inv_det * glm::dot(s, ray_cross_e2);

    if (u < 0 || u > 1)
        return -1.0f;

    const glm::vec3 s_cross_e1 = glm::cross(s, e1);
    const float v = inv_det * glm::dot(ray.Dir(), s_cross_e1);

    if (v < 0 || u + v > 1)
        return -1.0f;

    // At this stage we can compute t to find out where the intersection point is on the line.
    const float t = inv_det * glm::dot(e1, s_cross_e1);

    // Ray intersection
    if (t > epsilon)
        return t;
    
    // This means that there is a line intersection but not a ray intersection.
    return -1.0f;

}
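
Similarly, the triangle routine can be unit-tested on its own with a triangle whose expected hit distance is known in advance. The sketch below assumes a Triangle(v0, v1, v2) constructor (not shown in the post) that sets e0 = v1 - v0 and e1 = v2 - v0, i.e. the edge1/edge2 roles from the Wikipedia reference, and again assumes Ray can be built from an origin and a direction.

// Hypothetical standalone check of Triangle::IntersectsRay.
#include <glm/glm.hpp>
#include <cassert>

void TestTriangleIntersection()
{
	// Right triangle in the z = 0 plane that contains the point (0.25, 0.25, 0).
	const Triangle tri(glm::vec3(0.0f, 0.0f, 0.0f),
	                   glm::vec3(1.0f, 0.0f, 0.0f),
	                   glm::vec3(0.0f, 1.0f, 0.0f));

	// Ray starting at z = 1 and pointing straight down -Z through (0.25, 0.25).
	const Ray ray(glm::vec3(0.25f, 0.25f, 1.0f), glm::vec3(0.0f, 0.0f, -1.0f));

	// A correct Möller–Trumbore implementation reports t = 1 for this setup
	// (barycentric u = v = 0.25).
	const float t = tri.IntersectsRay(ray);
	assert(t > 0.9f && t < 1.1f);
}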

If I am going wrong somewhere in the above tests, I can't see it.


Pastebin links to relevant pieces of code for easier viewing:

0 Comments
2024/04/17
19:48 UTC

2

How does this wormhole effect shader make the distortion?

Below is the code for a simple wormhole effect shader from shadertoy.com.

I understand everything else except:

uv.x = time + .1/(r);
uv.y = time + sin(time/2.575) + 3.0*a/3.1416;

Here's what I think it's doing:

  • `uv.x = time + .1/r` is making the later `texture()` call take the color for the pixel from the right side of the wormhole's texture, making it look like it's moving forward. And I think the `+.1/r` is there to make the center of the wormhole distort more.

  • I have no idea what the `uv.y` is doing.

    void mainImage( out vec4 fragColor, in vec2 fragCoord )
    {
        float time = iTime;
        vec2 p = -1.0 + 2.0 * fragCoord.xy / iResolution.xy;
        vec2 uv;

        p.x += 0.5*sin(time);
        p.y += 0.5*cos(time*1.64);

        float r = length(p);
        float a = atan(p.y, p.x);

        uv.x = time + .1/(r);
        uv.y = time + sin(time/2.575) + 3.0*a/3.1416;

        float w = r*r;

        vec3 col = texture(iChannel0, uv).xyz;
        fragColor = vec4(col, 1.0);
    }

1 Comment
2024/04/17
19:12 UTC

3

Demand for a 10-100 billion particle/voxel fluid simulation on a single workstation?

0 Comments
2024/04/17
07:45 UTC

4

Backrooms animation

1 Comment
2024/04/16
20:55 UTC

1

Join PixelMentor Discord Server

📷 Exciting News! Join Our Vibrant CG & VFX Community on Discord! 📷

https://discord.gg/TRrw4ZZaXt

Are you passionate about Computer Graphics & Visual Effects? 📷 Look no further! Our Discord channel is the ultimate hub for professionals, enthusiasts, and newcomers alike! 📷

📷 Dive into a world of:
- Latest News: Stay updated with the hottest trends and breakthroughs in CG & VFX.
- Creative Challenges: Fuel your creativity with stimulating challenges and competitions.
- In-depth Tutorials: Learn from the best with tutorials covering a wide range of topics.
- Top Tools & Resources: Discover essential tools and resources to level up your skills.
- Networking Opportunities: Connect with industry professionals and fellow enthusiasts for collaborations and discussions.

Don't miss out on the opportunity to be a part of our dynamic community! Click the link below to join now and unlock a world of endless possibilities in CG & VFX! 📷📷 #CG #VFX #DiscordCommunity #JoinUs

Join Now!!!
https://discord.gg/TRrw4ZZaXt

https://preview.redd.it/670hlh4qspuc1.jpg?width=4096&format=pjpg&auto=webp&s=f63bdd0706695c4df7c949118b5197279a00a31d

0 Comments
2024/04/15
21:54 UTC

9

Trying to be healthy

1 Comment
2024/04/15
19:26 UTC

10

RoPes

0 Comments
2024/04/15
16:19 UTC

6

Why do we use an oval shape as the single unit in the point cloud in 3D Gaussian splatting?

Maybe my understanding is incorrect, but 3DGS is basically a point cloud formed by ovals whose colour varies depending on the viewpoint.

Just wondering why an oval is the preferred shape and not other shapes?

Is there a specific technique/paper that describes the process of finding the ideal shape for a single unit in a point cloud?

Many thanks!

6 Comments
2024/04/14
04:40 UTC

1

Suggestions for spiderweb interactive maps for teaching

I don't know if you all have seen those presentations that are literally like spiderweb maps, where you click different locations to look at that info. Can anyone help me pinpoint what to use to create one? I'm trying to create a visual interactive diagram covering different OSs and related topics for class. Any ideas?

0 Comments
2024/04/13
00:27 UTC

9

I Created the Walt Disney Castle after 5 YEARS

I challenged myself to recreate the Walt Disney castle after a 5-year hiatus. The new project was built using Blender 4.1, while the previous one was created with Blender 2.79.

I would love to hear your thoughts on the comparison between the two versions.

If you’re interested, you can watch the creation and animation here: https://youtu.be/l89sj4DiMNo

0 Comments
2024/04/12
20:10 UTC

9

LegoBat

0 Comments
2024/04/11
15:43 UTC

2

Setting shader array from external source (ILGPU, ComputeSharp) or create ComputeBuffer from pointer in Unity

I am writing an application that uses Unity as a visualizer, but since the core of the application has to be somewhat portable and disconnected from Unity, I am using ILGPU (I can change to ComputeSharp if need be; I went with ILGPU as it supports more platforms) to write some performance-critical code that takes advantage of the GPU (an advection/diffusion fluid engine). This all works fine and the code runs, but now I want to display the simulation on screen.

The naive way would be to do the work on the GPU through ILGPU, fetch the data back to the CPU, send it over to Unity, fill a ComputeBuffer with the data, and then point the shader at that buffer. This will be slow.

So my question is: is it possible to set the ComputeBuffer in Unity directly using the pointer from ILGPU, or is there some other magic on the shader end?

0 Comments
2024/04/10
18:46 UTC

5

A380 Vs Ion Man

2 Comments
2024/04/09
19:27 UTC

8

Farm

0 Comments
2024/04/08
14:15 UTC

13

Do you talk to your dog?

0 Comments
2024/04/07
16:01 UTC

4

Getting into stylized character modeling and animation recently, feedback appreciated!! (Yes, I'm already aware of the imperfect loop of the walk cycle)

2 Comments
2024/04/05
21:37 UTC

8

Project Zomboid - School Bus base by artforge 2024

1 Comment
2024/04/04
12:57 UTC

2

Particle tutorial with 3ds Max, TyFlow and Krakatoa

0 Comments
2024/04/04
12:25 UTC

2

Explanation of the Matrix4x4 class, source code in video description

0 Comments
2024/04/04
08:09 UTC

9

Realtime Refraction with SDF, Raymarching and some interesting Boolean Operations

1 Comment
2024/04/03
09:41 UTC

0

ATOM PUNK

0 Comments
2024/04/02
14:45 UTC

1

Handling Border Vertices in a Partitioned, Dynamic LOD System

I'm currently working on implementing a basic partitioned, dynamic Level of Detail (LOD) system for a square 3D terrain, which I've divided into quadrants. I am attempting to implement the data structure described in this paper (https://x3dom.org/pop/files/popbuffer2013.pdf), which utilizes aggressive geometry quantization to sort triangles based on their precision level, thus creating a nested, progressive structure.

The core of my issue lies in managing the quantization and rescaling of border vertices. According to the paper, all vertices within a partition map to an integer grid (ranging from 0 to 2^q - 1) based on the partition's specific bounding box. These quantized vertex values are truncated and rescaled back to the terrain's original object space, depending on the required LOD and the partition's bounding box. This process, however, introduces cracks along the borders of partitions with differing LODs.
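
To make the round trip concrete, here is a minimal single-coordinate sketch of the quantize/truncate/rescale process described above (my own helper names and signatures, not code from the paper), assuming q < 32 bits of precision:

// Hypothetical helpers sketching POP-buffer-style quantization for one coordinate.
// 'q' is the quantization bit depth and 'level' the precision level used when rescaling.
#include <algorithm>
#include <cstdint>

uint32_t Quantize(float v, float boxMin, float boxMax, int q)
{
    const float norm = std::clamp((v - boxMin) / (boxMax - boxMin), 0.0f, 1.0f); // [0, 1] within this partition
    return static_cast<uint32_t>(norm * float((1u << q) - 1u));
}

float Dequantize(uint32_t qv, float boxMin, float boxMax, int q, int level)
{
    const uint32_t truncated = (qv >> (q - level)) << (q - level); // keep the top 'level' bits
    const float norm = float(truncated) / float((1u << q) - 1u);
    return boxMin + norm * (boxMax - boxMin);
}

// Example of the border problem: a vertex at x = 0 shared by two quadrants whose
// (hypothetical) bounding boxes are [-100, 0] and [0, 100] quantizes to the grid
// maximum in the first box and to 0 in the second. At a coarse 'level' the maximum
// value truncates away from the border while 0 does not, so dequantizing each copy
// with its own box yields two different positions, which is exactly where the
// cracks come from.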

To address this, the paper suggests protecting the positions of all border vertices by using the highest possible quantization level (l = q) during preprocessing. These protected vertices are flagged, and their high-precision coordinates are considered when computing degenerate triangles.

Here's where I'm stuck: Consider the top left quadrant of a square 3D surface/terrain. The bottom right corner vertex of this quadrant, being on the partition's border, would have a quantized grid value near the maximum. However, this same vertex, when part of the bottom right quadrant, would likely have a value near the minimum due to the different partition bounding box. Ensuring the correct rescaling of these border vertices requires using the same bounding box for rescaling as was used for quantization, which presents a challenge given my current use of simple vertex and index buffers. This might necessitate duplicating border vertices in the buffer and completely reindexing the triangles, which seems both inefficient and different than what the paper intends.

My tentative solution involves streaming the original, full-precision float values for the vertices and rendering border vertices at full precision, but this approach diminishes the benefits of reduced data streaming.

Is there a more efficient way to handle this issue without losing the advantages of the POP buffer method? Any insights or suggestions would be greatly appreciated. If this post doesn't belong here, please let me know, and I'll remove it. Thank you!

0 Comments
2024/03/29
14:42 UTC

3

Looking for a Shadow-Casting Technique for 2D scene

Hello, I am trying to implement shadows in my 2D game, but I'm not really sure how to go about it.

This is how I would like them to look:

https://preview.redd.it/64bh0cu3yxqc1.png?width=2050&format=png&auto=webp&s=177fe86a58bad410052ee51e98105a20bfb8ba25

The green should be the shadow penumbra and should be kinda blurry.

Shadows should only follow a "shadow_dir"

I don't know if I should learn shadow mapping or raycasting?

Thanks

3 Comments
2024/03/27
21:02 UTC

1

Differences between the Blinn-Phong and Phong illumination models?

Just wondering what the differences/improvements are.

Is it just a change to the illumination model, or does it apply to shading models too?

3 Comments
2024/03/26
10:31 UTC

6

Phong illumination model - where can I find more explanation of these equations? (It’s not in the original paper)

9 Comments
2024/03/26
10:30 UTC

0

CYOA Research Questionnaire

I'm currently in the process of making a choose your own adventure (CYOA) style fantasy first-person 3D animation in Unreal Engine, for my college course, about an adventurer delving into a dungeon to acquire a magical artifact to save his village. It would be appreciated if you would fill out this questionnaire to help me with my project.

https://forms.gle/qGpBvJdwXuycK5rV9

0 Comments
2024/03/26
02:19 UTC
