/r/GraphicsProgramming


A subreddit for everything related to the design and implementation of graphics rendering code.

Rule 1: Posts should be about Graphics Programming.
Rule 2: Be Civil, Professional, and Kind


Suggested Posting Material:
- Graphics API Tutorials
- Academic Papers
- Blog Posts
- Source Code Repositories
- Self Posts
(Ask Questions, Present Work)
- Books
- Renders
(Please xpost to /r/ComputerGraphics)
- Career Advice
- Job Postings (Graphics Programming only)


Related Subreddits:

/r/ComputerGraphics

/r/Raytracing

/r/Programming

/r/LearnProgramming

/r/ProgrammingTools

/r/Coding

/r/GameDev

/r/CPP

/r/OpenGL

/r/Vulkan

/r/DirectX


Related Websites:
ACM: SIGGRAPH
Journal of Computer Graphics Techniques

Ke-Sen Huang's Blog of Graphics Papers and Resources
Self Shadow's Blog of Graphics Resources

/r/GraphicsProgramming

49,334 Subscribers

9

TU Wien vs. Charles University in Prague and whether it even matters?

Hi! I'm nearing the end of my bachelor's program, and now I'm choosing between these two universities to pursue my master's degree with the ambition of working in the games industry in the future. Here are the main arguments I have for each (might be wrong):

TU Wien

  • very good course quality and some great research
  • great international ratings, which might help when applying for jobs

Charles University

  • still a very solid course
  • focuses more on graphics in the context of video games
  • Czechia has a lot of game studios

And now to the most important question: does it even matter that much which of these two I choose?

Would love to hear some opinions!

Thanks in advance :)

9 Comments
2024/11/29
20:30 UTC

1

LearnOpenGL point light shadow issue

3 Comments
2024/11/29
08:05 UTC

6

Weird bug with GGX importance sampling, I cannot for the life of me figure out what the issue is, please help

Okay I genuinely feel like I'm going insane. I want to get to a Cook-Torrance + GGX implementation for my ray tracer, but for some damn reason the importance sampling algo gives these weird artifacts, like turning the sun disk into a crescent, sometimes a spiral:

https://preview.redd.it/ue6gpmsjyp3e1.png?width=521&format=png&auto=webp&s=f1ded9704a4cb5e20b19d5f47a7ed7bf33906480

https://preview.redd.it/g2hlu7dmyp3e1.png?width=461&format=png&auto=webp&s=e7e695a287a1e30fcf58e2cfe00e9b6216ef6ee8

It's written in OptiX. This is the relevant code:

// PCG hash: return the full 32-bit word so no state is lost when the caller
// writes it back into the unsigned seed (a float return would round away low bits).
static __forceinline__ __device__ unsigned int pcg_hash(unsigned int input) {
    unsigned int state = input * 747796405u + 2891336453u;
    unsigned int word = ((state >> ((state >> 28u) + 4u)) ^ state) * 277803737u;

    return (word >> 22u) ^ word;
}

static __forceinline__ __device__ float myrnd(unsigned int& seed) {
    seed = pcg_hash(seed);
    return (float)seed / UINT_MAX;
}

// Helper class for constructing an orthonormal basis given a normal vector.
struct Onb
{
    __forceinline__ __device__ Onb(const float3& normal)
    {
        m_normal = normalize(normal);

        // Choose an arbitrary vector that is not parallel to n
        float3 up = fabsf(m_normal.y) < 0.99999f ? make_float3(0.0f, 1.0f, 0.0f) : make_float3(1.0f, 0.0f, 0.0f);

        m_tangent = normalize(cross(up, m_normal)); 
        m_binormal = normalize(cross(m_normal, m_tangent));
    }

    __forceinline__ __device__ void inverse_transform(float3& p) const
    {
        p = p.x * m_tangent + p.y * m_binormal + p.z * m_normal;
    }

    float3 m_tangent;
    float3 m_binormal;
    float3 m_normal;
};


// The closest hit program. This is called when a ray hits the closest geometry.
extern "C" __global__ void __closesthit__radiance()
{
    optixSetPayloadTypes(PAYLOAD_TYPE_RADIANCE);
    HitGroupData* hit_group_data = reinterpret_cast<HitGroupData*>(optixGetSbtDataPointer());

    const unsigned int  sphere_idx = optixGetPrimitiveIndex();
    const float3        ray_dir = optixGetWorldRayDirection(); // direction that the ray is heading in, from the origin
    const float3        ray_orig = optixGetWorldRayOrigin();
    float               t_hit = optixGetRayTmax(); // distance to the hit point

    const OptixTraversableHandle gas = optixGetGASTraversableHandle();
    const unsigned int           sbtGASIndex = optixGetSbtGASIndex();

    float4 sphere_props; // stores the 3 center coordinates and the radius
    optixGetSphereData(gas, sphere_idx, sbtGASIndex, 0.f, &sphere_props);

    float3 sphere_center = make_float3(sphere_props.x, sphere_props.y, sphere_props.z);
    float  sphere_radius = sphere_props.w;

    float3 hit_pos = ray_orig + t_hit * ray_dir; // in world space
    float3 localcoords_hit_pos = optixTransformPointFromWorldToObjectSpace(hit_pos);
    float3 normal = normalize(hit_pos - sphere_center); // in world space

    Payload payload = getPayloadCH();
    unsigned int seed = payload.seed;

    float3 specular_albedo = hit_group_data->specular;
    float3 diffuse_albedo = hit_group_data->diffuse_color;
    float3 emission_color = hit_group_data->emission_color;

    float roughness = hit_group_data->roughness; roughness *= roughness;
    float metallicity = hit_group_data->metallic ? 1.0f : 0.0f;
    float transparency = hit_group_data->transparent ? 1.0f : 0.0f;

    if (payload.depth == 0)
        payload.emitted = emission_color;
    else
        payload.emitted = make_float3(0.0f);


    float3 view_vec = normalize(-ray_dir); // From hit point towards the camera
    float3 light_dir;
    float3 half_vec;

    // Sample microfacet normal H using GGX importance sampling
    float r1 = myrnd(seed);
    float r2 = myrnd(seed);
    if (roughness < 0.015f) roughness = 0.015f; // prevent artifacts


    // GGX Importance Sampling
    float phi = 2.0f * M_PIf * r1;
    float alpha = roughness * roughness;
    float cosTheta = sqrt((1.0f - r2) / (1.0f + (alpha * alpha - 1.0f) * r2));
    float sinTheta = sqrt(1.0f - cosTheta * cosTheta);

    half_vec = make_float3(sinTheta * cosf(phi), sinTheta * sinf(phi), cosTheta);
    // half_vec = normalize(make_float3(0, 0, 1) + roughness * random_in_unit_sphere(seed));

    Onb onb(normal);
    onb.inverse_transform(half_vec);
    half_vec = normalize(half_vec);
    // half_vec = normalize(normal + random_in_unit_sphere(seed) * roughness);


    // Calculate reflection direction L
    light_dir = reflect(-view_vec, half_vec);

    
    // Update payload for the next ray segment
    payload.attenuation *= diffuse_albedo;
    payload.origin = hit_pos;
    payload.direction = normalize(light_dir);
    
    // Update the seed for randomness
    payload.seed = seed;
    
    setPayloadCH(payload);

}

Now, I suspected that maybe the random numbers are messing up the seed, but I printed out the r1, r2 pairs and graphed them in Excel, and they looked completely uniform to me.

My other suspicion (not that there are many options) is that the orthonormal basis helper struct is messing something up - but my issue with this is that if I replace the GGX sampling distribution with a simple up vector + some noise to create a similar distribution of vectors, the artifacts disappear. When I use Onb on make_float3(0, 0, 1) + roughness * random_in_unit_sphere(seed), it just doesn't have the same issue.

So that would leave the actual half-vector calculation as the problem, but for that I just copied the formula from online sources, and I checked the correctness of the math many times. So I'm just lost. This is probably going to be down to some really dumb, obvious mistake that I somehow haven't noticed, or some misunderstanding of how these functions should be used, I guess, but I would really appreciate some help lol.
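For reference while debugging: the widely used GGX sampling convention (e.g. Karis's UE4 course notes) applies alpha = roughness * roughness exactly once, whereas the closest-hit program above squares roughness when reading it from the hit group data and then squares it again into alpha, so the sampled lobe ends up much narrower than the roughness value suggests. Below is a minimal standalone sketch of the conventional half-vector sampling, using a hypothetical Vec3 type so it compiles outside OptiX; it is a reference point under that convention, not a confirmed fix for the artifact.

#include <cmath>

// Hypothetical minimal vector type so the sketch compiles outside OptiX.
struct Vec3 { float x, y, z; };

// Conventional GGX half-vector sampling in tangent space (+Z = surface normal),
// with the alpha = roughness * roughness mapping applied exactly once.
Vec3 sampleGGXHalfVector(float roughness, float r1, float r2)
{
    const float alpha    = roughness * roughness;
    const float phi      = 2.0f * 3.14159265f * r1;
    const float cosTheta = std::sqrt((1.0f - r2) / (1.0f + (alpha * alpha - 1.0f) * r2));
    const float sinTheta = std::sqrt(std::fmax(0.0f, 1.0f - cosTheta * cosTheta));
    return { sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta };
}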

10 Comments
2024/11/28
23:41 UTC

2

Ray Tracing: colour mixing for bounce lighting

Working on a small renderer and I've sort of hit a wall at the final step: colour mixing.

I've got all my intersections done and I know which colours need to be mixed, etc. - what I'm struggling with is how to mix them properly without keeping track of every colour during the tracing process.

If you only trace rays without any bounces, the result is clear: the colour at the intersection point is the final pixel colour, so it can just be written to the image at those pixel coordinates.

But as soon as we have additional bounces, the primary intersection colour now becomes dependent on the incoming illumination from secondary and tertiary intersections (and so on). For example if my primary intersection results in a red colour, and the bounce ray then results in a blue colour (assuming it is not in shadow), then the red and blue need to be mixed.

For one bounce this is also trivial: simply mix the second colour with what's already stored in the image.

But when we get a second bounce, we can't just sequentially mix the colours "in place". We first need to mix the secondary colour with the tertiary, and the result of that with the primary and THEN write to the image.

This gets even more complicated when we have multiple bounces spawn from a single ray.

How would you approach this? Is there a more efficient approach other than storing evaluated colours in a buffer and combining them in the correct order via some sort of wavefront approach?

How do ray tracers that don't limit their light paths to a single bounce per intersection handle this?
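A common answer: iterative path tracers never store per-bounce colours at all. Because the mixing is just repeated multiplication, the path can be walked front to back while carrying a running throughput (the product of all surface colours hit so far), and each vertex's emitted light is added into the pixel scaled by that throughput; associativity of the multiplication is what makes the order work out. A minimal sketch, assuming hypothetical trace()/Hit types:

struct Color { float r, g, b; };
struct Ray   { /* origin, direction, ... */ };
struct Hit   { Color emitted; Color albedo; Ray nextRay; bool missed; };

Hit trace(const Ray& ray);   // hypothetical: intersect the scene, choose a bounce direction

// Iterative path tracing: no per-bounce colour buffer is needed because the
// running throughput already encodes how much this bounce can contribute.
Color shadePixel(Ray ray, int maxBounces)
{
    Color radiance   = { 0, 0, 0 };
    Color throughput = { 1, 1, 1 };
    for (int bounce = 0; bounce < maxBounces; ++bounce)
    {
        Hit hit = trace(ray);
        if (hit.missed) break;
        // Light emitted at this path vertex, attenuated by every surface hit so far.
        radiance.r += throughput.r * hit.emitted.r;
        radiance.g += throughput.g * hit.emitted.g;
        radiance.b += throughput.b * hit.emitted.b;
        // Fold this surface's reflectance into the throughput for later bounces.
        throughput.r *= hit.albedo.r;
        throughput.g *= hit.albedo.g;
        throughput.b *= hit.albedo.b;
        ray = hit.nextRay;
    }
    return radiance;
}

Multiple bounce rays per intersection are usually avoided in favour of one stochastic bounce per hit plus more samples per pixel, which is what keeps this loop flat instead of turning it into a tree.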

2 Comments
2024/11/28
22:58 UTC

32

I show my Java+OpenGL creation; all they want to know about is my UI.

https://i.redd.it/4k5m57a6mo3e1.gif

Hello!

I've been writing graphics stuff since mode 13h was a thing. Yesterday I showed some of my latest work with this open source robot sim and all anyone cared about was the UI... which is excellent! They care about something. Great success! :)

In Java 21 I'm still using Swing with a handy module called Modern Docking, which does all that hot swapping arrange-to-taste goodness. The app is node based like Unity or Godot.

The gif in question is showing a bug I'm having getting the ODE physics engine and my stuff to play nice together. My current goal is to implement origin shifting and reverse-Z projection because my nephew wants to do solar systems at scale, KSP style. It turns out that a million units from the origin, 32-bit floats on the graphics card start to struggle and meshes get "chunky". Currently on day 3 of what should have been a 10-minute fix. Classic stuff.

IDK where I was going with this. I just wanted to say hello and be a part of the community. Cheers!
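For the reverse-Z goal mentioned above, the usual OpenGL recipe (assuming a 4.5+ context or ARB_clip_control; the same calls exist one-to-one in LWJGL/JOGL for Java) is to remap clip-space depth to [0,1], flip the depth test, clear depth to 0, and use a projection whose near plane maps to depth 1. A minimal sketch in C-style GL:

// Reverse-Z setup for OpenGL 4.5+ (or with ARB_clip_control available).
// Assumes a loader such as GLEW/GLAD has already resolved the function pointers.
#include <GL/glew.h>
#include <cmath>

void setupReverseZ(float fovY, float aspect, float zNear, float* proj /* column-major 4x4 */)
{
    // Map clip-space depth to [0,1] instead of [-1,1] so float precision isn't wasted.
    glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);
    glDepthFunc(GL_GREATER);   // nearer fragments now have the LARGER depth value
    glClearDepth(0.0f);        // "far" is 0 under reverse-Z

    // Infinite-far reverse-Z perspective projection: depth 1 at zNear, 0 at infinity.
    const float f = 1.0f / std::tan(fovY * 0.5f);
    const float m[16] = {            // column-major
        f / aspect, 0.0f, 0.0f,  0.0f,
        0.0f,       f,    0.0f,  0.0f,
        0.0f,       0.0f, 0.0f, -1.0f,
        0.0f,       0.0f, zNear, 0.0f
    };
    for (int i = 0; i < 16; ++i) proj[i] = m[i];
}

Combined with origin shifting (recentering the world on the camera every so often), this is the usual cure for meshes going "chunky" a million units out.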

16 Comments
2024/11/28
18:21 UTC

2

How do portals in Duke Nukem traverse into other sectors properly while keeping things rendering normally?

I'm making a very simple portal-based software renderer in PyGame that just uses portals as links to other sectors - like a window or doorway. I've gotten somewhat far, with texture mapping (thank you Plus-Dust), entities, and some other neat things. I'm going with a portal-based software renderer because a depth buffer was apparently too difficult for me to add previously, and I don't want to do BSP at the moment due to limited time, so I'm not doing anything fancy that makes a portal a proper portal - one that transforms a sector and puts it in front of the portal as if it were a doorway.

Forgive the fact that I may sound like an idiot, but my problem starts with the portal traversal. For example, we'll have 3 sectors: A, B, and C. A goes into C, B goes into C, and C goes into both A AND B. My problem is that the incorrect sector gets drawn, as C would sometimes overlap C. My portal rendering system works by first getting the portals in the current sector I am in; then, if a portal is visible, I go to the sector linked by that portal and append the sector I was in before to an array, so I make sure NOT to go back there. During the drawing process, I check if there is a pixel already assigned to where I am wanting to draw. If there isn't, I draw my pixel.

I've tried clipping the portal, but that doesn't work, and I've tried appending my portals instead of the sectors to an array so I don't somehow go backwards by accident, but that doesn't work either. Any advice on what I should do?
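One thing that tends to fix exactly this in Build-style portal renderers is clipping the recursion by screen columns rather than relying only on per-pixel occupancy: each portal recurses with the intersection of its own projected column span and the window it was seen through, and traversal stops when that window becomes empty, so a sector reached through a portal can never draw outside that portal's columns. A minimal sketch with hypothetical Sector/Portal types, written in C++ but the same shape works in PyGame:

#include <algorithm>
#include <vector>

struct Portal { int toSector; int x0, x1; };   // projected screen-column span (hypothetical)
struct Sector { std::vector<Portal> portals; };

// Render 'sectorIdx', but only inside the screen-column window [xLeft, xRight).
// Each portal recurses with a window clipped to its own projected span, so a far
// sector can never overdraw columns that belong to a nearer one.
void renderSector(const std::vector<Sector>& sectors, int sectorIdx,
                  int xLeft, int xRight, int depth = 0)
{
    if (xLeft >= xRight || depth > 32) return;   // empty window or runaway recursion

    const Sector& sector = sectors[sectorIdx];
    // ... draw this sector's walls/floor/ceiling, clipped to [xLeft, xRight) ...

    for (const Portal& p : sector.portals)
    {
        int clippedLeft  = std::max(xLeft,  p.x0);
        int clippedRight = std::min(xRight, p.x1);
        if (clippedLeft < clippedRight)
            renderSector(sectors, p.toSector, clippedLeft, clippedRight, depth + 1);
    }
}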

4 Comments
2024/11/28
16:35 UTC

54

tinybvh hit version 1.0.0

After an intense month of development, the tiny CPU & GPU BVH building and ray tracing library tiny_bvh.h hit version 1.0.0. This release brings a ton of improvements, such as faster ray tracing (now beating* Intel's Embree!), efficient shadow ray queries, validity tests and more.

Also, a (not related) github repo was just announced with various sample projects *in Unity* using the tinybvh library: https://github.com/andr3wmac/unity-tinybvh

Tinybvh itself can be found here: https://github.com/jbikker/tinybvh

I'll be happy to answer any questions here.

16 Comments
2024/11/28
16:27 UTC

62

How to start Graphics programming?

I know C++ up to object-oriented programming and a bit of data structures and algorithms. I need resources, books, and tutorials to start all of this and progress to a point where I start learning and discovering new things on my own. I got inspired a lot by this one YouTube video: https://youtu.be/XxBZw2FEdK0?si=Gi1cbBfnhT5R0Vy4 Thanks 🙏

10 Comments
2024/11/28
15:49 UTC

2

Did anyone try the latest Falcor 8.0 with OpenXR?

0 Comments
2024/11/28
13:21 UTC

80

Is there such a thing as an entry-level graphics programmer role? Every job posting I've found seems to ask for a minimum of 5 years (not just in 2024, even in 2021...)

I started university in 2017 and finished in 2021. I've always wanted to get into graphics programming, but I struggle to learn by myself, so I hope that I would be able to "learn on the job" - but I could never find any entry level graphics programming roles.

Since I graduated, I've worked two jobs and I was a generalist but there was never really an opportunity to ever get into graphics programming.

Is the only way to really get into graphics programming to learn by myself? Compared to when I learned programming using Java or C# in university, graphics programming in C++ feels incredibly overwhelming.

Is there a specific project you'd suggest for me to learn that would be a good start for me to get my foot in the door for graphics programming?

40 Comments
2024/11/28
05:22 UTC

23

What are the best resources for the details of implementing a proper BRDF in a ray tracer?

So I started off with the "Ray Tracing in One Weekend" articles like many people, but once I tried switching that lighting model out for a proper Cook-Torrance + GGX implementation, I found that there are not many resources that are similarly helpful - at least not that I could easily find. I'm just wondering if there are any good blogs, books, etc. that actually go into the implementation and don't just explain the formula in theory.
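For orientation while looking: the specular term most implementations converge on is f = D * G * F / (4 * (N.L) * (N.V)), with GGX for the distribution D, a Smith/Schlick term for the shadowing-masking G and Schlick's approximation for the Fresnel F. A minimal scalar evaluation sketch (real implementations mostly differ in per-channel F0 and in which k mapping they pick for G):

#include <algorithm>
#include <cmath>

// GGX normal distribution function.
float distributionGGX(float ndh, float alpha)
{
    float a2 = alpha * alpha;
    float d  = ndh * ndh * (a2 - 1.0f) + 1.0f;
    return a2 / (3.14159265f * d * d);
}

// Schlick-GGX single-direction shadowing term; k = alpha / 2 is one common mapping.
float geometrySchlickGGX(float ndx, float alpha)
{
    float k = alpha * 0.5f;
    return ndx / (ndx * (1.0f - k) + k);
}

// Schlick Fresnel with a scalar F0 (per-channel in a real renderer).
float fresnelSchlick(float vdh, float f0)
{
    return f0 + (1.0f - f0) * std::pow(1.0f - vdh, 5.0f);
}

// Cook-Torrance specular term; all dot products are assumed clamped to [0,1].
float cookTorranceSpecular(float ndl, float ndv, float ndh, float vdh,
                           float roughness, float f0)
{
    float alpha = roughness * roughness;
    float D = distributionGGX(ndh, alpha);
    float G = geometrySchlickGGX(ndl, alpha) * geometrySchlickGGX(ndv, alpha);
    float F = fresnelSchlick(vdh, f0);
    return (D * G * F) / std::max(4.0f * ndl * ndv, 1e-4f);
}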

8 Comments
2024/11/27
23:49 UTC

7

Graphics Programming Presentation

Hi! For my Oral Communications class, I have to pretend I am in my professional future, and give a 10 minute presentation which will be a role play of a potential career scenario (eg. a researcher presenting research, an entrepreneur selling a new business idea, etc).

I am super interested in graphics programming and becoming a graphics programmer, and I'm treating this presentation as a way to research more about a potential career! So, I'm wondering what kind of presentations you would typically give in this field? Thanks!

5 Comments
2024/11/27
22:55 UTC

63

Added light scattering to my procedural engine (C++/OpenGL/GLSL) gameplay is a struggle tho'

2 Comments
2024/11/27
19:11 UTC

0

💡 HeavyGL: A brand new Graphics API for C and Java JNI

Hey there!

I’m currently working on a project called HeavyGL, this project is maintained regularly!

🚀 What is HeavyGL?

HeavyGL is a simple specification that aims to be efficient and cross-platform; that sounds hard, but it's not. Its API is very simple and easy to use, and it is based on a multi-context architecture.

✨ Key Features:

  • Fast porting: This API isn't going to be based on a single architecture. The implementation will switch between different contexts depending on the approach the developer wants to aim for; this multi-context implementation allows you to have the same application running on a GPU using OpenGL or on some embedded system with a pixel array.
  • Minimal dependencies: Aiming for simplicity

( ♥️ ) Contributors are always welcome!

Official Website: https://heavygl.github.io

GitHub: https://github.com/HeavyGL

2 Comments
2024/11/27
18:27 UTC

16

Can we use some kind of approximation in rendering when the GPU is in a low-power state?

Looking for optimisation opportunities when we detect that the GPU is in a low-power state, or in a heavy scene where the GPU can take more than the expected time to get through the pipeline. The thought is that, by some means, we could tweak things to skip some rendering while the overall scene still looks acceptable.

8 Comments
2024/11/27
17:40 UTC

3

When rendering a GUI, is it better to render each element as an individual texture, or is it better to batch them all into a single texture?

By GUI I refer to elements such as runtime-generated text, interface rects, buttons, and the likes of that.

Do you often render each one of these with their own individual texture or do you create some dynamically-generated atlas and batch all of them into it at once?

This might be hard to implement (although not impossible), but frequent texture changes are bad for the fps and this could help minimize them.

Personally, texture changes were never a problem for my PC, and I don't know how many texture changes per frame are acceptable. I might be a little too paranoid.
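The common pattern is the second option: pack glyphs and UI rects into one dynamically grown atlas and accumulate every element into a single vertex buffer each frame, so the whole GUI is one texture bind and one draw call regardless of element count. A minimal CPU-side batching sketch with hypothetical types:

#include <cstdint>
#include <vector>

// One GUI vertex referencing the shared atlas texture.
struct GuiVertex { float x, y, u, v; uint32_t rgba; };

struct GuiBatch
{
    std::vector<GuiVertex> vertices;   // uploaded once per frame to a dynamic VBO

    // Append one rect; u0..v1 address the element's sub-region of the atlas,
    // so no texture bind is needed between elements.
    void pushQuad(float x0, float y0, float x1, float y1,
                  float u0, float v0, float u1, float v1, uint32_t rgba)
    {
        GuiVertex quad[6] = {
            { x0, y0, u0, v0, rgba }, { x1, y0, u1, v0, rgba }, { x1, y1, u1, v1, rgba },
            { x0, y0, u0, v0, rgba }, { x1, y1, u1, v1, rgba }, { x0, y1, u0, v1, rgba },
        };
        vertices.insert(vertices.end(), quad, quad + 6);
    }

    void clear() { vertices.clear(); }   // call at the start of each frame
};

With that in place the per-frame cost is one buffer upload and one draw, and the question of how many texture changes per frame are acceptable mostly goes away.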

2 Comments
2024/11/27
16:40 UTC

2

Alpha-blending geometry together to composite with the frame after the fact.

I have a GUI system that blends some quads and text together, which all looks fine when everything is blended over a regular frame, but if I'm rendering to a backbuffer/rendertarget how do I render the GUI to that so that the result will properly composite when blended as a whole to the frame?

So for example, if the backbuffer initializes to a zeroed out RGBA, and I blend some alpha-antialiased text onto it, the result when alpha blended onto the frame will result in the text having black fringes around it.

It seems like this is just a matter of having the right color and alpha blend functions when compositing the quads/text together in the backbuffer, so that they properly blend together, but also so that the result properly alpha blends during compositing.

I hope that makes sense. Thanks!

EDIT: Thanks for the replies, guys. I think I failed to convey that the geometry must properly alpha-blend together (i.e. overlapping alpha-blended geometry) so that the final RGBA result of that can be alpha-blended on top of an arbitrary render as though all of the geometry was directly alpha-blended with it. I.e. a red triangle at half-opacity when drawn to this buffer should result in (1,0,0,0.5) being there, and if a blue half-opacity triangle is drawn on top of it then the result should be (0.5,0,0.5,1).
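The standard way to make this composite correctly is premultiplied alpha: have the GUI shaders output colour already multiplied by alpha, render into the target (cleared to 0,0,0,0) with ONE / ONE_MINUS_SRC_ALPHA on both colour and alpha, then composite the finished target over the frame with the same factors; that is what removes the black fringes around antialiased text. A sketch of the blend state, assuming OpenGL:

#include <GL/glew.h>

// Pass 1: draw GUI quads/text into an RGBA render target cleared to (0,0,0,0).
// The shader must output premultiplied colour (rgb already multiplied by alpha).
void beginGuiOffscreenPass()
{
    glEnable(GL_BLEND);
    // colour: src + dst * (1 - srcA);  alpha: srcA + dstA * (1 - srcA)
    glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA,
                        GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
}

// Pass 2: composite the finished GUI target over the main frame.
// Because the target holds premultiplied colour, the same factors are correct.
void compositeGuiOverFrame()
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    // ... draw a fullscreen quad sampling the GUI render target ...
}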

10 Comments
2024/11/27
10:56 UTC

13

FBGL: A Lightweight Framebuffer Graphics Library for Linux

I'm excited to share a project I've been working on: FBGL (Framebuffer Graphics Library), a lightweight, header-only graphics library for direct framebuffer manipulation in Linux.

🚀 What is FBGL?

FBGL is a simple, single-header C library that allows you to draw directly to the Linux framebuffer with minimal dependencies. Whether you're into embedded graphics, game development, or just want low-level graphics rendering, this library might be for you!

✨ Key Features:

  • Header-only design: Just include and go!
  • No external dependencies (except standard Linux libraries)
  • Simple API for:
    • Pixel drawing
    • Shape rendering (lines, rectangles, circles)
    • Texture loading (TGA support)
    • Font rendering (PSF1 format)
    • FPS calculation

github: https://github.com/lvntky/fbgl

https://reddit.com/link/1h10lvh/video/mjajh6u1ze3e1/player

0 Comments
2024/11/27
09:35 UTC

35

Thoughts on Slang?

I have been using Slang for a couple of days and I love it! It's the only shader language that I think could actually replace all the (high-level) shader languages. Since I work with both machine learning (requires autodiff) and geometry processing (requires SIMT), it's either Torch OR CUDA/GLSL/WGSL, so it would be awesome if I could write all my GPU code in one language (and a BIG bonus if I could deploy it everywhere as easily as possible). This language and its awesome compiler do everything very well without much of a performance drop compared to something like writing CUDA kernels. With the recent push from NVIDIA and support from the Khronos Group, I hope it will be adopted widely and doesn't end up like OpenCL. What are your thoughts on it?

14 Comments
2024/11/27
01:33 UTC

1

Working on a DSL for live coding. What graphics API?

Most of my work has been in algorithms, so I have not focused on low-level graphics other than OpenGL with GLSL, and mostly immediate-mode stuff (I came from IRIS GL originally.. lol). I work primarily on macOS and I've done a few simple tutorials on both Metal and Vulkan (using MoltenVK). In both cases, it's a lot of nuts-and-bolts complexity I'll have to abstract. So I'm waffling over staying with OpenGL 4.2 (abstraction required, but easier). Last night I compiled some Metal/Objective-C GPU ray tracing examples and was blown away by the speed on a Mac mini M4. So now I'm thinking maybe I need to invest some time to learn low-level coding.. it's not beautiful, but maybe a means to an end. Any thoughts?

4 Comments
2024/11/26
17:09 UTC

7

How do you sample emissive triangles?

Hi all, I'm new to path tracing and have got as far as weighted reservoir sampling for just point lights, but I'm struggling to figure out how to extend this to emissive triangle meshes. I'd really appreciate some pointers for a realtime path tracer.

From my research most people seem to do the following:

  1. Pick a random emissive object in the scene (I'm guessing you would keep an array of just the emissive objects and pick a uniform random index?)

  2. Pick a random triangle on that emissive object

  3. Pick a random point in that triangle to sample

  4. Compute radiance and pdf p(x) at this point

The two things that confuse me right now are:

  1. Should the random triangle be picked with a uniform random number, or with another pdf?

  2. How should p(x) be calculated with this process?
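A common baseline for steps 1-4 and both questions, assuming uniform selection of the light and of the triangle: uniform triangle choice is valid as long as the pdf accounts for it (choosing proportionally to triangle area or emitted power just lowers variance), and the area-measure pdf is then 1 / (numEmissiveObjects * numTrisInObject * triangleArea); for NEE/MIS it is converted to a solid-angle pdf by multiplying by dist^2 / cos(theta_light). A minimal sketch with hypothetical types:

#include <cmath>

struct Vec3 { float x, y, z; };
struct Tri  { Vec3 a, b, c; };

static Vec3  sub(Vec3 u, Vec3 v)   { return { u.x - v.x, u.y - v.y, u.z - v.z }; }
static Vec3  cross(Vec3 u, Vec3 v) { return { u.y*v.z - u.z*v.y, u.z*v.x - u.x*v.z, u.x*v.y - u.y*v.x }; }
static float length(Vec3 v)        { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

// Uniformly sample a point on a triangle (the sqrt keeps the density uniform in area).
Vec3 samplePointOnTriangle(const Tri& t, float r1, float r2)
{
    float su = std::sqrt(r1);
    float u = 1.0f - su, v = r2 * su;
    return { t.a.x + u * (t.b.x - t.a.x) + v * (t.c.x - t.a.x),
             t.a.y + u * (t.b.y - t.a.y) + v * (t.c.y - t.a.y),
             t.a.z + u * (t.b.z - t.a.z) + v * (t.c.z - t.a.z) };
}

// Area-measure pdf when the emissive object and then the triangle are picked uniformly.
// Convert to solid angle at the shading point via p_omega = p_area * dist^2 / cosThetaLight.
float lightSamplePdfArea(int numEmissiveObjects, int numTrisInObject, const Tri& t)
{
    float area = 0.5f * length(cross(sub(t.b, t.a), sub(t.c, t.a)));
    return 1.0f / (float(numEmissiveObjects) * float(numTrisInObject) * area);
}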

16 Comments
2024/11/26
16:02 UTC

1

How Would One Create Arbitrary 2D images made of non-overlapping Lines?

What's in the title. To give the background real fast: I'm creating a magical language that I would like to have some symbols for. I don't want to repurpose another language's symbols; rather, I would prefer to have a program that I can turn on, have it generate a series of squiggles, and then comb through said squiggles until I find the one I like best for a given magical word.

What is my desired outcome: something that will start from a zero point, extend a line from point zero by X (a range of, let's say, 1-10) units along a grid, then create a new point, choose any direction (that doesn't overlap with an existing line), and start extending a new line for another 1-10 units, rinse and repeat. The goal is to create what could be called Runes, Wards, Sigils, or Glyphs.

What I am asking of you all:

  1. what program/language would be best to achieve this? or does someone know of an online tool that does this?
  2. is there an easier way? absolutely want to know if I'm over/under complicating this whole thing.

I am NOT asking someone to do the work for me here. I'm happy to learn if I must, or if someone happens to have code just lying about that does this or something like it, I will take it.

Why I'm asking: I have a track record of trying to solve a problem without knowing someone already created a free tool that does the solving for me, and I'm tired of it. I have absolutely no idea what to Google with the thoughts in my mind.
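The walk described above maps to a small grid turtle that remembers which cells it has visited and refuses to step into one again; any language with a 2D drawing surface can render the result (Python with Pillow, or Processing, are popular for quick tools like this). A minimal sketch of just the walk, in C++ with the drawing left out; all names here are made up for illustration:

#include <cstdlib>
#include <set>
#include <utility>
#include <vector>

// A "rune" is a polyline on an integer grid whose strokes never revisit a cell.
std::vector<std::pair<int, int>> generateRune(int numStrokes, unsigned seed)
{
    std::srand(seed);
    const int dx[4] = { 1, -1, 0, 0 };
    const int dy[4] = { 0, 0, 1, -1 };

    std::vector<std::pair<int, int>> points{ { 0, 0 } };
    std::set<std::pair<int, int>> occupied{ { 0, 0 } };

    for (int stroke = 0; stroke < numStrokes; ++stroke)
    {
        int dir = std::rand() % 4;        // pick a direction
        int len = 1 + std::rand() % 10;   // try to extend 1-10 cells
        auto [x, y] = points.back();
        int steps = 0;
        for (; steps < len; ++steps)      // stop early if the next cell is taken
        {
            int nx = x + dx[dir], ny = y + dy[dir];
            if (occupied.count({ nx, ny })) break;
            occupied.insert({ nx, ny });
            x = nx; y = ny;
        }
        if (steps > 0) points.push_back({ x, y });   // record the stroke's endpoint
    }
    return points;   // draw consecutive points as line segments to get the glyph
}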

5 Comments
2024/11/26
01:49 UTC

3

Need advice on what to pursue for graduate school

Hello guys, I'm a fourth-year undergrad comp sci student. As the title says, I don't know what to pursue for my master's. I have taken a course on GPU computing which really got me interested in HPC, but I also enjoy graphics programming. However, I am worried that I won't be able to find a job after completing graduate school if I choose graphics. What should I do?

3 Comments
2024/11/26
00:35 UTC

18

Do you do a depth pre pass in your forward renderers?

I still can't decide if it's really worth it, or if I should move to a multi-render-target solution, which is why I'd like to hear your opinions on this.

I assume in the end it's always a trade-off between how occluded a scene is (in that case a pre-pass may effectively reduce overdraw) and how complex a scene is (if a scene has a lot of vertices, a depth pre-pass may just not be worth it).
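For reference, the pre-pass itself is only a few lines of state in GL-style APIs: draw depth only with colour writes off, then redraw with depth writes off and an EQUAL test so only the front-most fragment per pixel runs the expensive shading. Whether the second vertex pass pays for itself is exactly the occlusion-versus-geometry trade-off described above. A sketch, assuming OpenGL:

#include <GL/glew.h>

// Pass 1: depth only - minimal shader, no colour writes.
void depthPrePass()
{
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    // ... draw all opaque geometry with a depth-only shader ...
}

// Pass 2: full shading - the depth buffer is already final, so only the
// front-most fragment per pixel survives the GL_EQUAL test (no overdraw).
void shadingPass()
{
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);
    glDepthFunc(GL_EQUAL);
    // ... draw all opaque geometry with the real material shaders ...
}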

15 Comments
2024/11/25
22:40 UTC

0

"Newbie Alert! 30-year-old looking to start a career in graphic design as a freelancer. Where do I start?"

I'm a 30-year-old looking to start a new career in graphic design. I've always been interested in design, but never had the chance to pursue it. Now, I'm eager to learn and start working as a freelancer.

I'm not comfortable with the idea of a 9-to-5 job, as I value my independence and can't tolerate dominancy. Freelancing seems like the perfect fit for me.

Here are my questions:

  1. Where do I start? What are the essential skills and software I need to learn?
  2. Can I learn graphic design on my own without doing practical jobs? I want to build a career as a freelancer as soon as I'm done with learning.
  3. What are the best resources (online courses, tutorials, books) for learning graphic design?
  4. How do I create a strong portfolio and profile on platforms like Fiverr to attract clients?

I'd appreciate any advice, guidance, or resources you can share. Thank you in advance for your help!

Edit: I'm looking to learn graphic design from scratch, so any recommendations for beginner-friendly resources would be great!

6 Comments
2024/11/25
21:24 UTC
