/r/GraphicsProgramming


A subreddit for everything related to the design and implementation of graphics rendering code.

Rule 1: Posts should be about Graphics Programming.
Rule 2: Be Civil, Professional, and Kind


Suggested Posting Material:
- Graphics API Tutorials
- Academic Papers
- Blog Posts
- Source Code Repositories
- Self Posts
(Ask Questions, Present Work)
- Books
- Renders
(Please xpost to /r/ComputerGraphics)
- Career Advice
- Job Postings (Graphics Programming only)


Related Subreddits:

/r/ComputerGraphics

/r/Raytracing

/r/Programming

/r/LearnProgramming

/r/ProgrammingTools

/r/Coding

/r/GameDev

/r/CPP

/r/OpenGL

/r/Vulkan

/r/DirectX


Related Websites:
ACM: SIGGRAPH
Journal of Computer Graphics Techniques

Ke-Sen Huang's Blog of Graphics Papers and Resources
Self Shadow's Blog of Graphics Resources

/r/GraphicsProgramming

50,652 Subscribers

1

[GLSL] In a compute shader, I need to determine the global maximum and global minimum values from a buffer. Any recommendations for how I might improve this?

More context: I've got a buffer full of sorting keys, and I'm rescaling it so that the minimum maps to zero and the maximum maps to a predetermined maximum (e.g. v_max = 0xffff, 0x7fffff, etc.). To do this, I obtain the global minimum (g_min) and global maximum (g_max) of the buffer, subtract g_min from every value, then multiply every value in the buffer by (v_max / (g_max - g_min)).

Currently, I'm finding g_max and g_min by having each thread take a value from the buffer and do an atomicMax and atomicMin on shared variables, followed by a memoryBarrierShared(), and then having the zeroth thread in each group do an atomicMax and atomicMin on global memory.

Pretty simple on the whole. I'm wondering if anyone has recommendations to optimize this for speed. It's already not terrible, but faster is always better here.
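
One common refinement over per-thread shared atomics is a shared-memory tree reduction (atomic-free until the final step), with each workgroup then issuing a single global atomicMin/atomicMax. A minimal GLSL sketch under assumed bindings; the host clears Bounds to {0xffffffffu, 0u} before dispatch, and a second pass then applies (v - g_min) * v_max / (g_max - g_min):

#version 430
layout(local_size_x = 256) in;

// Assumed layout: keys to reduce, and a result buffer the host pre-clears
// to { g_min = 0xffffffffu, g_max = 0u }.
layout(std430, binding = 0) readonly buffer Keys { uint keys[]; };
layout(std430, binding = 1) buffer Bounds { uint g_min; uint g_max; };

shared uint s_min[gl_WorkGroupSize.x];
shared uint s_max[gl_WorkGroupSize.x];

void main() {
    uint n = uint(keys.length());
    uint i = gl_GlobalInvocationID.x;
    uint t = gl_LocalInvocationID.x;

    // Threads past the end of the buffer contribute identity values.
    s_min[t] = (i < n) ? keys[i] : 0xffffffffu;
    s_max[t] = (i < n) ? keys[i] : 0u;
    barrier();

    // Tree reduction in shared memory: no atomics until the last step.
    for (uint s = gl_WorkGroupSize.x / 2u; s > 0u; s >>= 1u) {
        if (t < s) {
            s_min[t] = min(s_min[t], s_min[t + s]);
            s_max[t] = max(s_max[t], s_max[t + s]);
        }
        barrier();
    }

    // One global atomic per workgroup instead of one per element.
    if (t == 0u) {
        atomicMin(g_min, s_min[0]);
        atomicMax(g_max, s_max[0]);
    }
}

A further speedup is to have each thread pre-reduce several elements (say 8-16) with plain min/max before the shared-memory step, cutting the number of workgroups and global atomics by the same factor.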

2 Comments
2024/12/20
19:41 UTC

0

What type of shading language is this?

I have this shader code that works with one program:

#version 300 es
precision mediump float;
uniform sampler2D in_tex;
out vec4 out_color;
in mediump vec2 uvpos;

void main()
{
    // NOTE: get_pixel() is not built-in GLSL; it is presumably a helper the
    // host program prepends (e.g. wrapping texture(in_tex, ...)).
    vec4 c = get_pixel(uvpos);
    // Invert
    c.r = 1.0 - c.r;
    c.g = 1.0 - c.g;
    c.b = 1.0 - c.b;
    c.r *= c.a;
    c.g *= c.a;
    c.b *= c.a;
    out_color = c;
}

But what precise language is this? I have another shader file with a different syntax that doesn't work with the program used for the previous shader, but does work with another program. Any link to documentation for that language?

6 Comments
2024/12/20
17:16 UTC

1

[Please don't laugh] Trying to have points zoom in/out while remaining the same size.

Using instanced rendering to draw some boxes, extremely basic stuff, and I'm trying to have them move around on mouse wheel. When I zoom I only change the centre position, and then apply the geometric transformation to draw the boxes. Why the f*** do my boxes shrink/grow when I zoom, even though I'm not scaling their sizes? Funnily, when I zoom in (the zoom value grows) my boxes actually shrink.

Again - I'd like them to remain the same constant size regardless of zoom level, just spread out.

struct InstanceAttributes {
    float4 colour;       // RGBA color
    float4 transform;    // x, y, width, height
    uint32_t instanceID; // unique id
    bool special = false;
};

struct v2f {
    float4 position [[position]]; // Transformed screen-space position
    half3 colour;                 // Rectangle color
    half3 headerColour;           // Header color
    uint32_t instanceID;          // Rectangle ID
    float2 worldPosition;         // World-space position of the vertex
    float2 rectCenter;
    float2 mouseXY;
    float zoom;
};

constant float pointRadius = 0.002f;

//  RENDERING
//========================================================================================================================
//========================================================================================================================

v2f vertex vertexMain(uint vertexId [[vertex_id]],
                      device const float2* positions [[buffer(0)]], // Vertex positions for a unit rectangle
                      device const InstanceAttributes* instanceBuffer [[buffer(1)]],
                      uint instanceId [[instance_id]],
                      device const simd::float2* mousePosBuffer [[buffer(2)]],
                      constant simd::float3& viewportTransform [[buffer(3)]],
                      constant float& screenRatio [[buffer(4)]],
                      constant float& drawableWidth [[buffer(5)]])
{
    v2f o;
    InstanceAttributes instance = instanceBuffer[instanceId];

    float zoom = viewportTransform.x;
    float2 viewportCenter = float2(viewportTransform.y, viewportTransform.z);

    // Scale hypermeters to NDC space
    instance.transform.xy *= (2.f / drawableWidth);
    float2 rectCenter = instance.transform.xy; // Calculate the rectangle's world-space center

    // Compute the rectangle vertex's world-space position without scaling by zoom
    float2 worldPosition = positions[vertexId] * instance.transform.zw + instance.transform.xy;

    // Apply viewport and zoom offset transforms to the rectangle center
    float2 transformedPosition = (rectCenter - viewportCenter) * zoom;

    // Add the unscaled local vertex position for the rectangle
    transformedPosition += (positions[vertexId] * instance.transform.zw);

    // Flip and adjust for aspect ratio
    transformedPosition.y = -transformedPosition.y;
    transformedPosition.y *= screenRatio;

    // Output to clip space
    o.position = float4(transformedPosition, 0.0, 1.0);

    // Pass attributes to the fragment shader
    o.colour = half3(instance.colour.rgb);
    o.headerColour = half3(instance.colour.rgb);
    o.instanceID = instanceId;
    o.worldPosition = worldPosition; // world-space vertex position
    o.rectCenter = rectCenter;       // world-space rectangle center
    o.mouseXY = mousePosBuffer[0];
    o.zoom = zoom;
    return o;
}

half4 fragment fragmentMain(v2f in [[stage_in]], constant float& screenRatio [[buffer(1)]]) {
    // Use a world-space "radius". If you want a specific size on screen,
    // consider adjusting this value or transforming coords differently.
    // Both worldPosition and rectCenter are in world coordinates now.
    float2 fragCoord = in.worldPosition.xy;
    float2 diff = in.rectCenter - fragCoord;
    float distToCenter = length(diff);

    float innerRadius = pointRadius - (distToCenter * 0.1); // Start of the fade
    float outerRadius = pointRadius;                        // Full radius
    float alpha = 1.0 - smoothstep(innerRadius, outerRadius, distToCenter);

    // Discard fragments outside the defined radius
    if (distToCenter > pointRadius) {
        discard_fragment();
        // return {1.f, 0.f, 0.f, 0.1f};
    }

    // Draw inside the circle as white for visibility
    return half4(in.colour, 1.f);
}
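
In case it helps, the usual recipe for constant screen-size points is to let zoom move only the centre and to add the corner offset afterwards in pixel units that zoom never touches. A minimal sketch of the idea, written in GLSL rather than Metal for brevity, with all u_* names being placeholders rather than anything from the code above:

#version 450
layout(location = 0) in vec2 a_corner;  // unit-quad corner in [-0.5, 0.5]
layout(location = 1) in vec2 a_center;  // per-instance centre, world units

uniform float u_zoom;        // zoom factor (placeholder)
uniform vec2  u_viewCenter;  // pan offset in world units (placeholder)
uniform vec2  u_viewportPx;  // viewport size in pixels (placeholder)
uniform float u_sizePx;      // desired on-screen point size in pixels

void main() {
    // Zoom and pan move the centre only...
    vec2 ndcCenter = (a_center - u_viewCenter) * u_zoom;
    // ...while the corner offset is fixed in pixels, so it never scales.
    vec2 ndcOffset = a_corner * (u_sizePx * 2.0 / u_viewportPx);
    gl_Position = vec4(ndcCenter + ndcOffset, 0.0, 1.0);
}

With this layout the quad's on-screen footprint is fixed, and the circular cutout is best done in the quad's local corner coordinates (pass a_corner through and discard where length(a_corner) > 0.5) rather than in world space, so the discard radius tracks the quad at every zoom level.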

0 Comments
2024/12/20
14:53 UTC

9

From Shaders to Rendering

I've been creating art with shaders for over a year now and I've gotten pretty comfortable with it; for example, I can write fragment shaders implementing ray marching for fairly simple scenes. From the theory/algorithms side, I understand that a shader dictates how a position on the screen should be mapped to a color, but I don't understand anything about how shaders get compiled to the GPU and how stuff actually shows up on the screen. Right now I use OpenFrameworks, which handles all of that under the hood. Where might be a good place to start understanding this process?

I'm curious in particular about how programming the GPU is different from (and similar to) programming the CPU, and how programming the GPU for graphics differs from programming the GPU for other things like machine learning.

One of my main motivations is that I'm interested in exploring functional alternatives to GLSL and maybe writing a functional shading language (I'm aware a few similar projects exist already).

2 Comments
2024/12/20
07:04 UTC

3

More is better and less is more - vram sizes and sweetspots?

Do you optimize your code for different architecture bottlenecks and VRAM sizes?

In my particular case I would like to understand the optimal iGPU VRAM allocation settings for AMD 8840U and HX 365/370 CPUs. To my understanding they have very slow die interconnects, 1 MB of L2 and 2 MB of L3 per graphics "compute unit", and a 128-bit wide bus.

0 Comments
2024/12/20
06:29 UTC

5

Ambient Light as "Area Light" Implementation Questions

This is a bit of a follow-up to my previous post, which talks about a retro-style real-time 3D API.

Just for fun, here is where I am at now.

So to start the whole thing off... ambient lighting is usually just a constant that is added (or multiplied) on top of the diffuse; however, metallic objects have no (or negligible) diffuse. How do we light metallic objects without direct lighting? Surely there is some specular highlighting or reflection happening from ambient light, right?

I came across this paper, which suggested a Blinn-Phong PBR model. I really liked the idea of it, so I started implementing it. The article mentions what they describe as an Ambient BRDF to help improve ambient lighting, which results in a better look than the plain "out_color = diffuse + spec + ambient" used in other common shaders. The main suggestion is to handle ambient light as an area light. I also came across this post on SE from Nathan Reed, which mentions...

Make the ambient color vary directionally, e.g. using spherical harmonics (SH) or a small cubemap, and looking up the color in a shader based on each vertex's or pixel's normal vector. This allows some visual differentiation between surfaces of different orientations, even where no direct light reaches them.

The first article mentioned using a 3D texture with (NdotV, roughness, F0) as coordinates. OK great, this makes sense and both are in agreement... but how do I do this exactly? I'm really stumped on how to generate this texture. The specular calculation needs a surface normal, a view vector, and a light vector, which we can use to compute NdotV, NdotL, NdotH, and VdotH for the specular component. However, our iteration loop goes from 0 to 1 for NdotV values, and it's not possible to recover a vector from just a dot product. How can I go about getting the view and normal vectors?
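
For what it's worth, the standard trick in this kind of LUT preintegration (as in split-sum environment-BRDF tables) is not to recover the vectors but to fix a canonical frame that satisfies the dot product. A hedged GLSL sketch:

// Build a canonical frame from the LUT coordinate NdotV alone: N along +Z,
// V placed in the XZ-plane. For an isotropic BRDF nothing is lost, because
// the integral depends only on relative angles.
void frameFromNdotV(float NdotV, out vec3 N, out vec3 V) {
    N = vec3(0.0, 0.0, 1.0);
    V = vec3(sqrt(max(0.0, 1.0 - NdotV * NdotV)), 0.0, NdotV);
}
// Hemisphere sample directions L are then drawn around N as usual, and
// NdotL, NdotH, VdotH all follow from N, V, and each sampled L.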

I tried using (0, 0, 1) for the view vector, and having the surface normal go from up (0, 1, 0) to (0, 0, 1) over the loop iteration. This gives a constant view vector and a surface-normal dot product running from 0 to 1. I used hemisphere sampling (32 * 32 samples) to get the light angles, but the resulting texture output doesn't seem to match at all: mine vs theirs. Specifically, at the far right side of the texture (where NdotV is at or near 1) the calculation falls apart. The paper states:

The volume texture stores the specular term itself and is directly used as the specular term in a pixel shader

What you're looking at is just the specular component for a surface at the given (NdotV, roughness) values; diffuse can be estimated as "diffuse_color * (1 - specular term)", which can also be adjusted by the metallic (black) or non-metallic (albedo) texel color.

Next, I started looking into SH, but I'm also having trouble understanding them; it feels like it goes way over my head. From my other reading, it seems like once the coefficients are calculated, you end up with ~9 or so values you can multiply and add as part of the ambient lighting calculation. Are these coefficients available somewhere, or do I need to calculate them myself? Do they depend on the angle of the surface? If so, aren't I stuck back at the previous problem of not having a view or normal vector (we only have NdotV from the loop)? I guess I could run the calculation for the entire normal sphere and keep only those which have NdotV between 0 and 1, but this just seems wrong.
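
On the SH half specifically: the nine coefficients are not universal constants; they are the projection of your particular ambient/environment lighting onto the first nine basis functions, so you compute (or bake) them yourself. Evaluating them afterwards needs only the shading normal, no view vector, which sidesteps the NdotV problem entirely. A sketch of the classic irradiance formula from Ramamoorthi and Hanrahan (2001), assuming coefficient order L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22:

// Irradiance from 9 SH coefficients; sh[] holds the projected lighting.
vec3 shIrradiance(vec3 n, vec3 sh[9]) {
    const float c1 = 0.429043, c2 = 0.511664, c3 = 0.743125,
                c4 = 0.886227, c5 = 0.247708;
    return c4 * sh[0]
         + 2.0 * c2 * (sh[3] * n.x + sh[1] * n.y + sh[2] * n.z)
         + c1 * sh[8] * (n.x * n.x - n.y * n.y)
         + c3 * sh[6] * n.z * n.z
         - c5 * sh[6]
         + 2.0 * c1 * (sh[4] * n.x * n.y + sh[7] * n.x * n.z + sh[5] * n.y * n.z);
}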

Would anyone be able to help point me in the right direction? For reference, the code I'm using to calculate the texture is at this repo.

Other relevant links:

Unreal Fresnel Link

Blinn-Phong with Roughness Textures

Edit: More links and clean up.

3 Comments
2024/12/20
05:34 UTC

16

Super Basic Graphics Coding for HS elective?

Hello! I'm teaching an HS graphics course this year and was wondering: what's the easiest way to introduce students to graphics coding?

It's a beginner elective where the only requirement is an Intro Programming class using Python and HTML. So something like OpenGL would probably be way over their heads. Is there a good tool or language for complete novices to get their feet wet? Something above Scratch level. Flash? Python? Unity?

I mainly want to give them a feel for the basic math and rendering pipeline.

21 Comments
2024/12/20
00:48 UTC

10

Optimizing Data Handling for a Metal-Based 2D Renderer with Thousands of Elements

I'm developing a 2D rendering app that visualizes thousands of elements, including complex components like waveforms. To achieve better performance, I've moved away from traditional CPU-based renderers and implemented my own Metal-based rendering system.

Currently, my app's backend maintains a large block of core data, while the Metal renderer uses a buffer that is of the same length as the core data. This buffer extracts and copies only the necessary data (e.g., color, world coordinates) required for rendering. Although I’d prefer a unified data structure, it seems impractical because Metal data resides in a shared GPU-accessible space. Thus, having a separate Metal-specific copy of the data feels necessary.

I'm exploring best practices to update Metal buffers efficiently when the core data changes. My current idea is to update only the necessary regions in the buffer whenever feasible and perform a full buffer update only when absolutely required. I'm also looking for general advice on optimizing this data flow and ensuring good practices for syncing large datasets between the CPU and GPU.

9 Comments
2024/12/19
11:39 UTC

5

Writing my first renderer

I am planning to write my first renderer in OpenGL during the winter break. All I have in mind is that I want to create a high-performance renderer. What I want to include is deferred shading, frustum culling, and maybe some meshlet culling. So my question is: is that actually a good idea to start with? Or are there other good techniques I could apply in my project? (Right now I assume I'll just do ambient occlusion for global illumination.)
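
For a sense of what the deferred part involves: the geometry pass just writes surface attributes to multiple render targets, and a later full-screen pass reads them plus depth and shades once per pixel. A minimal GLSL sketch of the geometry-pass fragment shader, with all names illustrative rather than from an existing project:

#version 450
// G-buffer outputs for a deferred geometry pass.
layout(location = 0) out vec4 gAlbedo;   // rgb: base colour
layout(location = 1) out vec4 gNormal;   // xyz: world-space normal, encoded
layout(location = 2) out vec4 gMaterial; // r: roughness, g: metallic, b: AO

in vec3 v_worldNormal;
in vec2 v_uv;

uniform sampler2D u_albedoTex;   // placeholder textures
uniform sampler2D u_materialTex;

void main() {
    gAlbedo   = vec4(texture(u_albedoTex, v_uv).rgb, 1.0);
    gNormal   = vec4(normalize(v_worldNormal) * 0.5 + 0.5, 1.0);
    gMaterial = vec4(texture(u_materialTex, v_uv).rgb, 1.0);
}

Deferred shading pairs naturally with the culling work, since both aim to spend vertex and fragment work only where it is visible.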

9 Comments
2024/12/19
05:04 UTC

53

A Global Illumination implementation in my engine

Hello,

Wanted to share my implementation of Global Illumination in my engine. It's not very optimal: the ray tracing runs in compute shaders rather than on RT cores, as it's implemented in DirectX 11. This is running on an RTX 2060, but only with pure compute shaders. The basic algorithm shares the information of diffuse rays, emitted over a hemisphere, between the pixels of a screen tile, and only traces the rays that carry the most information, based on each ray's importance from the probability distribution function (PDF) of that pixel's illumination. The denoising is based on the tile size: since there are no random rays, there is no random noise, and the information is distributed across the tile. The video shows 4x4-pixel tiles with 16 rays per pixel (only 1 to 4 actually sampled per pixel at the end, depending on the PDF), which gives a hemisphere resolution of 400 rays; a bigger tile gives more ray resolution, but is harder to denoise on detailed meshes. I know there are more complex algorithms, but I wanted to test this idea, which I think is quite simple, and I like the result: in the end I only sample 1-2 rays per pixel over most of the scene (depending on the illumination), I get a pretty nice indirect light reflection, and I can have light-emitting materials.
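
As a generic illustration of the PDF-driven selection described above (not the engine's actual code): picking a ray direction in proportion to its estimated contribution is a small CDF-inversion loop over the tile's shared direction bins.

// Pick bin i with probability weights[i] / total, via CDF inversion.
// xi is a uniform random number in [0, 1); MAX_BINS is an assumed size.
const int MAX_BINS = 64;

int sampleBin(float xi, int n, in float weights[MAX_BINS], out float pdf) {
    float total = 0.0;
    for (int i = 0; i < n; ++i) total += weights[i];
    float target = xi * total;
    float accum = 0.0;
    for (int i = 0; i < n; ++i) {
        accum += weights[i];
        if (target <= accum) {
            pdf = weights[i] / total;
            return i;
        }
    }
    pdf = weights[n - 1] / total;
    return n - 1;
}

An unbiased estimate then divides each traced ray's contribution by the returned pdf.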

Any idea for improvement is welcome.

Source code is available here.

Global Illumination

Emissive materials

Tiled GI before denoising

8 Comments
2024/12/18
22:50 UTC

29

Does triangle surface area matter for rasterized rendering performance?

https://preview.redd.it/1e0dj2gqeo7e1.png?width=741&format=png&auto=webp&s=f6b87b4eb6d6c1a7b24d89881ca2e949cdc56d1f

I know next-to-nothing about graphics programming, so I apologise in advance if this is an obvious or stupid question!

I recently saw this image in a YouTube video, where the creator advocated for the use of the "max area" subdivision but moved on without further explanation, and it left me curious. This is in the context of real-time rasterized rendering in games (specifically Unreal Engine, if that matters).

Does triangle size/surface area have any effect on rendering performance at all? I'm really wondering what the differences between these 3 are!

Any help or insight would be very much appreciated!

20 Comments
2024/12/18
21:43 UTC

9

Spectral dispersion in RGB renderer looks yellow-ish tinted

The diamond should be completely transparent, not tinted slightly yellow like that

IOR 1 sphere in a white furnace. There is no dispersion at IOR 1, so this is basically just the spectral integration. The non-tonemapped color of the sphere here is (56, 58, 45), which matches what I explain at the end of the post.

I'm currently implementing dispersion in my RGB path tracer.

How I do things:

- When I hit a glass object, sample a wavelength between 360nm and 830nm and assign that wavelength to the ray
- From now on, IORs of glass objects are now dependent on that wavelength. I compute the IORs for the sampled wavelength using Cauchy's equation
- I sample reflections/refractions from glass objects using these new wavelength-dependent IORs
- I tint the ray's throughput with the RGB color of that wavelength

How I compute the RGB color of a given wavelength:

- Get the XYZ representation of that wavelength. I'm using the original tables. I simply index the wavelength in the table to get the XYZ value.
- Convert from XYZ to RGB using the matrix from Wikipedia.
- Clamp the resulting RGB in [0, 1]

Matrix to convert from XYZ to RGB

With all this, I get a yellow tint on the diamond, any ideas why?

--------

Separately from all that, I also manually verified that:

- Taking evenly spaced wavelengths between 360nm and 830nm (spaced by 0.001)
- Converting the wavelength to RGB (using the process described above)
- Averaging all those RGB values
- Yields [56.6118, 58.0125, 45.2291] on average, which is indeed yellow-ish.

From this simple test, I assume that my issue must be in my wavelength -> RGB conversion?
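
For reference, a hedged sketch of that conversion path with the standard linear-sRGB (D65) matrix; the cie_* lookups stand in for the poster's table indexing. Two common causes of a residual tint with this recipe: clamping out-of-gamut negatives per sample skews the average, and a flat spectrum integrates to illuminant-E white, which differs from the sRGB/D65 white point unless chromatic adaptation is applied:

// Placeholder lookups into the CIE 1931 colour-matching tables.
float cie_x(float lambda);
float cie_y(float lambda);
float cie_z(float lambda);

// XYZ -> linear sRGB (D65). GLSL matrices are column-major, so these three
// columns reproduce the usual rows (3.2406, -1.5372, -0.4986), etc.
const mat3 XYZ_TO_SRGB = mat3(
     3.2406, -0.9689,  0.0557,
    -1.5372,  1.8758, -0.2040,
    -0.4986,  0.0415,  1.0570);

vec3 wavelengthToRGB(float lambda) {
    vec3 xyz = vec3(cie_x(lambda), cie_y(lambda), cie_z(lambda));
    vec3 rgb = XYZ_TO_SRGB * xyz;
    // Spectral colours lie outside the sRGB gamut, so components can be
    // negative here; clipping them is lossy and biases averages.
    return clamp(rgb, 0.0, 1.0);
}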

The code is here if needed.

25 Comments
2024/12/18
21:23 UTC

3

SSR - Reflections perspective seems incorrect

I've been working on implementing SSR using DDA, following Morgan McGuire's paper "Efficient GPU Screen-Space Ray Tracing". However, the resulting reflections' perspective seems off, and I am not entirely sure why.

I'm wondering if anyone has tried implementing this paper before and might know what causes this to happen. Would appreciate any insight.

I am using Vulkan with GLSL.

https://preview.redd.it/aj1stsmxvl7e1.png?width=908&format=png&auto=webp&s=a550e7ed514ed6620bf869b2b1d4354c96d00bb5

vec3 SSR_DDA() {
  float maxDistance = debugRenderer.maxDistance;
  ivec2 c = ivec2(gl_FragCoord.xy);
  float stride = 1;
  float jitter = 0.5;

  // World-Space
  vec3 WorldPos = texture(gBuffPosition, uv).rgb;
  vec3 WorldNormal = (texture(gBuffNormal, uv).rgb);

  // View-space
  vec4 viewSpacePos = ubo.view * vec4(WorldPos, 1.0);
  vec3 viewSpaceCamPos = vec4(ubo.view * vec4(ubo.cameraPosition.xyz, 1.0)).xyz;
  vec3 viewDir = normalize(viewSpacePos.xyz - viewSpaceCamPos.xyz);
  vec4 viewSpaceNormal = normalize(ubo.view * vec4(WorldNormal, 0.0));
  vec3 viewReflectionDirection =
      normalize(reflect(viewDir, viewSpaceNormal.xyz));

  float nearPlaneZ = 0.1;

  float rayLength =
      ((viewSpacePos.z + viewReflectionDirection.z * maxDistance) > nearPlaneZ)
          ? (nearPlaneZ - viewSpacePos.z) / viewReflectionDirection.z
          : maxDistance;

  vec3 viewSpaceEnd = viewSpacePos.xyz + viewReflectionDirection * rayLength;

  // Screen-space start and end points
  vec4 H0 = ubo.projection * vec4(viewSpacePos.xyz, 1.0);
  vec4 H1 = ubo.projection * vec4(viewSpaceEnd, 1.0);

  float K0 = 1.0 / H0.w;
  float K1 = 1.0 / H1.w;

  // Camera-space positions scaled by rcp
  vec3 Q0 = viewSpacePos.xyz * K0;
  vec3 Q1 = viewSpaceEnd.xyz * K1;

  // Perspective divide to get into screen space
  vec2 P0 = H0.xy * K0;
  vec2 P1 = H1.xy * K1;
  P0.xy = P0.xy * 0.5 + 0.5;
  P1.xy = P1.xy * 0.5 + 0.5;

  vec2 hitPixel = vec2(-1.0f, -1.0f);

  // If the distance squared between P0 and P1 is smaller than the threshold,
  // adjust P1 so the line covers at least one pixel
  P1 += vec2((distanceSquared(P0, P1) < 0.001) ? 0.01 : 0.0);
  vec2 delta = P1 - P0;

  // check which axis is larger. We want move in the direction where axis is
  // larger first for efficiency
  bool permute = false;
  if (abs(delta.x) < abs(delta.y)) {
    // Ensure x is the main direction we move in to remove DDA branching
    permute = true;
    delta = delta.yx;
    P0 = P0.yx;
    P1 = P1.yx;
  }

  float stepDir = sign(delta.x);    // Direction for stepping in screen space
  float invdx = stepDir / delta.x;  // Inverse delta.x for interpolation

  vec2 dP = vec2(stepDir, delta.y * invdx);  // Step in screen space
  vec3 dQ = (Q1 - Q0) * invdx;   // Camera-space position interpolation
  float dk = (K1 - K0) * invdx;  // Reciprocal depth interpolation

  dP *= stride;
  dQ *= stride;
  dk *= stride;

  P0 = P0 + dP * jitter;
  Q0 = Q0 + dQ * jitter;
  K0 = K0 + dk * jitter;

  // Sliding these: Q0 to Q1, K0 to K1, P0 to P1 (P0) defined in the loop
  vec3 Q = Q0;
  float k = K0;
  float stepCount = 0.0;

  float end = P1.x * stepDir;
  float maxSteps = 25.0;

  // Advance a step to prevent self-intersection
  vec2 P = P0;
  P += dP;
  Q.z += dQ.z;
  k += dk;

  float prevZMaxEstimate = viewSpacePos.z;
  float rayZMin = prevZMaxEstimate;
  float rayZMax = prevZMaxEstimate;
  float sceneMax = rayZMax + 200.0;

  for (P; ((P.x * stepDir) <= end) && (stepCount < maxSteps);
       P += dP, Q.z += dQ.z, k += dk, stepCount += 1.0) {
    hitPixel = permute ? P.yx : P.xy;

    // Init min to previous max
    float rayZMin = prevZMaxEstimate;

    // Compute z max as half a pixel into the future
    float rayZMax = (dQ.z * 0.5 + Q.z) / (dk * 0.5 + k);

    // Update prev z max to the new value
    prevZMaxEstimate = rayZMax;

    // Ensure ray is going from min to max
    if (rayZMin > rayZMax) {
      float temp = rayZMin;
      rayZMin = rayZMax;
      rayZMax = temp;
    }

    // compare ray depth to current depth at pixel
    float sceneZMax = LinearizeDepth(texture(depthTex, ivec2(hitPixel)).x);
    float sceneZMin = sceneZMax - debugRenderer.thickness;

    // sceneZmax == 0 is out of bounds since depth is 0 out of bounds of SS
    if (((rayZMax >= sceneZMin) && (rayZMin <= sceneZMax)) ||
        (sceneZMax == 0)) {
      break;
    }
  }

  Q.xy += dQ.xy * stepCount;
  vec3 hitPoint = Q * (1.0 / k);  // view-space hit point

  // Transform the hit point to screen-space
  vec4 ss =
      ubo.projection * vec4(hitPoint, 1.0);  // Apply the projection matrix
  ss.xyz /= ss.w;  // Perspective divide to get normalized screen coordinates
  ss.xy = ss.xy * 0.5 + 0.5;  // Convert from NDC to screen-space

  if (!inScreenSpace(vec2(ss.x, ss.y))) {
    return vec3(0.0);
  }

  return texture(albedo, ss.xy).rgb;
}

https://reddit.com/link/1hh195p/video/ygjq6viv6m7e1/player

0 Comments
2024/12/18
13:10 UTC

27

Looking for a beginner course

Hey there! My bf is currently working in game dev as a tool programmer and constantly looks at graphics programming videos on YouTube. It's a dream of his to try this new field, but he seems paralyzed by "not knowing enough". I thought I'd buy him an online course to help him start actually doing something instead of just looking. Do you guys have any recommendations? He is not a beginner beginner, but according to him he doesn't know a thing when it comes to this. Thanks!

14 Comments
2024/12/18
09:47 UTC

278

City Ruins - Tiny Raycasting System with Destroyed City + Code

6 Comments
2024/12/17
18:30 UTC

27

I'm creating a dynamic 3D mesh generator for neurons using Mesh Shaders!

6 Comments
2024/12/17
17:09 UTC

1

Question about Variance Shadow Mapping and depth compare sampler

Hey all, I am trying to build Variance Shadow maps in my engine. I am using WebGPU and WGSL.

My workflow is as follows:

  1. Render to a 32-bit depth texture A from the light's point of view
  2. Run a compute shader and capture the moments into a separate rg32float texture B:
     let src = textureLoad(srcTexture, tid.xy, 0);
     textureStore(outTexture, tid.xy, vec4f(src, src * src, 0, 0));
  3. Run a blur compute shader and store the results in an rg32float texture C
  4. Sample the blurred texture C in my shader

I can see the shadow; however, it seems to be inverted. I am using the Sponza scene. Here is what I get:

https://preview.redd.it/3vp761eamf7e1.png?width=729&format=png&auto=webp&s=f3c3b85082d3561d5f12d331f1f6ae7ae6c8aa34

The "line" or "pole" is above the lamp:

https://preview.redd.it/20e5lolfmf7e1.png?width=439&format=png&auto=webp&s=f91f22e082a8526c5e952c5eae02994e0dfe2c9b

It seems that the shadow of the pole (or lack of around the edges) overwrites the shadow of the lamp, which is clearly wrong.

I know I can use a special depth_comparison sampler and specify the depth compare function. However, in WGSL this works only with depth textures, while I am dealing with rg32float textures that hold the captured "moments". Can I emulate this depth comparison myself in my shaders? Is there an easier solution that I fail to see?
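
To the emulation question: yes. Comparison sampling is just a per-texel compare blended with the bilinear weights, so it can be reproduced by hand on a non-depth texture. A sketch in GLSL for brevity (the WGSL counterparts are textureDimensions and textureLoad); note that for VSM specifically the Chebyshev bound already plays the comparison's role, and plain linear filtering of the moments is the intended usage:

// Manual emulation of a comparison ("shadow") sampler: fetch the four
// nearest texels, compare each against the reference depth, then blend
// with the bilinear weights, as comparison hardware would.
float compareBilinear(sampler2D tex, vec2 uv, float ref) {
    ivec2 size = textureSize(tex, 0);
    vec2 st = uv * vec2(size) - 0.5;
    vec2 f = fract(st);
    ivec2 base = clamp(ivec2(floor(st)), ivec2(0), size - 2);
    vec4 d = vec4(texelFetch(tex, base + ivec2(0, 0), 0).r,
                  texelFetch(tex, base + ivec2(1, 0), 0).r,
                  texelFetch(tex, base + ivec2(0, 1), 0).r,
                  texelFetch(tex, base + ivec2(1, 1), 0).r);
    vec4 lit = step(vec4(ref), d); // 1.0 where ref <= stored depth
    return mix(mix(lit.x, lit.y, f.x), mix(lit.z, lit.w, f.x), f.y);
}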

Here is my complete shadow sampling WGSL code:

fn ChebyshevUpperBound(moments: vec2f, compare: f32) -> f32 {
  let p = select(0.0, 1.0, (compare < moments.x));
  var variance = moments.y - (moments.x * moments.x);
  variance = max(variance, 0.00001);
  let d = compare - moments.x;
  var pMax = variance / (variance + d * d);
  return saturate(max(pMax, p));
}

// ...

let moments = textureSample(
  shadowDepthTexture,
  shadowDepthSampler,
  uv,
  0
).rg;
let shadow = ChebyshevUpperBound(
  moments,
  projCoords.z
);

EDIT: My "shadowDepthSampler" is not a depth comparison sampler. It simply has min / mag filtering set to "linear".

3 Comments
2024/12/17
16:07 UTC

7

Does going to art school part-time after finishing computer science studies make any sense?

Hi, I'm a computer science bachelor graduate, wondering where I should continue with my studies and career. I am certain that I want to work as a graphics programmer. I really enjoy working on low-level engineering problems and using math in a creative way.

However, I've also always had an affinity for visual arts (like illustration, animation and 3D modelling) and art history. I kind of see computer graphics and traditional fine arts achieving the same goal, just that former is automated with math and latter is handmade. Since I'm way better at programming, I've chosen the former.

I wouldn't want to paint professionally, but working in a game studio, I'd want to connect with artists more and understand their pipeline and problems and help develop tools to make their work more efficient. Or I've thought about directly working for a company such as Adobe or ProCreate, or perhaps even make my own small indie game in a while, where I'd be directly involved in art direction.

Would it make any sense to enroll in an evening art college (part-time, painting program) while working full-time as a graphics programmer in order to understand visual beauty more? It is a personal goal of mine, but would it help me in my career in any way, or would I just be wasting time on a hobby where I could put in the hours improving as a programmer instead?

I'm still in my 20s and I want to commit to something while I still have no children and have lots of free time. Thank you for sharing your thoughts on the matter <3

10 Comments
2024/12/17
14:24 UTC

1

Screen Space particle movement moving twice as fast?

Hello!

I've been just messing about with screen space particles and for some reason I've got an issue with my particles moving twice as fast relative to the motion buffer and I can't figure out why.

For some context, I'm trying to get particles to "stick" in the same way described by Naughty Dog's talk here. And yes, I've tried with and without the extra "correction" step using the motion vector of the predicted position, so it isn't anything to do with "doubling up".

Here, u_motionTexture is an R32G32_SFLOAT texture that is written to each frame for every moving object like so (code extracts, not the whole thing obviously just the important parts):

Vertex shader (when rendering objects; curr<X>Matrix is the current frame's matrix, prev<X>Matrix is the previous frame's):

vs_out.currScreenPos = ubo.projMatrix * ubo.currViewMatrix * ubo.currModelMatrix * vec4(a_position, 1.0);
vs_out.prevScreenPos = ubo.projMatrix * ubo.prevViewMatrix * ubo.prevModelMatrix * vec4(a_position, 1.0);

Fragment (when rendering objects):

vec3 currScreenPos = 0.5 + 0.5*(fs_in.currScreenPos.xyz / fs_in.currScreenPos.w);
vec3 prevScreenPos = 0.5 + 0.5*(fs_in.prevScreenPos.xyz / fs_in.prevScreenPos.w);
vec2 ds = currScreenPos.xy - prevScreenPos.xy;
o_motion = vec4(ds, 0.0, 1.0);

Compute Code:

vec2 toScreenPosition(vec3 worldPosition)
{
    vec4 clipSpacePos = ubo.viewProjMatrix * vec4(worldPosition, 1.0);
    vec3 ndcPosition = clipSpacePos.xyz / clipSpacePos.w;
    return 0.5*ndcPosition.xy + 0.5;
}

vec3 toWorldPosition(vec2 screenPosition)
{
    float depth = texture(u_depthTexture, vec2(screenPosition.x, 1.0 - screenPosition.y)).x;
    vec4 coord = ubo.inverseViewProjMatrix * vec4(2.0*screenPosition - 1.0, depth, 1.0);
    vec3 worldPosition = coord.xyz / coord.w;
    return worldPosition;
}

// ...

uint idx = gl_GlobalInvocationID.x;

vec3 position = particles[idx].position;
vec2 screenPosition = toScreenPosition(position);

vec2 naiveMotion = texture(u_motionTexture, vec2(screenPosition.x, 1.0 - screenPosition.y)).xy;
vec2 naiveScreenPosition = screenPosition + naiveMotion;

vec2 correctionMotion = texture(u_motionTexture, vec2(naiveScreenPosition.x, 1.0 -  naiveScreenPosition.y)).xy;
vec2 newScreenPosition = screenPosition + correctionMotion;

particles[idx].position = toWorldPosition(newScreenPosition);

This is all well and good, but for some reason the particle moves at twice the speed it really should.

That is, if I spawn the particle in screen space directly over a moving block going from left to right, the particle will move at twice the speed of the block it is resting on.

However, I would expect the particle to move at the same speed, since all it is doing is moving by the same amount the block moves along the screen. Why is it moving twice as fast?

I've obviously tried just multiplying the motion vector by 0.5, and yes, then it works - but why? Additionally, this fails when the camera itself moves (the view matrix changes): the particle no longer sticks to the surface properly.

Thank you for any and all help or advice! :)

0 Comments
2024/12/17
14:03 UTC

0

OpenGL setup script update

In my previous post I talked about a script that sets up a basic project structure with GLFW and glad. In the updated version the script also links Sokol and cglm to get you started with whatever you want in C/C++, whether it is graphics or game programming. There is a lot of confusion especially for Mac users, so I hope this helps. I'm planning on adding Linux support soon. Check it out on my GitHub and consider leaving a star if it helps: https://github.com/GeorgeKiritsis/Apple-Silicon-Opengl-Setup-Script

0 Comments
2024/12/17
13:57 UTC

58

Transitioning into graphics programming in your 30s

There are lots of posts about starting a career in graphics programming, but most of them appear to be focused on students/early grads. So I thought of making a post about people who may be in the middle of their careers, and considering a transition.

I have been so far a very generalist programmer, with a master's in CS and about 5~6 years of experience in C++ and Python in different fields.
I always felt guilty about being clueless about rendering, and not having sharpened my math skills when I had the opportunity. To try and get over this guilt, last year I started working on a simple rendering engine for about 2 months as a hobby project, but then life came and I ended up setting it aside.

Now, I may soon have an opportunity to transition into graphics programming. However, I feel uncertain whether I should embrace this opportunity or let it go. I wonder if it is a good career move to start almost from zero in your 30s. My salary is (unfortunately) not very high, so as of now I don't fear a pay cut, but I do fear what things might look like in 5-10 years if I don't make the move.

I know that only I can answer this for myself, but do any experienced people have advice for someone like me...?

23 Comments
2024/12/17
11:26 UTC

88

Built a very basic raytracer

So for a school project, a colleague and I built a very basic raytracer. It has very minimal functionality compared to the raytracers and projects I see others do, but even that was quite a challenge for us. I was thinking about continuing down the graphics path, but got kind of demotivated seeing the gap. So I wanted to ask people here: how was it for you when you were starting?

And here is the link to repo if you want to check it out, has some example pics to get the idea more or less. -> Link

13 Comments
2024/12/17
09:03 UTC

1

DX12 AppendStructuredBuffer Append() not working (but UAV counter increasing) on AMD cards

I have some strange problems with an AppendStructuredBuffer not actually appending any data when Append() is called in HLSL (but still incrementing the counter), specifically on an RX 7000 series GPU. If someone more knowledgeable than me on compute dispatch and UAV resources could take a look, I'd appreciate it a lot because I've been stuck for days.

I've implemented indirect draws using ExecuteIndirect, and the setup works like this: I dispatch a culling shader which reads from a list of active "draw set" indices, gets that index from a buffer of draw commands, and fills an AppendStructuredBuffer with valid draws. This is then executed with ExecuteIndirect.

This system works fine on Nvidia hardware. However on AMD hardware (7800XT), I get the following strange behavior:

The global draw-commands list and active-indices list work as expected: I can look at a capture in PIX, and the buffers contain valid data. If I step through the shader, it pulls the correct values from each. However, when I look at the UAV resource in subsequent timeline events, the entire buffer is zeros, except for the counter. My ExecuteIndirect then draws N copies of nothing.

I took a look at the execution in RenderDoc as well, and in there, if I have the dispatch call selected, it shows the correct data in the UAV resource. However, if I then step to the next event, that same resource immediately shows as full of zeros, again except for the counter.

PIX reports that all my resources are in the correct states, and I've both separated out my dispatch calls into a separate command list, added UAV barriers after them just in case, and even added a CPU fence sync after each command list execution just to ensure that it isn't a resource synchronization issue. Any ideas what could be causing something like this?

The state changes for my indirect command buffer look like this:

https://preview.redd.it/ahahhmxnyb7e1.png?width=951&format=png&auto=webp&s=95f48f89ef391a0aad73e821dd57f05fb0a943ba

and for the active indices and global drawset buffer, they look like this:

https://preview.redd.it/jix5xf5pyb7e1.png?width=1166&format=png&auto=webp&s=8cc50fd57e7175263e50e505122acfc9c48a5cf0

https://preview.redd.it/pvgbukppyb7e1.png?width=1247&format=png&auto=webp&s=4da882fd88b744213facf035b0fa0ad07fdc5147

Then, in Renderdoc, looking at the first dispatch shows this:

https://preview.redd.it/kql37msqyb7e1.png?width=1291&format=png&auto=webp&s=9f49d37f5e1c1e572c3d20791e797982a147852e

but moving to the next dispatch, while displaying the same resource from before, I now see this:

https://preview.redd.it/iezm34kryb7e1.png?width=1799&format=png&auto=webp&s=3c2550e4e191666cc0b6605fd92223bff2fd7c7b

For reference, my compute shader is here: https://github.com/panthuncia/BasicRenderer/blob/amd_indirect_draws_fix/BasicRenderer/shaders/frustrumCulling.hlsl

and my culling render pass is here: https://github.com/panthuncia/BasicRenderer/blob/amd_indirect_draws_fix/BasicRenderer/include/RenderPasses/frustrumCullingPass.h

Has anyone seen something like this before? Any ideas on what could cause it?

Thanks!

0 Comments
2024/12/17
03:51 UTC

30

If my goal is to get a job in the AAA industry as a rendering engineer, is it a waste of time to work on 2D games for my portfolio?

Title says it all. I don't know what these companies are looking for, and I'm curious whether the game projects I work on will fail to help me land a 3D rendering programmer role just because they are 2D games. I do plan on implementing things such as volumetrics and so on in a separate 3D renderer, but 2D games are what I primarily work on nowadays. Is it a waste of time to put 2D games in a graphics programmer portfolio?

10 Comments
2024/12/17
02:30 UTC

34

A horror game that disappears if you pause or screenshot it

2 Comments
2024/12/16
22:53 UTC

52

Radiance Cascades - World Space (Shadertoy link in comments)

4 Comments
2024/12/16
21:44 UTC
