/r/GraphicsProgramming
A subreddit for everything related to the design and implementation of graphics rendering code.
Rule 1: Posts should be about Graphics Programming.
Rule 2: Be Civil, Professional, and Kind
Suggested Posting Material:
- Graphics API Tutorials
- Academic Papers
- Blog Posts
- Source Code Repositories
- Self Posts
(Ask Questions, Present Work)
- Books
- Renders
(Please xpost to /r/ComputerGraphics)
- Career Advice
- Job Postings (Graphics Programming only)
Related Websites:
ACM: SIGGRAPH
Journal of Computer Graphics Techniques
Ke-Sen Huang's Blog of Graphics Papers and Resources
Self Shadow's Blog of Graphics Resources
Does anyone know how to achieve this?
https://www.indiedb.com/engines/brahma/videos/hexagonal-texture-filtering-in-brahma-engine
I don't want to work with C++. I want to work with C or Rust.
I also want mobile and web support.
What about WGPU?
Hi, given 2 camera2world matrices, I am trying to compute the rotation (in degrees) of the camera from the first image to the second image. For this purpose I calculated the relative transformation between the matrices (multiplying the second matrix by the inverse of the first) and took the 3x3 submatrix of the 4x4 relative transform. I have the ground-truth rotation value, but for some reason it does not match the Euler angles I compute using scipy's Rotation package. Any clue what I am doing wrong mathematically?
*the values of cam2world are the output obtained from Dust3r if that makes a difference
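For what it's worth, here is a minimal sketch of the two conventions that are easy to mix up here (helper names are made up, and this assumes the cam2world matrices are pure rigid transforms). Note also that scipy's Euler conventions matter: uppercase axis strings like "XYZ" are intrinsic rotations, lowercase "xyz" are extrinsic, and the two generally give different angles for the same matrix.

#include <math.h>

/* Sketch only: r1 and r2 are the 3x3 rotation blocks of the two cam2world matrices. */
static void mat3_mul(double a[3][3], double b[3][3], double out[3][3]) {
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            out[i][j] = 0.0;
            for (int k = 0; k < 3; k++)
                out[i][j] += a[i][k] * b[k][j];
        }
}

static void mat3_transpose(double a[3][3], double out[3][3]) {
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            out[i][j] = a[j][i];
}

/* Overall rotation angle (radians) of a rotation matrix, from its trace. */
double rotation_angle(double r[3][3]) {
    double c = (r[0][0] + r[1][1] + r[2][2] - 1.0) / 2.0;
    if (c > 1.0) c = 1.0;
    if (c < -1.0) c = -1.0;
    return acos(c);
}

/* Rotation of camera 2 relative to camera 1, expressed in camera-1 coordinates:
   R_rel = R1^T * R2. The rotation block of C2 * inv(C1) is R2 * R1^T instead, which is
   the same rotation expressed in world coordinates: the overall angle from
   rotation_angle() matches, but an Euler decomposition of it generally will not. */
void relative_rotation(double r1[3][3], double r2[3][3], double r_rel[3][3]) {
    double r1t[3][3];
    mat3_transpose(r1, r1t);
    mat3_mul(r1t, r2, r_rel);
}

If the ground truth is the rotation of camera 2 with respect to camera 1, the R1^T * R2 form is usually the one to compare against.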
The depth in Vulkan is stored in the range [0,1]. I am sampling from my depth buffer and trying to linearize it, but it remains entirely white. I am using the following linearization function from a Stack Overflow post on the topic of linearizing depth in Vulkan:
//https://stackoverflow.com/questions/51108596/linearize-depth
float LinearizeDepth(float d) {
return ubo.nearPlane * ubo.farPlane / (ubo.farPlane + d * (ubo.farPlane - ubo.nearPlane));
}
I'm not entirely sure if this is correct. I plugged some values in to see what it gives back, but what I get seems incorrect. I have
n = 0.1
f = 2000.0
d = 0.9 // example depth value from depth buffer
// apply formula
n * f / (f + d * (f * n)) = 0.09174311927
I tested the depth value from d = 0.1 to 0.9 and it remains 0.09 if you round to 2 dp.
Am I doing something wrong or not understanding something here correctly?
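For comparison: starting from the standard (non-reversed) [0,1] projection, d = f*(z - n) / (z*(f - n)), solving for z gives z = n*f / (f - d*(f - n)), i.e. the term multiplying d is (n - f) rather than (f - n). A tiny self-contained check with the same n and f (a sketch, assuming that projection):

#include <stdio.h>

/* Linearize a [0,1] depth value from a standard (non-reversed) perspective projection.
   Derived from d = f*(z - n) / (z*(f - n))  =>  z = n*f / (f - d*(f - n)). */
float linearize_depth(float d, float n, float f) {
    return n * f / (f - d * (f - n));
}

int main(void) {
    const float n = 0.1f, f = 2000.0f;
    /* With n and f this far apart the mapping is extremely front-loaded:
       d = 0.5 gives z of roughly 0.2 and d = 0.9 roughly 1.0, so almost the whole
       [0,1] depth range covers the first couple of units in front of the camera. */
    for (float d = 0.1f; d <= 0.91f; d += 0.1f)
        printf("d = %.1f -> z = %f\n", d, linearize_depth(d, n, f));
    return 0;
}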
I've been working on a software 'portal renderer' in the style of Build or Doom, just to learn, because 3D math is one programming area I've never been very good at, aside from a few basic raycasters.
And man, the curve has been *steep*, making me feel like an absolute newb, but so far I've managed to get the walls of a sector projected, and eventually, through extreme pain, managed to decipher the formulas on Wikipedia to apply perspective-correct texturing to them.
Now I would like to add a floor and ceiling. There's more documentation about how this works for Doom, and the Build code is hard to read, so I went with a "visplane"-like method for now: basically I'm taking the screen coordinates of the vertices from the onscreen walls, then filling downwards from them to populate a structure like this one, storing the region of the screen to be filled with floor:
struct Visplane {
    Visplane(int maxWidth) { spans = new VPSpan[maxWidth]; }
    ~Visplane() { delete[] spans; }
    VPSpan *spans = nullptr;  // for each "X", pos & height of column to draw
    int minx, maxx;           // area within spans array actually used
};
struct VPSpan {
    int y1, y2;               // start and end of a single column (in screen coords)
};
This flat-fills great (and of course gets me the Doom-esque out-of-bounds floodfill effect) but I am entirely at a loss as to how to correctly texture map them. Here are some of the things that I have to somehow pull off:
The floor in the rendering here looks mostly okay in a static scene, but 1) I have no idea how it works, I just ported some code from DOOM-Level-Viewer by StanislavPetrovV, and 2) this completely opaque series of formulas doesn't work as well for me as it does in that project: there are all kinds of glitches, with the floor speed not matching the walls, the floor sliding around slightly when you rotate or move, etc., and I have no idea what to fix since I can't make heads or tails of what it's doing.
I should note that I'm not using any matrix math in this renderer, it's using solely classic tricks. So the books covering modern 3D math techniques aren't always applicable to this "faking" a flat. But, I'm not at all committed to do it exactly like Doom either, it was just the best-documented.
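For concreteness, the classic matrix-free version of this ("floor casting", done per screen column) boils down to something like the sketch below. The key idea is that for a flat floor, each screen row below the horizon corresponds to a fixed distance from the camera, so the world point can be recovered per pixel and its fractional position used as the texture coordinate. All the names here (projDist, eyeZ, horizonY, putPixel, floorTex) are placeholders, not code from the project:

#include <math.h>
#include <stdint.h>

#define TEX_W 64
#define TEX_H 64

/* Floor casting for one screen column of a portal/raycast renderer.
   projDist is the eye-to-projection-plane distance in pixels, eyeZ the camera height
   above the floor, horizonY the screen row of the horizon, rayAngle the world angle of
   this column's ray and playerAngle the view direction. */
void draw_floor_column(int x, int screenH, int horizonY,
                       float projDist, float eyeZ,
                       float camX, float camY,
                       float rayAngle, float playerAngle,
                       const uint32_t floorTex[TEX_H * TEX_W],
                       void (*putPixel)(int, int, uint32_t))
{
    for (int y = horizonY + 1; y < screenH; y++) {
        /* Rows further below the horizon correspond to floor points closer to the camera. */
        float rowDist = (eyeZ * projDist) / (float)(y - horizonY);
        /* Undo the fisheye correction so the distance runs along this column's ray. */
        float dist = rowDist / cosf(rayAngle - playerAngle);
        /* World-space point on the floor seen through this pixel. */
        float fx = camX + dist * cosf(rayAngle);
        float fy = camY + dist * sinf(rayAngle);
        /* Wrap the world position into the texture (power-of-two sizes assumed). */
        int tu = (int)(fx * TEX_W) & (TEX_W - 1);
        int tv = (int)(fy * TEX_H) & (TEX_H - 1);
        putPixel(x, y, floorTex[tv * TEX_W + tu]);
    }
}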
I've pretty much exhausted all of the docs *I* can find about this, talked endlessly with ChatGPT (which is like pulling teeth to get it to finally say something smart), and downloaded several sample projects, none of which had sufficient comments for me to decipher what the hell they were doing and why.
Has anyone seen a really GOOD from-the-basics tutorial or example code that includes textured floors/ceilings for this type of renderer? I know there are a few series on YouTube about recreating Doom, but they all seem to make a trade-off so that they're also entertaining, and went over this part so fast that I just didn't get all of it. Plus, sadly, for a lot of them the actual code is really bad, making it almost impossible to read it to fill in the blanks.
This silly 3D toy project has nerd-sniped me, help! :P
Hi! The real-time rendering framework Falcor used to read its scene graph from an .fscene data structure, which has been replaced by .pyscene; the latest ORCA library and other sources mainly use the newer .pyscene format.
The old .fscene data structure holds the following:
I can (almost) create it manually, but that may lead to a mismatch. Is there a way to generate an .fscene data structure from .fbx ORCA scenes or other standard fbx/gltf data?
I'm working on a very simple BMP file loader in C and am having a little trouble understanding part of the spec. When I have a BMP with 32 bits per pixel, I load the image into memory and send it over to DirectX as a texture and get it loading. This is mostly fine (although I'm still fighting with DirectX over tex coords), but the colors seem off. I think the reason is that I'm not doing anything with the RGB masks specified in the header of the BMP. The only problem is I don't really know what to do with the masks. Do I just bitwise AND the mask with its respective color, do I apply it to the whole RGBA element, or something else? Everywhere I look is kind of vague about this and just says the colors specified in the data section are relative to the palette or whatever. I don't really know how to parse that.
Any help would be greatly appreciated, thanks!
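In case it helps, this is how BI_BITFIELDS masks are usually applied (a sketch; extract_channel is a made-up helper): each 32-bit pixel is ANDed with a channel's mask, shifted down so the channel starts at bit 0, and then rescaled to 0-255 based on how wide the mask is.

#include <stdint.h>

/* Extract one channel from a 32-bpp BI_BITFIELDS pixel using a mask from the header. */
static uint8_t extract_channel(uint32_t pixel, uint32_t mask) {
    if (mask == 0) return 0;
    /* Find how far the mask's lowest set bit is from bit 0. */
    int shift = 0;
    while (!((mask >> shift) & 1u)) shift++;
    uint32_t value  = (pixel & mask) >> shift;
    uint32_t maxval = mask >> shift;               /* e.g. 0xFF for an 8-bit channel */
    return (uint8_t)((value * 255u + maxval / 2) / maxval);
}

/* Usage with the masks read from the BITMAPV4/V5 header (names are placeholders):
   uint8_t r = extract_channel(px, redMask);
   uint8_t g = extract_channel(px, greenMask);
   uint8_t b = extract_channel(px, blueMask);
   uint8_t a = alphaMask ? extract_channel(px, alphaMask) : 255; */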
Hello everyone,
I've been diving into the world of optimizing the transfer and rendering of large meshes, and through my research, I've noticed two predominant approaches:
I'm currently grappling with understanding the specific use cases for each method and why one might be chosen over the other. From what I've gathered:
Interestingly, the original Progressive Meshes (PM) paper was also applied to terrains with similar view-dependent LOD control. This leads me to a few questions:
I'd love to hear your insights or experiences with these techniques. Any clarification or additional resources would be greatly appreciated!
I am looking for help in projects related to 3D Software and 3D OpenGL rendering, 3D modeling, and Shadertoy animation.
I'm trying to write my own BMP loader for a project I'm working on in C and wanted to check what I got against what stb does (because I thought it handled BMP images). However, stbi_load fails whenever I pass in my BMP file. I tried it with a PNG just to make sure it was working, and that was all good. I also tried different BMP images, but to no avail. Has anybody had a similar issue with BMP files?
The code is literally just these two lines
int x, y, channels;
unsigned char* image = stbi_load("./color_palette.bmp", &x, &y, &channels, 0);
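One thing that may narrow it down (a sketch, using the same file path as above): stb_image does support BMP, but not every variant (1-bpp and RLE-compressed files are rejected, for example), and stbi_failure_reason() will report which check failed:

#include <stdio.h>
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

int main(void) {
    int x, y, channels;
    unsigned char *image = stbi_load("./color_palette.bmp", &x, &y, &channels, 0);
    if (!image) {
        /* Prints a short reason such as "bad BMP" and narrows down the unsupported variant. */
        printf("stbi_load failed: %s\n", stbi_failure_reason());
        return 1;
    }
    printf("loaded %dx%d, %d channels\n", x, y, channels);
    stbi_image_free(image);
    return 0;
}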
Also, if anybody knows how to lay the bmp out in memory to send over to a DirectX11 texture that would be amazing as well.
Thanks!
I have started learning graphics programming as a complete beginner.
I am looking to write a few applications based on multi-view 3D geometry; I will be going through a few books and building sample projects using a lidar sensor.
I am looking for a library which can take a 3D point as input and render it in a window. The 3D point can be rendered as a single sphere. It will be something like how Neo visualizes the Matrix, i.e. 3D visualization using multiple tiny dots.
My purpose is to focus more on multi-view geometry, algorithms, and lidar optimization rather than on the choice of 3D rendering library.
If the library supports real-time rendering, that would be awesome; then I can extend my project to render in real time rather than a static view.
If folks have any other suggestion, happy to take inputs.
I will be following
No AI integration is planned for object mapping and detection, just pure maths and geometry for now.
I got a voxel-based lighting system with RGB channels (8 bit each) in my voxel game. It looks okay when there are multiple light sources of the same color, but as soon as I add a different color, the lighting looks very wrong.
I basically compute the lighting at a voxel as the maximum (per channel) of the surrounding voxels minus a drop-off value, which is based on the color. E.g. if the neighboring voxel has the color RGB(200,50,0), I subtract (4,1,0) to retain the same color.
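As a sketch of that rule with made-up names (before any blocking flags): the drop-off is proportional to the channel value, and here it is clamped to at least 1 (my addition, not in the original description) so a lit channel always decays and the flood fill can terminate.

#include <stdint.h>

typedef struct { uint8_t r, g, b; } Light;

/* Per-channel drop-off proportional to the value (the 200 -> 4, 50 -> 1 example suggests
   roughly value / 50), clamped to at least 1. */
static uint8_t dropoff(uint8_t v) {
    uint8_t d = (uint8_t)(v / 50);
    return d > 0 ? d : 1;
}

static uint8_t propagate_channel(uint8_t v) {
    uint8_t d = dropoff(v);
    return v > d ? (uint8_t)(v - d) : 0;
}

/* Light received by a voxel from its six neighbours: per-channel maximum of each
   neighbour's value minus its drop-off. */
Light propagate(const Light neighbours[6]) {
    Light out = {0, 0, 0};
    for (int i = 0; i < 6; i++) {
        uint8_t r = propagate_channel(neighbours[i].r);
        uint8_t g = propagate_channel(neighbours[i].g);
        uint8_t b = propagate_channel(neighbours[i].b);
        if (r > out.r) out.r = r;
        if (g > out.g) out.g = g;
        if (b > out.b) out.b = b;
    }
    return out;
}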
This approach somewhat works but if you have e.g. a wall with a torch, the other side of the wall also gets lit because the light propagates around it.
So I added flags for each voxel to block the light in certain directions, which increases the drop-off. E.g. if the light hits a wall on the left and continues in a different direction, it will not go very far left once it gets to a corner.
This works when you do it separately for each channel (RGB), but the results are still not ideal and sometimes look really bad.
So I thought, why not have the blocked-light flags be the same for all channels? I got it to work in a few cases and it looked much better. But there is a huge problem: it no longer converges.
Whenever I change a voxel's light value or blocked light flags, I also check the neighboring voxels. But with the combined flags, the values alternate, resulting in an infinite loop.
I have not been able to get it to converge despite numerous different approaches. What could I be missing? Is there a better way?
Just spent a couple of weeks on this, so I am sharing:
I want a simple answer with a proof! Please. ))
Update 0.
Quad here means geometry made from 4 points and 6 indices. Yes, it will be 2 triangles.
Update 1.
For curious people here is a great explanation of the concept
Update 2.
thanks to u/BalintCsala for answering.
Inside the sources of Shadertoy one can find a call to:
this.mRenderer.DrawFullScreenTriangle_XY( l1 );
// which uses this vertex buffer
mGL.bufferData( mGL.ARRAY_BUFFER, new Float32Array( [ -1.0, -1.0, 3.0, -1.0, -1.0, 3.0] ), mGL.STATIC_DRAW );
So yes, Shadertoy renders a SINGLE triangle, but with the VR option enabled I found that it will draw two quads.
Hi!
I’m pretty new to graphics programming, and I’ve been working on a voxel engine for the past few weeks using Monogame. I have some problems texturing my cubes with a cubemap. I managed to texture them using six different 2D textures and some branching based on the normal vector of the vertex. As far as I know, branching is pretty costly in shaders, so I’m trying to texture my cubes with a cube map.
This is my shader file:
TextureCube<float4> CubeMap;
matrix World;
matrix View;
matrix Projection;
float3 LightDirection;
float3 LightColor;
float3 AmbientColor = float3(0.05, 0.05, 0.05);
samplerCUBE cubeSampler = sampler_state
{
Texture = <CubeMap>;
MAGFILTER = LINEAR;
MINFILTER = ANISOTROPIC;
MIPFILTER = LINEAR;
AddressU = Wrap;
AddressV = Wrap;
AddressW = Wrap;
};
struct VS_INPUT
{
float4 Position : POSITION;
float3 Normal : NORMAL;
float2 TexCoord : TEXCOORD0;
};
struct PS_INPUT
{
float4 Position : SV_POSITION;
float3 Normal : TEXCOORD1;
float2 TexCoord : TEXCOORD0;
};
PS_INPUT VS(VS_INPUT input)
{
PS_INPUT output;
float4 worldPosition = mul(input.Position, World);
output.Position = mul(worldPosition, View);
output.Position = mul(output.Position, Projection);
output.Normal = input.Normal;
output.TexCoord = input.TexCoord;
return output;
};
float4 PS(PS_INPUT input) : COLOR
{
float3 lightDir = normalize(LightDirection);
float diffuseFactor = max(dot(input.Normal, -lightDir), 0);
float3 diffuse = LightColor * diffuseFactor;
float3 finalColor = diffuse + AmbientColor;
float4 textureColor = texCUBE(cubeSampler, input.Normal);
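// Note: texCUBE takes only a direction vector; both the cube face and the texel within
// that face come from input.Normal, and TexCoord is never used here. So if the normals
// are per-face, every pixel on a cube face samples the same texel of the cube map.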
return textureColor + float4(finalColor, 0);
};
technique BasicCubemap
{
pass P0
{
VertexShader = compile vs_3_0 VS();
PixelShader = compile ps_3_0 PS();
}
};
And I use this vertex class provided by Monogame for my vertices (it has Position, Normal, and Texture values):
public VertexPositionNormalTexture(Vector3 position, Vector3 normal, Vector2 textureCoordinate)
{
Position = position;
Normal = normal;
TextureCoordinate = textureCoordinate;
}
Based on my limited understanding, cube sampling works like this: with the normal vector, I can choose which face to sample from the TextureCube, and with the texture coordinates, I can set the sampling coordinates, just as I would when sampling a 2D texture.
Please correct me if I’m wrong, and I would appreciate some help fixing my shader!
EDIT - Solved: Thanks u/Th3HolyMoose for noticing that I'm using texture instead of textureLod.
Hello, I am implementing a PBR renderer with a prefiltered map for the specular part of the ambient light based on LearnOpenGL.
I am getting a weird artifact where the further I move from the spheres the darker the prefiltered color gets and it shows the quads that compose the sphere.
This is the gist of the code (full code below):
vec3 N = normalize(vNormal);
vec3 V = normalize(uCameraPosition - vPosition);
vec3 R = reflect(-V, N);
// LOD hardcoded to 0 for testing
vec3 prefilteredColor = texture(uPrefilteredEnvMap, R, 0).rgb;
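// Per the edit above: the optional third argument of texture() is a mip bias, not an
// explicit level, so the LOD is still chosen from the screen-space derivatives of R.
// Sampling a fixed level needs textureLod:
// vec3 prefilteredColor = textureLod(uPrefilteredEnvMap, R, 0.0).rgb;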
color = vec4(prefilteredColor, 1.0);
https://reddit.com/link/1gcqot1/video/k6sldvo615xd1/player
This is one face of the prefiltered cube map:
I am out of ideas, I would greatly appreciate some help with this.
The full fragment shader: https://github.com/AlexDicy/DicyEngine/blob/c72fed0e356670095f7df88879c06c1382f8de30/assets/shaders/default-shader.dshf
Some more debugging screenshots:
for a "challenge" im working on making a basic 3d engine from scarch in java, to learn both java and 3d graphics. I've been stuck for a couple of days on how to get the transformation matrix that when applied to my vertices, calculates the vertices' rotation, translation and perspective projection matrices onto the 2d screen. As you can see when moving to the side the vertices get squished: Showcase Video
This is the code for creating the view matrix
This is the code for drawing the vertices on the screen
Thanks in advance for any help!
I'm working on my little DOOM-style software renderer, and I'm at the part where I can start working on textures. I was searching a day ago for how I'd go about it and I came to this page on Wikipedia: https://en.wikipedia.org/wiki/Texture_mapping where it shows 'u_a = (1 - a)*u0 + a*u1', which gives you the affine u coordinate of a texture. However, it didn't work for me, as my texture coordinates were greater than 1000, so I'm wondering if I had just screwed up the variables or used the wrong thing?
My engine renders walls without triangles, so they're just vertical columns. I tend to learn from code that's given to me, because I can learn directly from something that works by analyzing it. For direct interpolation, I just used the formula above, but that doesn't seem to work. u0 and u1 are x positions on my screen defining the start and end of the wall, and a is the interpolation factor, 0.0-1.0, based on x/x1. I've just been doing my texture coordinate work in screen space so far, and that might be the problem, but there's a fair bit that could be the problem instead.
So, I'm just curious; how should I go about this, and what should the values I'm putting into the formula be? And have I misunderstood what the page is telling me? Is the formula for ua perfectly fine for va as well? (XY) Thanks in advance
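One thing worth noting (a sketch, not a diagnosis of the exact bug): across a wall span in screen space, u itself is not linear in x, but u/z and 1/z are, so the usual fix is to interpolate those and divide at each column. With made-up parameter names:

/* Perspective-correct texture coordinate for screen column x between the projected
   wall endpoints at screen columns x0 and x1, with texture coordinates u0, u1 and
   view-space depths z0, z1 at those endpoints. */
float perspective_u(int x, int x0, int x1, float u0, float u1, float z0, float z1) {
    float a        = (float)(x - x0) / (float)(x1 - x0);        /* 0..1 across the span */
    float inv_z    = (1.0f - a) * (1.0f / z0) + a * (1.0f / z1);
    float u_over_z = (1.0f - a) * (u0 / z0)   + a * (u1 / z1);
    return u_over_z / inv_z;
}

For the vertical direction of a single wall column, depth is constant, so the plain affine formula u_a = (1 - a)*u0 + a*u1 is fine there.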
Hello everyone,
I’ve spent days on this bug, and I can’t figure out how to fix it. It seems like the textures aren’t mapping correctly when the wall’s height is equal to the screen’s height, but I’m not sure why. I’ve already spent a lot of time trying to resolve this. Does anyone have any ideas?
Thank you for taking the time to read this!
Here’s the code for the raycast and renderer:
RaycastInfo *raycast(Player *player, V2 *newDir, float angle)
{
V2i map = (V2i){(int)player->pos.x, (int)player->pos.y};
V2 normPlayer = player->pos;
V2 dir = (V2){newDir->x, newDir->y};
V2 rayUnitStepSize = {
fabsf(dir.x) < 1e-20 ? 1e30 : fabsf(1.0f / dir.x),
fabsf(dir.y) < 1e-20 ? 1e30 : fabsf(1.0f / dir.y),
};
V2 sideDist = {
rayUnitStepSize.x * (dir.x < 0 ? (normPlayer.x - map.x) : (map.x + 1 - normPlayer.x)),
rayUnitStepSize.y * (dir.y < 0 ? (normPlayer.y - map.y) : (map.y + 1 - normPlayer.y)),
};
V2i step;
float dist = 0.0;
int hit = 0;
int side = 0;
if (dir.x < 0)
{
step.x = -1;
}
else
{
step.x = 1;
}
if (dir.y < 0)
{
step.y = -1;
}
else
{
step.y = 1;
}
while (hit == 0)
{
if (sideDist.x < sideDist.y)
{
dist = sideDist.x;
sideDist.x += rayUnitStepSize.x;
map.x += step.x;
side = 0;
}
else
{
dist = sideDist.y;
sideDist.y += rayUnitStepSize.y;
map.y += step.y;
side = 1;
}
// Check if ray has hit a wall
hit = MAP[map.y * 8 + map.x];
}
RaycastInfo *info = (RaycastInfo *)calloc(1, sizeof(RaycastInfo));
V2 intersectPoint = VEC_SCALAR(dir, dist);
info->point = VEC_ADD(normPlayer, intersectPoint);
// Calculate the perpendicular distance
if (side == 0){
info->perpDist = sideDist.x - rayUnitStepSize.x;
}
else{
info->perpDist = sideDist.y - rayUnitStepSize.y;
}
//Calculate wallX
float wallX;
if (side == 0) wallX = player->pos.y + info->perpDist * dir.y;
else wallX = player->pos.x + info->perpDist * dir.x;
wallX -= floorf(wallX);
// Calculate texX
int texX = (int)(wallX * (float)texSize);
if (side == 0 && dir.x > 0) texX = texSize - texX - 1;
if (side == 1 && dir.y < 0) texX = texSize - texX - 1;
texX = texX & (int)(texSize - 1); // Ensure texX is within the valid range
// Set the calculated values in the hit structure
info->wallX = wallX;
info->texX = texX;
info->mapX = map.x;
return info;
}
int main(int argc, char const *argv[])
{
char text[500] = {0};
InitWindow(HEIGHT * 2, HEIGHT * 2, "Raycasting demo");
Player player = {{1.5, 1.5}, 0};
V2 plane;
V2 dir;
SetTargetFPS(60);
RaycastInfo *hit = (RaycastInfo *)malloc(sizeof(RaycastInfo));
Texture2D texture = LoadTexture("./asset/texture/bluestone.png");
while (!WindowShouldClose())
{
if (IsKeyDown(KEY_A))
{
player.angle += 0.06;
}
if (IsKeyDown(KEY_D))
{
player.angle -= 0.05;
}
dir = (V2){1, 0};
plane = NORMALISE(((V2){0.0f, 0.50f}));
dir = rotate_vector(dir, player.angle);
plane = rotate_vector(plane, player.angle);
if (IsKeyDown(KEY_W))
{
player.pos = VEC_ADD(player.pos, VEC_SCALAR(dir, PLAYER_SPEED));
}
if (IsKeyDown(KEY_S))
{
player.pos = VEC_MINUS(player.pos, VEC_SCALAR(dir, PLAYER_SPEED));
}
BeginDrawing();
ClearBackground(RAYWHITE);
draw_map();
dir = NORMALISE(dir);
for (int x = 0; x < HEIGHT; x++)
{
float cameraX = 2 * x / (float)(8 * SQUARE_SIZE) - 1;
V2 newDir = VEC_ADD(dir, VEC_SCALAR(plane, cameraX));
hit = raycast(&player, &newDir, player.angle);
DrawVector(player.pos, hit->point, GREEN);
int h, y0, y1;
h = (int)(HEIGHT / hit->perpDist);
y0 = max((HEIGHT / 2) - (h / 2), 0);
y1 = min((HEIGHT / 2) + (h / 2), HEIGHT - 1);
Rectangle source = (Rectangle){
.x = hit->texX,
.y = 0,
.width = 1,
.height = texture.height
};
Rectangle dest = (Rectangle){
.x = x,
.y = HEIGHT + y0,
.width = 1,
.height = y1 - y0,
};
DrawTexturePro(texture, source, dest,(Vector2){0,0},0.0f, RAYWHITE);
//DrawLine(x, y0 + HEIGHT, x, y1 + HEIGHT, hit->color);
}
snprintf(text, 500, "Player x = %f\nPlayer y = %f", player.pos.x, player.pos.y);
DrawText(text, SQUARE_SIZE * 8 + 10, 20, 10, RED);
EndDrawing();
}
CloseWindow();
return 0;
}
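One guess at what goes wrong when h reaches or exceeds HEIGHT (offered as a sketch, not a confirmed diagnosis): y0 and y1 get clamped to the screen, but the source rectangle still covers the whole texture column, so the visible part of the column gets squashed instead of cropped. Cropping the source to the visible range would look roughly like this, using the same variable names as above:

// Unclamped top of the wall slice; this goes negative once the wall is taller than the screen.
int unclampedY0 = (HEIGHT / 2) - (h / 2);
// Map the visible pixel range [y0, y1] back into texture space so the source
// rectangle only covers the part of the column that is actually on screen.
Rectangle source = (Rectangle){
    .x = hit->texX,
    .y = (y0 - unclampedY0) * (float)texture.height / h,
    .width = 1,
    .height = (y1 - y0) * (float)texture.height / h,
};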
I will be working on computer graphics and doing the rendering part using OpenGL. What has other people's experience been like around the globe, in institutions and organisations? Do share your experience with me.