/r/GraphicsProgramming


A subreddit for everything related to the design and implementation of graphics rendering code.

Rule 1: Posts should be about Graphics Programming.
Rule 2: Be Civil, Professional, and Kind


Suggested Posting Material:
- Graphics API Tutorials
- Academic Papers
- Blog Posts
- Source Code Repositories
- Self Posts
(Ask Questions, Present Work)
- Books
- Renders
(Please xpost to /r/ComputerGraphics)
- Career Advice
- Job Postings (Graphics Programming only)


Related Subreddits:

/r/ComputerGraphics

/r/Raytracing

/r/Programming

/r/LearnProgramming

/r/ProgrammingTools

/r/Coding

/r/GameDev

/r/CPP

/r/OpenGL

/r/Vulkan

/r/DirectX


Related Websites:
ACM: SIGGRAPH
Journal of Computer Graphics Techniques

Ke-Sen Huang's Blog of Graphics Papers and Resources
Self Shadow's Blog of Graphics Resources

/r/GraphicsProgramming

46,847 Subscribers

1

Hexagonal texture filtering

1 Comment
2024/10/31
05:16 UTC

1

BGFX or Sokol?

I don't want to work with C++. I want to work with C or Rust.
I also want mobile and web support.
What about WGPU?

3 Comments
2024/10/30
20:41 UTC

2

Camera rotation degree

Hi, given two cam2world matrices, I am trying to compute the rotation of the camera from the first image to the second. For this purpose I calculated the relative transformation between the matrices (multiplying the second matrix by the inverse of the first) and took the top-left 3x3 submatrix of the 4x4 relative transform. I have the ground-truth rotation values, but for some reason they do not match the Euler angles I compute using scipy's Rotation package. Any clue what I am doing wrong mathematically?

*The cam2world values are the output obtained from Dust3r, if that makes a difference.
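
For what it's worth, a minimal sketch of the relative-rotation computation plus a convention-free sanity check (plain C, names illustrative). A frequent culprit in this situation is the Euler axis order passed to scipy (lowercase "xyz" is extrinsic, uppercase "XYZ" is intrinsic) or degrees vs radians:

#include <math.h>

// R1, R2: top-left 3x3 rotation blocks of the two cam2world matrices.
// Relative rotation taking camera 1's orientation to camera 2's:
// R_rel = R2 * R1^T (a rotation matrix's inverse is its transpose).
void relative_rotation(const float R1[3][3], const float R2[3][3], float R_rel[3][3])
{
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            R_rel[i][j] = 0.0f;
            for (int k = 0; k < 3; k++)
                R_rel[i][j] += R2[i][k] * R1[j][k];  // R1 indexed transposed
        }
}

// Convention-free check: the total angle between the two camera
// orientations, independent of any Euler axis order.
float rotation_angle_deg(const float R[3][3])
{
    float c = (R[0][0] + R[1][1] + R[2][2] - 1.0f) / 2.0f;
    if (c > 1.0f) c = 1.0f;
    if (c < -1.0f) c = -1.0f;
    return acosf(c) * (180.0f / 3.14159265f);
}

If this total angle matches the ground truth but the per-axis Euler values do not, the mismatch is almost certainly the axis order or intrinsic-vs-extrinsic convention, not the matrix math.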

2 Comments
2024/10/30
16:30 UTC

2

Linearizing a depth buffer in Vulkan produces an entirely white output

The depth in Vulkan is stored in the range [0,1]. I am sampling from my depth buffer and trying to linearize it, but the result remains entirely white. I am using the following linearization function, from a Stack Overflow post on the topic of linearizing depth in Vulkan:

//https://stackoverflow.com/questions/51108596/linearize-depth 
float LinearizeDepth(float d) {         
    return ubo.nearPlane * ubo.farPlane / (ubo.farPlane + d * (ubo.farPlane - ubo.nearPlane));
}

I'm not entirely sure if this is correct. I plugged some values into the formula to see what it gives back, but what I get seems incorrect. I have:

n = 0.1
f = 2000.0
d = 0.9 // example depth value from depth buffer

// apply formula
n * f / (f + d * (f * n)) = 0.09174311927

I tested depth values from d = 0.1 to d = 0.9, and the result stays at 0.09 when rounded to two decimal places.

Am I doing something wrong or not understanding something here correctly?
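
For comparison, a sketch of the linearization usually derived for a standard Vulkan-style [0,1] projection. Note the subtraction in the denominator; this assumes the conventional perspective matrix, so verify it against the projection actually in use:

// A standard Vulkan-style projection maps view-space z in [n, f] to
// d = f*(z - n) / (z*(f - n)); inverting gives z = n*f / (f - d*(f - n)).
float linearize_depth(float d, float n, float f) {
    return n * f / (f - d * (f - n));
}
// With n = 0.1 and f = 2000.0 this yields ~0.111 at d = 0.1 and ~1.0 at
// d = 0.9 (in view-space units). The result lies in [n, f], so for display
// it is typically divided by f.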

1 Comment
2024/10/30
12:01 UTC

7

Floor/ceiling texturing in 2.5D

https://preview.redd.it/d43awqp7hsxd1.png?width=1920&format=png&auto=webp&s=84950a1bbac83908b5e17d546847a810a50e6a76

I've been working on a software 'portal renderer' in the style of Build or Doom, just to learn, because 3D math is one programming area I've never been very good at, apart from a few basic raycasters.

And man, the curve has been *steep*, making me feel like an absolute newb. But so far I've managed to get the walls of a sector projected, and eventually, through extreme pain, I even managed to decipher the formulas on Wikipedia to apply perspective-correct texturing to them.

Now I would like to add a floor and ceiling. There's more documentation about how this works for Doom (and the Build code is hard to read), so I went with a "visplane"-like method for now: I take the screen coordinates of the vertices of the on-screen walls, then fill downwards from them to populate a structure like this one, storing the region of the screen to be filled with floor:

struct Visplane {
    Visplane(int maxWidth) { spans = new VPSpan[maxWidth]; }
    ~Visplane() { delete[] spans; }

    VPSpan *spans = nullptr;  // for each "X", pos & height of column to draw
    int minx, maxx;           // area within spans array actually used
};

struct VPSpan {
    int y1, y2;               // start and end of a single column (in screen coords)
};

This flat-fills great (and of course gets me the Doom-esque out-of-bounds floodfill effect) but I am entirely at a loss as to how to correctly texture map them. Here are some of the things that I have to somehow pull off:

  • It needs to look like a floor, of course; texturing it in a manner similar to how walls are done just gives a freaky "squishy parallax" effect.
  • It must rotate and move correctly with the player, AND those texture movements must appear "in sync" with the walls, or the floor will look like it's sliding around a little and the world won't feel "solid".
  • The texturing should appear bigger or smaller depending on how high the camera is above it. This one really confused me: if the texture is smaller, it has more repeats in it. How can it get smaller, yet stay in the same spot and keep the same pattern?
  • The same "spot" on the floor should always be in the same "spot" in world coordinates: if there is a line on the floor texture near a wall, then that line is still near the wall when you are up higher, even though the texture features are now smaller, no matter what direction you're looking, where you are, or how you've adjusted the camera (see the sketch after this list).

The floor in the rendering here looks mostly okay in a static scene, but 1) I have no idea how it works, I just ported some code from DOOM-Level-Viewer by StanislavPetrovV, and 2) this completely opaque series of formulas doesn't even work as well for me as it does in that project: there are all kinds of glitches, with the floor speed not matching the walls, the floor sliding around slightly when you rotate or move, etc., and I have no idea what to fix since I can't make heads or tails of what it's doing.

I should note that I'm not using any matrix math in this renderer; it relies solely on classic tricks, so books covering modern 3D math techniques aren't always applicable to "faking" a flat like this. But I'm not at all committed to doing it exactly like Doom either; it was just the best-documented.

I've pretty much exhausted all of the docs *I* can find about this, talked endlessly with ChatGPT (where it's like pulling teeth to get it to finally say something smart), and downloaded several sample projects, none of which had sufficient comments for me to decipher what the hell they were doing and why.

Has anyone seen a really GOOD from-the-basics tutorial or example code that includes textured floors/ceilings for this type of renderer? I know there are a few series on YT about recreating Doom, but they all seem to make a trade-off so that they're also entertaining, and they went so fast over this part that I just didn't get all of it. Plus, sadly, for a lot of them the actual code is really bad, making it almost impossible to read it to fill in the blanks.

This silly 3D toy project has nerd-sniped me, help! :P

3 Comments
2024/10/30
00:44 UTC

1

Can anyone help me with .fscene data structure and 3D scene

Hi! The real-time rendering framework Falcor used to read its scene graph from the .fscene format, which has since been replaced by .pyscene; the latest ORCA library and most other resources now mainly use this newer .pyscene format.

The old .fscene data structure holds the following:

  • mesh data, their materials, and animations
  • light source data
  • cameras
  • light probes
  • animation paths (for cameras, for example)
  • Scene hierarchy

I can (almost) create it manually, but that may lead to mismatches.

  • Does anyone have previous experience working with .fscene?
  • I would really appreciate any resources on .fscene.
  • Is there any software/tool to automatically generate the .fscene data structure from an .fbx ORCA scene or other standard FBX/glTF data?

6 Comments
2024/10/29
09:52 UTC

4

BMP Loading Woes

I'm working on a very simple BMP file loader in C and am having a little trouble understanding part of the spec. When I have a BMP with 32 bits per pixel, I load the image into memory and send it over to DirectX as a texture, and it loads. This is mostly fine (although I'm still fighting with DirectX over tex coords), but the colors seem off. I think the reason is that I'm not doing anything with the RGB masks specified in the BMP's header. The only problem is I don't really know what to do with the masks. Do I just bitwise-AND each mask with its respective color, or do I apply it to the whole RGBA element, or something else? Everywhere I look is kind of vague about this and just says the colors specified in the data section are relative to the palette or whatever, and I don't really know how to parse that.

Any help would be greatly appreciated, thanks!
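
For what it's worth, a minimal sketch of how the masks are usually applied when the header specifies BI_BITFIELDS (plain C, function name illustrative): each mask selects the bits of one channel, which are shifted down and normalized to 0-255.

#include <stdint.h>

// Extract one channel from a 32-bpp pixel using its bit mask from the BMP
// header, then normalize to 0..255.
static uint8_t extract_channel(uint32_t pixel, uint32_t mask)
{
    if (mask == 0) return 0;                      // channel not present
    unsigned shift = 0;
    while (((mask >> shift) & 1u) == 0) shift++;  // count trailing zero bits
    uint32_t maxval = mask >> shift;              // e.g. 0xFF for an 8-bit channel
    return (uint8_t)(((pixel & mask) >> shift) * 255u / maxval);
}

The "relative to the palette" wording applies to bit depths of 8 and below, where the pixel data are indices into the palette. At 32 bpp with plain BI_RGB compression there are no masks at all and the byte order is simply B, G, R, with the fourth byte unused.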

9 Comments
2024/10/28
22:23 UTC

11

Progressive Meshes (PM) vs Height-Map Specific LOD control

Hello everyone,

I've been diving into the world of optimizing the transfer and rendering of large meshes, and through my research, I've noticed two predominant approaches:

  1. Progressive Meshes (PM): Originally introduced by Hugues Hoppe, which you can explore further here.
  2. Terrain/Heightmap Specific Methods: A comprehensive overview of these methods can be found in this thesis.

I'm currently grappling with understanding the specific use cases for each method and why one might be chosen over the other. From what I've gathered:

  • Progressive Meshes: These are streamed progressively using a series of edge collapses.
  • Heightmap LOD Techniques: These seem to employ some kind of hierarchical LOD structure, such as clipmaps or quadtrees.

Interestingly, the original PM paper was also applied to terrains with similar view-dependent LOD control. This leads me to a few questions:

  • Is there something inherent about heightmaps that allows for further optimizations or shortcuts?
  • Are Progressive Meshes an ancestor of the current height-map LOD controls, or do they remain equally viable today?
  • Are these two methods simply different approaches to achieving the same goal?

I'd love to hear your insights or experiences with these techniques. Any clarification or additional resources would be greatly appreciated!

9 Comments
2024/10/28
14:45 UTC

0

Looking for help in graphics programming in C++

I am looking for help in projects related to 3D Software and 3D OpenGL rendering, 3D modeling, and Shadertoy animation.

6 Comments
2024/10/28
09:27 UTC

2

STBI BMP not loading correctly

I'm trying to write my own BMP loader for a project I'm working on in C, and I wanted to check what I got against what stb does (because I thought it handled BMP images). However, stbi_load fails whenever I pass in my BMP file. I tried it with a PNG just to make sure it was working, and that was all good. I also tried different BMP images, but to no avail. Has anybody had a similar issue with BMP files?

The code is literally just these two lines

    int x, y, channels;
    unsigned char* image = stbi_load("./color_palette.bmp", &x, &y, &channels, 0);
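
One quick diagnostic, since stb_image records why the most recent load failed:

    #include <stdio.h>
    // ... after the stbi_load call above:
    if (image == NULL) {
        printf("stbi_load failed: %s\n", stbi_failure_reason());
    }

If I remember stb_image.h's header comment right, its BMP support is "non-1bpp, non-RLE", so an RLE-compressed or otherwise unusual BMP would fail exactly this way.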

Also, if anybody knows how to lay the bmp out in memory to send over to a DirectX11 texture that would be amazing as well.

Thanks!

3 Comments
2024/10/27
23:43 UTC

13

Bloat-free C++-based 3D library for rendering simple objects

I have started learning graphics programming as a complete beginner.

I am looking to write a few applications based on multi-view 3D geometry; I will be going through a few books and building sample projects using a lidar sensor.

I am looking for a library which can take a 3D point as input and render it in a window. The 3D point can be rendered as a single sphere. It will be something like how Neo visualizes the Matrix, i.e. 3D visualization using multiple tiny dots.

My purpose is to focus more on multi-view geometry, algorithms, and optimizing the lidar, rather than on the choice of 3D rendering library.

If the library supports real-time rendering, that would be awesome; then I could extend my project to render in real time rather than a static view.

If folks have any other suggestion, happy to take inputs.

I will be following

  1. Learn basic 3D geometry from https://cvg.cit.tum.de/teaching/online/mvg.
  2. Choose a 3D library and start implementing basic C++ code.
  3. Work through "Multiple View Geometry in Computer Vision" by R. Hartley and A. Zisserman, and get more folks to collaborate on the project.
  4. Start developing a rendering application using lidar; maybe the iPhone's lidar, a lumineer lidar, or any Chinese one would suffice.
  5. Learn and implement 3D geometry algorithms.

No AI integration is planned for object mapping and detection; just pure math and geometry for now.

15 Comments
2024/10/27
18:43 UTC

5

Voxel-based lighting with multiple colors, is there a better way to do the light propagation?

I have a voxel-based lighting system with RGB channels (8 bits each) in my voxel game. It looks okay when there are multiple light sources of the same color, but as soon as I add a different color, the lighting looks very wrong.

I basically compute the lighting at a voxel as the per-channel maximum of the surrounding voxels minus a drop-off value, which is based on the color. E.g. if the neighboring voxel has the color RGB(200,50,0), I subtract (4,1,0) to retain the same color.
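
A sketch of that per-channel update rule as I read it (plain C, names illustrative):

#include <stdint.h>

// New light for one channel of a voxel: the brightest neighbor minus the
// channel's drop-off, clamped at zero, so propagation fades with distance
// while the (4,1,0)-style per-channel drop-off preserves the hue.
uint8_t propagate_channel(const uint8_t neighbors[6], uint8_t dropoff)
{
    uint8_t best = 0;
    for (int i = 0; i < 6; i++) {
        uint8_t v = neighbors[i] > dropoff ? (uint8_t)(neighbors[i] - dropoff) : 0;
        if (v > best)
            best = v;
    }
    return best;
}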

This approach somewhat works but if you have e.g. a wall with a torch, the other side of the wall also gets lit because the light propagates around it.

So I added flags for each voxel to block the light in certain directions, which increases the drop-off. E.g. if the light hits a wall on the left and continues in a different direction, it will not go very far left once it gets to a corner.

This works when you do it separately for each channel (RGB), but the results are still not ideal and sometimes look really bad.

So I thought, why not have the blocked-light flags be the same for all channels? I got it to work in a few cases and it looked much better. But there is a huge problem: it no longer converges.

Whenever I change a voxel's light value or blocked light flags, I also check the neighboring voxels. But with the combined flags, the values alternate, resulting in an infinite loop.

I have not been able to get it to converge despite numerous different approaches. What could I be missing? Is there a better way?

4 Comments
2024/10/27
17:53 UTC

17

Charrot-Gregory patch (a way to blend arbitrary curve patches)

0 Comments
2024/10/27
05:47 UTC

1

Matrix confusion

0 Comments
2024/10/27
00:51 UTC

0

Does Shadertoy render a quad or a triangle?

I want a simple answer, with proof! Please. ))
Update 0.
A quad means geometry from 4 points and 6 indices. Yes, it will be 2 triangles.
Update 1.
For curious people, here is a great explanation of the concept.
Update 2.
Thanks to u/BalintCsala for answering.
Inside the sources of Shadertoy one can find a call to:

this.mRenderer.DrawFullScreenTriangle_XY( l1 );
// which uses this vertex buffer
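// (a single oversized triangle with corners (-1,-1), (3,-1), (-1,3);
// clipping trims the overshoot so exactly the full screen is covered)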
mGL.bufferData( mGL.ARRAY_BUFFER, new Float32Array( [ -1.0, -1.0,   3.0, -1.0,    -1.0,  3.0] ), mGL.STATIC_DRAW );

So yeah, Shadertoy renders a SINGLE triangle; but with the VR option enabled, I found that it will draw two quads.

9 Comments
2024/10/27
00:16 UTC

2

HLSL Texture Cube Sampling - Need Help!

Hi!
I’m pretty new to graphics programming, and I’ve been working on a voxel engine for the past few weeks using Monogame. I have some problems texturing my cubes with a cubemap. I managed to texture them using six different 2D textures and some branching based on the normal vector of the vertex. As far as I know, branching is pretty costly in shaders, so I’m trying to texture my cubes with a cube map.

This is my shader file:

TextureCube<float4> CubeMap;

matrix World;
matrix View;
matrix Projection;

float3 LightDirection;
float3 LightColor;
float3 AmbientColor = float3(0.05, 0.05, 0.05);

samplerCUBE cubeSampler = sampler_state
{
    Texture = <CubeMap>;
    MAGFILTER = LINEAR;
    MINFILTER = ANISOTROPIC;
    MIPFILTER = LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
    AddressW = Wrap;
};

struct VS_INPUT
{
    float4 Position : POSITION;
    float3 Normal : NORMAL;
    float2 TexCoord : TEXCOORD0;
};

struct PS_INPUT
{
    float4 Position : SV_POSITION;
    float3 Normal : TEXCOORD1;
    float2 TexCoord : TEXCOORD0;
};

PS_INPUT VS(VS_INPUT input)
{
    PS_INPUT output;
    
    float4 worldPosition = mul(input.Position, World);
    output.Position = mul(worldPosition, View);
    output.Position = mul(output.Position, Projection);
    
    output.Normal = input.Normal;
    output.TexCoord = input.TexCoord;
    
    return output;
};

float4 PS(PS_INPUT input) : COLOR
{
    float3 lightDir = normalize(LightDirection);
    
    float diffuseFactor = max(dot(input.Normal, -lightDir), 0);
    float3 diffuse = LightColor * diffuseFactor;
    
    float3 finalColor = diffuse + AmbientColor;
    
    float4 textureColor = texCUBE(cubeSampler, input.Normal);
    
    return textureColor + float4(finalColor, 0);
};

technique BasicCubemap
{
    pass P0
    {
        VertexShader = compile vs_3_0 VS();
        PixelShader = compile ps_3_0 PS();
    }
};

And I use this vertex class provided by Monogame for my vertices (it has Position, Normal, and Texture values):

public VertexPositionNormalTexture(Vector3 position, Vector3 normal, Vector2 textureCoordinate)
{
    Position = position;
    Normal = normal;
    TextureCoordinate = textureCoordinate;
}

Based on my limited understanding, cube sampling works like this: with the normal vector, I can choose which face to sample from the TextureCube, and with the texture coordinates, I can set the sampling coordinates, just as I would when sampling a 2D texture.

Please correct me if I’m wrong, and I would appreciate some help fixing my shader!
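
For reference, a cube map is sampled purely by direction: the dominant component of the vector picks the face, and the remaining two components become the in-face coordinates, so the vertex's 2D TexCoord never enters into it. An illustrative plain-C version of the fixed-function face/UV selection (D3D convention):

#include <math.h>

// Turn a sampling direction into a cube face index (0..5 for +X,-X,+Y,-Y,+Z,-Z)
// and in-face UVs. This mirrors what texCUBE does internally.
void cube_dir_to_face_uv(float x, float y, float z, int *face, float *u, float *v)
{
    float ax = fabsf(x), ay = fabsf(y), az = fabsf(z);
    if (ax >= ay && ax >= az) {            // +X / -X faces
        *face = x > 0 ? 0 : 1;
        *u = (x > 0 ? -z : z) / ax;
        *v = -y / ax;
    } else if (ay >= az) {                 // +Y / -Y faces
        *face = y > 0 ? 2 : 3;
        *u = x / ay;
        *v = (y > 0 ? z : -z) / ay;
    } else {                               // +Z / -Z faces
        *face = z > 0 ? 4 : 5;
        *u = (z > 0 ? x : -x) / az;
        *v = -y / az;
    }
    *u = 0.5f * (*u + 1.0f);               // remap [-1,1] to [0,1]
    *v = 0.5f * (*v + 1.0f);
}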

Edit:

The rendering looks like this

The cubemap

6 Comments
2024/10/26
21:41 UTC

9

Prefiltered environment map looks darker the further I move

EDIT - Solved: Thanks u/Th3HolyMoose for noticing that I'm using texture instead of textureLod

Hello, I am implementing a PBR renderer with a prefiltered environment map for the specular part of the ambient light, based on LearnOpenGL.
I am getting a weird artifact: the further I move from the spheres, the darker the prefiltered color gets, and it shows the quads that compose the sphere.

This is the gist of the code (full code below):

vec3 N = normalize(vNormal);
vec3 V = normalize(uCameraPosition - vPosition);
vec3 R = reflect(-V, N);
// LOD hardcoded to 0 for testing
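// (see EDIT above: texture()'s third argument is a LOD *bias*, not an
// explicit level; textureLod(uPrefilteredEnvMap, R, 0.0) is the fix)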
vec3 prefilteredColor = texture(uPrefilteredEnvMap, R, 0).rgb;
color = vec4(prefilteredColor, 1.0);

https://preview.redd.it/0vmjdhm515xd1.png?width=1981&format=png&auto=webp&s=12d6b3396baac603236ed14819e67779458416cc

https://reddit.com/link/1gcqot1/video/k6sldvo615xd1/player

This is one face of the prefiltered cube map:

https://preview.redd.it/j8pwc7x915xd1.png?width=1790&format=png&auto=webp&s=1f1f4209903fa5f5b6162cb6924fe5338ea599d8

I am out of ideas, I would greatly appreciate some help with this.

The full fragment shader: https://github.com/AlexDicy/DicyEngine/blob/c72fed0e356670095f7df88879c06c1382f8de30/assets/shaders/default-shader.dshf

Some more debugging screenshots:

color = vec4((N + 1.0) / 2.0, 1.0);

color = vec4((R + 1.0) / 2.0, 1.0);

10 Comments
2024/10/26
17:49 UTC

1

Issue with movable camera in Java

for a "challenge" im working on making a basic 3d engine from scarch in java, to learn both java and 3d graphics. I've been stuck for a couple of days on how to get the transformation matrix that when applied to my vertices, calculates the vertices' rotation, translation and perspective projection matrices onto the 2d screen. As you can see when moving to the side the vertices get squished: Showcase Video
This is the code for creating the view matrix
This is the code for drawing the vertices on the screen
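
For reference, a minimal plain-C sketch of one common world-to-screen convention (all names illustrative). Squished vertices at the sides are often a transform-order issue, e.g. rotating before translating, or dividing by the distance to the point instead of the view-space depth:

#include <math.h>

typedef struct { float x, y, z; } Vec3;

// Project world point p through a camera at cam with heading yaw (radians).
// projDist is the focal length in pixels; (cx, cy) is the screen center.
// Returns 0 when the point is behind the camera and must be clipped.
int project(Vec3 p, Vec3 cam, float yaw, float projDist,
            int cx, int cy, int *sx, int *sy)
{
    // translate into camera space first, then rotate by -yaw around Y
    float tx = p.x - cam.x, ty = p.y - cam.y, tz = p.z - cam.z;
    float c = cosf(yaw), s = sinf(yaw);
    float vx = c * tx - s * tz;
    float vz = s * tx + c * tz;
    if (vz <= 0.001f) return 0;
    // the perspective divide uses view-space depth vz, not the distance to p
    *sx = cx + (int)(projDist * vx / vz);
    *sy = cy - (int)(projDist * ty / vz);
    return 1;
}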

Thanks in advance for any help!

4 Comments
2024/10/26
17:22 UTC

11

How does Texture Mapping work for quads like in DOOM?

I'm working on my little DOOM-style software renderer, and I'm at the part where I can start working on textures. I was searching a day ago for how I'd go about it, and I came to this page on Wikipedia: https://en.wikipedia.org/wiki/Texture_mapping which shows 'ua = (1-a)*u0 + a*u1', giving the affine u coordinate of a texture. However, it didn't work for me, as my texture coordinates ended up greater than 1000, so I'm wondering if I just screwed up the variables or used the wrong thing?

My engine renders walls without triangles; they're just vertical columns. I tend to learn from code that's given to me, because I can learn directly from something that works by analyzing it. For direct interpolation I just used the formula above, but that doesn't seem to work: u0 and u1 are x positions on my screen defining the start and end of the wall, and a is 0.0-1.0, based on x/x1. I've been doing my texture-coordinate math in screen space so far, and that might be the problem, but there's a fair bit else that could be the problem instead.

So I'm just curious: how should I go about this, and what should the values I'm putting into the formula be? Have I misunderstood what the page is telling me? Is the formula for ua also fine for va? (XY) Thanks in advance.
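
For what it's worth, a plain-C sketch with illustrative names. One thing to note from the Wikipedia formulation: u0 and u1 are the texture coordinates at the two ends of the wall (e.g. 0 and the wall's length in texels), not screen x positions, and a = (x - x0)/(x1 - x0) is the fraction across the wall's on-screen span; feeding screen coordinates in for u0/u1 would explain values in the thousands. Once walls recede, the affine form also needs the perspective-correct variant:

// Texture u-coordinate at fraction a (0..1) across a wall's on-screen span.
// u0, u1: texture coords at the wall ends; z0, z1: view-space depths there.
float texel_u(float a, float u0, float z0, float u1, float z1)
{
    // affine (what the quoted formula gives; "swims" as the wall recedes):
    //   u = (1 - a) * u0 + a * u1
    // perspective-correct: interpolate u/z and 1/z linearly, then divide.
    float num = (1.0f - a) * (u0 / z0) + a * (u1 / z1);
    float den = (1.0f - a) * (1.0f / z0) + a * (1.0f / z1);
    return num / den;
}

The same construction works for va with v0/v1, though for vertical wall columns v can stay affine down a column, since depth is constant along it.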

26 Comments
2024/10/26
15:21 UTC

1

Weird bug at the edge of the screen in a raycasting engine

https://preview.redd.it/k45yjv5mazwd1.png?width=804&format=png&auto=webp&s=2ff26bdd2a2cc0fe7f23c628e181ab7162e964e2

Hello everyone,

I’ve spent days on this bug and I can’t figure out how to fix it. It seems like the textures aren’t mapping correctly when the wall’s height is equal to the screen’s height, but I’m not sure why. Does anyone have any ideas?

Thank you for taking the time to read this!
Here’s the code for the raycast and renderer:

RaycastInfo *raycast(Player *player, V2 *newDir, float angle)
{
    V2i map = (V2i){(int)player->pos.x, (int)player->pos.y};
    V2 normPlayer = player->pos;
    V2 dir = (V2){newDir->x, newDir->y};

    V2 rayUnitStepSize = {
        fabsf(dir.x) < 1e-20 ? 1e30 : fabsf(1.0f / dir.x),
        fabsf(dir.y) < 1e-20 ? 1e30 : fabsf(1.0f / dir.y),
    };

    V2 sideDist = {
        rayUnitStepSize.x * (dir.x < 0 ? (normPlayer.x - map.x) : (map.x + 1 - normPlayer.x)),
        rayUnitStepSize.y * (dir.y < 0 ? (normPlayer.y - map.y) : (map.y + 1 - normPlayer.y)),
    };
    V2i step;
    float dist = 0.0;
    int hit = 0;
    int side = 0;

    if (dir.x < 0)
    {
        step.x = -1;
    }
    else
    {
        step.x = 1;
    }
    if (dir.y < 0)
    {
        step.y = -1;
    }
    else
    {
        step.y = 1;
    }
    while (hit == 0)
    {
        if (sideDist.x < sideDist.y)
        {
            dist = sideDist.x;
            sideDist.x += rayUnitStepSize.x;
            map.x += step.x;
            side = 0;
        }
        else
        {
            dist = sideDist.y;
            sideDist.y += rayUnitStepSize.y;
            map.y += step.y;
            side = 1;
        }
        // Check if ray has hit a wall
        hit = MAP[map.y * 8 + map.x];
    }

    RaycastInfo *info = (RaycastInfo *)calloc(1, sizeof(RaycastInfo));
    V2 intersectPoint = VEC_SCALAR(dir, dist);
    info->point = VEC_ADD(normPlayer, intersectPoint);

    // Calculate the perpendicular distance
    if (side == 0){
        info->perpDist = sideDist.x - rayUnitStepSize.x;
    }
    else{
        info->perpDist = sideDist.y - rayUnitStepSize.y;
    }

    //Calculate wallX 
    float wallX;
    if (side == 0) wallX = player->pos.y + info->perpDist * dir.y;
    else wallX = player->pos.x + info->perpDist * dir.x;
    wallX -= floorf(wallX);

    // Calculate texX
    int texX = (int)(wallX * (float)texSize);
    if (side == 0 && dir.x > 0) texX = texSize - texX - 1;
    if (side == 1 && dir.y < 0) texX = texSize - texX - 1;
    texX = texX & (int)(texSize - 1); // Ensure texX is within the valid range

    // Set the calculated values in the hit structure
    info->wallX = wallX;
    info->texX = texX;
    info->mapX = map.x;
    return info;
}

int main(int argc, char const *argv[])
{
   
    char text[500] = {0};
    InitWindow(HEIGHT * 2, HEIGHT * 2, "Raycasting demo");
    Player player = {{1.5, 1.5}, 0};
    V2 plane;
    V2 dir;
    SetTargetFPS(60);
    RaycastInfo *hit = (RaycastInfo *)malloc(sizeof(RaycastInfo));
    Texture2D texture = LoadTexture("./asset/texture/bluestone.png");
    while (!WindowShouldClose())
    {

        if (IsKeyDown(KEY_A))
        {
            player.angle += 0.06;
        }

        if (IsKeyDown(KEY_D))
        {
            player.angle -= 0.05;
        }

        dir = (V2){1, 0};
        plane = NORMALISE(((V2){0.0f, 0.50f}));
        dir = rotate_vector(dir, player.angle);
        plane = rotate_vector(plane, player.angle);
        if (IsKeyDown(KEY_W))
        {
            player.pos = VEC_ADD(player.pos, VEC_SCALAR(dir, PLAYER_SPEED));
        }
        if (IsKeyDown(KEY_S))
        {
            player.pos = VEC_MINUS(player.pos, VEC_SCALAR(dir, PLAYER_SPEED));
        }
        BeginDrawing();
        ClearBackground(RAYWHITE);
        draw_map();
        dir = NORMALISE(dir);
        for (int x = 0; x < HEIGHT; x++)
        {
            float cameraX = 2 * x / (float)(8 * SQUARE_SIZE) - 1;
            V2 newDir = VEC_ADD(dir, VEC_SCALAR(plane, cameraX));

            hit = raycast(&player, &newDir, player.angle);
            DrawVector(player.pos, hit->point, GREEN);
                
            int h, y0, y1;
            h = (int)(HEIGHT / hit->perpDist);
            y0 = max((HEIGHT / 2) - (h / 2), 0);
            y1 = min((HEIGHT / 2) + (h / 2), HEIGHT - 1);
            Rectangle source = (Rectangle){
                .x = hit->texX,
                .y = 0,
                .width = 1,
                .height = texture.height
            };
            Rectangle dest = (Rectangle){
                .x = x,
                .y = HEIGHT + y0,
                .width = 1,
                .height = y1 - y0,
            };
             DrawTexturePro(texture, source, dest,(Vector2){0,0},0.0f, RAYWHITE);
            //DrawLine(x, y0 + HEIGHT, x, y1 + HEIGHT, hit->color);
        }
        snprintf(text, 500, "Player x = %f\nPLayer y = %f", player.pos.x, player.pos.y);
        DrawText(text, SQUARE_SIZE * 8 + 10, 20, 10, RED);
        EndDrawing();
    }

    CloseWindow();
    return 0;
}
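
One thing I'd check, since the artifact appears exactly when a wall column fills the screen: y0 and y1 are clamped to the screen, but source still covers the whole texture column, so DrawTexturePro squashes the texture to fit instead of cropping it. A sketch of clipping the source rectangle to match, using the unclamped top of the column (untested against this codebase):

float unclampedY0 = (HEIGHT / 2.0f) - (h / 2.0f);  // may be negative when h > HEIGHT
Rectangle source = (Rectangle){
    .x = hit->texX,
    .y = (y0 - unclampedY0) / h * texture.height,      // skip the off-screen top
    .width = 1,
    .height = (float)(y1 - y0) / h * texture.height,   // only the visible part
};
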
2 Comments
2024/10/25
22:30 UTC

8

I have been selected as a Research Intern at one of the top institutions in my country. What should I expect in terms of graphics?

I will be working on computer graphics, doing the rendering part in OpenGL. What are other people's experiences at institutions and organizations around the globe? Do share your experience with me.

6 Comments
2024/10/25
17:09 UTC
