/r/computergraphics

Welcome to r/computergraphics.

This subreddit is open to discussion of anything related to computer graphics or digital art.

Computer Graphics FAQ

Submission Guidelines

Anything related to CG is welcome.

  • If you submit your own work, please be ready to receive critiques.

  • If you submit other people's work, please give credit.

For more information about posting to our subreddit, read the r/computergraphics FAQ

Technical Questions?

Here are subreddits dedicated to specific fields:

r/vfx
r/GraphicsProgramming
r/MotionDesign
r/Programming
r/gamedev
r/Low_Poly
r/archviz
r/3Dmodeling
r/DigitalArtTutorials

Software Questions?

Questions about specific software are welcome here, and we'd love to help, but these software-specific communities may be able to provide you with more in-depth answers.

r/Maya
r/3dsMax
r/Cinema4D
r/Blender

/r/computergraphics

58,768 Subscribers

3

Implementing a Fast Software 3D Pipeline in Python: The Pixerise Renderer

Just released Pixerise: A CPU-based 3D renderer in Python that doesn't sacrifice performance

https://i.redd.it/s4rtcqg6ioge1.gif

Hey folks! I've built a software 3D renderer that implements the full graphics pipeline in Python, achieving 60+ FPS with 2000+ triangles through clever optimization.

Technical Highlights:

  • Full 3D pipeline implementation (projection, clipping, rasterization)
  • Multiple shading models: wireframe, flat, and Gouraud
  • Depth buffer with perspective-correct interpolation
  • Scene graph with instancing support
  • Numba JIT-compiled rendering kernels
  • OBJ file loading with normal support

The project aims to bridge the gap between educational graphics implementations (which are often slow) and production renderers (which are often impenetrable). Every part of the pipeline is accessible and modifiable, making it great for experimenting with rendering techniques.

Performance is achieved through:

  • Vectorized triangle processing
  • SIMD-friendly memory layout
  • JIT-compiled rasterization kernels (see the sketch after this list)
  • Efficient depth buffer management
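
The post doesn't include kernel code, so here is a minimal sketch of what a Numba JIT-compiled flat-shading rasterization kernel can look like; all names are illustrative, not Pixerise's actual API:

```
import numpy as np
from numba import njit

@njit(cache=True)
def raster_triangle_flat(color_buf, depth_buf, v0, v1, v2, color):
    # Rasterize one screen-space triangle (x, y, z per vertex) against a
    # z-buffer; assumes counter-clockwise winding (backfaces are skipped).
    h, w = depth_buf.shape
    xmin = max(int(min(v0[0], v1[0], v2[0])), 0)
    xmax = min(int(max(v0[0], v1[0], v2[0])) + 1, w)
    ymin = max(int(min(v0[1], v1[1], v2[1])), 0)
    ymax = min(int(max(v0[1], v1[1], v2[1])) + 1, h)
    area = (v1[0] - v0[0]) * (v2[1] - v0[1]) - (v1[1] - v0[1]) * (v2[0] - v0[0])
    if area <= 0.0:
        return
    for y in range(ymin, ymax):
        for x in range(xmin, xmax):
            # barycentric weights from edge functions
            w0 = ((v1[0] - x) * (v2[1] - y) - (v1[1] - y) * (v2[0] - x)) / area
            w1 = ((v2[0] - x) * (v0[1] - y) - (v2[1] - y) * (v0[0] - x)) / area
            w2 = 1.0 - w0 - w1
            if w0 >= 0.0 and w1 >= 0.0 and w2 >= 0.0:
                z = w0 * v0[2] + w1 * v1[2] + w2 * v2[2]
                if z < depth_buf[y, x]:      # depth test
                    depth_buf[y, x] = z
                    for c in range(3):
                        color_buf[y, x, c] = color[c]
```

Numba compiles the nested loops to machine code on first call, which is where most of the speedup over pure Python comes from.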

Check it out: GitHub

The project just went public - if you find it interesting, a star would be much appreciated :)

Happy to discuss implementation details or answer any questions.

3 Comments
2025/02/02
07:35 UTC

6

Is this a good course to learn computer graphics from scratch?

5 Comments
2025/02/02
05:58 UTC

7

Unlocking the Perfect PBR Range: Must-Know for Texture Artists

The highlights of my two years of research into the WHITEST and BLACKEST albedo values for PBR materials. These values are critical for accurate and consistent light response in any photorealistic CG creation. (A clamp example follows the ranges below.)

✔️ Safe Range (sRGB 40-243)
✔️ Acceptable Range (sRGB 20-250)
✔️ Extreme Range (sRGB 3-254)
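
For illustration, enforcing the safe range on an 8-bit albedo map is a one-liner; a hedged numpy sketch (the function name is mine, not from the video):

```
import numpy as np

def clamp_albedo(texture_u8, lo=40, hi=243):
    # Keep 8-bit sRGB albedo inside the "safe" 40-243 range quoted above.
    return np.clip(texture_u8, lo, hi).astype(np.uint8)
```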

Here is the video: https://youtu.be/Y9SvKHtu5Jg

I finally managed to make a shorter one :) If you like it, find it useful, and believe it deserves visibility, I would really appreciate any reshares, likes, and comments, so it doesn't get lost in the depths of the internet.

I would like to thank all of you who purchased my PBR Color Reference list, as it really helped co-fund these videos and this entire research. Thanks a thousand times; I wouldn't have made it without your support ❤️
Enjoy!!!

0 Comments
2025/01/31
22:35 UTC

2

Dark Souls Architectural Analysis

0 Comments
2025/01/31
18:16 UTC

18

A Pet Ray Tracer Project

Hello everyone! This is my first post here. I just wanted to share a project I have been working on for the last couple of months: a CPU ray tracer written in C++. Here are the implemented features:

  • Reflection and refraction
  • BVH acceleration structure incorporating TLAS/BLAS
  • Transformations
  • Instancing
  • Anti-aliasing with jittered sampling
  • Motion blur, depth-of-field, area lights, glossy reflections
  • Texture mapping, normal mapping, bump mapping
  • Perlin noise
  • HDR tonemapping with Reinhard TMO
  • Spot lights, directional lights, environment lights
  • Microfacet BRDFs
  • Object lights
  • Path tracing with cosine importance sampling, next event estimation, and Russian roulette (a sampling sketch follows this list)
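
As an aside for readers, here is a minimal sketch of cosine-weighted hemisphere sampling, one of the listed features (illustrative code, not from the repo):

```
import numpy as np

def sample_cosine_hemisphere(n, rng):
    # Cosine-weighted direction about unit normal n; pdf = cos(theta) / pi.
    r1, r2 = rng.random(), rng.random()
    r, phi = np.sqrt(r1), 2.0 * np.pi * r2
    local = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - r1)])
    # build an orthonormal basis (t, b, n) around the normal
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t = np.cross(n, a)
    t /= np.linalg.norm(t)
    b = np.cross(n, t)
    return local[0] * t + local[1] * b + local[2] * n
```

With this pdf, the cosine factor in the rendering equation cancels against the sampling density, which is why it pairs so well with diffuse path tracing.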

You can check out the GitHub repo here, with some example scenes:

https://github.com/distortedfuzz/advanced_ray_tracer

I have also written a blog post series over the course of the project's development that details the theory behind each feature:

https://medium.com/@Ksatese

Comments and suggestions are welcome. I am an undergraduate student and want to improve in the field of computer graphics, so if you catch anything wrong or weird, please let me know.

https://preview.redd.it/qxnrz5wi4rfe1.png?width=1000&format=png&auto=webp&s=00af08ddffeae821d70294631cc5c7f88eb00d9c

https://preview.redd.it/894p2xml4rfe1.png?width=800&format=png&auto=webp&s=acd52b93ad42a4072a9713de7ad12aa24ef8f572

https://preview.redd.it/vw8fx7475rfe1.png?width=512&format=png&auto=webp&s=d5d7a778964e73bf7c75c1b02c26c6b47f3eaaa5

3 Comments
2025/01/28
15:10 UTC

4

Learning ZBrush and Blender

0 Comments
2025/01/26
22:36 UTC

0

Opinions about Path Tracing in C

0 Comments
2025/01/26
16:59 UTC

9

Spots and scales, but all heart!

0 Comments
2025/01/25
06:34 UTC

5

AI threats?

Sorry if this is very annoying, but how likely is it that AI will mess up the CG field? I'm just starting out, and all this AI talk is giving me anxiety about my choice to pursue this field.

6 Comments
2025/01/25
01:56 UTC

8

Xiaomi 13 Ultra | Animation Concept


Creating this concept for the Xiaomi 13 Ultra has been a journey of precision and creativity. Using Cinema 4D and Octane has transformed the way I bring ideas to life. I’m constantly exploring new techniques and pushing creative boundaries, amazed by the stunning realism Octane adds to every frame. Seeing everything come together is incredibly fulfilling and keeps me motivated to keep creating.

Visuals: @kiwe.lb Sound Design: @h1.sound

Full project on Behance: https://www.behance.net/gallery/217641853/Xiaomi-13-Ultra-Animation-Concept

1 Comment
2025/01/24
20:12 UTC

17

TerrainGen - open source GPU terrain generator and erosion simulator

3 Comments
2025/01/24
14:21 UTC

4

Overlapping skills - computer graphics engineer and the skilled trades (carpentry, home renos, etc.)

I've always respected the trades and have always had a great interest in houses and related construction: carpentry, building a house from the ground up, and finishes for various rooms and bathrooms.

Are there any skills I can learn that would overlap my current programming skills with a given trade?

Are there any use cases where my current programming skills could make a tradesman's life easier at work?

12 Comments
2025/01/23
12:53 UTC

24

My first real CG project

11 Comments
2025/01/22
19:18 UTC

1

Help with WebGL 1 fragment shader artifacts on iPhone

0 Comments
2025/01/22
03:00 UTC

8

Can you recommend an English math book for linear algebra and calculus with a good amount of exercise questions?

Hello. I will start my master's degree in computer graphics soon, but I feel like I've forgotten most of the math I learned during my bachelor's. Of course I can do the basic linear algebra needed to make a 3D OpenGL game, but I definitely don't feel like I could do academic research in the field. I want a book to refresh my knowledge, and I want it to have a lot of exercises, because I learn math best by solving problems.

0 Comments
2025/01/21
05:30 UTC

1

Reflect Shading Artifact

Hi all, I've been fooling around with ray tracing and I have a decent amount working for simple scenes, but there's an issue I can't quite put my finger on. Just a heads up: I don't really know what I'm talking about, and I've kind of been making it up as I go along, so to speak.

I'm using invisible point light(s).

The surface of the sphere below is defined entirely as follows; no other color or surface-related data is considered when assigning color:

```
ambient 0.0 0.0 0.0
diffuse 0.0 0.0 0.0
specular 0.0 0.0 0.0
specpow 0.0
reflect 0.5
```

The issue I'm trying to pin down is why I'm getting a defined hemisphere line around the equator of the sphere, where it transitions from lit to in-shadow, given the reflective nature of the surface.

When I turn up specular and specpow, the highlight hides it slightly, but it's still there. Setting reflect to 1.0 still shows a shadow. The reference image does not show these defined lines on reflective objects. I understand this would be entirely normal on a non-glossy surface, but it doesn't seem correct given this one is reflective (and given that the defined shading line is absent in the reference).
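
Not a diagnosis, but a common cause of exactly this symptom is letting the shadow test attenuate the reflected term as well as the direct terms. A hedged Python-style sketch of the usual split (all names are illustrative, not the poster's code; `occluded` and `trace` stand in for the scene's shadow-ray and recursive-trace routines):

```
import numpy as np

def reflect_dir(d, n):
    # Mirror incoming direction d about unit normal n.
    return d - 2.0 * np.dot(d, n) * n

def shade(p, n, in_dir, mat, lights, occluded, trace, depth=3):
    # Shadow rays gate only the *direct* diffuse term; the mirror term
    # is traced whether or not the point is in shadow.
    color = np.array(mat["ambient"], dtype=float)
    for light in lights:
        if occluded(p, light["pos"]):        # shadow ray blocked:
            continue                         # skip direct terms only
        l = light["pos"] - p
        l = l / np.linalg.norm(l)
        color += np.asarray(mat["diffuse"]) * max(np.dot(n, l), 0.0) * light["intensity"]
    if mat["reflect"] > 0.0 and depth > 0:
        color = color + mat["reflect"] * trace(p, reflect_dir(in_dir, n), depth - 1)
    return color
```

With all direct terms set to zero, as in the material above, a structure like this produces only the mirrored image, with no lit/shadow terminator of its own.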

Single light source showing shading line around hemisphere

example with the two other light sources enabled

Any help is appreciated! Thanks!

8 Comments
2025/01/20
22:39 UTC

7

Alpha blending

How do I do the second question here? How can we identify what's alpha-front and what's alpha-back?
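
The referenced exercise image isn't included here, but for context: "alpha front" and "alpha back" are the alphas of the two samples being composited, and the standard Porter-Duff "over" operator combines them. A hedged sketch with straight (non-premultiplied) alpha:

```
def over(c_front, a_front, c_back, a_back):
    # Porter-Duff "over": front sample composited onto back sample
    # (straight alpha; premultiplied alpha drops the divisions).
    a_out = a_front + a_back * (1.0 - a_front)
    if a_out == 0.0:
        return 0.0, 0.0
    c_out = (c_front * a_front + c_back * a_back * (1.0 - a_front)) / a_out
    return c_out, a_out
```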

6 Comments
2025/01/20
09:03 UTC

11

How would you improve this?

2 Comments
2025/01/20
01:45 UTC

627

Image generation with compute shaders and genetic algorithms

42 Comments
2025/01/18
18:56 UTC

6

Visualizing geometry density

I'm working on view modes in my engine, and I want to show which areas of the scene have higher triangle density, so that if a mesh has a screw with 1M triangles, it looks very bright.

I thought of using additive blending without depth testing, but I didn't manage to make it work.

Does anybody know a trick to do it (without having to manually construct a per-mesh color map)?
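
For reference, the usual GPU recipe is: disable depth testing, render with additive blending (glBlendFunc(GL_ONE, GL_ONE)), have every fragment write a small constant, then tone-map the accumulated buffer in a second pass. A hedged CPU sketch of that last step, turning per-pixel overdraw counts into a heat map:

```
import numpy as np

def density_heatmap(counts):
    # counts: (H, W) per-pixel triangle/overdraw counts from the additive pass.
    heat = np.log1p(counts.astype(np.float32))   # log scale tames 1M-triangle screws
    heat /= max(heat.max(), 1e-6)                # normalize to [0, 1]
    r = np.clip(heat * 2.0, 0.0, 1.0)            # black -> red -> yellow ramp
    g = np.clip(heat * 2.0 - 1.0, 0.0, 1.0)
    return np.dstack([r, g, np.zeros_like(heat)])
```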

10 Comments
2025/01/17
12:27 UTC

15

This might be a stretch for this subreddit, but I'm wondering if anyone has ideas on how to code this style of autostereogram (aka Magic Eye). I believe this style falls under the mapped-texture stereogram, but I can't find much info on how to make them.
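
For anyone curious, the core algorithm is small. Here's a hedged random-dot sketch (parameter names are mine); a mapped-texture stereogram tiles a texture instead of random noise but uses the same depth-dependent shift:

```
import numpy as np

def autostereogram(depth, eye_sep=90, depth_gain=30, rng=None):
    # depth: (H, W) array in [0, 1], larger = nearer to the viewer.
    if rng is None:
        rng = np.random.default_rng()
    h, w = depth.shape
    out = rng.random((h, w))                 # random-dot base pattern
    for y in range(h):
        for x in range(eye_sep, w):
            shift = int(depth[y, x] * depth_gain)
            out[y, x] = out[y, x - eye_sep + shift]   # link left/right-eye pixels
    return out
```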

3 Comments
2025/01/16
19:34 UTC

23

Struggling with 3D Math

I have a good understanding of how the GPU and CPU work, and of the graphics pipeline.

However, my weakness is 3D math. How can I improve it, and what should I study?

If anyone is interested in mentoring me, I can pay hourly.

14 Comments
2025/01/15
05:07 UTC

4

Need Help with Material Architecture

Hello, I'm trying to build a model pipeline for my OpenGL/C++ renderer, but I've gotten confused about how to approach the material system and shader handling.

As it stands, each model object has arrays of meshes, textures, and materials, loaded from a custom model-data file for easier loading (it loosely resembles glTF). Textures and meshes are loaded normally, and materials are created from a shader JSON file that points to the URIs of vertex and fragment shaders (plus optional tessellation and geometry shaders, based on flags set in the shader file). When a program is compiled, its uniform samplers are bound to fixed units: DiffuseMap = 0, NormalMap = 1, and so on. Each shader is added to a global shader array, and the material stores a reference to that instance so duplicate programs aren't created.

My concern is that this may cause cache misses when drawing. The model's draw method works like this: bind all textures to their respective type's texture unit (diffuse = 0, normal = 1, etc.), then iterate over all meshes; for each mesh, get its material index (stored per mesh) and use that material from the materials array, then bind the mesh's VAO and issue the draw call.

Using a material means activating its underlying shader through that reference, and this is where my cache concern comes from. I could have each material store its own shader object for better cache locality, but then I'd duplicate the shader for every object using it, say a basic Blinn-Phong lighting shader.

I'm not sure how much of a performance concern this really is, but I wanted to be clear on it before going further. If I'm wrong about the cache behavior here, please set me straight :)

Another concern is how materials handle setting uniforms. Currently, shader objects have set methods for most data types (float, vec3, vec4, mat4, and so on), but for the user to change a uniform on a material, the material has to act as a wrapper of sorts, with its own set methods that forward to the shader's. Is there a better, more general way to implement this? The shader also keeps a dictionary mapping uniform names to their locations in the program, to avoid repeated queries. As for matrices, the view and projection matrices currently live in a UBO.

So my concern is how much of a wrapper the material is becoming in this architecture, and whether that's OK going forward, both performance-wise and in terms of renderer architecture. If not, how can it be improved? How are materials usually handled, what do they store directly, and what should the shader object store? And can the model's draw method be improved for flexibility or performance?

tl;dr: What should a material usually store? Only constant uniform values per material property plus a shader reference? Do materials usually act as wrappers around shaders for setting uniforms and activating the program? If you have time, please read the above if you can help improve the architecture :)
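
For what it's worth, here is one common split, sketched in Python for brevity (names are illustrative, not the poster's code): the material stays a thin data holder with a shader handle plus a dict of uniform values, flushed once when the material is bound, so there's no per-type wrapper layer.

```
from dataclasses import dataclass, field

@dataclass
class Material:
    shader_id: int                        # index into the global shader array
    uniforms: dict = field(default_factory=dict)

    def set(self, name, value):
        self.uniforms[name] = value       # cheap; no GL call until bind

def bind_material(material, shaders, upload_uniform):
    shader = shaders[material.shader_id]  # shared program, no duplicates
    shader.use()
    for name, value in material.uniforms.items():
        upload_uniform(shader, name, value)   # one generic upload path
```

Sorting draw calls by shader_id (then by material) also tends to matter more for throughput than the pointer indirection itself, since it minimizes program switches.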

I'm sorry if this implementation or these questions seem naive, but I'm still fairly new to graphics programming, so any feedback would be appreciated. Thanks!

0 Comments
2025/01/14
06:48 UTC

1

How do I get 3D rotation working correctly with different axes?

0 Comments
2025/01/11
16:46 UTC

4

Need help in Fragment Shader

I'm working on a project where we are required to build the "fragment shader" stage of a GPU. This is purely hardware design.

I'm looking for resources and content where I can read about what a fragment shader is, what its role is, and how it works.

Please recommend some reading or lectures :)

1 Comment
2025/01/11
10:35 UTC

9

What’s limiting generating more realistic images?

Computer graphics has come a long way, and I'm curious what's limiting further progress.

This is a two-part question, and I'd appreciate perspective and knowledge from experts:

  1. What gives an image a computer-generated look?

Even some of the most advanced computer-generated images have a distinct, glossy look. What's behind this?

  2. What's the rate-limiting factor? Is it purely a hardware problem, or do we also have algorithmic and/or implementation limitations? Or is it that we simply can't explicitly simulate all visual components and light interactions, thus requiring a generative method for photorealism?

9 Comments
2025/01/08
16:35 UTC
