/r/computergraphics
This subreddit is open to discussion of anything related to computer graphics or digital art.
Anything related to CG is welcome.
If you submit your own work, please be ready to receive critiques.
If you submit other people's work, please give credit.
For more information about posting to our subreddit, read the r/computergraphics FAQ.
Here are subreddits dedicated to specific fields:
r/vfx
r/GraphicsProgramming
r/MotionDesign
r/Programming
r/gamedev
r/Low_Poly
r/archviz
r/3Dmodeling
r/DigitalArtTutorials
If you have questions about specific software, we'd love to help, so please feel free to ask, but these software-specific communities may be able to provide you with more in-depth answers.
This is the method proposed in "Real-time Photorealistic Dynamic Scene Representation and Rendering with 4D Gaussian Splatting" @ ICLR 2024.
I'm just wondering about the process of turning a 4D Gaussian into a 3D one. Is it using a function of time to determine the 3D Gaussian at any given instant and then doing the normal splatting? (I didn't quite get the paper.)
The paper also mentions that this method is not affected by obstructing views, so I'm wondering why that is.
Many thanks!
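For what it's worth, slicing a 4D Gaussian at a fixed time is just the standard conditioning rule for multivariate normals, and it does yield an ordinary 3D Gaussian that can be splatted as usual. Here is a sketch under the assumption that the paper's 4D Gaussians are plain (x, y, z, t) normals; I haven't verified its exact parameterization, so treat the variable names and layout as illustrative only.

```python
# Sketch: condition a 4D (x, y, z, t) Gaussian on a fixed time t to get a 3D Gaussian.
# Standard multivariate-normal conditioning; since time is a scalar, no matrix
# inversion is needed (Sigma_tt is just a number).

def condition_on_time(mu4, cov4, t):
    """mu4: length-4 mean, cov4: 4x4 covariance (list of lists), last axis is time.
    Returns (mu3, cov3) of the conditional 3D Gaussian at time t."""
    mu_xyz = mu4[:3]
    mu_t = mu4[3]
    sigma_tt = cov4[3][3]                       # scalar time variance
    sigma_xt = [cov4[i][3] for i in range(3)]   # space/time cross-covariance
    # Conditional mean: mu_xyz + Sigma_xt * Sigma_tt^-1 * (t - mu_t)
    mu3 = [mu_xyz[i] + sigma_xt[i] * (t - mu_t) / sigma_tt for i in range(3)]
    # Conditional covariance: Sigma_xyz - Sigma_xt * Sigma_tt^-1 * Sigma_tx
    cov3 = [[cov4[i][j] - sigma_xt[i] * sigma_xt[j] / sigma_tt
             for j in range(3)] for i in range(3)]
    return mu3, cov3
```

Note that the conditional covariance does not depend on t, so only the 3D mean moves over time; the space/time cross-covariance term is what makes the splat's position a (linear) function of time.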
While Googling around for how to apply different kinds of textures in the context of procedural terrain generation, I can't help but notice that a lot of people "know but don't know" what they are talking about. I often encounter people saying "just do X" but seemingly never following up with how to actually do it. For a beginner who might not know much, it's quite annoying to never find more granular answers. I am, of course, not expecting a step-by-step "hold-your-hand" tutorial walking through every single detail, but at least give a more fine-grained guideline than "just do X".
With that said...
Assume we generate a coarse, non-tessellated mesh CPU-side and run it through a tessellation shader to obtain a finer mesh along with texture coordinates and normals for each new vertex. Furthermore, assume we have in our fragment shader:
- a normalized height value (in [0, 1], though it could also be in [-h, +h] if you're defining stuff in your own way),
- a texture coordinate,
- and 4 textures corresponding to "rock", "sand", "grass" and "snow".
Let's try to answer a question or two with more detail than your typical "coarse" answer found on the internet. It's likely safe to assume that most people here have used GLSL before, so feel free to contribute concrete GLSL code as a way of concretizing your ideas. It often helps to have something "concrete" on top of pure theory.
How should one choose the values for the boundaries between the different textures? Does there exist some somewhat robust way of automatically generating ranges, or is this part simply about manually fine-tuning?
The most straightforward way seems to be to manually tweak the ranges until you get something that fits nicely for your particular generated terrain.
The downside of this strategy is that whenever you make significant modifications to the terrain, e.g. by tweaking the parameters for how your heightmap is generated (e.g. Perlin noise parameters), you have to re-calibrate the ranges, which is annoying. What other strategies are there?
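One data-driven alternative (my own suggestion, not something from the thread): derive the band boundaries from the height distribution of the generated terrain itself, e.g. as percentiles, so the ranges re-calibrate automatically whenever the noise parameters change. The fractions below (10/25/50/75%) are arbitrary placeholders to tune.

```python
# Sketch: pick the height thresholds so that fixed fractions of the terrain
# fall below each boundary, instead of hard-coding height values.

def height_thresholds(heights, fractions=(0.10, 0.25, 0.50, 0.75)):
    """heights: flat iterable of per-vertex heights.
    Returns the height value below which each given fraction of the terrain lies."""
    ordered = sorted(heights)
    n = len(ordered)
    return [ordered[min(int(f * n), n - 1)] for f in fractions]
```

Computed once on the CPU after generation, these values can then be uploaded as uniforms, so the shader-side banding code stays unchanged when the terrain changes.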
How do we properly texture the terrain?
One naïve way of doing it is to simply use the defined boundaries to choose which texture to sample and use that sample directly as the color. However, this typically won't yield nice results, as it produces visible seams between the boundaries. For example,
```glsl
vec3 color = vec3(0.0f);
float t = u_Height / u_HeightScale; // [0, MaxHeight] -> [0, 1]
if (t < 0.10f)
    // Water
    color = vec3(66.0f/255.0f, 135.0f/255.0f, 245.0f/255.0f);
else if (t < 0.25f)
    // Sand
    color = texture(sandTexture, texCoord).rgb;
else if (t < 0.50f)
    // Grass
    color = texture(grassTexture, texCoord).rgb;
else if (t < 0.75f)
    // Rock
    color = texture(rockTexture, texCoord).rgb;
else
    // Snow
    color = texture(snowTexture, texCoord).rgb;
FragColor = vec4(color, 1.0);
```
To combat this, people often suggest blending textures; however, how that can be done is unknown to me. I am assuming one has to do more calculations that depend on neighbouring vertices, e.g. their heights. What is a concrete way of doing this?
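For what it's worth, here is one common way to get rid of the seams without any neighbour information: since the interpolated height is already available per fragment, you can cross-fade the layers with smoothstep ramps around each boundary. A sketch reusing the uniform and texture names from the snippet above; the band edges and blend widths are placeholder values to tune, not canonical ones.

```glsl
// Blend by height with smoothstep ramps instead of hard if/else branches.
vec3 waterColor = vec3(66.0, 135.0, 245.0) / 255.0;
float t = u_Height / u_HeightScale; // [0, MaxHeight] -> [0, 1]

vec3 sand  = texture(sandTexture,  texCoord).rgb;
vec3 grass = texture(grassTexture, texCoord).rgb;
vec3 rock  = texture(rockTexture,  texCoord).rgb;
vec3 snow  = texture(snowTexture,  texCoord).rgb;

// Each smoothstep ramps from 0 to 1 across a narrow band around the old hard
// boundary, so adjacent layers cross-fade instead of producing a seam.
vec3 color = waterColor;
color = mix(color, sand,  smoothstep(0.08, 0.12, t));
color = mix(color, grass, smoothstep(0.22, 0.28, t));
color = mix(color, rock,  smoothstep(0.45, 0.55, t));
color = mix(color, snow,  smoothstep(0.70, 0.80, t));
FragColor = vec4(color, 1.0);
```

No neighbouring-vertex data is needed: the rasterizer already interpolates u_Height across each triangle, so adjacent fragments get smoothly varying t and hence smoothly varying blend weights.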
Two objects / bounding boxes may overlap partially or even completely. So there may be some bounding boxes with the exact same size and position that then need to be moved apart from each other, so that they end up next to each other. I guess I need a recursive algorithm or something like that, since I want each bounding box to "try" to stay as close to its original position as possible while still approaching even spacing.
Is there any kind of algorithm that already does exactly something like this? Or is there any way you can guide me to solve this problem? How complex is it really? Visually it is an easy task, but I've tried to code it and it doesn't seem that simple at all.
I need to implement it in JS, if possible without any complex external libraries etc.
Thanks a lot for your help!
Link to the sketch:
https://i.ibb.co/fYpyqpk/eualize-Boundingbox-Distribution.png
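One simple iterative approach in plain JS (no libraries), sketched here as a suggestion: along one axis, alternate a "pull each box toward its original position" step with a "push overlapping neighbours apart" step until things settle. The function name, the 0.5 pull factor and the fixed iteration count are my own choices, not a standard algorithm's API.

```javascript
// De-overlap axis-aligned boxes along one axis while keeping each box as
// close as possible to its original position. Run once per axis if needed.
function spreadBoxes(boxes, gap = 0, iterations = 50) {
  // boxes: array of { x, width }. Returns an array of new x positions,
  // in the same order as the input.
  const order = boxes
    .map((b, i) => ({ i, target: b.x, width: b.width }))
    .sort((a, b) => a.target - b.target); // keep left-to-right order stable
  const pos = order.map((o) => o.target);
  for (let it = 0; it < iterations; it++) {
    // Pull each box halfway back toward its original position...
    for (let k = 0; k < pos.length; k++) {
      pos[k] += (order[k].target - pos[k]) * 0.5;
    }
    // ...then resolve overlaps, splitting the required push between neighbours
    // (a forward and a backward sweep so pushes propagate both ways).
    for (let k = 1; k < pos.length; k++) {
      const overlap = pos[k - 1] + order[k - 1].width + gap - pos[k];
      if (overlap > 0) { pos[k - 1] -= overlap / 2; pos[k] += overlap / 2; }
    }
    for (let k = pos.length - 1; k > 0; k--) {
      const overlap = pos[k - 1] + order[k - 1].width + gap - pos[k];
      if (overlap > 0) { pos[k - 1] -= overlap / 2; pos[k] += overlap / 2; }
    }
  }
  const out = new Array(boxes.length);
  order.forEach((o, k) => { out[o.i] = pos[k]; });
  return out;
}
```

Two identical boxes end up split symmetrically around their shared original position, and boxes that never overlapped stay where they were, which matches the "keep original positions as close as possible" goal. No recursion needed; it's just relaxation.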
I have 3 subjects to study:
And I work as a freelance developer - bit of blender and three.js
So far I've been doing this:
But my math is very weak, so I'm thinking I should spend 1 hour a day on algebra so I can speed up my math by the time I get to 3D math.
Question is, is there a better way to plan out my day? Or keep it to what I have, one subject at a time?
Thank you.
Below is the code for a simple wormhole effect shader made in shadertoy.com.
I understand everything else except:
uv.x = time + .1/(r);
uv.y = time + sin(time/2.575) + 3.0*a/3.1416;
Here's what I think it's doing:
`uv.x = time + .1/r` is making the later `texture()` call take the color for the pixel from the right side of the wormhole's texture, making it look like it's moving forward. And I think the `+ .1/r` is there to make the center of the wormhole distort more.
I have no idea what the `uv.y` is doing.
```glsl
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    float time = iTime;
    vec2 p = -1.0 + 2.0 * fragCoord.xy / iResolution.xy; // pixel -> [-1, 1]
    vec2 uv;
    p.x += 0.5 * sin(time);        // wobble the tunnel centre over time
    p.y += 0.5 * cos(time * 1.64);
    float r = length(p);           // distance from the tunnel centre
    float a = atan(p.y, p.x);      // angle around the centre, in [-pi, pi]
    uv.x = time + .1 / (r);        // radial coordinate: blows up near the centre, stretching the texture inward
    uv.y = time + sin(time / 2.575) + 3.0 * a / 3.1416; // angular coordinate: a/pi wraps the texture around the tunnel; the sin term slowly twists it
    float w = r * r;               // computed but unused
    vec3 col = texture(iChannel0, uv).xyz;
    fragColor = vec4(col, 1.0);
}
```
Exciting News! Join Our Vibrant CG & VFX Community on Discord!
Are you passionate about Computer Graphics & Visual Effects? Look no further! Our Discord channel is the ultimate hub for professionals, enthusiasts, and newcomers alike!
Dive into a world of:
- Latest News: Stay updated with the hottest trends and breakthroughs in CG & VFX.
- Creative Challenges: Fuel your creativity with stimulating challenges and competitions.
- In-depth Tutorials: Learn from the best with tutorials covering a wide range of topics.
- Top Tools & Resources: Discover essential tools and resources to level up your skills.
- Networking Opportunities: Connect with industry professionals and fellow enthusiasts for collaborations and discussions.
Don't miss out on the opportunity to be a part of our dynamic community! Click the link below to join now and unlock a world of endless possibilities in CG & VFX! #CG #VFX #DiscordCommunity #JoinUs
Join Now!!!
https://discord.gg/TRrw4ZZaXt
Maybe my understanding is incorrect, but 3DGS is basically a point cloud formed by "ovals" (ellipsoids) whose colour varies depending on the viewpoint.
Just wondering why the ellipsoid is the preferred shape and not other shapes?
Is there a specific technique/paper that describes the process of finding the ideal shape for a single unit in a point cloud?
Many thanks!
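One commonly cited reason for anisotropic Gaussians, for what it's worth: a Gaussian stays a Gaussian under linear maps, so projecting a 3D Gaussian onto the screen yields a 2D Gaussian, i.e. an ellipse with a closed-form footprint that is cheap to rasterize and differentiable (this is the EWA-splatting lineage that the 3DGS paper builds on). A toy sketch of that closure property, using a simplified orthographic projection Jacobian of my own choosing rather than the paper's perspective one:

```python
# Sketch: project a 3D Gaussian's covariance to 2D via Sigma2 = J * Sigma3 * J^T.
# The result is again a valid (2x2) covariance, so the screen-space splat is an ellipse.

def project_covariance(cov3, J):
    """cov3: 3x3 covariance (list of lists), J: 2x3 projection Jacobian.
    Returns the 2x2 projected covariance J @ cov3 @ J^T."""
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]
    Jt = [[J[j][i] for j in range(2)] for i in range(3)]  # transpose of J
    return matmul(matmul(J, cov3), Jt)
```

Other shapes (boxes, general meshes) lack this closed-form projection and smooth everywhere-differentiable falloff, which is what makes gradient-based optimization of millions of primitives tractable.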
I don't know if y'all have seen those presentations that are literally like spiderweb maps, where you click different locations on it to look at that info. Can anyone help me pinpoint what to use to create one? I'm trying to create a visual interactive diagram of different OSes and related things for class. Any ideas?
I challenged myself to recreate the Walt Disney castle after a 5-year hiatus. The new project was built using Blender 4.1, while the previous one was created with Blender 2.79.
I would love to hear your thoughts on the comparison between the two versions.
If you’re interested, you can watch the creation and animation here: https://youtu.be/l89sj4DiMNo
I am writing an application that uses Unity as a visualizer. Since the core of the application has to be somewhat portable/disconnected from Unity, I am using ILGPU (I can change to ComputeSharp if need be; I went with ILGPU as it supports more platforms) to write some performance-critical code that takes advantage of the GPU (an advection/diffusion fluid engine). This all works fine and the code runs, but now I want to display the simulation on screen.
The naive way would be to do the work on the GPU through ILGPU, fetch the data back to the CPU, send it over to Unity, fill a ComputeBuffer with it, and then point the shader at that buffer. This will be slow.
So my question is, is it possible to directly set the ComputeBuffer in Unity using the pointer from ILGPU or some other magic on the shader end?