/r/opengl
I have an X11 window and an OpenGL context set up. However, learning OpenGL properly is too steep a curve for me right now, and it's unimportant to what I want to accomplish anyway. Is there a library that I can use to draw to my OpenGL context in a highly abstracted way? I'm hoping for something similar to raylib, preferably with both 2D and 3D support, but 2D-only is fine as well. (Does a library like this even make sense?) Thanks in advance for any replies.
Edit: Thank you for your replies. The technologies I'm using: C99 (not C++), Xlib, and OpenGL. I am using Xlib because any abstraction on top of it removes access to useful Xlib API calls that I need for this project. I figured OpenGL would be the easiest thing to hook into my Xlib window, which is why I am using it. Ultimately the goal is to be able to easily draw shapes to the screen while still being able to call Xlib functions. If someone knows of a better option, please let me know.
I am writing a camera controller for my project and I have rewritten it many times, but for some reason, every time I look up or down about 50°, the camera starts rotating rapidly.
Here is my current code.
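For comparison, a standard yaw/pitch mouse-look update looks roughly like the sketch below (generic placeholder names, not my actual code). A common cause of the camera suddenly spinning when looking far up or down is an unclamped pitch: as the view vector approaches the world up axis, the cross product used to rebuild the camera basis degenerates.

```cpp
// Sketch of a typical yaw/pitch camera update with the pitch clamped.
// All names (Camera, sensitivity, dx/dy) are placeholders.
#include <cmath>
#include <glm/glm.hpp>

struct Camera {
    float yaw = -90.0f, pitch = 0.0f;
    glm::vec3 front{0, 0, -1}, up{0, 1, 0};
};

void updateLook(Camera& cam, float dx, float dy, float sensitivity = 0.1f) {
    cam.yaw   += dx * sensitivity;
    cam.pitch += dy * sensitivity;
    cam.pitch  = glm::clamp(cam.pitch, -89.0f, 89.0f); // keep away from straight up/down

    glm::vec3 f;
    f.x = std::cos(glm::radians(cam.yaw)) * std::cos(glm::radians(cam.pitch));
    f.y = std::sin(glm::radians(cam.pitch));
    f.z = std::sin(glm::radians(cam.yaw)) * std::cos(glm::radians(cam.pitch));
    cam.front = glm::normalize(f);

    glm::vec3 right = glm::normalize(glm::cross(cam.front, glm::vec3(0, 1, 0)));
    cam.up = glm::normalize(glm::cross(right, cam.front));
}
```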
Hey folks, I have a technical question regarding multiple materials for a terrain system (OpenGL 4.3).
At the moment, my engine supports a single material for the terrain, and of course it is possible to customize the material and write a custom GLSL shader for it, with different uniforms and so on. This way, it is possible to create a blend map texture in order to blend between different textures such as grass, rock, dirt, sand, and so on.
But recently, I've been thinking about a better way of doing this when it comes to UX. Ideally, I would like to allow the user to create multiple materials for the same terrain without having to modify the sources of the shaders being applied to the material and do this blending manually.
My initial idea is that the terrain system will store an array of materials and, starting from the second material in that array, also store an array of blend-map textures, one per material. It will then render the entire terrain once per material, applying the blend texture as an alpha mask (and `alpha = 1.0` for the first material, of course). It won't interfere with the rest of the alpha materials since terrain rendering is done during the opaque pass.
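Concretely, the per-material loop I have in mind would look something like this (rough sketch; `Terrain`, `Material`, and the helper names are placeholders for whatever the engine exposes, the GL loader is assumed to be included, and the blend state is only illustrative):

```cpp
// Sketch of the multi-pass idea: draw the terrain once per material and blend
// passes 2..N on top using that material's blend map as alpha. Placeholder types.
#include <vector>

struct Material { void bind(); };                 // shader + textures, engine-side
struct Terrain {
    std::vector<Material> materials;
    std::vector<GLuint>   blendMaps;              // one per material, from materials[1] on
};
void drawTerrainPatches(const Terrain& t);        // same geometry every pass

void drawTerrain(Terrain& terrain) {
    const int BLEND_MAP_UNIT = 7;                 // placeholder texture unit
    glDepthFunc(GL_LEQUAL);                       // later passes re-use the same depth
    glDepthMask(GL_FALSE);
    for (size_t i = 0; i < terrain.materials.size(); ++i) {
        terrain.materials[i].bind();
        if (i == 0) {
            glDisable(GL_BLEND);                  // base layer, alpha = 1.0
        } else {
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glActiveTexture(GL_TEXTURE0 + BLEND_MAP_UNIT);
            glBindTexture(GL_TEXTURE_2D, terrain.blendMaps[i - 1]);
        }
        drawTerrainPatches(terrain);
    }
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}
```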
The engine does have a depth pre-pass, but still, this approach will drastically increase the number of draw calls, overdraw, and bindings in general. So I'm not sure I'm happy with this idea, even though it is the best I was able to come up with in terms of user experience.
Do you all have a recommendation or a different take on it?
What is the best approach to render thousands of small RGB images to the screen every frame with OpenGL?
The RGB images are 10x10 to 30x30 pixel rectangles at different positions. They never overlap each other. There are ~2000 of these small images per frame.
It is very slow if I call glTexSubImage2D once for every image.
One thing I tried is to allocate one big block of memory, consolidate all the RGB data into it, and then call glTexSubImage2D only once per frame. But this doesn't always work, because the RGB data are not always contiguous in memory.
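For reference, the consolidation I tried looks roughly like the sketch below (names are placeholders; a GL loader header is assumed). The part I'm unsure about is whether the per-row memcpy into a CPU-side staging image is the right way to handle the non-contiguous source data, or whether there is something better (PBOs, etc.):

```cpp
// Sketch: consolidate ~2000 small non-overlapping RGB rects on the CPU, then do
// ONE glTexSubImage2D per frame instead of one per rect.
#include <cstdint>
#include <cstring>
#include <vector>

struct Rect { int x, y, w, h; const uint8_t* rgb; };  // tightly packed RGB8

void uploadRects(GLuint tex, int texW, int texH,
                 std::vector<uint8_t>& staging,        // texW * texH * 3 bytes
                 const std::vector<Rect>& rects) {
    for (const Rect& r : rects)
        for (int row = 0; row < r.h; ++row)            // row-by-row copy, since the
            std::memcpy(&staging[((r.y + row) * texW + r.x) * 3],   // source rects
                        r.rgb + row * r.w * 3, r.w * 3);            // aren't contiguous

    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);             // RGB rows aren't 4-byte aligned
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, texW, texH,
                    GL_RGB, GL_UNSIGNED_BYTE, staging.data());
    // Afterwards, draw a single textured quad (or instanced quads) sampling 'tex'.
}
```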
OpenGL Model Viewer
I have developed a hobby project: a 3D Viewer that reads and displays the most common 3D file formats supported by the Assimp library.
The link to the GitHub is https://github.com/sharjith/ModelViewer-Qt5
I am looking for contributors to this open-source project. Any suggestions to make the project visible to the open-source community so that it evolves are welcome.
I have the following code snippet:
const glm::mat4 rotate = glm::orientation({ 0, 1, 0 }, plane.Normal);
const glm::mat4 translate = glm::translate(plane.Position);
(*_PlaneTransforms)[_PlaneBatchedCount] = translate * rotate;
This gets run 40,000 times per frame for testing purposes. If I run it in the Release configuration (Visual Studio), I get ~130 FPS / 7 ms. However, if I run it in the Debug configuration, I get 8 FPS / 125 ms, meaning it's roughly 17x slower.
The profiler shows that the main culprits are the matrix multiply and glm::orientation, and there's pretty much no other OpenGL work going on.
So my question is: why is the GLM performance so terrible, especially since it's just floating-point math, which I feel shouldn't be all that optimizable (unless some SIMD path is being used that doesn't kick in under Debug?), and can I do anything to fix this? Thanks in advance.
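One thing I'm planning to try is GLM's own configuration macros, defined before any GLM include; they are real GLM config switches, but since MSVC Debug builds disable most inlining and optimization anyway, I'm not sure how much of the gap they can close:

```cpp
// GLM configuration macros; they must appear before the first GLM include.
// GLM_FORCE_INLINE / GLM_FORCE_INTRINSICS exist in GLM, but under an
// unoptimized Debug build they may not make a large difference.
#define GLM_FORCE_INLINE
#define GLM_FORCE_INTRINSICS
#define GLM_ENABLE_EXPERIMENTAL       // required for the gtx headers below
#include <glm/glm.hpp>
#include <glm/gtx/transform.hpp>      // glm::translate(vec3)
#include <glm/gtx/rotate_vector.hpp>  // glm::orientation
```

The other obvious lever is the Debug project settings themselves, e.g. enabling inline expansion (/Ob1) or optimizations for just the math-heavy files.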
Hello guys, I want to learn OpenGL. Do you have any books or courses to recommend?
I was learning to compile C/C++ graphics applications for the web using Emscripten. I have figured out most of the stuff. But, even after several attempts, I am unable to get mouse events in my OpenGL application when running in the browser.
I was using React on the frontend to create a (modern) minimal example; the Opengl Web repository contains the code. Most of the C++ code is taken from my other repository, which runs only natively.
Things I know so far:
glfwGetCursorPos() returns (0, 0) without any GLFW errors.
Emscripten docs suggest I should use functions like emscripten_set_mousemove_callback and emscripten_set_mousedown_callback for mouse events.
The Emscripten callback functions do work: they return the correct mouse coordinates (which I have tested by passing them to a uniform). But passing them to ImGui using ImGuiIO::AddMousePosEvent and ImGuiIO::AddMouseButtonEvent, or directly assigning ImGuiIO::MousePos and ImGuiIO::MouseClicked, doesn't seem to work, and the ImGui frames remain uninteractable.
I have discovered that by pressing the Tab key repeatedly, I was able to get the text box in the ImGui frame into focus and also write into it.
And now, I'm stuck :/
Any help would be greatly appreciated. :)
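For reference, the wiring I'm attempting looks roughly like this (simplified sketch; the "#canvas" selector and the coordinate handling are assumptions, and I suspect a mismatch between CSS-pixel and framebuffer coordinates versus io.DisplaySize could be part of the problem):

```cpp
// Sketch: forward browser mouse events to ImGui (1.87+ event API).
#include <emscripten/html5.h>
#include "imgui.h"

static EM_BOOL onMouseMove(int, const EmscriptenMouseEvent* e, void*) {
    // ImGui expects positions in the same space as io.DisplaySize.
    ImGui::GetIO().AddMousePosEvent((float)e->targetX, (float)e->targetY);
    return EM_TRUE;
}

static EM_BOOL onMouseButton(int eventType, const EmscriptenMouseEvent* e, void*) {
    int button = (e->button == 0) ? 0 : (e->button == 2) ? 1 : 2;   // L, R, M
    ImGui::GetIO().AddMouseButtonEvent(button, eventType == EMSCRIPTEN_EVENT_MOUSEDOWN);
    return EM_TRUE;
}

void installMouseCallbacks() {
    emscripten_set_mousemove_callback("#canvas", nullptr, EM_TRUE, onMouseMove);
    emscripten_set_mousedown_callback("#canvas", nullptr, EM_TRUE, onMouseButton);
    emscripten_set_mouseup_callback("#canvas",   nullptr, EM_TRUE, onMouseButton);
}
```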
I have a rigid body physics simulator made with raylib. However, considering how many things I have planned for it, like fluid simulation, soft body physics, and better rigid body physics, someone told me it would be worth switching over to something lower level for more efficient rendering 🤔.
I never thought I would take 2 hours to learn to draw a triangle 😭😭
Hi! I swapped Assimp for cgltf and I think it is more intuitive and easier. Now I will be trying to make animations work. :D (Also, it would look better if it had ambient occlusion, but that's for later.)
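For anyone curious why it feels simpler: the whole loading step with cgltf boils down to roughly this (minimal sketch, error handling and the actual mesh/animation walking omitted):

```cpp
// Minimal cgltf loading sketch; 'path' handling is simplified.
#define CGLTF_IMPLEMENTATION
#include "cgltf.h"

bool loadGltf(const char* path) {
    cgltf_options options = {};
    cgltf_data* data = nullptr;
    if (cgltf_parse_file(&options, path, &data) != cgltf_result_success)
        return false;
    if (cgltf_load_buffers(&options, data, path) != cgltf_result_success) {
        cgltf_free(data);
        return false;
    }
    // data->meshes, data->nodes and data->animations are ready to walk here.
    cgltf_free(data);
    return true;
}
```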
So when designing an OpenGL app, how do you avoid the hidden binding problem? I know about DSA, but I'm wondering how you would do this without it.
Say I want to make a mesh class: do I make a Mesh struct and have it contain the vertex and index data, and maybe also pointers to textures, shaders, etc., and then have some kind of Scene class that takes all the Mesh structs and draws them one by one, binding everything itself?
If I take that approach, how do you avoid binding things multiple times? Do you somehow keep track of what's currently bound? Do you somehow sort the meshes in such a way that redundant binds aren't possible?
Or is there a way to do the binding inside the Mesh class that avoids the hidden binding problem?
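To make the question concrete, this is the kind of structure I mean (sketch with made-up names, GL loader header assumed): the Scene owns the draw loop and tracks what is currently bound, so nothing gets re-bound unless it actually changes, and sorting the meshes by shader and then texture before the loop would cut the switches further. What I'm asking is whether this is the usual way to do it without DSA:

```cpp
// Sketch: Scene-owned draw loop that tracks bound state to avoid redundant binds.
#include <vector>

struct Mesh {
    GLuint  vao = 0;
    GLsizei indexCount = 0;
    GLuint  shader = 0;     // program this mesh wants
    GLuint  texture = 0;    // diffuse texture, 0 if none
};

class Scene {
public:
    std::vector<Mesh> meshes;

    void draw() const {
        GLuint boundShader = 0, boundTexture = 0, boundVao = 0;
        // Optionally sort 'meshes' by (shader, texture) beforehand.
        for (const Mesh& m : meshes) {
            if (m.shader != boundShader)   { glUseProgram(m.shader);                  boundShader  = m.shader; }
            if (m.texture != boundTexture) { glBindTexture(GL_TEXTURE_2D, m.texture); boundTexture = m.texture; }
            if (m.vao != boundVao)         { glBindVertexArray(m.vao);                boundVao     = m.vao; }
            glDrawElements(GL_TRIANGLES, m.indexCount, GL_UNSIGNED_INT, nullptr);
        }
    }
};
```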
Although some budget CPUs DO support the programmable pipeline.
Edit: I meant that the iGPU of budget CPUs may have fixed-function hardware.
Hello, so currently I have an object that I collect. The problem is that whenever I get close to it, it gets so big that it takes up the whole screen. Is there a fix for that?
Hello! So I'm a beginner in all of this. I have a terrain and a skybox; the yellow thing you see is my object. I need to place it on the terrain (right now it's just following the camera around), and I also need to place more of that same object along a certain path. How can I do that?
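From what I've gathered so far, the idea is to stop tying the object to the camera and instead give each copy its own model matrix in world space, with the height sampled from the terrain, roughly like this sketch (getTerrainHeight, drawObjectMesh and the uniform location are placeholders for whatever the terrain and shader actually expose; GL loader assumed):

```cpp
// Sketch: draw the same mesh at several world positions along a path,
// snapped to the terrain height. All names are placeholders.
#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

float getTerrainHeight(float x, float z);   // however the heightmap is queried
void  drawObjectMesh();                     // the existing draw call for the object

void drawAlongPath(GLuint shader, GLint modelLoc,
                   const std::vector<glm::vec3>& pathPoints) {
    glUseProgram(shader);
    for (glm::vec3 p : pathPoints) {
        p.y = getTerrainHeight(p.x, p.z);                    // sit on the terrain
        glm::mat4 model = glm::translate(glm::mat4(1.0f), p);
        glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));
        drawObjectMesh();
    }
}
```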
https://www.youtube.com/watch?v=3LY-IbMMZ7Y
Quick video showing how I've been spending my free time lately. I've gone from programming to designing (lmao programmer art) (:
Setting aside special cards with their own proprietary APIs...
Is OpenGL 1.1 still fully supported on newer hardware, or are there deprecated functions that won't do anything?
Any suggestions on debugging OpenGL? Some calls don't produce any error messages, and you might just get a black screen. I'm using macOS.
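(For context: core-profile GL on macOS tops out at 4.1, so glDebugMessageCallback / KHR_debug isn't available there; the common fallback is a glGetError wrapper macro, sketched below assuming a GL loader header is included.)

```cpp
// Sketch of a glGetError-based check macro for platforms without KHR_debug.
#include <cstdio>

#define GL_CHECK(call)                                                      \
    do {                                                                    \
        call;                                                               \
        for (GLenum err; (err = glGetError()) != GL_NO_ERROR; )             \
            std::fprintf(stderr, "GL error 0x%04X at %s:%d (%s)\n",         \
                         err, __FILE__, __LINE__, #call);                   \
    } while (0)

// Usage:
//   GL_CHECK(glBindTexture(GL_TEXTURE_2D, tex));
//   GL_CHECK(glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_INT, nullptr));
```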
Does anyone have any idea why this rendering glitch is happening?
The blend mode is set correctly.
I use batch rendering and ECS, and the problem happens only with textures.
Without textures, the pixel-scattering kind of effect doesn't happen.
It's been a long time since I touched this project, and I think this started happening after I set up batch rendering and framebuffers (I don't remember which one).
I just wanted to know what the problem could be.
Edit: the glitch effect is a batch rendering problem, because turning it off (setting maxQuads to 1) makes the glitch go away, but the blending still doesn't work.
Edit 2: for problem 2, I'm just using NVIDIA's GPU and will fix it later. Problem 1 is a batch rendering problem, but I don't know what to do about problem 3.
I think I understand the basics of framebuffers and rendering, but it doesn't seem to be fully sticking in my brain; I can't seem to fully grasp the concept.
First, you have the default framebuffer, which I believe is created whenever the OpenGL context or window is, and this is the only framebuffer that's connected to the screen, in the sense that stuff shows up on screen when you use it.
Then you can create your own framebuffer, whose purpose isn't fully clear to me: it's either essentially a texture, or the place where everything is stored (the end result/output from the draw calls).
Lastly, you can bind shaders, which tell the GPU which vertex and fragment shader to use during the pipeline; you can bind textures, which I believe assigns them to a texture unit that can be used in shaders; and then you have the draw calls, which process everything and store the result in a framebuffer, which then needs to be copied over to the default framebuffer.
Apologies if this was lengthy, but that's my understanding of it all, which I don't think is that far off?
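To put my understanding into code, the pattern would look roughly like this sketch (placeholder sizes, no error handling, GL loader assumed): an offscreen framebuffer is a container with a texture attached, you render into it, and then you blit (or draw it as a textured quad) into the default framebuffer that the window shows.

```cpp
// Sketch of my understanding: render into an offscreen framebuffer backed by a
// texture, then blit the result into the default framebuffer (the window).
const int width = 1280, height = 720;   // placeholder sizes
GLuint fbo = 0, colorTex = 0;

void createOffscreenFramebuffer() {
    // A texture to receive the rendering...
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    // ...attached to the framebuffer object.
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    // glCheckFramebufferStatus(GL_FRAMEBUFFER) should now be GL_FRAMEBUFFER_COMPLETE.
}

void renderFrame() {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);   // draw calls now land in colorTex
    glClear(GL_COLOR_BUFFER_BIT);
    // ... bind shaders/textures and issue draw calls here ...

    // Copy the result to the default framebuffer (0), which the window displays.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}
```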
I've been learning OpenGL for months now, and I just decided to make my first 2D game with it in C. All is well and good: I built everything from input to drawing to shader handling, little things, and even tilesets, and now I have a pretty good workflow. Here's the problem: I wanted to get collisions working, but I wanted a solution I could reuse in every 2D game I make, not something game-specific, so I decided to use what I knew existed because of Godot: Box2D.
Here comes the problem: there are no good docs, any videos about using it are at least 11 years old, and even though their sample program is open source, it isn't clear and is put together weirdly.
For supposedly being the best 2D physics engine, there is barely any public usage: no repos using it other than game engines or simple simulations with SDL's renderer, and zero standalone examples, and it's frustrating to learn.
If anyone here sees this and knows of somewhere I could learn from, could you please point me to it?
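From what I've pieced together so far, the core of it (Box2D 2.4 with the C++ API; v3 switched to a C API, so this won't match that) boils down to something like this sketch: one world, bodies created from defs plus fixtures, stepped at a fixed timestep, with the body positions feeding the renderer:

```cpp
// Sketch of the classic Box2D 2.4 "hello world": one dynamic box falling
// under gravity; the stepped position would drive the sprite in the game.
#include <box2d/box2d.h>
#include <cstdio>

int main() {
    b2World world(b2Vec2(0.0f, -10.0f));       // gravity

    b2BodyDef bodyDef;
    bodyDef.type = b2_dynamicBody;
    bodyDef.position.Set(0.0f, 10.0f);
    b2Body* body = world.CreateBody(&bodyDef);

    b2PolygonShape box;
    box.SetAsBox(0.5f, 0.5f);                  // half-extents, in meters
    b2FixtureDef fixture;
    fixture.shape = &box;
    fixture.density = 1.0f;
    fixture.friction = 0.3f;
    body->CreateFixture(&fixture);

    for (int i = 0; i < 60; ++i) {
        world.Step(1.0f / 60.0f, 8, 3);        // fixed timestep, velocity/position iters
        b2Vec2 p = body->GetPosition();
        std::printf("y = %.2f\n", p.y);        // p.x / p.y would place the sprite
    }
}
```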