/r/opengl

News, information and discussion about OpenGL development.

/r/opengl

28,170 Subscribers

4

Prisma Engine update

An example of a demo scene using clustered deferred and forward shading to handle many lights smoothly, with some recent updates: GPU-sorted transparencies, terrain rendering with height-based texturing, and frustum-culled grass that is also animated. It runs at 120 FPS with 500+ different animated meshes whose updates run on different threads.

1 Comment
2024/11/02
14:25 UTC

1

How do you make use of the local size of work groups when running a compute shader?

If you're going to process an image, you define the work group grid from the dimensions of that image. If you're going to render full screen, it's similar: you define the work group grid from the dimensions of the screen. If you have tons of vertices to process, then you probably want to define the grid as (x, y, z) where x*y*z approximately equals the vertex count.

What I don't see is how I can make use of the local size of a group. Whatever the input is, pixels or vertices, they're all 'atomic', not divisible. Maybe the local invocations of one work group are useful for blur effects or texture upscaling, since you have to access neighboring pixels? I imagine it's like how the rasterizer packs fragments into 2x2 quads to render one fragment (to make dFdx and dFdy available).

However, let's say I'm doing ray tracing in a compute shader. Can I make use of local invocations there as well? (i.e. change the local size rather than leaving it at the default of (1,1,1).) I've heard that I should try to pack work into one work group (make it many invocations of that group), because invocations inside one group run faster than many work groups with only one invocation each. Can I arbitrarily divide the screen dimensions by 64 (or 100, or 144), and then allocate 8x8x1 (or 10x10x1, or 12x12x1) invocations for each work group?
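(For reference, a rough sketch of the usual pattern for a full-screen compute pass, with illustrative names: pick a fixed local size such as 8x8, dispatch enough work groups to cover the image, and bounds-check inside the shader.)

// GLSL side: each 8x8 work group covers one tile of the output image.
const char* kComputeSrc = R"(#version 430
layout(local_size_x = 8, local_size_y = 8, local_size_z = 1) in;
layout(rgba8, binding = 0) writeonly uniform image2D uOutput;
void main()
{
    ivec2 p = ivec2(gl_GlobalInvocationID.xy);
    if (any(greaterThanEqual(p, imageSize(uOutput))))
        return;                            // guard the rounded-up edge tiles
    imageStore(uOutput, p, vec4(1.0));     // placeholder per-pixel work
})";

// C++ side: round the group counts up so the whole image is covered.
GLuint groupsX = (width  + 7) / 8;
GLuint groupsY = (height + 7) / 8;
glDispatchCompute(groupsX, groupsY, 1);
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);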

2 Comments
2024/11/02
08:01 UTC

3

Hello, I am having some struggles using Assimp to load the Sponza scene

https://preview.redd.it/ktx3mej94eyd1.png?width=1920&format=png&auto=webp&s=881f0b98ec6de4351f0b7f64ca19bc32167e9a84

https://preview.redd.it/i9e0dxnh4eyd1.png?width=1920&format=png&auto=webp&s=2ee83049397cd2bec79e4e96ad07b6757ecc80d5

In the scene rendering, I'm adding an offset to each sub-mesh's position, and that shows that each sub-mesh stores roughly the same geometry at the same transform.

static const uint32_t s_AssimpImportFlags =
aiProcess_CalcTangentSpace          
| aiProcess_Triangulate             
| aiProcess_SortByPType             
| aiProcess_GenSmoothNormals
| aiProcess_GenUVCoords
| aiProcess_OptimizeGraph
| aiProcess_OptimizeMeshes          
| aiProcess_JoinIdenticalVertices
| aiProcess_LimitBoneWeights        
| aiProcess_ValidateDataStructure   
| aiProcess_GlobalScale
;

AssimpImporter::AssimpImporter( const IO::FilePath& a_FilePath )
: m_FilePath( a_FilePath )
{
}

SharedPtr<MeshSource> AssimpImporter::ImportMeshSource( const MeshSourceImportSettings& a_ImportSettings )
{
    SharedPtr<MeshSource> meshSource = MakeShared<MeshSource>();

    Assimp::Importer importer;
    //importer.SetPropertyBool( AI_CONFIG_IMPORT_FBX_PRESERVE_PIVOTS, false );
    importer.SetPropertyFloat( AI_CONFIG_GLOBAL_SCALE_FACTOR_KEY, a_ImportSettings.Scale );

    const aiScene* scene = importer.ReadFile( m_FilePath.ToString().c_str(), s_AssimpImportFlags );
    if ( !scene )
    {
        TE_CORE_ERROR( "[AssimpImporter] Failed to load mesh source from: {0}", m_FilePath.ToString() );
        return nullptr;
    }

    ProcessNode( meshSource, (void*)scene, scene->mRootNode, Matrix4( 1.0f ) );

    //ExtractMaterials( (void*)scene, meshSource );

    // Create GPU buffers
    meshSource->m_VAO = VertexArray::Create();

    BufferLayout layout =
    {
        { ShaderDataType::Float3, "a_Position" },
        { ShaderDataType::Float3, "a_Normal" },
        { ShaderDataType::Float3, "a_Tangent" },
        { ShaderDataType::Float3, "a_Bitangent" },
        { ShaderDataType::Float2, "a_UV" },
    };

    meshSource->m_VBO = VertexBuffer::Create( (float*)( meshSource->m_Vertices.data() ), (uint32_t)( meshSource->m_Vertices.size() * sizeof( Vertex ) ) );
    meshSource->m_VBO->SetLayout( layout );
    meshSource->m_VAO->AddVertexBuffer( meshSource->m_VBO );

    meshSource->m_IBO = IndexBuffer::Create( meshSource->m_Indices.data(), (uint32_t)( meshSource->m_Indices.size() ) );
    meshSource->m_VAO->SetIndexBuffer( meshSource->m_IBO );

    return meshSource;
}

void AssimpImporter::ProcessNode( SharedPtr<MeshSource>& a_MeshSource, const void* a_AssimpScene, void* a_AssimpNode, const Matrix4& a_ParentTransform )
{
    const aiScene* a_Scene = static_cast<const aiScene*>( a_AssimpScene );
    const aiNode* a_Node = static_cast<aiNode*>( a_AssimpNode );

    Matrix4 localTransform = Util::Mat4FromAIMatrix4x4( a_Node->mTransformation );
    Matrix4 transform = a_ParentTransform * localTransform;

    // Process submeshes
    for ( uint32_t i = 0; i < a_Node->mNumMeshes; i++ )
    {
        uint32_t submeshIndex = a_Node->mMeshes[i];
        SubMesh submesh = ProcessSubMesh( a_MeshSource, a_Scene, a_Scene->mMeshes[submeshIndex] );
        submesh.Name = a_Node->mName.C_Str();
        submesh.Transform = transform;
        submesh.LocalTransform = localTransform;

        a_MeshSource->m_SubMeshes.push_back( submesh );
    }

    // Recurse: process children
    for ( uint32_t i = 0; i < a_Node->mNumChildren; i++ )
    {
        ProcessNode( a_MeshSource, a_Scene, a_Node->mChildren[i], transform );
    }
}

SubMesh AssimpImporter::ProcessSubMesh( SharedPtr<MeshSource>& a_MeshSource, const void* a_AssimpScene, void* a_AssimpMesh )
{
    const aiScene* a_Scene = static_cast<const aiScene*>( a_AssimpScene );
    const aiMesh* a_Mesh = static_cast<aiMesh*>( a_AssimpMesh );

    SubMesh submesh;

    // Process Vertices
    for ( uint32_t i = 0; i < a_Mesh->mNumVertices; ++i )
    {
        Vertex vertex;
        vertex.Position = { a_Mesh->mVertices[i].x, a_Mesh->mVertices[i].y, a_Mesh->mVertices[i].z };
        vertex.Normal = { a_Mesh->mNormals[i].x, a_Mesh->mNormals[i].y, a_Mesh->mNormals[i].z };

        if ( a_Mesh->HasTangentsAndBitangents() )
        {
            vertex.Tangent = { a_Mesh->mTangents[i].x, a_Mesh->mTangents[i].y, a_Mesh->mTangents[i].z };
            vertex.Bitangent = { a_Mesh->mBitangents[i].x, a_Mesh->mBitangents[i].y, a_Mesh->mBitangents[i].z };
        }

        // Only support one set of UVs ( for now? )
        if ( a_Mesh->HasTextureCoords( 0 ) )
        {
            vertex.UV = { a_Mesh->mTextureCoords[0][i].x, a_Mesh->mTextureCoords[0][i].y };
        }

        a_MeshSource->m_Vertices.push_back( vertex );
    }

    // Process Indices
    for ( uint32_t i = 0; i < a_Mesh->mNumFaces; ++i )
    {
        const aiFace& face = a_Mesh->mFaces[i];
        TE_CORE_ASSERT( face.mNumIndices == 3, "Face is not a triangle" );
        a_MeshSource->m_Indices.push_back( face.mIndices[0] );
        a_MeshSource->m_Indices.push_back( face.mIndices[1] );
        a_MeshSource->m_Indices.push_back( face.mIndices[2] );
    }

    submesh.BaseVertex = (uint32_t)a_MeshSource->m_Vertices.size() - a_Mesh->mNumVertices;
    submesh.BaseIndex = (uint32_t)a_MeshSource->m_Indices.size() - ( a_Mesh->mNumFaces * 3 );
    submesh.MaterialIndex = a_Mesh->mMaterialIndex;
    submesh.NumVertices = a_Mesh->mNumVertices;
    submesh.NumIndicies = a_Mesh->mNumFaces * 3;

    return submesh;
}

Here is a link to the repository https://github.com/AsherFarag/Tridium/tree/Asset-Manager
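(For context, a sub-mesh table like the one built above is normally consumed with per-sub-mesh base offsets; a rough sketch, with a hypothetical uniform setter rather than the repo's actual API:)

// One shared VAO/VBO/IBO, one draw per sub-mesh with its own transform.
for ( const SubMesh& sm : meshSource->m_SubMeshes )
{
    shader->SetMat4( "u_Model", sm.Transform );   // hypothetical uniform setter
    glDrawElementsBaseVertex( GL_TRIANGLES,
                              sm.NumIndicies,
                              GL_UNSIGNED_INT,
                              (void*)( sm.BaseIndex * sizeof( uint32_t ) ),
                              sm.BaseVertex );
}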

Thanks!

11 Comments
2024/11/02
01:29 UTC

8

Tessellating Bézier Curves and Surface

Hi all!

I've been doing some experimentation with using tessellation shaders to draw Bézier curves and surfaces.

While experimenting I noticed that a lot of the resources online are aimed at professionals or only cover simple Bézier curves.

Knowing what a "symmetrized tensor product" is can be useful for understanding what a Bézier triangle is, but I don't think it's necessary.

So I decided to turn some of my experiments into demonstrations to share here:

https://github.com/CleisthenesH/Tessellating-Bezier-Curves-and-Surface
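(For a flavour of the approach, here is a minimal sketch of a tessellation evaluation shader that evaluates a cubic Bézier curve from a 4-control-point patch via the Bernstein basis; it assumes the host sets glPatchParameteri(GL_PATCH_VERTICES, 4) and a control shader that sets the isoline tessellation levels, and it omits any camera transform.)

#version 410 core
layout(isolines, equal_spacing) in;
void main()
{
    float t = gl_TessCoord.x;              // parameter along the curve
    vec3 p0 = gl_in[0].gl_Position.xyz;
    vec3 p1 = gl_in[1].gl_Position.xyz;
    vec3 p2 = gl_in[2].gl_Position.xyz;
    vec3 p3 = gl_in[3].gl_Position.xyz;
    float u = 1.0 - t;
    // Cubic Bernstein polynomials blend the four control points.
    vec3 p = u*u*u*p0 + 3.0*u*u*t*p1 + 3.0*u*t*t*p2 + t*t*t*p3;
    gl_Position = vec4(p, 1.0);
}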

Please let me know what you think, as the response will inform how much time I spend improving the demonstrations (comments, number of demonstrations, maybe even documentation?).

And if you're looking for more theory on Bézier curves and surfaces, please consider checking out my notes on them here, under "blossom theory":

https://github.com/CleisthenesH/Math-Notes

2 Comments
2024/11/01
23:41 UTC

2

Using a non-constant value to access an Array of Textures in the Shader?

I'm building a small OpenGL renderer to play around with. But when trying to implement Wavefront (.obj) files I ran into a problem: I can't access my array of materials, because when I try to use 'index' (or any uniform) instead of a constant value, nothing renders, but no error is thrown either.

When there were no samplers in my struct, it worked how I imagined, but the moment I added them it stopped working, even if that part of the code would never be executed. I tried to narrow it down as much as possible; it almost certainly has to be a problem with this part.

#version 410
    out vec4 FragColor;

    in vec3 Normal;  
    in vec3 FragPos;  
    in vec2 TexCoords;
    flat in float MaterialIndex;

    struct Material  {
        sampler2D ambientTex; 
        sampler2D diffuseTex;
        sampler2D specularTex;
        sampler2D emitionTex;

        vec3 ambientVal;
        vec3 diffuseVal;
        vec3 specularVal;
        vec3 emitionVal;
        float shininess;
    }; 

    uniform Material material[16];
...


    uniform bool useTextureDiffuse;

    void main(){
        vec3 result = vec3(0,0,0);
        vec3 norm = normalize(Normal);
        int index = int(MaterialIndex);

        vec3 ambient = useTextureDiffuse ? ambientLight * texture(material[index].diffuseTex, TexCoords).rgb : ambientLight * material[index].diffuseVal;

        vec3 viewDir = normalize(viewPos - FragPos);

        result = ambient;
        result += CalcDirLight(dirLight, norm, viewDir, index);
        // rest of the lighting stuff

Is it just a general problem with my approach, or did I overlook a bug? If it's a problem with my implementation, how are you supposed to do it properly?
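(One relevant GLSL detail, offered as a general note rather than a diagnosis: arrays of samplers may only be indexed with dynamically uniform expressions (constant expressions in older GLSL versions), so a per-fragment index such as a flat varying is undefined behaviour, which often shows up exactly as "nothing renders, no error". A common workaround is a single sampler2DArray indexed by layer; a minimal sketch with illustrative names:)

#version 410 core
uniform sampler2DArray uDiffuseMaps;   // one layer per material
flat in float MaterialIndex;
in vec2 TexCoords;
out vec4 FragColor;
void main()
{
    // The third texture coordinate selects the layer; it may vary per fragment.
    FragColor = texture(uDiffuseMaps, vec3(TexCoords, MaterialIndex));
}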

5 Comments
2024/11/01
23:40 UTC

5

New video tutorial: 3D Camera using GLM

0 Comments
2024/11/01
21:55 UTC

0

Framebuffer blit with transparency?

Fairly new to framebuffers, so please correct me if I say something wrong. I want to render a UI on top of my game's content, and ChatGPT recommended framebuffers, so I did that. I give the framebuffer a texture, then call glTexSubImage2D to change part of the texture. Then I blit my framebuffer to the window. However, the background is black and covers up my game's content below it. It worked fine when using just a texture and GL_BLEND, but that doesn't work with the framebuffer. I know the background of my texture is completely transparent. Is there some way to fix this, or do I have to stick with a texture?
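(Worth noting as a general GL fact rather than a diagnosis: glBlitFramebuffer is a raw pixel copy and ignores GL_BLEND. One common alternative, sketched here with illustrative names, is to draw the FBO's color attachment over the scene as a blended textured quad.)

// The blit overwrites the scene with the UI's transparent background; drawing
// the FBO's color texture with blending enabled keeps the alpha instead.
glBindFramebuffer(GL_FRAMEBUFFER, 0);        // back to the default framebuffer
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glUseProgram(texturedQuadProgram);           // hypothetical fullscreen-quad shader
glBindTexture(GL_TEXTURE_2D, uiColorTex);    // the texture attached to the UI FBO
DrawFullscreenQuad();                        // hypothetical helper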

1 Comment
2024/11/01
20:09 UTC

28

A huge openGL engine stuck in 32 bit that I worked on for 10 years

Unfortunately, it has become a moving target to try to keep it working. For some reason FBOs are currently the issue: I created stacking support for offscreen buffers, but it stopped working a few years ago. If anyone knows why, I'd love to correct it. It's explained in the Issues section.

https://github.com/LAGameStudio/apolune

Issue: https://github.com/LAGameStudio/apolune/issues/3

34 Comments
2024/10/31
12:52 UTC

2

Downscaling a texture

[SOLVED] Hi, I've had this issue for a while now. Basically, I'm making a dithering shader and I think it would look best if the framebuffer's color attachment texture were downscaled. Unfortunately I haven't found anything useful to help me. Is there a way I can downscale the texture, or another way to do this?
(Using mipmap levels as a base didn't work for me and just displayed a black screen, and since I'm using OpenGL 3.3 I can't use glCopyImageSubData() or glTexStorage().)

EDIT: I finally figured it out! To downscale an image you create two framebuffers, one at screen resolution and one at the desired resolution. You render the scene into the regular (screen-resolution) framebuffer, and before switching to the default framebuffer you use:

glBindFramebuffer(GL_READ_FRAMEBUFFER, ScreenSizeResolutionFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, DesiredResolutionFBO);
glBlitFramebuffer(0, 0, ScreenWidth, ScreenHeight, 0, 0, DesiredWidth, DesiredHeight, GL_COLOR_BUFFER_BIT, GL_NEAREST);

More can be found in the Anti-Aliasing chapter on LearnOpenGL.com.

Note: if you want crisp pixels, use GL_NEAREST.

5 Comments
2024/10/31
04:52 UTC

2

Combining geometry shader and instancing?

SOLVED

Edit: I saw this post and decided to answer it. It's already answered, but looking through the answers, u/Asyx mentioned "multi draw indirect", which is EXACTLY what I need. Instead of sending a bunch of commands from the CPU, you send the commands (including their arguments) to the GPU once, then tell it to run all of them. Basically you wrap all your draw calls in one big draw call.

I recently discovered the magic of the geometry shader. I'm making a game like Minecraft with a lot of cubes, which have a problem: the 6 faces share 8 vertices, but each vertex needs 3 different texture coordinates, so each one has to be split into 3 vertices, which triples the number of projected vertices. A geometry shader can fix this. However, if I want to draw the same cube a ton of times, I can't use instancing, because geometry shaders and instancing aren't compatible (at least, gl_InstanceID isn't updated), so I have to send a draw call for each cube. Is there a way to fix this? ChatGPT (which is usually pretty helpful) doesn't get that instancing and geometry shaders are incompatible, so it's no help.
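(For reference, a rough sketch of multi draw indirect, which needs GL 4.3 or ARB_multi_draw_indirect: the draw parameters live in a GPU buffer, so many draws are issued with a single call. Names like BuildCommandsForVisibleChunks are hypothetical.)

// Matches the layout OpenGL expects for GL_DRAW_INDIRECT_BUFFER entries.
struct DrawArraysIndirectCommand {
    GLuint count;          // vertices per draw
    GLuint instanceCount;  // instances per draw
    GLuint first;          // first vertex
    GLuint baseInstance;   // usable to index per-draw data in the shader
};

std::vector<DrawArraysIndirectCommand> cmds = BuildCommandsForVisibleChunks(); // hypothetical helper

GLuint indirectBuffer;
glGenBuffers(1, &indirectBuffer);
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
glBufferData(GL_DRAW_INDIRECT_BUFFER,
             cmds.size() * sizeof(DrawArraysIndirectCommand),
             cmds.data(), GL_DYNAMIC_DRAW);

// One call replaces cmds.size() separate glDrawArraysInstanced calls.
glMultiDrawArraysIndirect(GL_TRIANGLES, nullptr, (GLsizei)cmds.size(), 0);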

2 Comments
2024/10/31
00:32 UTC

0

[Noob] New Vertices or Transformation?

I'm making a 2D gravity simulation in Python, and currently I'm trying to move from pyglet's (graphics library) built-in shape renderer to my own vertex-based renderer, so I can actually use shaders for my objects. I have everything working, and now I just need to start applying the movement to each of my circles (which are the planets), but I have no clue how to do this. I know that I could technically just create new vertices every frame, but wouldn't sending the transformations to the GPU using a UBO be better? The only solution I've figured out is to update the transformation matrix per object on the CPU, which completely negates the parallel processing of the GPU.

I know UBOs are used to send uniforms to the shader, but how do I specify which object gets which UBO?
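(One common approach, sketched with illustrative names: keep a single UBO holding an array of per-object transforms and index it in the vertex shader with gl_InstanceID, so "which object gets which matrix" is just the instance index.)

#version 330 core
layout(location = 0) in vec2 aPos;
layout(std140) uniform Transforms {
    mat4 uModel[256];          // one matrix per planet; 256 mat4 fits the 16 KB UBO minimum
};
uniform mat4 uViewProj;
void main()
{
    gl_Position = uViewProj * uModel[gl_InstanceID] * vec4(aPos, 0.0, 1.0);
}

// Host side: upload all matrices into the buffer bound to the Transforms block
// once per frame, then draw every planet with one instanced call such as
// glDrawArraysInstanced(GL_TRIANGLE_FAN, 0, vertexCount, planetCount).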

1 Comment
2024/10/31
00:16 UTC

4

Need help with clarification of VAO attribute and binding rules.

I've recently finished an OpenGL tutorial and now want to create something that works with more than the single VBO, VAO and EBO used in the tutorial. But I've noticed that I don't really understand the binding rules for these. After some research, I thought the system worked like this:

  1. A VAO is bound.
  2. A VBO is bound.
  3. VertexAttribPointer is called. This specifies the data layout and associates the attribute with the currently bound VBO
  4. (Optional) Bind different VBO in case the vertex data is split up into multiple buffers
  5. Call VertexAttribPointer again, new attribute is associated with current VBO
  6. Repeat...
  7. When DrawElements is called, vertex data is pulled from the VBOs associated with the current VAO. Currently bound VBO is irrelevant

But I've seen that you can apparently use the same VAO for different meshes stored in different VBOs for performance reasons, assuming they share the same vertex layout. How does this work? And how is the index buffer associated with the VAO? Could someone give me an actual full overview of the rules here? I haven't seen them explained anywhere in an easy-to-understand way.
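(For reference, a minimal sketch of the state a VAO records in non-DSA GL 3.3 style, matching the steps listed above; buffer names are illustrative.)

GLuint vao, positionVBO, normalVBO, ebo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &positionVBO);
glGenBuffers(1, &normalVBO);
glGenBuffers(1, &ebo);

glBindVertexArray(vao);

// Attribute 0 is recorded together with the buffer bound at this moment.
glBindBuffer(GL_ARRAY_BUFFER, positionVBO);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);

// Attribute 1 records a different buffer; attribute 0's association is untouched.
glBindBuffer(GL_ARRAY_BUFFER, normalVBO);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(1);

// The element array buffer binding is stored in the VAO itself.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);

glBindVertexArray(0); // the GL_ARRAY_BUFFER binding itself is global state, not VAO state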

Thanks in advance!

6 Comments
2024/10/30
18:05 UTC

0

Export blender 3d model to opengl

I want to export my 3D model from Blender as an OBJ file and load it in OpenGL (Code::Blocks, VS Code). Can someone help me with the whole process, step by step?

2 Comments
2024/10/30
15:05 UTC

4

Font Rendering using Texture Atlas: Which is the better method?

I'm trying to render a font efficiently and have decided to go with the texture atlas method (instead of an individual texture per character), as I will only be using ASCII characters. However, I'm not too sure how to go about adding each quad to the VBO.

There are 3 methods that I read about:

  1. Each character has its width/height and texture offset stored. The texture coordinates will be calculated for each character in the string and added to the empty VBO. Transform mat3 passed as uniform array.
  2. Each character has a fixed texture width/height, so only the texture offset is stored. Think of it as a fixed quad, and i'm only moving that quad around. Texture offset and Transform mat3 passed as uniform array.
  3. Like (1), but texture coordinates for each character are calculated at load-time and stored into a map, to be reused.

(2) will allow me to minimise the memory used. For example, a string of 100 characters only needs 1 quad in the VBO + glDrawElementsInstanced(100). To achieve this I will have to get the max width/height of the largest character and pad the other characters, so that every character is stored in the atlas in a fixed box of, say, 70x70 pixels.

(3) makes more sense than (1), but I will have to store 255 characters * 4 vertices * 8 bytes (size of a vec2) = 8160 bytes, or about 8 KB, of character texture coordinates. Not that that's terrible though.
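(For what it's worth, a rough sketch of method (2) with illustrative names: one shared unit quad, with each glyph's screen position and atlas offset supplied per instance.)

struct GlyphInstance {
    float screenPos[2];    // where to place this character
    float atlasOffset[2];  // top-left of the glyph's fixed-size cell in the atlas
};

std::vector<GlyphInstance> instances = LayOutString("Hello"); // hypothetical helper

GLuint instanceVBO;
glGenBuffers(1, &instanceVBO);
glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
glBufferData(GL_ARRAY_BUFFER, instances.size() * sizeof(GlyphInstance),
             instances.data(), GL_STREAM_DRAW);

// Attributes 1 and 2 advance once per instance instead of once per vertex.
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(GlyphInstance), (void*)0);
glEnableVertexAttribArray(1);
glVertexAttribDivisor(1, 1);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(GlyphInstance),
                      (void*)(2 * sizeof(float)));
glEnableVertexAttribArray(2);
glVertexAttribDivisor(2, 1);

// One indexed quad (6 indices in the bound EBO), drawn once per character.
glDrawElementsInstanced(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr,
                        (GLsizei)instances.size());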

Which method is best? I can probably get away with using 1 texture per character instead, but curious which is better.

Also is batch rendering one string efficient, or should I get all strings and batch render them all at the end of each frame?

11 Comments
2024/10/30
04:02 UTC

2

C and OpenGL project having struct values corrupted

I'm programming a Minecraft clone using C and OpenGL, and I'm having an issue where a texture struct of mine seems to be getting corrupted somehow. I set the texture filter initially and it has the correct value, but later, when I bind the texture, the values are all strange integers that are definitely not correct, and I can't figure out why. If anyone could help find out why this is happening it would be much appreciated, as I really am not sure. Thanks.

I have tried printing out values and found that everything is initialised correctly, but when I bind the texture later it has messed-up values, which causes OpenGL invalid operation errors at the glBindTexture(GL_TEXTURE_2D, texture->gl_id) line. It also means that the blocks are mostly black and untextured, and the ones that are textured don't have a consistent texture filter.

However, if I remove the tilemap_bind(&block->tilemap); line inside the block_draw function, then everything seems to work fine. But surely adding this line shouldn't cause all these errors, and it would make sense to bind the tilemap before drawing.

Here is the github repo for the project

2 Comments
2024/10/29
15:19 UTC

2

Manually modifying clip-space Z stably in vertex shader?

So, since I know this is an odd use case: in Unity, I have a shader I've written where, at the end of the vertex shader, an optional variable nudges the Z value up or down in clip space. The purpose is mainly to alleviate visual artifacts caused by clothes clipping during animation (namely skirts/robes). While I know this isn't a perfect solution (if body parts clip out sideways they'll still show), it works well enough with the camera views I'm using. It's kind of a way of semi-disabling ZTest, but not entirely.

However, I've noticed that depending on how zoomed out the camera is, how far back an item is nudged changes. As in, a leg which was previously just displaced behind the front of the skirt (good), is now also displaced behind the back of the skirt (bad).

I'm pretty sure there's two issues here, first that the Z coordinate in clip space isn't linear, and second that I have no idea what I'm doing when it comes to the W coordinate (I know semi-conceptually that it normalizes things, but not how it mathematically relates to xyz enough to manipulate it).

The best results I've managed so far come from essentially stopping after the View matrix, computing two vertex positions against the Projection matrix (one modified, one unmodified), then combining the modified Z/W coordinates with the unmodified X/Y. This caused the vertex to move around on the screen though (since I was pairing a modified W with X/Y values it wasn't meant for), so using the scientific method of brute force I arrived at this:

float4 workingPosition = mul((float4x4) UNITY_MATRIX_M, v.vertex);
workingPosition = mul((float4x4) UNITY_MATRIX_V, workingPosition);
float4 unmodpos = workingPosition;
float4 modpos = workingPosition;
modpos.z += _ModelZBias*100;
unmodpos = mul((float4x4) UNITY_MATRIX_P, unmodpos);
modpos = mul((float4x4) UNITY_MATRIX_P, modpos);
o.pos = unmodpos;//clipPosition;
float unmodzw = unmodpos.z / unmodpos.w;
float modzw = modpos.z / modpos.w;
float zratio = ( unmodzw/ modzw);
//o.pos.z = modpos.z;
o.pos.zw = modpos.zw;
o.pos.x *= zratio;
o.pos.y *= zratio;

This does significantly better at maintaining stable Z values than my current in-use solution, but it doesn't keep X/Y completely stable either. It slows their drift much more than without the "zratio" trick, but still not enough to be more usable than just sticking with my current non-stable version and dealing with it.

So I guess the question is: Is there any more intelligent way of moving a Z coordinate after projection/clip space, in such a way that the distance moved is equal to a specific world-space distance?

2 Comments
2024/10/29
03:49 UTC

1

point shadows in opengl

So I was reading the learnopengl.com point shadows tutorial and I don't understand how it uses a geometry shader instead of rendering the whole scene into a cube map face by face. Rendering the scene the straightforward way makes sense: you look from the light's point of view and capture an image. But how do you use a geometry shader instead of rendering the scene 6 times from the light's perspective?
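(Roughly, the trick in that tutorial is to attach the whole cube map as a layered framebuffer and let a geometry shader re-emit each triangle once per face, routing it with gl_Layer; a sketch close to the tutorial's shader:)

#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 18) out;   // 6 faces * 3 vertices

uniform mat4 shadowMatrices[6];   // light projection * per-face view matrix
out vec4 FragPos;                 // world-space position, used for the distance write

void main()
{
    for (int face = 0; face < 6; ++face)
    {
        gl_Layer = face;                        // selects the cube map face to render into
        for (int i = 0; i < 3; ++i)
        {
            FragPos = gl_in[i].gl_Position;     // world-space position from the vertex shader
            gl_Position = shadowMatrices[face] * FragPos;
            EmitVertex();
        }
        EndPrimitive();
    }
}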

25 Comments
2024/10/28
22:11 UTC

1

Using Compute Shader in OpenGL ES with kotlin

So I am new to shaders and I want to test out how shaders and compute shaders work.

The compute shader should just color a pixel white and return it, and then the regular shader should use that color to paint the bottom of the screen.

The regular shader works fine, but when I try to implement the compute shader, it just does not work.

Please take a look at this stack overflow issue
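(For reference, the smallest possible "write white" compute shader for OpenGL ES, as a sketch assuming an ES 3.1 context with an image bound to unit 0; the host still needs glDispatchCompute plus a glMemoryBarrier before reading the image.)

#version 310 es
layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba8, binding = 0) writeonly uniform highp image2D uOutput;
void main()
{
    imageStore(uOutput, ivec2(gl_GlobalInvocationID.xy), vec4(1.0));
}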

4 Comments
2024/10/28
15:30 UTC

2

Multipass shaders in opengl

Hi, I am trying to apply a Sobel filter to an image to do some computations, but I'm faced with the problem that I have to grayscale the image before applying the Sobel filter. In Unity you would just make a grayscale pass and a Sobel filter pass, but after some research I couldn't find how to do that here. Is there a way to apply several shader passes?
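(In raw GL, "passes" are just successive draws into different framebuffers; a rough sketch with illustrative names: render the grayscale pass into an offscreen texture, then feed that texture to the Sobel pass.)

glBindFramebuffer(GL_FRAMEBUFFER, grayscaleFBO);   // has grayscaleTex as its color attachment
glUseProgram(grayscaleProgram);
glBindTexture(GL_TEXTURE_2D, sourceImageTex);
DrawFullscreenQuad();                              // hypothetical helper

glBindFramebuffer(GL_FRAMEBUFFER, 0);              // or a further FBO for more passes
glUseProgram(sobelProgram);
glBindTexture(GL_TEXTURE_2D, grayscaleTex);        // output of pass 1 becomes input of pass 2
DrawFullscreenQuad();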

1 Comment
2024/10/28
03:03 UTC

2

How expo-gl works?

Hi everyone! Does anyone know exactly how expo-gl works?

I'm familiar with the concept of the bridge between the JavaScript VM and the native side in a React Native app. I'm currently developing a React Native photo editor using expo-gl for image processing (mostly through fragment shaders).

From what I understand, expo-gl isn’t a direct WebGL implementation because the JS runtime environment in a React Native app lacks the browser-specific API. Instead, expo-gl operates on the native side, relying mainly on OpenGL. I've also read that expo-gl bypasses the bridge and communicates with the native side differently. Is that true? If so, how exactly is that achieved?

I'm primarily interested in the technical side, not in code implementation or usage within my app — I’ve already got that part covered. Any insights would be greatly appreciated!

1 Comment
2024/10/27
16:08 UTC

5

Prefiltered environment map looks darker the further I move

EDIT - Solved: Thanks u/Th3HolyMoose for noticing that I'm using texture instead of textureLod

Hello, I am implementing a PBR renderer with a prefiltered map for the specular part of the ambient light, based on LearnOpenGL.
I am getting a weird artifact: the further I move from the spheres, the darker the prefiltered color gets, and it shows the quads that compose the sphere.

This is the gist of the code (full code below):

vec3 N = normalize(vNormal);
vec3 V = normalize(uCameraPosition - vPosition);
vec3 R = reflect(-V, N);
// LOD hardcoded to 0 for testing
vec3 prefilteredColor = texture(uPrefilteredEnvMap, R, 0).rgb;
color = vec4(prefilteredColor, 1.0);

(output: prefilteredColor) The further I move the darker it gets until it's completely dark

The problem appears farther away when the roughness is lower.

https://preview.redd.it/0o7bpvo4m4xd1.png?width=1666&format=png&auto=webp&s=3401e75dd9dca196d7663a9844d1afa0beb8cc53

The normals of the spheres are fine and uniform, as is the R vector, and they don't change when moving around.

color = vec4((N + 1.0) / 2.0, 1.0);

color = vec4((R + 1.0) / 2.0, 1.0);

This is the prefiltered map:

https://preview.redd.it/d2lr9xadg4xd1.png?width=1676&format=png&auto=webp&s=7f0b0eccf418d9fd10ead8e037384b87f833c4e9

One face (mipmaps) of the prefiltered map

I am out of ideas, I would greatly appreciate some help with this.

The fragment shader: https://github.com/AlexDicy/DicyEngine/blob/c72fed0e356670095f7df88879c06c1382f8de30/assets/shaders/default-shader.dshf
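(For reference, the fix mentioned in the edit is a one-line change in the gist above. The third argument of texture() is a bias, not an explicit LOD, so the hardware still picks a mip level from the screen-space derivatives of R, and as the spheres shrink on screen it samples ever higher mips of the prefiltered map. Selecting the mip explicitly needs textureLod:)

vec3 prefilteredColor = textureLod(uPrefilteredEnvMap, R, 0.0).rgb;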

8 Comments
2024/10/26
15:57 UTC

2

glMultiDrawIndirect sorting

Hi, I didn't find any info on whether glMultiDrawIndirect respects the order of the commands in the buffer when I call it. I need to sort draws for transparencies; does anyone know if it does, or is the only solution OIT? Thanks.

4 Comments
2024/10/26
15:14 UTC

5

Extracting Scaling, Rotation and Translation from an OBJ object?

I'm a beginner with OpenGL, but I'm hoping someone can help: is there a way to start by loading an OBJ object and extracting its scaling, rotation and translation from the object?

In other words, is there a platform I can use alongside OpenGL for such tasks when starting out? I understand there are many graphics programs which use OpenGL, and this kind of task could be accomplished within those programs.

10 Comments
2024/10/26
00:37 UTC

7

Uniform "overrides" pattern

I was wondering whether it's a common part of people's code design to have a function that sets a collection of uniforms, with another parameter that's a collection of overriding uniforms. An example would be shadow mapping, where you want to set all the same uniforms for the depth-pass shader as for the lighting shader, with the view and projection matrices being overridden.

A reasonable answer obviously is "why ask, do what you need to do". The thing is, since I'm in WebGL there's a tendency to over-utilize the looseness of JavaScript, as well as under-utilize parts of the regular OpenGL library like uniform buffers, so I thought I'd ask in anticipation of this, in case anyone has some design feedback. Thanks.

2 Comments
2024/10/25
22:29 UTC

6

Point Based Rendering

I have a point cloud. I want to add a light source (point light, area light or environment map), do some lighting computation on the points, and render them to a 2D image. I have an albedo map, normal map and specular residual for each point. I don't know where to start with the rendering. I was planning to build it from scratch and use Phong for the lighting computation, but once I started it looked like a lot of work. I did some searching, and there are a couple of possible solutions like OpenGL or PyTorch3D. In OpenGL, I couldn't find any tutorials that explain how to do point-based rendering. In PyTorch3D I found this tutorial, but currently the point renderer doesn't support adding a light source, as far as I understand.
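(As a starting point, and only a sketch under the assumption that each point carries position, normal and albedo as vertex attributes: a GL_POINTS draw with per-point Phong shading in the fragment shader is enough to get first images; names are illustrative.)

#version 330 core
in vec3 vNormal;        // passed per point from the vertex shader
in vec3 vAlbedo;
in vec3 vWorldPos;
uniform vec3 uLightPos;
uniform vec3 uViewPos;
out vec4 FragColor;
void main()
{
    vec3 N = normalize(vNormal);
    vec3 L = normalize(uLightPos - vWorldPos);
    vec3 V = normalize(uViewPos - vWorldPos);
    vec3 R = reflect(-L, N);
    vec3 diffuse  = max(dot(N, L), 0.0) * vAlbedo;
    vec3 specular = pow(max(dot(R, V), 0.0), 32.0) * vec3(0.2);
    FragColor = vec4(0.05 * vAlbedo + diffuse + specular, 1.0);
}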

9 Comments
2024/10/25
11:52 UTC
