/r/webgl
WebGL (Web Graphics Library): a JavaScript API for rendering interactive 3D graphics in compatible browsers without the use of plug-ins. WebGL programs consist of code written in JavaScript and shader code executed on a computer's Graphics Processing Unit (GPU). WebGL is designed and maintained by the non-profit Khronos Group.
I am currently taking a computer graphics course at university. The first assignment is to edit pre-existing code for a Sierpinski gasket so that it loops, changing the scale, color, and number of points on each pass. I am so lost and have been working on this for so long it's borderline embarrassing. I could really use some help on where to start. The code is in the GitHub link I provided, under codeupdate/02/gasket1.js and codeupdate/02/gasket1.html.
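In case a concrete skeleton helps, here is a rough sketch of the loop structure. This is hedged: initPoints, flatten, and u_Color are hypothetical stand-ins for whatever gasket1.js actually defines, not the real names.

let numPoints = 5000;
let scale = 1.0;

function regenerate() {
    // Rebuild the point set with the current parameters and re-upload it.
    const points = initPoints(numPoints, scale);   // hypothetical generator
    gl.bufferData(gl.ARRAY_BUFFER, flatten(points), gl.STATIC_DRAW);
}

function loop() {
    // Vary scale, color, and point count on every pass.
    numPoints = 1000 + Math.floor(Math.random() * 9000);
    scale = 0.5 + Math.random();
    gl.uniform4f(u_Color, Math.random(), Math.random(), Math.random(), 1.0);
    regenerate();
    gl.clear(gl.COLOR_BUFFER_BIT);
    gl.drawArrays(gl.POINTS, 0, numPoints);
    setTimeout(loop, 1000);                        // one pass per second
}
loop();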
https://metaory.github.io/glslmine/
As I was getting more into the graphics and shader world, I wanted an easy, fast way to browse through other people's collections. We have a few good sources, but they are all paginated and slow.
So I wrote a tiny script that collects preview thumbnails from a source and stores them locally. I still wanted a better browsing experience, so I made a simple app for my dump!
Later I moved my crawler into a CI job that does scheduled weekly fetches and deploys.
Currently there is only one data source, but I intend to add a few more soon.
The codebase is vanilla JavaScript, and you can find it here.
It's in the README of glslViewer, from the legendary patriciogonzalezvivo.
I've tried going through his other repositories and projects; so far, no luck.
Does anyone have any idea?
Hey👋. I published a library to load/parse OBJ and MTL files: timefold/obj. Free for everyone to use! Here is a StackBlitz example. Let me know if you find it useful 🙂
Hi all,
Recently I've been writing an infinite-canvas drawing website thing with WebGL 1,
but with a twist! The entire canvas is represented as a quadtree, and most operations are done on the CPU!
The only thing the GPU is responsible for is converting (part of) my beautiful quadtree into an image.
Since I need to pass my quadtree to the GPU, and WebGL 1 is wonderful, I've decided to pass the tree as a texture.
A node in my tree is represented as 20 bytes: 4 bytes for the color followed by four 32-bit indexes into the quadtree array (not byte offsets), so I can address 2^32 nodes, or 20 * 2^32 bytes of quadtree nodes.
My tree is sent into a texture (lazily), and I run a fragment shader on a fullscreen quad that takes an initial node index as a uniform. For every pixel it asks which quadrant of the current node the pixel falls in, then steps down the tree in that direction, up to 16 times. The resulting color is the color of the node it ends up at.
Now the problems! WebGL 1 only guarantees ~16-bit integers, and I need 32-bit integers for my indexes! So I've implemented 32-bit integers in an ivec4 as a sort of carry-save adder. I believe my implementation to be (GLSL ES 1.0) standard compliant.
However, I've had reports of my ~~shitty~~ amazing website not working properly on iPhone, and I'm not entirely sure why. The image I've attached is what happens when you convert texel values into their RGBA byte values improperly, and the problems I've seen on iPhone look very similar.
Does the iPhone not store RGBA textures as fixed-point with 8 bits of precision? From what I've read in the standards, I'm pretty sure it's supposed to...
Specifically, the lines I've changed to get the effect shown are:
ivec4 vec4tonum(vec4 val){
-    return ivec4(255.0*val + 0.5);
+    return ivec4(256.0*val);
}
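One plausible reading of the iPhone reports, offered as a guess since I can't test the device: a conforming GPU does store RGBA8 texels as 8-bit fixed point, but the float a mediump sampler hands the shader may not be exactly v/255.0. ivec4(256.0 * val) truncates, and 256.0 * v / 255.0 sits only v/255 above the integer v, so any downward wobble in the sampled value flips the result to v - 1 (and v = 255 maps out of byte range to 256). Round-to-nearest keeps a half-step safety margin on both sides:

// Robust inverse of an 8-bit UNORM texel fetch: round to nearest instead
// of truncating, so a small sampling error cannot change the recovered byte.
ivec4 vec4tonum(vec4 val) {
    return ivec4(255.0 * val + 0.5);
}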
project links:
https://github.com/cospplredman/da
https://cospplredman.github.io/da/
controls:
- left click = draw
- middle click = pan
- scroll = zoom
- ctrl-z / ctrl-y = undo / redo
edit: forgot to attach picture
So the idea is that, since the GPU and CPU are expected to share a memory pool when running WebGL applications, is it possible for the driver to have the GPU read the vertex buffers directly from RAM rather than from virtual VRAM, once the CPU is done with them?
https://kndlt.github.io/voxelviewer/
Made this voxel art path tracer for MagicaVoxel files (WebGL 1). The rendering runs entirely inside a GLSL shader.
Hi!
I’ve been working on integrating a fluid shader into my website, and while it works perfectly on my locally hosted site, I’ve hit a bit of a roadblock when trying to implement it on my live website. The shader breaks and doesn't work properly once deployed, and I’m not sure what’s going wrong.
I’m looking for a freelancer who could help me with the following:
If you're experienced with WebGL, JavaScript shaders, and website integration, I would greatly appreciate your assistance. Please let me know if you have availability and an hourly rate.
Thank you so much for your help!
Would someone want to do a cute online tutorial to get people started using WebGL with Wave Function Collapse procedural generation, using a cute open-source asset lib like https://kenney.nl/assets/castle-kit ?
I’ve been using RenderDoc on Windows to debug WebGL data. However, I recently switched to Mac and found out that RenderDoc doesn’t support macOS.
What tools or methods do people typically use on Mac to capture frames and debug WebGL? Any recommendations would be great. Thanks!
It seems to default to using the integrated GPU
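If the context-creation code doesn't already pass it (a guess, since the post doesn't show it), the standard knob for this is the powerPreference context attribute. Browsers are free to ignore the hint, but it is the documented way to ask for the discrete GPU:

const gl = canvas.getContext('webgl', {
    powerPreference: 'high-performance'  // hint: prefer the discrete GPU
});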
I need to create a 3D Koch snowflake. I was able to do it in 2D and then extrude it into a 3D model: I created many layers separated by dz = 0.1 and tried to connect adjacent vertices with triangles. Everything was working fine until I tried to connect the adjacent vertices.
I know this might be unclear, but if anyone is able to help, I can share my code.
Thank you.
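Without seeing the code, here is only a sketch of the usual approach: side walls between layers are built by walking two matching rings in lockstep, two triangles per edge. stitchLayers and the flat [x, y, z, ...] ring layout below are assumptions, not your actual structures.

// Stitch two adjacent layers of n matching vertices into side-wall triangles.
// ringA and ringB are flat [x, y, z, ...] arrays with the SAME vertex order.
function stitchLayers(ringA, ringB, n, out) {
    for (let i = 0; i < n; i++) {
        const j = (i + 1) % n;  // wrap around the ring
        const a0 = ringA.slice(3 * i, 3 * i + 3);
        const a1 = ringA.slice(3 * j, 3 * j + 3);
        const b0 = ringB.slice(3 * i, 3 * i + 3);
        const b1 = ringB.slice(3 * j, 3 * j + 3);
        // Two triangles per quad: (a0, a1, b0) and (a1, b1, b0).
        out.push(...a0, ...a1, ...b0, ...a1, ...b1, ...b0);
    }
}

The classic failure mode is rings whose vertices are not in the same order on every layer, which makes the quads cross themselves; that would match "everything worked until I connected adjacent vertices".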
A client who commissioned me for a video artwork (created in C4D) for their homepage has asked if I can now deliver it as WebGL. I'm trying to figure out if this is even possible. My best guess is that I should open up Spline (for the first time) and try to match the vibe of it, but I suspect it'll look completely different?
I guess what I'm trying to decide is whether I should take this on or not... and if not me, who would best be able to do so?
Hi, I am looking to use WebGL in my web dev project for university and was just wondering where to start when it comes to applying it in a web development environment. Any help is much appreciated.
I'm a beginner in both Blender and Three.js and recently started learning Three.js to create some cool models. I managed to create a model in Blender and added an animation using geometry nodes. However, I'm having trouble exporting it to Three.js.
Here's what I've tried so far:
It seems like I’m missing something specifically related to exporting or viewing the animation. Does anyone know the right way to export animations from geometry nodes so they’ll work with Three.js? I feel like I might be missing something in the export process or in setting up the animation correctly.
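Two things commonly bite here, offered as a guess since the tried steps aren't shown: glTF export doesn't carry live geometry-node setups, so the animation usually has to be baked to mesh/keyframe data in Blender before exporting; and on the Three.js side the exported clips only run if they are fed through an AnimationMixer. A minimal loading sketch, assuming a baked model.glb (hypothetical filename) and an existing scene, camera, and renderer:

import * as THREE from 'three';
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';

const clock = new THREE.Clock();
let mixer;

new GLTFLoader().load('model.glb', (gltf) => {
    scene.add(gltf.scene);
    // Animations ship as clips; they must be played through a mixer.
    mixer = new THREE.AnimationMixer(gltf.scene);
    gltf.animations.forEach((clip) => mixer.clipAction(clip).play());
});

function animate() {
    requestAnimationFrame(animate);
    if (mixer) mixer.update(clock.getDelta());  // advance the clips
    renderer.render(scene, camera);
}
animate();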
I want to make a simple drawing program where you manipulate individual pixels by drawing, using my own custom functions to set the pixel values, not any of the canvas drawing functions.
I want it to be as performant as possible, so I'm guessing WebGL is the way to go, but is it truly any faster than canvas for just displaying/manipulating a single 2D texture?
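For comparison purposes, the WebGL version of "my own buffer of pixels on screen" boils down to re-uploading the buffer into a texture each frame and drawing a fullscreen quad. A rough sketch, assuming the texture (tex) and a quad-drawing program are already set up:

// CPU-side pixel buffer; your custom functions write RGBA bytes here.
const pixels = new Uint8Array(width * height * 4);

function present() {
    gl.bindTexture(gl.TEXTURE_2D, tex);
    // Re-upload into the existing RGBA texture (created once via texImage2D).
    gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, width, height,
                     gl.RGBA, gl.UNSIGNED_BYTE, pixels);
    gl.drawArrays(gl.TRIANGLES, 0, 6);  // fullscreen quad
}

The 2D-canvas baseline is a single putImageData call, which is already quite fast; at typical canvas sizes the honest answer is to benchmark both.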
I'm having an issue with a WebGL project that I'm hoping someone can help me wrap up before Friday afternoon.
I have created a cube with a square and a triangle inside, and I want the up/down arrow keys to change the near plane distance so I can enter and exit the cube. The way things are currently set up, I can only go right up to the cube's wall before the view bounces off it.
I need to be able to get right up to the triangle inside the box, passing through the cube's walls. My professor's suggestion is to change the near plane distance, but even when I alter it, I only get up to the wall without entering. This is due tomorrow afternoon, so any help ASAP would be great, as I am still very much learning WebGL.
Below are my current JS code and the HTML that goes with it.
// Vertex shader program
const VSHADER_SOURCE = `
  attribute vec4 a_Position;
  uniform mat4 u_ModelMatrix;
  uniform mat4 u_ViewMatrix;
  uniform mat4 u_ProjMatrix;
  void main() {
    gl_Position = u_ProjMatrix * u_ViewMatrix * u_ModelMatrix * a_Position;
  }`;

// Fragment shader program
const FSHADER_SOURCE = `
  precision mediump float;
  void main() {
    gl_FragColor = vec4(0.6, 0.8, 0.9, 1.0);
  }`;

// Global variables
let g_near = 0.1;        // Start with a smaller near plane
let g_far = 100.0;
let g_eyeX = 3.0;
let g_eyeY = 2.0;
let g_eyeZ = 7.0;
let g_rotationAngle = 0;
let g_moveSpeed = 0.2;   // Control movement speed

// Global matrices
let projMatrix;
let viewMatrix;
let modelMatrix;
let gl;

function main() {
  const canvas = document.getElementById('webgl');
  gl = getWebGLContext(canvas);
  if (!gl || !initShaders(gl, VSHADER_SOURCE, FSHADER_SOURCE)) {
    console.error('Failed to initialize shaders.');
    return;
  }

  const n = initVertexBuffers(gl);

  // Get uniform locations
  const u_ModelMatrix = gl.getUniformLocation(gl.program, 'u_ModelMatrix');
  const u_ViewMatrix = gl.getUniformLocation(gl.program, 'u_ViewMatrix');
  const u_ProjMatrix = gl.getUniformLocation(gl.program, 'u_ProjMatrix');
  if (!u_ModelMatrix || !u_ViewMatrix || !u_ProjMatrix) {
    console.error('Failed to get uniform locations');
    return;
  }

  // Initialize matrices as globals
  modelMatrix = new Matrix4();
  viewMatrix = new Matrix4();
  projMatrix = new Matrix4();

  // Set up debug display
  const debugDiv = document.getElementById('debug');
  debugDiv.style.backgroundColor = 'rgba(255, 255, 255, 0.8)';
  debugDiv.style.padding = '10px';

  function updateScene() {
    // Update view matrix
    viewMatrix.setLookAt(g_eyeX, g_eyeY, g_eyeZ, 0, 0, 0, 0, 1, 0);
    gl.uniformMatrix4fv(u_ViewMatrix, false, viewMatrix.elements);

    // Update model matrix
    modelMatrix.setRotate(g_rotationAngle, 0, 1, 0);
    gl.uniformMatrix4fv(u_ModelMatrix, false, modelMatrix.elements);

    // Update projection matrix with adjusted near plane
    projMatrix.setPerspective(45, canvas.width / canvas.height, g_near, g_far);
    gl.uniformMatrix4fv(u_ProjMatrix, false, projMatrix.elements);

    // Update debug info
    debugDiv.innerHTML = `
      Camera Position: (${g_eyeX.toFixed(2)}, ${g_eyeY.toFixed(2)}, ${g_eyeZ.toFixed(2)})<br>
      Near Plane: ${g_near.toFixed(2)}<br>
      Rotation: ${g_rotationAngle.toFixed(2)}°
    `;

    draw(gl, n);
  }

  // Register keyboard event handler
  document.onkeydown = function(ev) {
    switch (ev.key) {
      case 'ArrowUp':
        // Move camera forward
        g_eyeZ -= g_moveSpeed;
        // Adjust near plane based on camera distance
        g_near = Math.max(0.1, g_eyeZ - 2.0);
        break;
      case 'ArrowDown':
        // Move camera backward
        g_eyeZ += g_moveSpeed;
        // Adjust near plane based on camera distance
        g_near = Math.max(0.1, g_eyeZ - 2.0);
        break;
      case 'ArrowLeft':
        g_rotationAngle -= 5.0;
        break;
      case 'ArrowRight':
        g_rotationAngle += 5.0;
        break;
      case 'w': // Move up
        g_eyeY += g_moveSpeed;
        break;
      case 's': // Move down
        g_eyeY -= g_moveSpeed;
        break;
      case 'a': // Move left
        g_eyeX -= g_moveSpeed;
        break;
      case 'd': // Move right
        g_eyeX += g_moveSpeed;
        break;
      default:
        return;
    }
    updateScene();
    console.log('Camera position:', g_eyeX, g_eyeY, g_eyeZ);
  };

  // Enable depth testing
  gl.enable(gl.DEPTH_TEST);

  // Initial scene setup
  updateScene();
}

function initVertexBuffers(gl) {
  const vertices = new Float32Array([
    // Front face
    -1.0, -1.0,  1.0,   1.0, -1.0,  1.0,   1.0,  1.0,  1.0,  -1.0,  1.0,  1.0,
    // Back face
    -1.0, -1.0, -1.0,  -1.0,  1.0, -1.0,   1.0,  1.0, -1.0,   1.0, -1.0, -1.0,
    // Left face
    -1.0, -1.0, -1.0,  -1.0, -1.0,  1.0,  -1.0,  1.0,  1.0,  -1.0,  1.0, -1.0,
    // Right face
     1.0, -1.0, -1.0,   1.0,  1.0, -1.0,   1.0,  1.0,  1.0,   1.0, -1.0,  1.0,
    // Top face
    -1.0,  1.0, -1.0,  -1.0,  1.0,  1.0,   1.0,  1.0,  1.0,   1.0,  1.0, -1.0,
    // Bottom face
    -1.0, -1.0, -1.0,   1.0, -1.0, -1.0,   1.0, -1.0,  1.0,  -1.0, -1.0,  1.0,
    // Inner square at z=0
    -0.5, -0.5,  0.0,   0.5, -0.5,  0.0,   0.5,  0.5,  0.0,  -0.5,  0.5,  0.0,
    // Inner triangle at z=0
    -0.3,  0.3,  0.0,   0.0, -0.3,  0.0,   0.3,  0.3,  0.0
  ]);

  const vertexBuffer = gl.createBuffer();
  if (!vertexBuffer) {
    console.error('Failed to create buffer');
    return -1;
  }
  gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
  gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);

  const a_Position = gl.getAttribLocation(gl.program, 'a_Position');
  if (a_Position < 0) {
    console.error('Failed to get attribute location');
    return -1;
  }
  gl.vertexAttribPointer(a_Position, 3, gl.FLOAT, false, 0, 0);
  gl.enableVertexAttribArray(a_Position);
  return vertices.length / 3;
}

function draw(gl, n) {
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  // Draw all shapes
  gl.drawArrays(gl.LINE_LOOP, 0, 4);    // Front face
  gl.drawArrays(gl.LINE_LOOP, 4, 4);    // Back face
  gl.drawArrays(gl.LINE_LOOP, 8, 4);    // Left face
  gl.drawArrays(gl.LINE_LOOP, 12, 4);   // Right face
  gl.drawArrays(gl.LINE_LOOP, 16, 4);   // Top face
  gl.drawArrays(gl.LINE_LOOP, 20, 4);   // Bottom face
  gl.drawArrays(gl.LINE_LOOP, 24, 4);   // Inner square
  gl.drawArrays(gl.TRIANGLES, 28, 3);   // Inner triangle
}
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>WebGL Hollow Box with Objects</title>
  <style>
    body {
      display: flex;
      align-items: center;
      justify-content: center;
      height: 100vh;
      margin: 0;
      background-color: #f0f0f0;
    }
    canvas {
      border: 1px solid black;
    }
    #instructions {
      position: fixed;
      bottom: 10px;
      left: 10px;
      background-color: rgba(255, 255, 255, 0.8);
      padding: 10px;
    }
  </style>
</head>
<body>
  <canvas id="webgl" width="600" height="600"></canvas>
  <!-- Debug info -->
  <div id="debug" style="position: fixed; top: 10px; left: 10px;"></div>
  <!-- Instructions -->
  <div id="instructions">
    Controls:<br>
    ↑/↓ - Move forward/backward<br>
    ←/→ - Rotate view<br>
    W/S - Move up/down<br>
    A/D - Move left/right
  </div>
  <!-- Helper functions -->
  <script>
    // WebGL context helper
    function getWebGLContext(canvas) {
      return canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
    }

    // Shader initialization helper
    function initShaders(gl, vsSource, fsSource) {
      const vertexShader = loadShader(gl, gl.VERTEX_SHADER, vsSource);
      const fragmentShader = loadShader(gl, gl.FRAGMENT_SHADER, fsSource);
      const shaderProgram = gl.createProgram();
      gl.attachShader(shaderProgram, vertexShader);
      gl.attachShader(shaderProgram, fragmentShader);
      gl.linkProgram(shaderProgram);
      if (!gl.getProgramParameter(shaderProgram, gl.LINK_STATUS)) {
        console.error('Unable to initialize the shader program: ' + gl.getProgramInfoLog(shaderProgram));
        return null;
      }
      gl.useProgram(shaderProgram);
      gl.program = shaderProgram;
      return true;
    }

    function loadShader(gl, type, source) {
      const shader = gl.createShader(type);
      gl.shaderSource(shader, source);
      gl.compileShader(shader);
      if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
        console.error('An error occurred compiling the shaders: ' + gl.getShaderInfoLog(shader));
        gl.deleteShader(shader);
        return null;
      }
      return shader;
    }

    // Matrix helper class
    class Matrix4 {
      constructor() {
        this.elements = new Float32Array([
          1, 0, 0, 0,
          0, 1, 0, 0,
          0, 0, 1, 0,
          0, 0, 0, 1
        ]);
      }

      setRotate(angle, x, y, z) {
        const c = Math.cos(angle * Math.PI / 180);
        const s = Math.sin(angle * Math.PI / 180);
        const elements = this.elements;
        if (x === 1 && y === 0 && z === 0) {
          elements[5] = c;
          elements[6] = s;
          elements[9] = -s;
          elements[10] = c;
        } else if (x === 0 && y === 1 && z === 0) {
          elements[0] = c;
          elements[2] = -s;
          elements[8] = s;
          elements[10] = c;
        }
        return this;
      }

      setPerspective(fovy, aspect, near, far) {
        const f = 1.0 / Math.tan(fovy * Math.PI / 360);
        const nf = 1 / (near - far);
        this.elements[0] = f / aspect;
        this.elements[5] = f;
        this.elements[10] = (far + near) * nf;
        this.elements[11] = -1;
        this.elements[14] = 2 * far * near * nf;
        this.elements[15] = 0;
        return this;
      }

      setLookAt(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ) {
        let z = [eyeX - centerX, eyeY - centerY, eyeZ - centerZ];
        let length = Math.sqrt(z[0] * z[0] + z[1] * z[1] + z[2] * z[2]);
        z = [z[0] / length, z[1] / length, z[2] / length];
        let x = [upY * z[2] - upZ * z[1],
                 upZ * z[0] - upX * z[2],
                 upX * z[1] - upY * z[0]];
        length = Math.sqrt(x[0] * x[0] + x[1] * x[1] + x[2] * x[2]);
        x = [x[0] / length, x[1] / length, x[2] / length];
        let y = [z[1] * x[2] - z[2] * x[1],
                 z[2] * x[0] - z[0] * x[2],
                 z[0] * x[1] - z[1] * x[0]];
        this.elements[0] = x[0];
        this.elements[1] = y[0];
        this.elements[2] = z[0];
        this.elements[3] = 0;
        this.elements[4] = x[1];
        this.elements[5] = y[1];
        this.elements[6] = z[1];
        this.elements[7] = 0;
        this.elements[8] = x[2];
        this.elements[9] = y[2];
        this.elements[10] = z[2];
        this.elements[11] = 0;
        this.elements[12] = -(x[0] * eyeX + x[1] * eyeY + x[2] * eyeZ);
        this.elements[13] = -(y[0] * eyeX + y[1] * eyeY + y[2] * eyeZ);
        this.elements[14] = -(z[0] * eyeX + z[1] * eyeY + z[2] * eyeZ);
        this.elements[15] = 1;
        return this;
      }
    }
  </script>
  <!-- Your main WebGL script -->
  <script src="main.js"></script>
  <script>
    // Add event listener for page load
    window.onload = function() {
      main();
    };
  </script>
</body>
</html>
Hi everyone! Does anyone know exactly how expo-gl works?
I'm familiar with the concept of the bridge between the JavaScript VM and the native side in a React Native app. I'm currently developing a React Native photo editor using expo-gl for image processing (mostly through fragment shaders).
From what I understand, expo-gl isn't a direct WebGL implementation, because the JS runtime environment in a React Native app lacks the browser-specific API. Instead, expo-gl operates on the native side, relying mainly on OpenGL. I've also read that expo-gl bypasses the bridge and communicates with the native side differently. Is that true? If so, how exactly is that achieved?
I'm primarily interested in the technical side, not in code implementation or usage within my app; I've already got that part covered. Any insights would be greatly appreciated!
I'd like to do something like the image above, but that one is from a tutorial that just duplicates the image and moves each copy to create the effect. I was wondering if there might be a more efficient way to do it. I'm also interested in being able to render just the outline part separately, as it might come in handy for indicating sprites that are hidden behind other objects.
I'm using WebGL 2, just rendering things with raw WebGL calls and no third-party engine. Anyone got some resources for achieving this effect? It doesn't seem as trivial as I hoped.
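One single-pass approach worth trying, offered as a sketch rather than a drop-in (v_uv, u_texelSize, and the 0.5 alpha threshold are assumptions): sample the sprite's alpha one texel away in each direction, and where a transparent pixel borders an opaque one, emit the outline color. A u_outlineOnly flag then gives the separate outline pass for occluded sprites.

#version 300 es
precision mediump float;

uniform sampler2D u_sprite;
uniform vec2 u_texelSize;    // 1.0 / texture dimensions
uniform vec4 u_outlineColor;
uniform bool u_outlineOnly;  // render just the outline (e.g. occluded sprites)
in vec2 v_uv;
out vec4 fragColor;

void main() {
    vec4 c = texture(u_sprite, v_uv);
    // Max alpha among the 4 neighbours one texel away.
    float n = texture(u_sprite, v_uv + vec2( u_texelSize.x, 0.0)).a;
    n = max(n, texture(u_sprite, v_uv + vec2(-u_texelSize.x, 0.0)).a);
    n = max(n, texture(u_sprite, v_uv + vec2(0.0,  u_texelSize.y)).a);
    n = max(n, texture(u_sprite, v_uv + vec2(0.0, -u_texelSize.y)).a);
    bool edge = c.a < 0.5 && n > 0.5;  // transparent pixel next to an opaque one
    if (edge)               fragColor = u_outlineColor;
    else if (u_outlineOnly) discard;
    else                    fragColor = c;
}

Caveats: the sprite needs at least a texel of transparent padding in its atlas region, and outlines thicker than one texel need more taps or a distance-field texture instead.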
I want to make a whiteboarding application. Each board should be as big as 7000×8000. I am currently using Konva with Vue (so no WebGL at the moment), but I noticed that the performance becomes awful when rendering the graphics on a large canvas. At some point, all elements should be visible at once.
My question is: what approach can I take in order to render a lot of elements (1k, and maybe way more), knowing that the elements are interactive? What optimizations can I do? And does any of you have an example?
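If you do end up dropping to WebGL for the board itself, the usual answer to "thousands of interactive elements" is one instanced draw call plus CPU-side hit testing. A sketch under the assumption that each shape reduces to per-instance attributes; aRect, rectBuffer, rectData, and the bound program are hypothetical:

// WebGL 2 instancing: one draw call for many quads. Assumes a program with a
// per-vertex a_corner attribute and a per-instance a_rect (x, y, w, h).
gl.bindBuffer(gl.ARRAY_BUFFER, rectBuffer);
gl.bufferData(gl.ARRAY_BUFFER, rectData, gl.DYNAMIC_DRAW);  // 4 floats per shape
gl.enableVertexAttribArray(aRect);
gl.vertexAttribPointer(aRect, 4, gl.FLOAT, false, 16, 0);
gl.vertexAttribDivisor(aRect, 1);  // advance once per instance, not per vertex
gl.drawArraysInstanced(gl.TRIANGLE_STRIP, 0, 4, shapeCount);

Combined with viewport culling (only writing instances that intersect the visible region into rectData), this scales well past 1k elements.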
The next WebGL & WebGPU Meetup is right around the corner. Register for free and come join us to hear about the latest API updates and presentations from Microsoft, Dassault Systemes, and Snapchat!
Learn more and register: https://www.khronos.org/events/webgl-webgpu-meetup-november-2024
Hi everyone.
I am following the tutorial/article on webglfundamentals.org on how to perform computations using the fragment shader. My overall goal is to do an n-body simulation (i.e. simulating bodies with gravity interacting with each other). I still have to figure out many details.
At the moment I'm trying to just write a program that takes a vector and doubles it, exactly like the first part of the tutorial, but using FLOATs instead of UNSIGNED_BYTES.
My code is the following: https://pastebin.com/CAm0JgVc
The output I get at the end is an array of NaNs.
Am I missing something? Is my goal even feasible?
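A hedged guess at the NaNs, since the usual culprit when switching that tutorial from UNSIGNED_BYTE to FLOAT is float renderability rather than the shader: float textures can be created, but rendering into one requires an extension, and without it the framebuffer is incomplete and readPixels returns garbage. On WebGL 2 the check looks like this (on WebGL 1 you'd need OES_texture_float instead, and should still verify framebuffer completeness):

// WebGL 2: float textures exist by default, but RENDERING to them
// requires this extension.
if (!gl.getExtension('EXT_color_buffer_float')) {
    throw new Error('FLOAT render targets not supported');
}

const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);  // no mips
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA32F, width, height, 0,
              gl.RGBA, gl.FLOAT, null);

const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, tex, 0);
// This must report complete before any rendering/readback is trustworthy.
console.log(gl.checkFramebufferStatus(gl.FRAMEBUFFER) === gl.FRAMEBUFFER_COMPLETE);

// ...render, then read back as floats:
const out = new Float32Array(width * height * 4);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.FLOAT, out);

And yes, the goal is feasible: fragment-shader n-body simulations that ping-pong between two float textures are a well-trodden path.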