/r/VoxelGameDev
A game development subreddit for discussing the creation of voxel games, and voxel engines.
I've got no clue. I am currently making a game where the characters move with frame-by-frame animation in quick succession. If the player, for example, "moves forward", Unity will loop between 3 models (left forward, idle, right forward), making it look like an animation. Now I want to add a "roll" to the game, but I have no idea how to animate it. Should I "Sonic it", curling the character into a ball, or do it more like Dark Souls, where you can see the whole roll? Keep in mind the game is in third person with an enemy lock-on feature.
This is the place to show off and discuss your voxel game and tools. Shameless plugs, progress updates, screenshots, videos, art, assets, promotion, tech, findings and recommendations etc. are all welcome.
My game I've been working on for about 8 months. Client made in Unity, server in C#/.NET 8.
Hello everyone! I’m working on an open-world vehicle combat game for mobile, packed with exciting mechanics! Most of the models in the game are voxel-based, and I’m currently adding tons of unique features. Would love to hear your thoughts!
How large could a map made of voxels theoretically be so that a high-end PC (not something from NASA, but a good PC ordinary people could have) can support it? I'm talking about a map with detail, no repeating patterns, and not procedurally generated. It is allowed to use optimization methods like simplifying distant terrain or not loading absolutely everything at the same time.
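For scale, a rough back-of-envelope, assuming dense storage at one byte per voxel (the map sizes below are illustrative assumptions, not from the post):

// Hypothetical estimate: raw memory for a dense, uncompressed voxel map.
static long Voxels(long x, long y, long z) => x * y * z;

long small = Voxels(2048, 256, 2048); // 2^30 voxels, ~1.1 GB at 1 byte each
long large = Voxels(8192, 512, 8192); // 2^35 voxels, ~34 GB, needs streaming
// Sparse chunks, run-length encoding, and paging from disk are what let real
// engines go far beyond what fits in RAM at once.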
Hi! I have just started to learn voxel modeling and I was just wondering if you have any recommendations about YouTube channels or content creators in general that create videos about voxel designs, doesn’t really matter if it’s just for learning concepts (tutorial-like) or showing their creation process, both are interesting!
I currently have an octree system in a block-based game, and it works great. I am working in Unity with the job system. I would like some input on how I could go about upscaling blocks into larger chunks.
The method I was about to implement was taking the blocks stored in all 8 children, downscaling and merging the data, generating the mesh, and immediately discarding the data. This is great because all the data can be passed to the job, so no race conditions are created. The only problem is that I only want to store all my data once, at the lowest LOD, so this works for the second layer up but nothing else. I thought of passing more and more data to be downscaled and merged, but it seems inefficient to pass 68 billion blocks when only 260 thousand are going to be used.
Another thought that just occurred to me was to do some magic with the mesh and somehow downscale that, but it seems really complex.
The other quite obvious method, which I seemed to immediately mentally discard, is to just store the downscaled block data in the parent of the smallest node, use that data when merging, then store and repeat (a sketch of that downscale step is below the TL;DR).
TL;DR: how could I go about merging chunks in a block octree with Unity's job system?
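A minimal sketch of the downscale-and-merge step, assuming 16³ chunks stored as flat byte arrays (the chunk size, indexing scheme, and sampling policy are all illustrative assumptions):

// Hypothetical sketch: collapse 8 child chunks (each SIZE^3, flat-indexed
// x + y*SIZE + z*SIZE*SIZE, octant order x + 2y + 4z) into one parent chunk
// at half resolution, ready to hand to a mesh job by value.
public static class ChunkDownscale
{
    const int SIZE = 16;
    const int HALF = SIZE / 2;

    public static byte[] Merge(byte[][] children)
    {
        var parent = new byte[SIZE * SIZE * SIZE];
        for (int z = 0; z < SIZE; z++)
        for (int y = 0; y < SIZE; y++)
        for (int x = 0; x < SIZE; x++)
        {
            // Which child octant this parent cell falls in.
            int child = (x / HALF) + (y / HALF) * 2 + (z / HALF) * 4;
            // Origin of the matching 2x2x2 region inside that child.
            int cx = (x % HALF) * 2, cy = (y % HALF) * 2, cz = (z % HALF) * 2;
            // Cheapest policy: take one representative block; a majority vote
            // over the 2x2x2 region avoids dropping thin features.
            parent[x + y * SIZE + z * SIZE * SIZE] =
                children[child][cx + cy * SIZE + cz * SIZE * SIZE];
        }
        return parent;
    }
}

Storing this merged array in the parent node (the third option above) means each LOD level is built only from the level directly below it, so nothing larger than 8 chunks ever has to cross into a job.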
Hey everyone,
I'm trying to code a voxel ray marcher in OpenGL that works in a similar fashion to Teardown and I'm specifically using this section of the Teardown dev commentary. My general approach is that I render each object as an oriented bounding box with an associated 3D texture representing the voxel volume. In the fragment shader I march rays, starting from the RayOrigin and in the RayDirection, using the algorithm described in A Fast Voxel Traversal Algorithm for Ray Tracing.
My confusion comes from choosing the RayDirection. Since I want to march rays through the 3D texture, I assume I want both the RayOrigin and RayDirection to be in UV (UVW?) space. If this assumption is correct, then my RayOrigin is just the UV (UVW) coordinate of the bounding box vertex. For example, for the front-top-left vertex (-0.5, +0.5, +0.5), the RayOrigin and UV coordinate would be (0, 1, 1). Is this assumption correct? If so, how do I determine the correct RayDirection? I know it must depend on the relationship between the camera and the oriented bounding box, but I'm having trouble determining exactly what this relationship is and how to ensure it's in UVW space like the RayOrigin. If not, what am I doing wrong?
If it's helpful, here's the vertex shader I'm using where I think I should be able to determine the RayDirection. This is drawn using glDrawArrays and GL_TRIANGLE_STRIP.
#version 330 core
out vec3 RayOrigin;
out vec3 RayDirection;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
uniform vec3 camera_position;
const vec3 vertices[] = vec3[](
    vec3(+0.5, +0.5, +0.5), // Back-top-right
    vec3(-0.5, +0.5, +0.5), // Back-top-left
    vec3(+0.5, -0.5, +0.5), // Back-bottom-right
    vec3(-0.5, -0.5, +0.5), // Back-bottom-left
    vec3(-0.5, -0.5, -0.5), // Front-bottom-left
    vec3(-0.5, +0.5, +0.5), // Back-top-left
    vec3(-0.5, +0.5, -0.5), // Front-top-left
    vec3(+0.5, +0.5, +0.5), // Back-top-right
    vec3(+0.5, +0.5, -0.5), // Front-top-right
    vec3(+0.5, -0.5, +0.5), // Back-bottom-right
    vec3(+0.5, -0.5, -0.5), // Front-bottom-right
    vec3(-0.5, -0.5, -0.5), // Front-bottom-left
    vec3(+0.5, +0.5, -0.5), // Front-top-right
    vec3(-0.5, +0.5, -0.5)  // Front-top-left
);
void main () {
    vec3 vertex = vertices[gl_VertexID];
    RayOrigin = vertex + vec3(0.5); // move origin into UVW space [0, 1]^3
    // One possible approach (an assumption, not a confirmed answer): bring the
    // camera into the box's local space and point from it toward the vertex.
    // Translation does not affect directions, so the same vector is valid in
    // UVW space. inverse() per vertex is wasteful; prefer a precomputed uniform.
    vec3 local_camera = (inverse(model) * vec4(camera_position, 1.0)).xyz;
    RayDirection = vertex - local_camera;
    gl_Position = projection * view * model * vec4(vertex, 1);
}
I've been working on a game for about 7 months now, similar in idea to Minecraft. I recently finished sky-light propagation and tree generation, and I'm going back to rework my biome and terrain code. I was taking a look at how MC does it and didn't think it would be so complicated. If you've ever looked at their density_function stuff, it's pretty cool; it's all defined in JSON files (example attached). Making it configuration-based seems like a good idea, but it would be such a pain in the ass to do, at least to the extent they did.
I feel like the part that was giving me trouble before was interpolating between different biomes, basically making sure the terrain blends into each biome without hard edges (a sketch of one blending approach follows the JSON below). I don't know what this post is actually supposed to be about; I think I'm just a bit lost on how to move forward having seen how complicated it could be, and I'm trying to find the middle ground for a solo dev.
{
    "type": "minecraft:flat_cache",
    "argument": {
        "type": "minecraft:cache_2d",
        "argument": {
            "type": "minecraft:add",
            "argument1": 0.0,
            "argument2": {
                "type": "minecraft:mul",
                "argument1": {
                    "type": "minecraft:blend_alpha"
                },
                "argument2": {
                    "type": "minecraft:add",
                    "argument1": -0.0,
                    "argument2": {
                        "type": "minecraft:spline",
                        "spline": {
                            "coordinate": "minecraft:overworld/continents",
                            "points": [
                                {
                                    "derivative": 0.0,
                                    "location": -0.11,
                                    "value": 0.0
                                },
                                {
                                    "derivative": 0.0,
                                    "location": 0.03,
                                    "value": {
                                        "coordinate": "minecraft:overworld/erosion",
                                        "points": [
### and so on
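One common middle ground for the blending problem, far simpler than MC's spline stack, sketched under assumptions (biomeAt and heightFor are hypothetical stand-ins for your own lookups): average each nearby biome's height contribution over a small neighborhood so borders interpolate instead of stepping.

// Hypothetical sketch: blend terrain height across nearby biomes by sampling
// a neighborhood around the column and averaging each biome's height.
public static class BiomeBlend
{
    public static float BlendedHeight(int wx, int wz, int radius,
                                      System.Func<int, int, int> biomeAt,
                                      System.Func<int, int, int, float> heightFor)
    {
        float total = 0f;
        int samples = 0;
        for (int dz = -radius; dz <= radius; dz++)
        for (int dx = -radius; dx <= radius; dx++)
        {
            int biome = biomeAt(wx + dx, wz + dz);
            // Evaluate the neighbor's biome height at the center column, so
            // only the biome choice varies across the kernel, not the noise.
            total += heightFor(biome, wx, wz);
            samples++;
        }
        return total / samples; // a Gaussian kernel gives smoother falloff
    }
}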
I'm a beginner and I want to make a voxel game in Rust. What would be the best graphics library to handle a large number of voxels? I also want to be able to import high-triangle 3D models into my game, so I want it to handle that well too.
I am working in C# in Unity.
I have a LOD system and want to make far chunks have colors instead of textures. There are multiple ways I have thought of to do this, but I'm sure there are more.
The first is to downscale my texture atlas to one pixel per tile, each representing a color. This would be done as the game loads, before it starts generating the world.
The second is to send the texture to the job in which the mesh is generated and sample it there, setting the color of each quad.
A combination of both would also work, where the texture is downscaled and then sent to the job, where it would be sampled and the color applied.
In all three of these situations a single pixel is used to represent the quad, and either the pixel is colored or the quad stores the color and multiplies it with the pixel.
I'm sure there are better ways to do this. Is there a way to create a quad that just has a color and no texture? This whole process is to optimize rendering as much as possible (a sketch of the first option is below).
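A sketch of the first option, assuming a readable Texture2D atlas laid out in square tiles (the class and layout are illustrative):

// Hypothetical sketch: collapse each atlas tile to its average color once at
// load time, so distant LOD meshes can use flat colors instead of UVs.
using UnityEngine;

public static class AtlasDownscale
{
    // tileSize = pixels per tile edge; atlas needs Read/Write enabled.
    public static Color[] AverageTileColors(Texture2D atlas, int tileSize)
    {
        int tilesX = atlas.width / tileSize;
        int tilesY = atlas.height / tileSize;
        var result = new Color[tilesX * tilesY];
        for (int ty = 0; ty < tilesY; ty++)
        for (int tx = 0; tx < tilesX; tx++)
        {
            var pixels = atlas.GetPixels(tx * tileSize, ty * tileSize,
                                         tileSize, tileSize);
            Color sum = Color.clear;
            foreach (var p in pixels) sum += p;
            result[tx + ty * tilesX] = sum / pixels.Length;
        }
        return result;
    }
}

As for quads with no texture at all: Unity meshes accept per-vertex colors via Mesh.colors, which an unlit vertex-color shader can render without any texture sampling.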
I'm going to add trees to my game and have two ideas as to how.
The first is to create them procedurally and randomly on the spot based on some parameters. My problem with this is that they are generated in parallel jobs, and I don't know how to give them predictable randomness that can be recreated from the same seed (a seeding sketch is below).
The second idea is to save a tree in some format and "stamp" it back into the world, like Minecraft structures; this can be combined with some randomness to add variety.
There are many ways to achieve both of these, and definitely ways that are faster, clearer, and easier to do. Overall I just want opinions on these.
Edit: there seems to be a lot of confusion regarding the topic. The matter at hand is the generation of the trees themselves, not the selection of their positions.
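For the predictable-randomness part, one standard approach, sketched with Unity.Mathematics (the helper and its names are hypothetical): hash the world seed together with the tree's position, so every job derives the same RNG for the same tree regardless of scheduling order.

// Hypothetical sketch: a deterministic, Burst-friendly RNG per tree, derived
// from the world seed and the tree's block position.
using Unity.Mathematics;

public static class TreeRandom
{
    public static Random At(uint worldSeed, int3 pos)
    {
        // math.hash mixes the coordinates and seed into one 32-bit value;
        // Unity.Mathematics.Random requires a non-zero initial state.
        uint h = math.hash(new int4(pos, (int)worldSeed));
        return new Random(h == 0 ? 1u : h);
    }
}

// Usage inside any job, in any order:
//   var rng = TreeRandom.At(seed, treePos);
//   int height = rng.NextInt(4, 8); // identical for this seed and position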
Hello,
I've been researching the way Dreams does its rendering, and how it uses integer arithmetic to cull primitives per voxel. I've seen that this is a pretty decent way to detect collisions and normals for an SDF octree, but everything I've read sounds like it's aimed at a GPU-based approach. I'm wondering about collision detection for simple primitives like spheres/capsules against an SDF for basic gameplay on the CPU.
If anyone has any idea how they constructed colliders for Dreams, that would be much appreciated. Did they build simple mesh colliders ahead of time? Do they still just use raycasts against the voxels?
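For the CPU side, sphere-vs-SDF is the easy case and may be all basic gameplay needs, sketched here on the assumption that the field can be point-sampled (the sdf delegate is a stand-in for an octree lookup):

// Hypothetical sketch: sphere-vs-SDF test on the CPU. Sample the field at
// the sphere's center; it penetrates if the distance is below the radius.
// A central-difference gradient approximates the contact normal.
using Unity.Mathematics;

public static class SdfCollision
{
    const float Eps = 0.01f; // gradient step, tune to voxel size

    public static bool SphereHit(System.Func<float3, float> sdf,
                                 float3 center, float radius,
                                 out float3 normal, out float depth)
    {
        float d = sdf(center);
        normal = math.normalizesafe(new float3(
            sdf(center + new float3(Eps, 0, 0)) - sdf(center - new float3(Eps, 0, 0)),
            sdf(center + new float3(0, Eps, 0)) - sdf(center - new float3(0, Eps, 0)),
            sdf(center + new float3(0, 0, Eps)) - sdf(center - new float3(0, 0, Eps))));
        depth = radius - d; // push the sphere out along the normal by this much
        return depth > 0f;
    }
}

Capsules reduce to the same test against the closest point on the capsule's segment, which keeps everything raycast-free.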
I have a LOD system where blocks that are farther away are larger. Each block has an accurate texture size; for example, a 2x2 block has 4 textures per side (one texture tiled 4 times). I achieved this by setting its UVs to the size of the block, so the position of the top-right UV would be (2, 2), twice the maximum, which tiles the texture. I am now switching to a texture atlas system to support more block types, and this conflicts with my current tiling system. Is there another way to tile faces?
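One common way out, sketched as an assumption about the setup rather than a definitive fix: move the atlas tiles into a Texture2DArray at load time, so each face samples its own layer with ordinary repeat wrapping and UVs larger than 1 tile again.

// Hypothetical sketch: repack a square-tiled atlas into a Texture2DArray so
// per-layer repeat wrapping restores UV tiling without atlas bleed.
using UnityEngine;

public static class AtlasToArray
{
    public static Texture2DArray Build(Texture2D atlas, int tileSize)
    {
        int tilesX = atlas.width / tileSize, tilesY = atlas.height / tileSize;
        var array = new Texture2DArray(tileSize, tileSize, tilesX * tilesY,
                                       atlas.format, mipChain: false);
        array.wrapMode = TextureWrapMode.Repeat; // re-enables tiling per layer
        for (int ty = 0; ty < tilesY; ty++)
        for (int tx = 0; tx < tilesX; tx++)
            Graphics.CopyTexture(atlas, 0, 0, tx * tileSize, ty * tileSize,
                                 tileSize, tileSize,
                                 array, tx + ty * tilesX, 0, 0, 0);
        return array;
    }
}

The mesh then carries the layer index (for example in a spare UV channel) and the shader samples the array with frac-free repeat wrapping, so the old "UV = block size" trick keeps working.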
Hey! I'm working on a Minecraft-like game (I know, unique!) and am about 8 months into development. I've been using a random MC texture pack to texture my world and am thinking about starting to design my own. Currently I'm working with 128x128 textures, but I might want to go down or up; I really have no idea what style I want just yet. I guess my question is: what tools, if any, have you used in the past for designing textures for assets? Bonus if you know of a tool that enforces some type of tileable/seamless texture.