/r/gameenginedevs
This subreddit is all about game engines! Talk about methodologies, projects, or ideas for game engines and software engineering. Feel free to post about the projects you're working on or find interesting.
We now have a Discord server!
I'm about to set up a raycast system for my game in C++. I'm trying to use https://github.com/Cornflakes-code/OWGameEngine/blob/master/engine/Geometry/OWRay.cpp but I don't know how to set the OWRay object up (or use it). Any idea how I can set it up, or is there another source I can use for building the raycast system?
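For context, the general shape I have in mind is a ray with an origin, a normalized direction, and a precomputed inverse direction for slab tests; something like this hypothetical sketch (illustrative names, not OWRay's actual API):
#include <glm/glm.hpp>

// Hypothetical minimal ray type. The inverse direction is precomputed once so
// AABB slab tests only need multiplies instead of divides.
struct Ray {
    glm::vec3 origin;
    glm::vec3 direction; // normalized
    glm::vec3 invDir;    // 1 / direction, per component

    Ray(const glm::vec3& o, const glm::vec3& d)
        : origin(o), direction(glm::normalize(d)), invDir(1.0f / glm::normalize(d)) {}
};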
Hello! I am unsure if this is the right subreddit for this, as I am still new to the platform, but I cannot figure out why my engine's geometry isn't being output properly. I've checked my vertex and index buffers and they are all correct, my shader works fine, and my input buffer is also correct. It was drawing at one point; I had to put it down for a week and now I cannot find what the issue is.
In RenderDoc's texture viewer I can see my texture being assigned properly, but the only output is a black screen. In the mesh viewer my vertices and indices are listed in the proper order on the input side, but the output shows all properties, even hard-coded ones, equal to zero. Decompiling the shader and stepping through it sets these values correctly, so why is DX11 not recognizing or using them?
RenderDoc VS Output (Mesh Viewer)
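To rule out input-assembler state, this is roughly the per-draw binding I'm double-checking (paraphrased rather than my exact code; Vertex, indexCount, and the pointers are stand-ins for my own objects, and device/resource creation is omitted):
// Paraphrased D3D11 per-draw state.
UINT stride = sizeof(Vertex);   // a stride of 0 here silently yields zeroed vertex inputs
UINT offset = 0;
context->IASetInputLayout(inputLayout);                 // must match the VS input signature
context->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
context->IASetIndexBuffer(indexBuffer, DXGI_FORMAT_R32_UINT, 0);
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
context->VSSetShader(vertexShader, nullptr, 0);
context->PSSetShader(pixelShader, nullptr, 0);
context->RSSetViewports(1, &viewport);                  // a zero-sized viewport also gives a black screen
context->OMSetRenderTargets(1, &renderTargetView, depthStencilView);
context->DrawIndexed(indexCount, 0, 0);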
Hi Reddit! As I said before, I merge into main every Friday. This week I worked on SpriteComponent: I added SpriteComponent to the engine, a Window class to manage the window, and an OnResize event for classes that need to react to resizing. I also implemented world transform matrices and the associated functions for GameObject.
Thanks for your attention!
Hello everyone,
I'm new to Vulkan, coming from a Swift developer background and some game dev experience in C++.
I've been following the tutorial on the official Vulkan website on how to render a triangle, but I'm struggling to really grasp the basic concepts and how they relate to each other.
For example, how the command buffers, framebuffers, the render/sub passes, the swapchain, and attachments work together.
Sometimes it feels like I'm creating loads of CreateInfos but I'm not seeing how the pieces connect.
Does anyone have tips on resources that go over these concepts? Or leave any comments below.
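To make the question concrete, here is my current mental model of how those pieces meet each frame, as a heavily simplified sketch (synchronization, error handling, and all the up-front setup are omitted; please correct me if it's wrong):
// One frame, assuming everything was created up front.
// The swapchain owns the images; a framebuffer binds one of those images (as an
// attachment) to a render pass; a command buffer records the render pass.
uint32_t imageIndex;
vkAcquireNextImageKHR(device, swapchain, UINT64_MAX, imageAvailableSemaphore, VK_NULL_HANDLE, &imageIndex);

VkCommandBufferBeginInfo beginInfo{VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO};
vkBeginCommandBuffer(commandBuffer, &beginInfo);

VkRenderPassBeginInfo rpInfo{VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO};
rpInfo.renderPass = renderPass;                    // describes the attachments
rpInfo.framebuffer = framebuffers[imageIndex];     // binds the acquired swapchain image
rpInfo.renderArea.extent = swapchainExtent;
rpInfo.clearValueCount = 1;
rpInfo.pClearValues = &clearValue;
vkCmdBeginRenderPass(commandBuffer, &rpInfo, VK_SUBPASS_CONTENTS_INLINE);
vkCmdBindPipeline(commandBuffer, VK_PIPELINE_BIND_POINT_GRAPHICS, graphicsPipeline);
vkCmdDraw(commandBuffer, 3, 1, 0, 0);              // the triangle
vkCmdEndRenderPass(commandBuffer);
vkEndCommandBuffer(commandBuffer);

// Submit the recorded work, then present the image we rendered into.
VkSubmitInfo submit{VK_STRUCTURE_TYPE_SUBMIT_INFO};
submit.commandBufferCount = 1;
submit.pCommandBuffers = &commandBuffer;
vkQueueSubmit(graphicsQueue, 1, &submit, VK_NULL_HANDLE);

VkPresentInfoKHR present{VK_STRUCTURE_TYPE_PRESENT_INFO_KHR};
present.swapchainCount = 1;
present.pSwapchains = &swapchain;
present.pImageIndices = &imageIndex;
vkQueuePresentKHR(presentQueue, &present);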
Thank you!
I just released my 4th game, and I didn't use any graphics engine to program it; I used Rust and OpenGL as my main tools. It was one of the first times I did deferred shading in a project, and despite many problems (I don't know much about color processing and I was trying to get a semi-realistic look), I can't stop thinking that in my next project I want to do ray tracing, with OpenGL, without using any graphics engine! I've already done some tests with ray marching and saw some positives and some negatives along the way, but I'm excited anyway. It's wonderful when we program a visual effect on the screen and it looks beautiful! Anyway, if you want to see the look of my game, here's the link:
I am currently writing a scene system for my engine. It uses an update system similar to Unity's entity/component structure. I am now thinking of using glTF as the scene file format instead of creating a new custom one. For components, I was planning to use the glTF extras field to save them.
Given that you can save an entire hierarchy with meshes, cameras, lights, and custom data, I think this would be a good approach. The only disadvantage I can think of is that other formats like FBX would need to be converted first.
What do you think of this idea? Are there any disadvantages I am currently overlooking?
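To make it concrete, the idea is that each node's extras object would carry the component data. A rough sketch of reading that back with a loader such as tinygltf (whose Value API I'd still need to verify; the "components" key is my own convention, not part of glTF):
#include <tiny_gltf.h>   // https://github.com/syoyo/tinygltf (assumed loader)
#include <iostream>
#include <string>

// Sketch: walk the scene nodes and pull component data out of each node's extras.
void loadScene(const std::string& path) {
    tinygltf::TinyGLTF loader;
    tinygltf::Model model;
    std::string err, warn;
    if (!loader.LoadASCIIFromFile(&model, &err, &warn, path)) {
        std::cerr << "glTF load failed: " << err << "\n";
        return;
    }
    for (const tinygltf::Node& node : model.nodes) {
        // node.translation / rotation / scale / mesh / camera map to engine objects here.
        if (node.extras.IsObject() && node.extras.Has("components")) {
            const tinygltf::Value& components = node.extras.Get("components");
            // ... deserialize each custom component from this JSON-like value
            (void)components;
        }
    }
}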
Hi Reddit, since a lot of you said that my project needs a license, I have added one, as well as a SpriteComponent.
Everything is now in the develop branch; the push into main will be on Friday, as always. I got banned from r/gamedev for team hiring or something like that. I don't think I hired anyone, but anyway. Thanks for your support and opinions. Next I need to add a view transform and world matrices for GameObject and SpriteComponent; I think you should take a look at how I implement that. I'm working hard on TEngine, so everyone who knows something about game engine programming, leave your tips here or in the GitHub discussions.
Thanks for watching!
Hi everyone,
I’m a junior programmer deeply passionate about game development, particularly engine programming. Unfortunately, I’m struggling to land a job in the game industry right now, and I can’t afford to stay unemployed for long.
I was wondering if anyone here knows of industries outside of game dev where I could develop skills that would transfer well to engine development. My ultimate goal is to return to the gaming industry with stronger expertise that aligns with engine dev needs.
For example, I imagine fields like graphics programming, real-time simulation or robotics might overlap, but I’d love to hear from those with more experience. Are there specific roles or industries you’d recommend?
Any advice, insights, or personal experiences would be hugely appreciated. Thanks in advance for your help!
What do you guys and gals think about having weekly discussions about a scheduled topic? Interested?
Let's try it here with the initial topic of "State of Game Engine Employment".
I'm sure many have, or have heard, stories both good and bad regarding the current state of employment within tech / general dev. What about the considerably more niche field of engine dev? What's your take?
Are you employed, seeking, not seeking whatsoever? How about your peers?
Want to celebrate each other's successes, commiserate over losses, and, most importantly, provide support and help uplift each other?
Hey!
I have created a simple parser for a custom shading language for my engine (it uses GLSL constructs) to simplify code management. For example, you use one file where you write the vertex and fragment shaders in two separate blocks.
For reference, see here.
What are your thoughts? What could I implement next? (Note: you can't actually import other files yet; that is something I would like to implement later.)
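For anyone curious about the single-file approach, the core of it is just splitting the source on block markers before handing each stage to the GL compiler. Here is a stripped-down sketch of that idea (the markers and names are illustrative, not my real syntax):
#include <sstream>
#include <string>
#include <unordered_map>

// Split one combined shader file into its stages.
// "#type vertex" / "#type fragment" are illustrative markers, not the real syntax.
std::unordered_map<std::string, std::string> splitShaderSource(const std::string& source) {
    std::unordered_map<std::string, std::string> stages;
    std::istringstream stream(source);
    std::string line, currentStage;
    while (std::getline(stream, line)) {
        if (line.rfind("#type ", 0) == 0) {          // line starts a new block
            currentStage = line.substr(6);           // e.g. "vertex" or "fragment"
        } else if (!currentStage.empty()) {
            stages[currentStage] += line + "\n";     // accumulate GLSL for that stage
        }
    }
    return stages;   // each entry can be handed to glShaderSource/glCompileShader
}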
Please help me if you can...
Here is some of the code I have set up so far (sorry for the bad formatting):
// Main source: https://github.com/Cornflakes-code/OWGameEngine/tree/master
#include "Physics.h"

#include <glm/gtc/epsilon.hpp> // for glm::epsilonEqual

namespace BlockyBuild {
    // Maps the hit distance back to the slab it came from and returns that face's normal.
    glm::vec3 Raycast::findNormal(float distance, float t1, float t2, float t3, float t4, float t5, float t6) {
        if (glm::epsilonEqual(distance, t1, epsilon))
            return glm::vec3(1, 0, 0);
        else if (glm::epsilonEqual(distance, t2, epsilon))
            return glm::vec3(-1, 0, 0);
        else if (glm::epsilonEqual(distance, t3, epsilon))
            return glm::vec3(0, 1, 0);
        else if (glm::epsilonEqual(distance, t4, epsilon))
            return glm::vec3(0, -1, 0);
        else if (glm::epsilonEqual(distance, t5, epsilon))
            return glm::vec3(0, 0, -1);
        else if (glm::epsilonEqual(distance, t6, epsilon))
            return glm::vec3(0, 0, 1);
        else
            return glm::vec3(0, 0, 0);
    }

    // Intersection test for rays that start inside the collider (still a stub).
    bool Raycast::internalIntersects(const Colliders::Collider& collider, glm::vec3& normal, float& distance) const {
        if (collider.type == Colliders::Box) {
            glm::vec3 dim = collider.box.size() / 2.0f;
            glm::vec3 point = dim * invDir;
            if (point.x > 0 && point.y > 0)
                normal = { 1, 0, 0 };
            glm::vec3 center = collider.box.center();
            return false;
        }
        return false; // non-box colliders not handled yet
    }

    // Slab test for rays that start outside an axis-aligned box.
    bool Raycast::externalIntersects(const Colliders::Collider& collider, glm::vec3& normal, float& distance) const {
        if (collider.type == Colliders::Box) {
            float t1 = (collider.box.minPoint().x - origin.x) * invDir.x; // left of box contacted   normal = -1,0,0  dir of ray = Compass::West
            float t2 = (collider.box.maxPoint().x - origin.x) * invDir.x; // right of box contacted  normal = 1,0,0   dir of ray = Compass::East
            float t3 = (collider.box.minPoint().y - origin.y) * invDir.y; // top of box contacted    normal = 0,1,0   dir of ray = Compass::South
            float t4 = (collider.box.maxPoint().y - origin.y) * invDir.y; // bottom of box contacted normal = 0,-1,0  dir of ray = Compass::North
            float t5 = (collider.box.minPoint().z - origin.z) * invDir.z; // +z of box contacted     normal = 0,0,1   dir of ray = Compass::In
            float t6 = (collider.box.maxPoint().z - origin.z) * invDir.z; // -z of box contacted     normal = 0,0,-1  dir of ray = Compass::Out

            float tmin = glm::max(glm::max(glm::min(t1, t2), glm::min(t3, t4)), glm::min(t5, t6));
            float tmax = glm::min(glm::min(glm::max(t1, t2), glm::max(t3, t4)), glm::max(t5, t6));

            // if tmax < 0, the ray (as a line) intersects the AABB, but the whole AABB is behind us
            if (tmax < 0)
            {
                distance = -tmax;
                normal = findNormal(distance, t1, t2, t3, t4, t5, t6);
                return false;
            }
            // if tmin > tmax, the ray doesn't intersect the AABB
            else if (tmin > tmax)
            {
                normal = glm::vec3(0, 0, 0);
                distance = 0;
                return false;
            }
            else
            {
                distance = tmin;
                normal = findNormal(distance, t1, t2, t3, t4, t5, t6);
                return true;
            }
        }
        return false; // non-box colliders not handled yet
    }

    bool Raycast::intersects(const Colliders::Collider& collider, glm::vec3& normal, float& distance) const {
        if (false) // TODO: box.contains(origin), to use the internal test when the ray starts inside
        {
            return internalIntersects(collider, normal, distance);
        }
        else
        {
            return externalIntersects(collider, normal, distance);
        }
    }

    bool Raycast::containColliderInMask(const Colliders::Collider& collider) const {
        for (const auto& maskCollider : mask) {
            if (maskCollider == collider)
                return true;
        }
        return false;
    }

    RaycastHit Raycast::hit(std::shared_ptr<World> world) {
        glm::vec3 normal(0.0f);
        float distance = 0.0f;
        glm::vec3 maxDistanceOffset = origin + (glm::vec3(1) * maxDistance);
        glm::vec3 minDistanceOffset = origin + (glm::vec3(1) * -maxDistance);
        for (const auto& collider : world->getColliders(BlockColliders)) {
            if (containColliderInMask(collider.second))
                continue;
            if (intersects(collider.second, normal, distance)) {
                return {
                    true,
                    { collider.first[0], collider.first[1], collider.first[2] },
                    normal
                };
            }
        }
        for (const auto& collider : world->getColliders(MobColliders)) {
            if (intersects(collider.second, normal, distance))
                return { true, collider.second.box.center(), normal };
        }
        return { false, {}, {} };
    }

    Raycast::Raycast(const glm::vec3& origin, const glm::vec3& direction, const float& maxDistance, std::vector<Colliders::Collider> mask)
        : origin(origin), direction(glm::normalize(direction)), maxDistance(maxDistance), mask(std::move(mask)) {
        invDir = 1.0f / this->direction; // use the normalized direction, not the raw parameter
    }
}
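For reference, this is roughly how I intend to construct and fire the ray once it works (the RaycastHit field names below are placeholders, not necessarily what my struct declares):
// Rough usage sketch (assumes a std::shared_ptr<World> named world already exists;
// hasHit / position / normal are placeholder field names).
Raycast ray(cameraPosition, cameraForward, 100.0f, /*mask*/ {});
RaycastHit result = ray.hit(world);
if (result.hasHit) {
    // react to result.position and result.normal here
}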
So, a couple of days ago I asked how you all handle entities / scene graphs, and the overwhelming majority was using ECS. So naturally, having little to no experience in C++ memory management, apart from half a semester's worth of lectures years ago (which I spent doing work for other courses), I decided I wanted an ECS too, and I wanted the flying-unicorn type of ECS if I'm going through the trouble of rebuilding half my engine (to be fair, there wasn't much more than a scene graph and a hardcoded simple render pipeline).
In any case I read the blogs of some very smart and generous people:
And then I force-fed ChatGPT my ridiculous requirements until it spat out enough broken code for me to cobble together some form of solution. To wit: I think I've arrived at a rather elegant solution? At least my very inexperienced ego is telling me as much.
In Niko Savas's blog I found him talking about SoA- and AoS-style storage, and seeing that it would be completely overkill for my application, I needed to implement SoA. But I didn't want to declare the components like it's SoA. And I didn't want to access it like it's SoA. And I didn't want to register any components. And I didn't want to use type traits for my components.
And so I arrived at my flying unicorn ECS.
(To those who are about to say "just use entt": well, yes, I could do that, but why use a superior product when I can make an inferior version in a couple of weeks?)
Now, since I need to keep my ego in check somehow I thought I'd present it on here and let you fine people tell me how stupid I really am.
I'm not going to post the whole code; I just want to sanity-check my thought process. I'll figure out all the little bugs myself; how else am I going to stay awake until 4 am? (Also, the code is in a very ugly and undocumented state, and I'm doing my very best to procrastinate on the documentation.)
using EntityID = size_t;
64-bit may be overkill, but if I didn't have megalomania I wouldn't be doing any of this.
The actual interaction with entities is done through an Entity class that stores a reference to the scene class (my Everything Manager; I didn't split the entity and component managers into individual classes, as it seemed unnecessarily cumbersome at the time, though the scene class is growing uncomfortably large).
struct Transform {
    bool clean;
    glm::vec3 position;
    glm::quat rotation;
    glm::vec3 scale;
    glm::mat4 worldModelMatrix;
};
Components are just aggregate structs. No type traits necessary. This makes them easy to define and maintain. The goal is to keep these as simple as possible and allow for quick iteration without having to correctly update dozens of definitions and declarations. This feature was one of the hardest to implement due to the sparse reflection capabilities of C++ (one of the many things I learned about on this journey).
I handle the conversion to SoA-style storage through my ComponentPool class, which is structured something like this:
template <typename T>
using VectorOf = std::vector<T>;

// Metafunction to transform a tuple of types into a tuple of vectors
template <typename Tuple>
struct TupleOfVectors;

template <typename... Types>
struct TupleOfVectors<std::tuple<Types...>> {
    using type = std::tuple<VectorOf<std::conditional_t<std::is_same_v<Types, bool>, uint8_t, Types>>...>; // taking care of vector<bool> being a PIA
};

template<typename cType>
class ComponentPool : public IComponentPool {
    using MemberTypeTuple = decltype(boost::pfr::structure_to_tuple(std::declval<cType&>()));
    using VectorTuple = typename TupleOfVectors<MemberTypeTuple>::type;
    static constexpr size_t Indices = std::tuple_size<MemberTypeTuple>::value;

    VectorTuple componentData;
    std::vector<size_t> dense, sparse;
    // ... c'tors, functions, etc.
};
The VectorTuple is a datatype I generate using boost::pfr and some template metaprogramming to create a tuple of vectors. Each member in the struct cType is given its own vector in the tuple. And this is where I'm very unsure of whether I'm stupid or not: I've not seen anyone use vectors for SoA. I see two possible reasons for that: 1. I'm very stupid and vectors are a horrible way of doing SoA; 2. People don't like dealing with template metaprogramming (which I get, my head hurts). My thinking was: why use arrays that have a static size when I can use vectors that grow by themselves? And they take care of memory management. But here I'd really appreciate some input, for my sanity's sake.
I also make use of sparse-set logic to keep track of the components; I stole the idea from David Colson. It's quite useful, as it gives me an up-to-date list of all entities that have a given component for free. I've also found that it makes sorting the vectors very simple, since I can supply a new dense vector and quickly swap the positions of elements using std::swap (I think it works on everything except vector<bool>).
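For anyone unfamiliar with the trick, here is a stripped-down sketch of the sparse-set bookkeeping I mean, independent of my actual pool class (the names are illustrative):
#include <cstddef>
#include <vector>

// Minimal sparse set over entity IDs. dense is a packed list of entities that own the
// component; sparse[entity] stores that entity's index into dense, so membership tests
// and removals are O(1).
struct SparseSet {
    std::vector<std::size_t> dense;
    std::vector<std::size_t> sparse;

    bool contains(std::size_t entity) const {
        return entity < sparse.size() && sparse[entity] < dense.size() && dense[sparse[entity]] == entity;
    }

    void insert(std::size_t entity) {
        if (contains(entity)) return;
        if (entity >= sparse.size()) sparse.resize(entity + 1);
        sparse[entity] = dense.size();
        dense.push_back(entity);       // the per-member component vectors grow in lockstep here
    }

    void remove(std::size_t entity) {
        if (!contains(entity)) return;
        std::size_t index = sparse[entity];
        std::size_t last = dense.back();
        dense[index] = last;           // swap-remove keeps dense packed
        sparse[last] = index;
        dense.pop_back();              // the matching component data gets swap-removed too
    }
};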
Finally, to access the data as if I were using AoS in an OOP-style manner (e.g. Transform.pos = thePos;), I use a handle class Component<cType> and a Proxy struct. The Proxy struct extends cType and is declared inside the ComponentPool class. It has all its copy/move etc. c'tors deleted so it cannot persist past a single line of code. Component<cType> overrides the -> operator to create and return a newly constructed proxy, which is filled in from the tuple of vectors. To bring the data back into SoA storage, I hijacked the destructor of the Proxy class to write the data back into the tuple of vectors.
struct ComponentProxy : public cType {
    explicit ComponentProxy(ComponentPool<cType>& pool, EntityID entityId)
        : cType(pool.reconstructComponent(entityId)), pool(pool), entityId(entityId) {}

    ComponentPool<cType>& pool; // Reference to the owning ComponentPool
    EntityID entityId;

    ~ComponentProxy() { pool.writeComponentToPool(*this, entityId); }
    ComponentProxy* operator->() { return this; }
    // ... delete all the copy/move etc. c'tors
};
This lets me access the type like so:
Entity myentity = scene.addEntity();
myentity.get<Transform>()->position.x = 3.1415;
It does mean that when I change only the position of the Transform, the entire struct is reconstructed from the tuple of vectors and then written back, even though most of it hasn't changed. That being said, the performance-critical stuff is meant to work via systems that iterate directly over the vectors. Those systems will be kept close to the declaration of the components they concern, making them that much simpler to maintain.
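As a rough illustration of what I mean by systems iterating the vectors directly (the membersOf<I>() accessor is hypothetical; my real pool doesn't expose exactly this):
#include <cstddef>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Illustrative system: touch only the member vectors it needs, skipping the proxy.
// membersOf<I>() is a hypothetical accessor returning the I-th member vector of the pool;
// indices follow Transform's declaration order (position = 1, worldModelMatrix = 4).
void transformSystem(ComponentPool<Transform>& pool) {
    auto& positions = pool.membersOf<1>();   // std::vector<glm::vec3>
    auto& matrices  = pool.membersOf<4>();   // std::vector<glm::mat4>
    for (std::size_t i = 0; i < positions.size(); ++i) {
        matrices[i] = glm::translate(glm::mat4(1.0f), positions[i]);   // rotation/scale omitted
    }
}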
Still I'm concerned about how this could impact things like multi-threading or networking if I ever get that far.
If you've come this far, thank you for reading all that. I'm really not sure about any of this. So any criticism you may have is welcome. As I said I'm mostly curious about your thoughts on storing everything in vectors and on my method of providing AoS style access through the proxy.
So yeah, cheers and happy coding.
I wrote this Medium article a while back, talking about how I basically had a negative experience learning game development with SDL2. The article still drives traffic today, probably due to its controversial title. Nobody has ever called me out on this article or told me that my points were wrong, and I'm just not used to an article of mine getting this much attention. At the time of writing it has almost 200 views, and they keep coming. This is the article:
I understand that you might not agree with me or like the article, but just don't blow me up in the comments with anger. Please. Like I'm literally using SDL2 to make a video game right now.
Hey 👋. I have just published a very early alpha version of my library timefold/webgpu. It's far from ready, but you can already use it to define structs, uniforms, and vertex buffers in TypeScript. This definition can then be used to:
No need to write a single type yourself. Everything is inferred automatically!
I am planning to add more and more features to it, but early feedback is always better. So reach out if you have feedback or just want to chat about it ✌️
Heya, just wondering if anybody has experience using the Nvidia GameWorks NRI library to set up D3D12, Vulkan, etc., whether anybody has hit any pain points, or if it's just better to create your own device, context, allocators, and the rest of the graphics API plumbing yourself?
Hey 👋. I am working on my own game engine. It is written in TypeScript and will use WebGPU for rendering. It will consist of several modules that can be used on their own, but also play well together.
I am proud to announce that the first little module is now available on npm: https://www.npmjs.com/package/@timefold/obj?activeTab=readme
This obj and mtl parser is the first building block that everyone can use in their own projects. Here is the overview:
It can parse the geometry into various formats. Pick the one that suits you best and let me know if you have any issues with it.✌️