/r/gameenginedevs
This subreddit is all about game engines! Talk about methodologies, projects, or ideas for game engines and software engineering. Feel free to post about the projects you're working on or find interesting.
This is part 4 of a game engine basics mini-series. Enjoy!
I am interested in implementing a solution on my own, so I am looking for state-of-the-art algorithms and techniques, preferably performance-oriented ones. Do you know of any good talks, books, or papers about it?
When it comes to game development, there are many options to choose from. Unity and Godot are two of the most popular game engines available; each offers unique strengths and features. However, the two engines serve very different demographics. Unity is the industry standard and provides the infrastructure for many of the world's most popular titles, and it is designed to handle larger animation projects, while Godot is much more streamlined and focused on indie game development for smaller teams.
Although Godot isn't as well established as Unity, it's becoming a more viable alternative due to an easier project pipeline, infrastructure, and interface. However, in recent years Unity has been diversifying further with added functionality for native animation and VR for mobile. With that in mind, let’s explore the pros and cons of both programs, helping you decide which engine will be the best fit for your project.
Unity and Godot Functionality
Let's start with the basic building blocks of both engines. Unity uses game objects and components. Components hold data and functionality, while game objects represent characters, props, and scenes; the components are used to define the game objects. They are displayed in the Hierarchy menu and can be nested. Unity's component architecture is powerful and scalable but is harder to maintain than a node-based system.
Godot uses nodes and scenes for its basic elements. Nodes are classes with default methods and attributes. These can be nested or have siblings, and multiple nested nodes create node trees that inherit functionality from each other. Scenes then organize and display the nodes however you want. They are shown in the Scene menu, and scenes can be referenced from other scenes. In Unity, by contrast, scenes are separate entities from their game objects and components.
Godot has a modular and flexible architecture that allows nodes to be associated with different scenes; this makes it naturally more intuitive and flexible for creating different games than the component-based system that Unity uses. This setup allows Godot to favor composition over inheritance, making it easier to scale.
Godot - Player Node Nested in Level
Looping a background scene is much easier in Godot, as it has a built-in background node that can be mirrored. It also makes it simpler to create a parallax effect with multiple layers that can be looped on a chosen axis. In Unity, you have to reset the background position in code by declaring a start position and an offset position.
Unity - Loop background layer in C# code
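The reset logic the article alludes to is small. Here is a sketch of the idea; Unity developers would write this in C#, but it is shown as plain, dependency-free C++ here, and the names (`startX`, `width`) are illustrative rather than Unity API:

```cpp
#include <cassert>

// Sketch of the background-looping reset described above.
// startX is where the layer begins; width is the repeat distance.
float loopBackground(float x, float startX, float width) {
    // Once the layer has scrolled a full repeat distance past its start,
    // snap it forward by one width so the tiling appears seamless.
    if (x <= startX - width) {
        x += width;
    }
    return x;
}
```

Calling this every frame after moving the layer keeps the background within one repeat distance of its start position, which is what makes the loop invisible to the player.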
Downloading and Setup
Let's talk about setup. Downloading the executables is the same for both engines: navigate to their respective sites and download the file. With Godot, you can choose native GDScript or .NET for C#. Unity only supports C#; however, you can use other languages if they can compile a compatible DLL.
The two engines have very different installation sizes. Godot is lightweight at about 40 MB, compared to roughly 15 GB for Unity. You will need multiple versions of either engine installed to stay compatible with projects made in older versions. There are far more external modules to update with Unity; however, this may change as Godot grows and the developers add more functionality.
Both engines support version control. Godot has an official Git plugin, making it easy to create metadata in the project manager. You can also use Anchorpoint with Godot, although it's not officially supported by the app.
Learning and Resources
Godot has fewer learning resources than Unity, but this is slowly changing with third-party creators like GDQuest and Clearcode that offer free and paid tutorials. The official documentation for Godot has a 2D and a 3D game tutorial but doesn't have the same quality of resource material as Unity.
Unity has structured video tutorials that guide you through its core learning pathways: AR mobile development, junior programming, and Creative Core. These courses are free; however, if you wish to take the Unity certification exams at the end, you will need to pay.
Unity's dedicated learning pathways are well-designed and guide you each step of the way. They are a good starting point for beginners who want a solid foundation in game programming and development.
Scripting Languages
Unity uses C# as its main scripting language. It is integrated with Microsoft's Visual Studio IDE which is a popular code editor for many programmers. However, you can sync with other IDEs within Unity including Visual Studio Code and JetBrains Rider.
Unity - Visual Studio and Visual Studio Code
Unity doesn't have a native scripting language specifically built for it, unlike Godot, which is seamlessly integrated with GDScript. This isn't necessarily a drawback, as many programmers like the functionality provided by third-party IDEs. But for dedicated game programming, GDScript is a good choice if you plan to use the engine. It is dynamic and versatile, similar to Python, and has been specifically built for Godot. It has a built-in IDE which auto-completes and identifies nodes quickly, and it uses automatic memory management, handling allocation and deallocation for you. You can also generate bindings for C++ and Rust via GDExtension if you want to use another language with Godot.
Godot supports C# as well but it isn't as tightly integrated as GDScript. Overall, C# is a more mature and faster general-purpose programming language that can be used for many applications, unlike GDScript which is specific to Godot. C# does have some costly overheads when integrated with Godot. It sometimes struggles to identify new nodes created in the engine but is a viable option for people who want to use C# with Godot.
Visual Scripting
Godot discontinued its visual scripting language in Godot 4 because it didn't offer any useful abstraction over GDScript: it used the same API, so it didn't have the same advantages as Unity's visual scripting language, which has a separate API. Visual scripting also has performance drawbacks compared to traditional scripting languages such as C# or GDScript, and it can make your code hard to refactor and optimize.
Unity continues to support its visual scripting language, and it is a great alternative for rapid prototyping of simple games, but it isn't recommended for more complex projects. Code also has the advantage of working with version control systems like Git, which will be important as you become more experienced as a game developer.
Animation and shaders
Godot has built-in animation support with the AnimationPlayer node. This has key-framing, tweening, and slicing for sprite maps. The animation player is loaded when the node is added to the Scene menu. Unity has the same functionality for games but also has real-time animated storytelling for 3D animations.
Unity's real-time animation is for filmmakers who want camera angles, props, and animated characters. It uses HDRP (High Definition Render Pipeline) for its 3D renderer; this isn't specific to game development but is a good example of Unity's extensive toolset.
Unity: The Industry Standard
For high-end visuals such as real-time animation for film and rigging for characters, Unity has all the tools you need to get started. It also offers the Universal Render Pipeline (URP) template, allowing creators to quickly iterate and collaborate on a project. Godot has no separate renderer for cinematic filmmaking but is very well equipped for in-game animation. If you want to learn coding on multiple platforms, then using Unity and C# is a better option. However, as a stand-alone game engine Godot shines due to its intuitive editor, ease of installation, code iteration, and node-based infrastructure. If you want to develop games using a simple coding language, then Godot is a better option.
Conclusion
Unity is a veteran of the game development industry. It is still considered the industry standard, with many employers wanting Unity certification and experience. Unity also offers structured learning pathways with a step-by-step curriculum and an industry-recognized certificate at the end. As a stand-alone game engine, though, it has become bloated with add-ons and external plugins that detract from its core functionality. Unity is a creation suite that has more scope for creating 3D animations, filmmaking, and realistic rendering, but it lacks the tight integration of Godot.
https://gamekitlab.com/software-review/unity-and-godot-the-ultimate-game-development-showdown/
🚀 TRenderer: My First Rendering Engine Project
I'm pleased to present TRenderer — the first open version of a rendering engine I developed to explore and deepen my understanding of DirectX and rendering engine architecture.
About TRenderer
This project was created as a learning experience and includes foundational features for 3D and 2D rendering:
3D Rendering
Deferred Shading: A modern technique for enhanced lighting realism.
Lighting Models: Support for point, spot, and directional light sources.
Directional Light Shadows: Dynamic shadows to add depth and immersion.
2D Rendering
Sprite Rendering: Efficient rendering of 2D graphics.
Text Rendering: Bitmap font support for precise and fast text output.
Additional Features
Texturing: Texture mapping for object detailing.
Normal Mapping: Support for normal maps to enhance lighting and create surface relief.
Skybox: Realistic environmental backgrounds.
Next Steps
While this project served as a platform to learn the basics of DirectX and engine architecture, I am currently working on a more advanced version. The new iteration will feature a modern object-oriented design and leverage the latest technologies to improve flexibility, performance, and functionality. TRenderer is just the beginning of my journey in graphics programming, and I'm excited about the opportunities to grow and develop even more sophisticated systems.
🔗 GitHub: https://github.com/f1oating/TRender
🔗 LinkedIn: https://www.linkedin.com/in/dmytro-toronchenko-190383293/
I’d be glad to hear your feedback or insights on this project!
My last posts about game engines:
Hi, I just wanted to let you know the OpenGL 4.6-powered Ultra Engine 0.9.8 is out. This update adds a new material painting system, really good tessellation, and a first-person shooter game template.
Material Painting
With and without material painting
The new material painting system lets you add unique detail all across your game level. It really makes a big improvement over plain tiled textures. Here's a quick tutorial showing how it works:
Tessellation Made Practical
I put quite a lot of work into solving the problems of cracks at the seams of tessellation meshes, and came up with a set of tools that turns tessellation into a practical feature you can use every day. When combined with the material painting system, you can use materials with displacement maps to add unique geometric detail all throughout your game level, or apply mesh optimization tools to seal the cracks of single models.
Sealing the cracks of a tessellated mesh
First-person Shooter Template
This demo makes a nice basis for games and shows off what the engine can do. Warning: there may be some jump scares. :D
First-person shooter example game
Website is here if you want to check it out: https://www.ultraengine.com/
This engine was created to solve the rendering performance problems I saw while working on VR simulations at NASA. Ultra Engine provides up to 10x faster rendering performance than both Leadwerks and Unity:
https://github.com/UltraEngine/Benchmarks
Let me know if you have any questions and I will try to reply to everyone. :)
I basically followed the Learn OpenGL model-importing lesson for my engine. I'm using some files from Kenney here. When I import e.g. both the barrel FBX and OBJ files into Blender, they're normal sizes and, more importantly, the same size as each other. Meanwhile, when I use Assimp to load both into my engine, the OBJ one is appropriately sized but the FBX one is, I think, exactly 100x larger. I suspect the FBX vertex positions are somehow being interpreted as centimeters instead of meters, but I'm unable to figure out why, or where in the import process this would be happening. Any ideas? My asset import code is basically the same as this.
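For what it's worth, FBX files commonly declare their units in centimeters, and Assimp exposes a "UnitScaleFactor" entry in the scene's metadata for some formats; check the exact semantics for your Assimp version before trusting it. If the factor really does turn out to be 100, a post-import rescale would normalize the mesh. A standalone sketch (the `Vec3` type and the helper name are invented for illustration, not Assimp API):

```cpp
#include <cassert>
#include <vector>

// Minimal stand-in for a mesh position; a real importer would rescale
// aiMesh::mVertices instead.
struct Vec3 { float x, y, z; };

// Divide every position by the unit scale factor, e.g. 100 for cm -> m.
void applyUnitScale(std::vector<Vec3>& positions, float unitScale) {
    for (Vec3& p : positions) {
        p.x /= unitScale;
        p.y /= unitScale;
        p.z /= unitScale;
    }
}
```

Doing this once at import time keeps the rest of the engine unit-agnostic, rather than scattering per-format scale factors through the renderer.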
Basically what I need is a dynamic rigid body that cannot have its rotation and angular velocity changed by colliding with other objects; I need my game engine to control the rotation of the rigid body. I tried to set the local inertia to {0, 0, 0} via setMassProps, but with a positive scalar mass this causes the rigid body to have a {NaN, NaN, NaN} linear velocity after a collision. I use btDiscreteDynamicsWorld and Bullet 3.25.
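I'm not certain where the NaNs come from in this setup, but the commonly recommended way to lock rotation in Bullet is `body->setAngularFactor(btVector3(0, 0, 0))`, which leaves the mass and inertia valid and instead scales every collision-induced angular change by zero. The idea, sketched standalone with made-up types rather than Bullet's:

```cpp
#include <cassert>

// Illustrative stand-ins for btVector3/btRigidBody, showing what an
// angular factor does to collision response.
struct Vec3 {
    float x, y, z;
};

struct Body {
    Vec3 angularVelocity{0, 0, 0};
    Vec3 angularFactor{1, 1, 1}; // set to {0,0,0} to lock rotation entirely

    // Collision response scales any angular impulse by the per-axis factor,
    // so a zero factor means collisions can never change the rotation.
    void applyAngularImpulse(const Vec3& impulse) {
        angularVelocity.x += impulse.x * angularFactor.x;
        angularVelocity.y += impulse.y * angularFactor.y;
        angularVelocity.z += impulse.z * angularFactor.z;
    }
};
```

Your engine can then write the body's orientation directly each frame (e.g. via the motion state) while the solver handles linear motion normally.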
I have a pretty basic asset system set up: for each asset there is a corresponding loader. This was working fine so far with textures and meshes, but shaders don't seem to fit as nicely. Unlike other assets, which I can load with a single file path (MeshLoader::load(path)), shaders need at least two (vertex and fragment). This didn't seem like an issue, since I know all my shaders and could just do ShaderLoader::load(path1, path2). But for my editor I was experimenting with loading assets by dragging and dropping them, which doesn't work so well with a method that takes two parameters, and I can't necessarily load one at a time since I need both files to create a valid shader.
The “solutions” I’ve thought of all seem very error-prone. I think the easiest one is to pass a directory rather than the files, but if I have a shader that’s just a vertex or fragment stage, or one that reuses an existing stage, it might be a bit of a pain.
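One drag-and-drop-friendly alternative to passing a directory: derive a single asset key from whichever stage file was dropped, and only call the two-path loader once both stages for that key have shown up. A minimal sketch; the `.vert`/`.frag` naming convention is the assumption here, not something the engine described above already has:

```cpp
#include <cassert>
#include <string>

// Map any dropped stage file back to a single shader asset key by stripping
// a known stage extension. The editor can then wait until both stages for
// that key have arrived before calling ShaderLoader::load(key + ".vert",
// key + ".frag").
std::string shaderKeyFor(const std::string& path) {
    const std::string exts[] = {".vert", ".frag"};
    for (const std::string& ext : exts) {
        if (path.size() >= ext.size() &&
            path.compare(path.size() - ext.size(), ext.size(), ext) == 0) {
            return path.substr(0, path.size() - ext.size());
        }
    }
    return path; // not a recognized stage file; leave untouched
}
```

This keeps the one-argument loader interface for every asset type, with the pairing logic living in the editor rather than the loader.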
Hey everyone,
I've just made some big strides on my engine, and now it's on to user-defined behaviors/components. After adding a memory wrapper to make sure access doesn't change if objects move around in memory, I realized that there's been a pretty major flaw in my design that I now need to think about before moving much further.
I'm using a fairly standard ECS: I have entities that contain no real data except (wrapped) pointers to their components and a transform, and components of varying uses.
Both entities and components of each engine-defined type are stored in their own contiguous memory managers. And every frame I run along each memory pool to handle updates in a fast and cache-friendly cycle, everything's going quite swimmingly on that front. My physics, rendering, audio, and other in-built components are running perfectly.
However, when it comes to accessing one of these components from another, which is likely to be commonplace in my user-defined behaviors (which will be their own component types), it's looking like it's going to be pretty cache-unfriendly, and quite unpredictably so. Operations like setting a position or updating a collider's size could very well happen every frame, and I'm not entirely sure how I'd optimize such a thing.
I'm going to continue adding my behavior system in the meantime, since I can't bottleneck here just yet. Are there any tips y'all have for optimizing this kind of thing?
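One common mitigation for scattered cross-component writes is to not perform them immediately: behaviors record commands during their update, and the engine applies each command list against the target pool in its own pass, so every pool is still walked in one linear sweep. A minimal sketch with invented names (`SetPosition`, `PositionCommandBuffer`):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A recorded write targeting the position pool.
struct SetPosition {
    std::size_t entity;
    float x, y;
};

struct Position {
    float x = 0, y = 0;
};

// Behaviors push commands instead of writing into foreign pools directly;
// flush() runs once per frame, after all behavior updates.
class PositionCommandBuffer {
public:
    void push(const SetPosition& cmd) { commands_.push_back(cmd); }

    void flush(std::vector<Position>& pool) {
        for (const SetPosition& cmd : commands_) {
            pool[cmd.entity].x = cmd.x;
            pool[cmd.entity].y = cmd.y;
        }
        commands_.clear();
    }

private:
    std::vector<SetPosition> commands_;
};
```

The trade-off is one frame of latency on cross-component writes (reads still need a separate answer, e.g. cached copies), but the per-pool update loops stay cache-friendly.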
Hello everyone, I have been developing a tool for creating interactive content that is very focused on the web. Even with other objectives, I understand that it can have an application in a niche of game creation, things like visual novels. Currently, it is years behind projects like renpy (many years), but I've been making some progress. I’m still working on the basics of the software, but I do have some ideas on how to simplify its use for creating visual novel-type narratives. I believe that in the next version I’ll be able to bring a simplified way of creating menus among other more common widgets.
Would you like to check it out? Its name is TilBuci, and the website is here: https://tilbuci.com.br
This may be a silly question, but I'm new to graphics programming and this has been bothering me for some time now. Basically, I am working on an isometric RPG using C++ and SFML for graphics. I heard somewhere (either here or on the SFML subreddit) that you should always keep your textures at 4096x4096 or below to support older GPUs.
My game is 2D, so I use spritesheets for animations. Right now I simply have a PNG file that has all the frames for an animation in different directions, and I just make a texture in SFML from that, then move a rectangle through the texture to render a frame.
This is an older prototype demo just to give you an idea of the animation style.
https://drive.google.com/file/d/1GuXqKcGtklNnaVnQwFhDqYsnuaJ8VOfx/view?usp=drivesdk
The animations are very low frame-count, not smooth and realistic, kind of like a "retro" feel. But even then I have trouble keeping some of them to the 4096 size. I have either 4 or 8 directions and the frame size is about 400x400 pixels, so anything longer than about 10 frames would go over the size.
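For reference, the "move a rectangle through the texture" step boils down to a little index arithmetic. A sketch of it, using a plain struct in place of sf::IntRect (frame size and column count are whatever your sheet layout uses):

```cpp
#include <cassert>

// Plain stand-in for sf::IntRect; in SFML you'd pass the result to
// sf::Sprite::setTextureRect.
struct IntRect { int left, top, width, height; };

// Source rect for frame i on a row-major spritesheet with a fixed number
// of columns (e.g. one row per facing direction).
IntRect frameRect(int frameIndex, int columns, int frameW, int frameH) {
    const int col = frameIndex % columns;
    const int row = frameIndex / columns;
    return IntRect{col * frameW, row * frameH, frameW, frameH};
}
```

With 400x400 frames, a 4096-wide sheet holds 10 columns (4096 / 400 = 10 with remainder), which matches the roughly-10-frame ceiling mentioned above; wrapping extra frames onto additional rows is one way to stay under the limit.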
So my questions are:
Thanks in advance!
I am about to drop my GLSL shader compiling toolchain and switch to an HLSL-based language. With Khronos currently launching the Slang initiative, I am considering Slang to be more future-proof for a Vulkan-based rendering backend. I wish to hear your thoughts, and it's even better if you have any experience using Slang and DXC to share.
Working on a 2D game engine, with an entity component system for the game objects that's defined in C#. It's then transformed for C++/CLI, which goes to C++ through a DLL, in a way that updates the original pointer data when modified in C++.
But because of the way I'm doing it, I have to have the parent object of each component. So right now I'm trying to get the parent object of each component through a void* sent to C++/CLI, then converting back to the CLI container class to get each individual component pointer, then looking up the list to get the C# pointer, to update the parent object pointer for the C# container, to update the data and interact with the other components.
Maybe I'm overthinking this a bit?
It is fun working with memory on this level though.
I'm aware that there are several books on this topic. I've been recommended to read "Programming from the Ground Up" or "Learn to Program with Assembly" by Jonathan Bartlett. I am curious what your recommendation is for learning x86 assembly in the context of game dev and game engine programming. I understand that we are in a decade where people don't write assembly, but I believe it would benefit me while writing C++ to understand what's going on under the hood. The end goal would be to write a simple software rasterization program in assembly.
I want to start working on an asset manager and I’ve done a bit of research to get an idea of what needs to be done, but it’s still a bit confusing specifically because an asset can be created/loaded in various ways.
The gist of it seems to be that the asset manager is some sort of registry: it just stores assets that you can retrieve. Then you have loaders for assets, and their only purpose seems to be to handle loading from file? Because if I wanted to create a mesh from data, I don't think it would make sense to do MeshLoader.loadFromData() when I could just do AssetManager->create<Mesh>("some name for mesh") (to register the asset) and then mesh->setVertices().
The code I’ve seen online by other people doesn't seem to do anything remotely close to this, so part of me is second-guessing how practical this even is, haha.
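For what it's worth, the registry idea described above can be this small; everything about how data gets into an asset (file loaders, procedural setVertices-style calls) stays outside it. A minimal sketch with invented names:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <unordered_map>

// An example asset type; real assets (Mesh, Texture, Shader) would go here.
struct Mesh {
    int vertices = 0;
};

// The registry only owns assets and hands them out by name. It doesn't care
// whether a loader filled them from disk or code built them procedurally.
template <typename T>
class AssetRegistry {
public:
    std::shared_ptr<T> create(const std::string& name) {
        auto asset = std::make_shared<T>();
        assets_[name] = asset;
        return asset;
    }

    std::shared_ptr<T> get(const std::string& name) const {
        auto it = assets_.find(name);
        return it != assets_.end() ? it->second : nullptr;
    }

private:
    std::unordered_map<std::string, std::shared_ptr<T>> assets_;
};
```

Under this split, a file loader is just one client of `create()` among others, which is why the loadFromData path in the post feels redundant: the registry never needed to know about it.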
Hello!
I'm super curious about the 3D Graphics Programming from Scratch course from Pikuma. I've done his 2D game engine course and it was great, so I'm sure the 3D one would be great too.
However, I'm curious if anyone here has done it and what your experience was. Was it "worth" learning it in C?
Is it a downside that it's "only" CPU-based rendering and that you are not doing any GPU-related things?
As someone who wants to move from mobile development in Swift to more technical things like game dev or the like using C++, is this worth doing basically?
Thank you and let me know in the comments!
I started learning OpenGL a month ago. I've been playing League of Legends since 2019, and I really love the MOBA genre. People around me tell me that I should just use an engine, but I still want to make my own, even though I know it can take decades. I want to know how everything in a game really works. Is anyone else making a MOBA game engine too?
While working on my custom rendering pipeline, I am trying to figure out the best way to render a scene that would include many types of objects, techniques and shaders like the following:
- many light source objects (e.g. sun, spotlight, button light)
- opaque/transparent/translucent objects (e.g. wall, tinted window, brushed plastic)
- signed distance field objects rendering (e.g. sphere, donut)
- volumetric objects rendering (e.g. clouds, fire, smoke)
- PBR material shaders (e.g. metal, wood, plastic)
- animated objects rendering (e.g. animal, character, fish)
and probably stuff I forgot...
I have written many shaders so far but now I want to combine the whole thing and add the following:
- light bloom/glare/ray
- atmospheric scattering
- shadowing
- anti-aliasing
and probably stuff I forgot...
So far, I think a draw loop might look like this (probably deferred rendering because of the many light sources):
- for each different opaque shader (or a uber shader drawing all opaque objects):
- draw opaque objects using that shader
- draw animated objects using that shader
- draw all signed distance field objects by rendering a quad of the whole screen (or perhaps a bunch of smaller quads with smaller lists of objects)
- draw all volumetric objects by rendering a quad of the whole screen (or perhaps a bunch of smaller quads with smaller lists of objects)
- for each different light/transparent/translucent shader:
- sort objects (or use order independent transparency technique)
- draw light/transparent/translucent objects using that shader
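One way to sanity-check an ordering like this before wiring up real passes is to sketch the frame as a list of pass descriptions first and only then flesh each one out. A trivial standalone version of the loop above; the pass names are placeholders, and a real pass would bind shaders and issue draws:

```cpp
#include <string>
#include <vector>

// Record the frame's pass order without doing any GPU work, so the
// structure can be inspected and reordered cheaply.
std::vector<std::string> buildFrame() {
    std::vector<std::string> passes;
    passes.push_back("gbuffer: opaque + animated objects, per opaque shader");
    passes.push_back("fullscreen: signed distance field objects");
    passes.push_back("fullscreen: volumetric objects");
    passes.push_back("sorted: transparent/translucent objects, per shader");
    return passes;
}
```

Post-processing steps like bloom, atmospheric scattering, and anti-aliasing would slot in as additional fullscreen entries after the sorted pass, and a shadow-map pass before the gbuffer; growing the list this way is cheaper than discovering ordering problems in shader code.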
But:
- Not sure yet about the light bloom/glare/ray, atmospheric scattering, shadowing and anti-aliasing for all the above
- Drawing the transparent/translucent after volumetric cloud may not look nice for transparent objects within a cloud or between clouds
- How to optimize the many light sources needed while rendering using all those shaders
- Many other problems I have not yet thought of...
Every time I see someone building a game engine, it is always a full-fledged one with editors, scene trees, asset managers, etc.
However, I’m not very fond of that style of game development. I have always preferred the framework approach over the game engine approach, at least for 2D games. Instead of basing my ”engine” around something like Unity, Unreal, or even Godot, I am making it more like LibGDX or MonoGame. Both approaches have their own advantages, but what I like about the framework approach is that I can use only what I want, it integrates better with non-game applications, it allows for a more traditional style of coding, and it is very reusable: basically an abstraction over operating systems. Scene trees, asset managers, stuff like that, are all their own classes which you can integrate with the rest of your code in case you need them. Not that you can’t do this with a traditional game engine, but I personally prefer it this way.
But hey, that is not what the title of this post is referencing. What the title refers to is that I am also making my own traditional game engine, but it is built on top of my own framework. The framework part basically works as the kernel of the game engine. I’ve never seen engine builders do this before. My favorite advantage is that you can use your engine as either a traditional game engine or a framework.
Just wanted to share this architecture with you all. It is not for everyone and it definitely has its cons, but for me at least it is very worth it and I’ve had no problems with it so far.
It's in the title, honestly. I've been making games and other projects for about a year, and because my computer was old and had compatibility issues, I made all of my games from scratch, mainly using SDL. I became increasingly curious about (screw it, jealous of) the game developers who had visual editors and all that stuff. I eventually began to use JS + Canvas2D, but my weak CPU couldn't switch between high-res frames fast enough. I really liked JS, but I had to move on.
I have a newer computer now, and I'm really curious about WebGPU. It works with JS, Rust, and C++, all of which I know how to use to varying degrees. I'm excited to use JS again, but with my own custom game engine made in modules. I could write intensive parts in Rust + WASM and write the non-intensive parts in JS or TS. This is probably a bad idea, or at the least a really complex one, especially for someone who's never made an engine before, but it excites me a lot. I also like the idea of being able to structure my game objects in the way that I desire, and I'd understand what was going on all the time: things like Godot kind of confuse me when I'm just trying to get a project up and running (looking at you, layers and masks). Also, GDScript pisses me off, although I only ever use Godot for a few hours/days at a time. And my computer isn't strong enough to run Unity, and especially not Unreal. There's something about all the other game engines I know of off the top of my head that deters me from using them, as well.
I hear that you should use a lot of engines before making your own, but I've only ever used Godot. Should I go ahead, anyway? Also, I'm curious about job opportunities. This is also probably going to be a long term project. Will the professional payoff be worth it, especially if I use C++?
TLDR: I really want to use Rust + WebGPU + WASM to make parts of a game engine after coding games from scratch for a year, because I'm curious as to how game engines work, I like the idea of being able to structure my own game objects, I want a visual editor really bad, and I want to get a job eventually. Are these reasons valid, and will they make this potentially long endeavor worthwhile? I would use JS to interface with the WASM parts. I don't know if I'm skilled enough to do so, however. I know that I need to use game engines in order to make one, and even though I've only ever used Godot, I'm feeling really hasty. I would also prefer to write the WASM parts in Rust, because of declarative programming, but nobody uses Rust professionally, so going with C++ might be worthwhile, even though its syntax is uglier.
Hello. I want to create a game where a cell divides, mutates, and undergoes an evolution simulation in 2D. However, I have no coding experience. I plan to develop it with help from ChatGPT and by watching some YouTube tutorials. Could you recommend a game engine with high processing capabilities? It doesn't need to have advanced graphics. Can you help me with that?
Hello all. Since late 2015 I've been writing a home brew 3D game engine in Haskell, which uses OpenGL for rendering. The project also includes a map editor that comprises a web server running on localhost paired with a TypeScript based web front end. My intention is to eventually build an adventure and puzzle game to run on this that is inspired by and pays tribute to the classic ZZT. Please feel free to leave constructive feedback.
Project repository: https://github.com/Mushy-pea/Game-Dangerous
Latest video update: https://youtu.be/h-RChZvQUyU?si=nZuQi6-bnwTpemSK
Playable demo: https://basicas-games.itch.io/game-dangerous
Hey, not a game engine dev but a game modder working on reversing some proprietary engine formats.
I mostly have figured out the navmesh format, but I'm not quite sure how the dynamic links work. The navmesh references files containing "smart link rules", which take a general actor-specific movement (i.e. jump/fall for different tile heights) and bake it into specific start/end positions in the mesh, along with what areas/edges are used. In the linked file there's a tree whose nodes seem to be checking available space nearby, including a direction (up/down/left/right), the distance, an unknown with values "", "Min" and "Equals", and another unknown with values "Any"/"Forbidden"/"Solid"/"Stitched"/"Portal".
I know the game can generate these links, since after I stripped them out of the navmesh and ran an in-game Lua function they were rebuilt, but I can't quite figure out how those two strings work. I'm guessing "Forbidden" is off the mesh and "Solid" is on the mesh, but I don't know what the others are and haven't been able to line anything up with the baked values. Are these terms that carry any particular meaning for this sort of data?
I know zero about coding; I'm learning how to create a game with YouTube tutorials. This is my first day, and I managed to set up the background and some sprites (really ugly ones), but now I'm having a lot of trouble with the script.
I want the little spaceship to move in a more natural way, and the big one to launch missiles at the little ones... how can I do this?
How can I set up a button to click to launch missiles?
And where can I find some good tutorials?
video
https://reddit.com/link/1h2rxrs/video/0hdod1yatv3e1/player
I know it's ugly, but I'm trying to learn :)