/r/howdidtheycodeit
"Wow, how did they do THAT?" - Ask here, get enlightened!
This subreddit is for beginner/intermediate programmers to ask how a specific feature in a game (or other program) was coded, when they can't imagine how they would go about doing it themselves. Answers don't have to describe how the game was actually coded; they can explain another method of accomplishing the same thing.
We have user and post flair! If you want to see some other categories, let us know!
Question: Add this tag to your questions!
Answered: Question => Answer
Showcase: Use this to tag your write-ups of your own features that make people say "wow, how'd they code that?"
Article: Use this tag for shared videos and articles that describe how a feature is done
Submission Guidelines
(These are just suggestions, you can post without all of these, but the conversation will be more lively with a little bit of forethought!)
I created this subreddit because I am a novice coder who has often asked myself, when playing even small indie games, how a certain thing was coded. The thing that actually prompted me to create the subreddit was a random, cool game, a Ludum Dare winner, Tangent: I was confused about how the circular transitions between the game's rooms were accomplished. Example post about Tangent
This is a video demonstrating the capabilities of Unreal Engine 3 using DirectX 11. Clearly they created this effect using a warping, low-poly mesh and hardware tessellation, but what other techniques did they use to create this smoke effect? What shader tricks make this mesh look like smoke? It looks utterly real; I would never have believed it was being rendered if I hadn't been told.
I love collecting enamel pins and displaying them on corkboards but organizing them to fit each other perfectly on all sides takes a while, especially since a lot of my pins have unusual shapes.
I'd like to make a program that would let me automatically "nest" them so that I can see what the best arrangement would be before I sort them IRL, kind of like deepnest.io but for pins. I would also want to be able to tag the pins by fandom/color/creator so that they can be filtered/sorted that way as well.
I know I would have to convert the pin shapes into SVGs first, but that's about all.
I am a complete beginner, so I have no idea where to start, what language to use, etc. Would also appreciate some honesty if I am in over my head with this.
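In case it helps as a starting point, here is a minimal Python sketch (the names and sizes are made up) of the data model plus a greedy "shelf" packing of pin bounding boxes. Real nesting tools like Deepnest go much further (no-fit polygons against the actual SVG outlines), but something this simple is a reasonable first milestone and already supports tag filtering.

```python
# Minimal sketch (Python): pack pins onto a corkboard by bounding box, row by row.
# Real nesting tools (e.g. Deepnest) use no-fit polygons for tighter packing;
# this baseline just shows the data model and a greedy "shelf" layout.
from dataclasses import dataclass, field

@dataclass
class Pin:
    name: str
    width: float      # bounding-box size in mm, measured or taken from the SVG
    height: float
    tags: set = field(default_factory=set)   # fandom / color / creator, for filtering

def shelf_pack(pins, board_width):
    """Greedy shelf packing: sort by height, fill rows left to right."""
    placements = []                # (pin, x, y) positions on the board
    x = y = row_height = 0.0
    for pin in sorted(pins, key=lambda p: -p.height):
        if x + pin.width > board_width:      # start a new row
            y += row_height
            x = row_height = 0.0
        placements.append((pin, x, y))
        x += pin.width
        row_height = max(row_height, pin.height)
    return placements

pins = [Pin("sword", 30, 55, {"fantasy"}), Pin("cat", 25, 25, {"cats", "blue"}),
        Pin("logo", 40, 20, {"band"})]
only_cats = [p for p in pins if "cats" in p.tags]          # tag filtering
for pin, x, y in shelf_pack(pins, board_width=100):
    print(f"{pin.name}: place at ({x:.0f}, {y:.0f}) mm")
```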
Hello, I'm playing a game that has an exchange system in it. I need a bot that will help me predict whether an exchange I'm about to do is good or bad, or a bot that will tell me the best possible way to exchange from one value to another, if you understand me (just a bot to beat the exchange system inside the game).
Is it possible to find a bot/predictor that will do that, or should I quit?
Also, this isn't for any kind of crypto; the only stuff I find on the internet is about that. TY
For example, let's say I want to turn a horizontal video into a vertical video format. I don't want to simply crop the middle of the video because it might not be the most interesting part of the frame. What I want is to determine where the most interesting thing is (probably based on the density of information or the variation of information).
The cropping part is probably simple using the FFmpeg library. It's an advanced video processing library, so I'd be surprised if it weren't possible to take a video and crop parts of it frame by frame to reconstruct a new video output.
However, I can't find much regarding what kind of algorithms (if possible something that I can implement myself, so not LLM or AI-based) to use to detect where in a frame there is the most "information density" or "information variation".
I'm guessing such an algorithm would process frames using something similar to a sliding window, so that each frame n can be compared to the a previous frames and the b next frames.
Any lead regarding this would be greatly appreciated!
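One non-AI approach: score each pixel column by gradient magnitude ("edge energy") as a simple stand-in for information density, slide a window of the target crop width across the columns, and pick the window with the highest total, smoothing the offset over time so the crop doesn't jitter. Here is a sketch with OpenCV and NumPy (the file name is hypothetical); temporal variation could be added by differencing consecutive frames before summing.

```python
# Sketch (Python + OpenCV/NumPy): pick a horizontal crop window per frame by
# "edge energy" (gradient magnitude), a simple stand-in for information density.
# The chosen offsets can then be fed to FFmpeg's crop filter or written out directly.
import cv2
import numpy as np

def best_crop_x(frame, crop_w):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    energy = np.abs(gx) + np.abs(gy)
    col_energy = energy.sum(axis=0)                      # energy per pixel column
    window = np.convolve(col_energy, np.ones(crop_w), mode="valid")
    return int(window.argmax())                          # left edge of best window

cap = cv2.VideoCapture("input.mp4")                      # hypothetical file name
offsets, alpha, smoothed = [], 0.1, None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    crop_w = int(h * 9 / 16)                             # vertical 9:16 from a 16:9 frame
    x = best_crop_x(frame, crop_w)
    # exponential smoothing so the crop window doesn't jitter frame to frame
    smoothed = x if smoothed is None else (1 - alpha) * smoothed + alpha * x
    offsets.append(int(smoothed))
cap.release()
print(offsets[:10])
```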
I am using SFML. How can I make a fighting game? I'm curious how to code systems like combos, hitboxes, characters with movesets/archetypes like grappler, footsies, rushdown, zoning, puppeteer, glass cannon, and stance, and health bars for tag teams. What should I start with first?
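As a rough starting point, here is a small engine-agnostic Python sketch of the two systems most fighting games are built around: axis-aligned hitbox/hurtbox overlap tests and a time-windowed input buffer for detecting command sequences. The SFML side (rendering, input polling) is not shown, and the names and frame window are made up.

```python
# Sketch: the two systems most fighting games start from, shown as plain data
# structures (rendering/input would come from SFML; this is engine-agnostic).
from dataclasses import dataclass
from collections import deque

@dataclass
class Box:                      # axis-aligned hitbox or hurtbox, in world space
    x: float; y: float; w: float; h: float
    def overlaps(self, other):
        return (self.x < other.x + other.w and other.x < self.x + self.w and
                self.y < other.y + other.h and other.y < self.y + self.h)

class InputBuffer:
    """Remember recent inputs so a sequence like down, forward, punch still
    counts even if the player is a few frames early or late."""
    def __init__(self, window_frames=20):
        self.window = window_frames
        self.history = deque()                    # (frame, input) pairs
    def push(self, frame, button):
        self.history.append((frame, button))
        while self.history and frame - self.history[0][0] > self.window:
            self.history.popleft()
    def matches(self, sequence):
        buttons = [b for _, b in self.history]
        it = iter(buttons)
        return all(step in it for step in sequence)   # in-order subsequence check

buf = InputBuffer()
for frame, button in [(1, "down"), (4, "forward"), (6, "punch")]:
    buf.push(frame, button)
print(buf.matches(["down", "forward", "punch"]))      # True -> trigger special move
print(Box(0, 0, 10, 10).overlaps(Box(8, 8, 5, 5)))    # True -> register a hit
```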
For example, C# has a BigInteger class. But how is it done in general? Using raw bytes? How do you store numbers bigger than MAX_INT and still be able to perform simple math operations on them?
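The general idea behind BigInteger-style types is to store the number as an array of fixed-size "limbs" (real implementations use 32- or 64-bit machine words; base 10^9 is used below for readability) and do schoolbook arithmetic with explicit carries. A sketch in Python (Python's own ints are already arbitrary precision, so they are only used here to verify the result):

```python
# Sketch: arbitrary-precision integers as a list of base-1e9 "limbs",
# least significant limb first, with schoolbook addition and carry propagation.
BASE = 10**9

def to_limbs(n: int):
    limbs = []
    while n:
        limbs.append(n % BASE)
        n //= BASE
    return limbs or [0]

def add(a, b):
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s % BASE)
        carry = s // BASE
    if carry:
        result.append(carry)
    return result

def to_int(limbs):
    return sum(limb * BASE**i for i, limb in enumerate(limbs))

x, y = 2**80, 2**79 + 12345        # both far beyond a 32- or 64-bit MAX_INT
print(to_int(add(to_limbs(x), to_limbs(y))) == x + y)   # True
```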
Consider terrain like on this screenshot: it uses multiple types of terrain with smooth blending between them, and since the transition is smooth and uneven, it's clearly not tile-based.
What is the state of the art for rendering such terrain (assuming we may want enough performance to run it on mobile)? The two solutions I can imagine are:
Any suggestions are welcome!
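For what it's worth, one common technique for this kind of smooth, uneven blending is a splat map: a low-resolution weight texture with one channel per terrain type, sampled per pixel in the fragment shader to blend the tiling terrain textures. Here is a CPU-side NumPy sketch of the same math (the weight maps are random placeholders; in a real project they would be painted or generated):

```python
# Sketch (NumPy): splat-map blending — each terrain pixel gets per-type weights
# that sum to 1, and the final color is the weighted sum of the terrain textures.
# On the GPU the same math runs in a fragment shader with the weights in an RGBA map.
import numpy as np

H, W, TYPES = 64, 64, 3
rng = np.random.default_rng(0)

textures = rng.random((TYPES, H, W, 3))          # stand-ins for grass/rock/sand tiles

# Placeholder weight maps; in practice these come from an artist-painted or
# procedurally generated splat map, blurred so transitions are uneven and soft.
raw = rng.random((TYPES, H, W))
weights = raw / raw.sum(axis=0, keepdims=True)   # normalize so weights sum to 1

blended = (weights[..., None] * textures).sum(axis=0)   # (H, W, 3) final terrain color
print(blended.shape, float(blended.min()), float(blended.max()))
```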
I am a football fan. Last time I was playing FIFA, I was amazed at how players manage their runs when they are in an offside position. I have just started working on a game engine, and now I am curious how they manage to control the position and movement of all 22 players. Can you suggest any resources, tutorials, or books that cover how football games are built and how all of that logic is implemented?
I've been working on a procedural terrain generation experiment. It's largely Minecraft-like cubic voxel-based terrain, with the main difference being that the chunks are cubic (the world is 10 km high). The basics are working, but I am severely stuck on implementing biome selection. I've had a search, and from what I've found, most explanations and tutorials suggest an approach where you use multiple noise functions representing various parameters, such as temperature, humidity, etc., and determine the biome at each point based on those. This seems reasonable for a relatively simple world, but I can see a few potential problems and can't find how they could be solved.
If you have many different biome types, you would need many different noise parameters. Having to sample multiple noise functions, possibly with more than one octave for each voxel in the world seems like it could quickly become inefficient.
If you have lots of biomes, there will be situations where you have an area which suits a number of possible biome variations or options. How would you discriminate between them - picking one at random would be fine, but whatever biome option you pick for the first point in this area would somehow need to be persisted, so that it can be consistent for all the other points in the same area. I guess adding a noise function which is only sampled when you need to discriminate these options could work.
If you want any sort of special biomes, which require specific predetermined shapes and/or locations, I can't see a way to make them work with this. The only way seems to be to basically add them as a separate system and have them override the basic biomes wherever they're present.
It just seems like it takes away a good amount of control. For example, I can't see how to implement conditions like a biome that always spawns near another, or how you could find the nearest instance of a biome if it hasn't been generated yet (for functionality like Minecraft's maps, for example).
Another option I looked at is determining biomes based on something like a Voronoi tessellation, but that seems even more performance-ruining, as well as being genuinely painful to implement in 3D for a pseudo-infinite world, and it also gives really annoying straight-line borders between biomes.
If anybody knows the details of how to address any of these problems, I would be very grateful to hear it
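Not claiming this is how any particular game does it, but a common way to handle the first two concerns is sketched below: two climate noises (temperature/humidity) map to a table of candidate biomes, and a third, coarser "variant" noise is sampled only when several candidates fit, so the tie-break is a pure function of position and stays consistent across the whole area. Sampling per column (2D) and caching per chunk, rather than per voxel, keeps the cost manageable, and special hand-placed biomes are usually a separate pass that overrides this, as you suspected. The noise here is a tiny hash-based value noise so the sketch has no dependencies.

```python
# Sketch: biome selection from two climate noises plus a "variant" noise used
# only to break ties consistently. Hash-based 2D value noise keeps it dependency-free.
import math

def hash01(ix, iy, seed):
    h = (ix * 374761393 + iy * 668265263 + seed * 2246822519) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 0xFFFFFFFF

def value_noise(x, y, seed, scale=128.0):
    x, y = x / scale, y / scale
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    fx, fy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)   # smoothstep fade
    top = hash01(ix, iy, seed) * (1 - fx) + hash01(ix + 1, iy, seed) * fx
    bot = hash01(ix, iy + 1, seed) * (1 - fx) + hash01(ix + 1, iy + 1, seed) * fx
    return top * (1 - fy) + bot * fy

def biome_at(x, z):
    temp = value_noise(x, z, seed=1)
    humid = value_noise(x, z, seed=2)
    if temp > 0.6:
        candidates = ["desert", "savanna"] if humid < 0.5 else ["jungle"]
    elif temp < 0.35:
        candidates = ["tundra", "snowy_forest"]
    else:
        candidates = ["plains", "forest", "birch_forest"]
    if len(candidates) == 1:
        return candidates[0]
    # Variant noise at a coarser scale: sampled only when needed, and because it is
    # a pure function of position it gives the same pick across the whole area.
    variant = value_noise(x, z, seed=3, scale=512.0)
    return candidates[int(variant * len(candidates)) % len(candidates)]

print({biome_at(x * 100, z * 100) for x in range(8) for z in range(8)})
```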
When it searches for a song that matches the sample, which algorithm does it use to find it so fast?
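Assuming this is about Shazam-style recognition: the published approach fingerprints pairs of spectrogram peaks into hashes, stores them in an inverted index, and matching is just hash-table lookups plus a vote on the time offset between the sample and each candidate track, which is why it is so fast. A toy Python sketch with pre-extracted peaks (a real system computes the peaks from an FFT spectrogram):

```python
# Toy sketch of fingerprint matching: hash pairs of spectrogram peaks,
# store them in an inverted index, and match by voting on the time offset.
from collections import defaultdict, Counter

def fingerprints(peaks, fanout=3):
    """peaks: list of (time, frequency) spectrogram peaks, time-sorted."""
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1 : i + 1 + fanout]:
            yield (f1, f2, t2 - t1), t1          # hash, anchor time

def build_index(tracks):                          # {track_name: peaks}
    index = defaultdict(list)
    for name, peaks in tracks.items():
        for h, t in fingerprints(peaks):
            index[h].append((name, t))
    return index

def identify(sample_peaks, index):
    votes = Counter()
    for h, t_sample in fingerprints(sample_peaks):
        for name, t_track in index.get(h, ()):
            votes[(name, t_track - t_sample)] += 1   # consistent offset => real match
    return votes.most_common(1)

songs = {"song_a": [(0, 40), (1, 55), (2, 43), (3, 70), (4, 52)],
         "song_b": [(0, 10), (1, 90), (2, 33), (3, 21), (4, 64)]}
index = build_index(songs)
clip = [(t - 2, f) for t, f in songs["song_a"][2:]]    # a short clip from song_a
print(identify(clip, index))                            # song_a wins the vote
```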
The 2000s living books programs had a system that would read text to the user. The individual words could be clicked to play the audio clip of that word. These were recordings, not generated speech.
How would a system like that work? Are there clips of each word, played in sequence? Or is it the other way around, with one audio clip and each word having time code data to sync it?
Here's a video of the program in action: https://youtu.be/MxndkXMN3KY?si=3mz_KnAE2HtJDEgz
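My guess (and it matches your second idea) is one narration clip per page plus per-word timecode data, which supports both read-along highlighting and "click a word to hear it" by seeking within the same clip. A small Python sketch with made-up timings:

```python
# Sketch: one narration clip per page plus per-word timecodes, which supports both
# read-along highlighting and "click a word to hear it" by seeking within the clip.
import bisect
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float   # seconds into the page's narration clip
    end: float

page = [Word("The", 0.0, 0.3), Word("tortoise", 0.3, 0.9),
        Word("and", 0.9, 1.1), Word("the", 1.1, 1.3), Word("hare", 1.3, 1.8)]
starts = [w.start for w in page]

def word_at(t):                                     # highlight during playback
    i = bisect.bisect_right(starts, t) - 1
    return page[i] if 0 <= i < len(page) and t < page[i].end else None

def clip_range_for_click(word):                     # play just this word's slice
    return word.start, word.end

print(word_at(1.0).text)                            # "and"
print(clip_range_for_click(page[4]))                # (1.3, 1.8)
```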
So I'm looking for some information on what approaches there are to designing drop systems in games.
So far in my game, drops are just an array of objects where the first key is the item and the second key is the weight. Then I have a function that selects the drop based on these weights.
This works fine for simple randomized drops. However, I've been thinking about a few issues. One issue with everything being based on weights is that adjusting the drop rate for one item affects every other item's drop rate as well, making things difficult to balance.
Additionally, I guess guaranteed drops need to be handled separately. I know many games use a drop-table-based method, but I'd like to understand how drop rates in the drop table are actually coded.
For example here: https://oldschool.runescape.wiki/w/Drop_table
You can find items there, and the drop rate is communicated by rarity, but how does that actually work in practice? Also, is there any other material I should look into about handling drops?
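Two patterns cover most of what that wiki page describes, sketched below: an OSRS-style roll against a fixed denominator (so rates read directly as slots out of, say, 128, and an explicit "nothing" entry absorbs changes instead of every other rate silently rescaling), and independent 1-in-N rolls for rare drops, with guaranteed drops kept in a separate list. The items and numbers here are made up:

```python
# Sketch of two common drop-table patterns: a shared-denominator roll
# (rates read directly as slots/denominator) and independent per-item rolls,
# with guaranteed drops handled as a separate list.
import random

GUARANTEED = ["bones"]                       # always dropped, handled separately

# Pattern A: one roll out of a fixed denominator; "nothing" absorbs the leftovers,
# so changing one item's slots doesn't silently rescale every other rate.
DENOMINATOR = 128
TABLE = [("rune scimitar", 4), ("coins", 60), ("herb", 20), ("nothing", 44)]
assert sum(slots for _, slots in TABLE) == DENOMINATOR

def roll_main_table():
    pick = random.randrange(DENOMINATOR)
    for item, slots in TABLE:
        if pick < slots:
            return item
        pick -= slots

# Pattern B: independent 1-in-N rolls, typical for very rare / unique drops.
RARE_ROLLS = [("pet", 5000), ("rare ring", 512)]

def roll_drops():
    drops = list(GUARANTEED)
    main = roll_main_table()
    if main != "nothing":
        drops.append(main)
    for item, one_in_n in RARE_ROLLS:
        if random.randrange(one_in_n) == 0:
            drops.append(item)
    return drops

print(roll_drops())
```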
There are "B2B" services that promise to Identify your anonymous website visitors. They then send you the visitor's LinkedIn profile in realtime. They claim it works on 20-30% of US based traffic.
Clients install a JS script, which must pick up on something from the visitor's browser and map them to their LinkedIn profile. How does this work and where do they get their data?
Noob to this zone! Hey subreddit (seniors), could someone help me with this coding? I honestly have no idea where to begin (all I know is movies and games 😅). TIA
Who is hosting?
How is this being done? I'm guessing they're reading data in (is there an API?) from a site like this
https://www.nfl.com/games/bengals-at-cowboys-2024-reg-14?active-tab=watch
So do they take a game being broadcast live on TV, read the play-by-play data, and feed it into Madden to recreate the game there?
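The usual architecture is roughly that: poll a licensed play-by-play feed, diff it for new plays, and translate each play into a scripted scenario inside the game. A heavily hypothetical Python sketch (the URL and JSON fields are invented; real feeds have their own schemas and licensing):

```python
# Sketch: poll a play-by-play feed, detect new plays, and hand each one to the
# game-side recreation layer. The URL and JSON fields here are hypothetical;
# real feeds (league APIs, stats providers) have their own schemas and licensing.
import time
import requests

FEED_URL = "https://example.com/games/12345/plays"   # hypothetical endpoint

def translate_to_game(play):
    # In the real pipeline this would build a scripted scenario in the game:
    # down/distance, formation, play type, result.
    print(f"Recreating: {play['down']} & {play['distance']} -> {play['description']}")

def follow_game(poll_seconds=10):
    seen = set()
    while True:
        plays = requests.get(FEED_URL, timeout=5).json()["plays"]
        for play in plays:
            if play["id"] not in seen:
                seen.add(play["id"])
                translate_to_game(play)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    follow_game()
```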
I actually have the code for this. I'm having trouble understanding it.
I'm looking to find a specific area of gameplay in a 1990s PC point and click adventure game. Most of the areas (called "scenes" in the code) get their own script file. The script for this area only has procedures for entering and leaving the scene. The area has unique audio, unique use of conditions, and calls a movie file. I can't find direct evidence of where the area's files are used. Searching gives me 0 results.
But I have found small hints suggesting this area might be handled in a script for a hub area. At first, I thought this was because the hub changes after this area is visited. Some graphics for the hub area and the area I am looking for are the same. Now, I think the programmers might have created a base scene that's reused for several similar areas. Using indirect asset names means they would not appear in the code when I search for them.
How might I confirm if this is what's happening, or confirm it's not happening?
The code is written in a variant of lisp that used a "yale interpreter." (Googling those terms gives no helpful results for finding the exact language.) Assets (graphics, audio and such) are referenced by ID number. Usually, this number is hard-coded.
I appreciate any help, suggestions, or theories. Thanks in advance!
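One way to test the "base scene with computed asset IDs" theory is to search the scripts not just for the literal IDs but also for nearby IDs and for expressions that look like IDs being computed or passed through variables. A rough Python helper sketch (the directory, ID range, and the indirect-reference patterns are placeholders you would adapt to the actual script syntax):

```python
# Sketch: scan the extracted script files for (a) literal asset IDs in the range
# you're looking for, and (b) expressions that look like IDs being computed or
# passed through variables, which would explain why a plain text search finds nothing.
import re
from pathlib import Path

SCRIPT_DIR = Path("scripts")        # wherever the extracted scene scripts live
TARGET_IDS = range(4200, 4220)      # placeholder: the area's asset ID range

literal = re.compile(r"\b(\d{3,6})\b")
# Placeholder heuristics for "indirect" references: arithmetic inside a call,
# or helper names you've seen in the scripts; adjust to the real dialect.
indirect = re.compile(r"\(\s*\+\s*[^)]*\)|\bget-asset\b|\bscene-base\b", re.I)

for path in sorted(SCRIPT_DIR.glob("**/*")):
    if not path.is_file():
        continue
    text = path.read_text(errors="ignore")
    hits = [int(m) for m in literal.findall(text) if int(m) in TARGET_IDS]
    computed = indirect.findall(text)
    if hits or computed:
        print(path.name, "literal IDs:", sorted(set(hits)), "indirect hints:", len(computed))
```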
Basically the title: since Google Docs are not plain web pages but web-based apps, how do they fetch the data from the Google Docs canvas?
https://i.redd.it/extvqq080k3e1.gif
Anyone who has played Castle Crashers knows how fun and organic the battles against enemies are. The combat never feels linear or repetitive, and each enemy seems to adapt to the environment and situation. Moreover, even when multiple players are involved, enemies manage to strategically split their focus, targeting different players and taking turns attacking.
I've been trying to implement something similar in my game, but I haven’t been able to achieve a system as robust and natural as the one in Castle Crashers. If anyone knows how they developed this system or can share any tips or similar approaches, I’d be really grateful!
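Castle Crashers' exact implementation isn't public as far as I know, but the behaviour you describe (enemies spreading across players and taking turns) is very commonly done with attack tokens/slots: each player grants a limited number of tokens, an enemy may only commit to an attack while holding one, and the rest circle or reposition. A minimal Python sketch:

```python
# Sketch of an "attack token" system: each player grants a limited number of
# tokens; an enemy may only start an attack while holding one, so attackers
# naturally take turns and spread across players.
import random

class Player:
    def __init__(self, name, max_attackers=2):
        self.name = name
        self.tokens = max_attackers          # how many enemies may attack at once

class Enemy:
    def __init__(self, name):
        self.name = name
        self.target = None
        self.has_token = False

    def think(self, players):
        # Prefer the player with the most free tokens -> enemies spread out.
        self.target = max(players, key=lambda p: p.tokens + random.random() * 0.1)
        if not self.has_token and self.target.tokens > 0:
            self.target.tokens -= 1
            self.has_token = True
            return f"{self.name} attacks {self.target.name}"
        return f"{self.name} circles {self.target.name}"

    def finish_attack(self):
        if self.has_token:
            self.target.tokens += 1          # give the token back after the swing
            self.has_token = False

players = [Player("P1"), Player("P2")]
enemies = [Enemy(f"barbarian{i}") for i in range(5)]
for e in enemies:
    print(e.think(players))                  # only 2 per player attack; others circle
```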
Hi everyone,
My team is developing a game where players can create their own dungeons, which need to be stored and accessed by other players who can raid them, even if the target player is offline. I’m looking for advice on the following:
Any advice, suggestions, or lessons learned from your experience would be greatly appreciated! Thanks in advance!
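For what it's worth, the usual shape is: serialize the dungeon layout into a compact document (JSON or a binary blob), store it server-side keyed by the owner, and let raiders download a read-only snapshot, so the defender never needs to be online; raid results are then validated and applied server-side. A Python sketch with sqlite3 standing in for whatever backend the game actually uses:

```python
# Sketch: store player-built dungeons as serialized documents keyed by owner,
# so raiders can fetch a snapshot whether or not the owner is online.
# sqlite3 stands in for whatever backend/database service the game actually uses.
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE dungeons (owner_id TEXT PRIMARY KEY, version INTEGER, layout TEXT)")

def save_dungeon(owner_id, layout, version):
    db.execute("INSERT OR REPLACE INTO dungeons VALUES (?, ?, ?)",
               (owner_id, version, json.dumps(layout)))

def load_dungeon(owner_id):
    row = db.execute("SELECT version, layout FROM dungeons WHERE owner_id = ?",
                     (owner_id,)).fetchone()
    return (row[0], json.loads(row[1])) if row else None

# The owner edits and uploads; a raider later downloads a read-only snapshot.
save_dungeon("player_42", {"rooms": [{"type": "trap", "x": 1, "y": 0},
                                     {"type": "treasure", "x": 2, "y": 0}]}, version=3)
print(load_dungeon("player_42"))
```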
https://youtu.be/CIrAuLTwaaQ?t=36
Splines? Or lots of points around the map?
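In practice the two usually combine: a center-line spline baked down to an ordered list of sample points, with each car's progress computed by projecting its position onto the nearest segment; race order is then a sort by (lap count, progress). A small Python sketch:

```python
# Sketch: track the race order by projecting each car onto an ordered polyline of
# center-line sample points (a spline baked down to points works the same way).
track = [(0, 0), (100, 0), (100, 60), (0, 60)]          # ordered loop of sample points

def progress_along_track(pos):
    best = (float("inf"), 0.0)
    for i in range(len(track)):
        ax, ay = track[i]
        bx, by = track[(i + 1) % len(track)]
        abx, aby = bx - ax, by - ay
        seg_len2 = abx * abx + aby * aby
        t = max(0.0, min(1.0, ((pos[0] - ax) * abx + (pos[1] - ay) * aby) / seg_len2))
        px, py = ax + t * abx, ay + t * aby
        dist2 = (pos[0] - px) ** 2 + (pos[1] - py) ** 2
        if dist2 < best[0]:
            best = (dist2, i + t)           # segment index + fraction = progress
    return best[1]

cars = {"red": (40, 2), "blue": (100, 30), "green": (10, 58)}
order = sorted(cars, key=lambda c: progress_along_track(cars[c]), reverse=True)
print(order)        # furthest along the lap first; combine with lap count for full ranking
```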
https://www.half-life.com/en/halflife2/20th
When you scroll all the way to the bottom and click on the Gravity Gun, you can use it on most of the text, images, and embedded elements on the webpage. They all have their own collision bounding boxes and physics. How was this done?
Another question I have is: after the Gravity Gun has changed an element on the page, how would I make that element interactable the way it was before it was changed? For example, keeping the YouTube video embed on the page interactable so it can still play the video, or keeping text selectable.