/r/proceduralgeneration
This subreddit is about everything procedurally generated (pictures, videos, discussions on techniques, ...)
I'm working on a small project in my spare time: basically a murder mystery game set on the Orient Express, heavily inspired by Murdle. Generating a logic grid is, of course, pretty simple, but the clues are a trickier problem. The game plays out as simple choose-your-own-adventure style interactive fiction, where you can visit all the cars on the train, examine things, and talk to people to gain clues.
My two main problems are:
- How do I gauge when I have enough clues so that the game is solvable? I could just generate a clue for each data point and score the game so that the fewer clues you need to solve the puzzle, the higher your score, but that does feel quite boring. (I've sketched the kind of solvability check I mean right after this list.)
- How do I turn a data point into an actual clue? I can probably start with really simple stuff (add a fingerprint from each suspect to their murder weapon), but I'm not sure how to go about the more complex and indirect clues. For example, how do I make a clue that establishes that someone did not have a particular weapon? (There's a rough example of the phrasing I mean at the end of this post.)
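Here's the kind of solvability check I have in mind: keep a set of candidate (suspect, weapon) pairs, let every generated clue eliminate candidates, and call the puzzle solvable once only one pair survives. This is just a C# sketch; all the type names are mine, and the real grid would also track locations, motives, and so on.

using System;
using System.Collections.Generic;
using System.Linq;

// A clue is anything that can eliminate candidate (suspect, weapon) pairs.
interface IClue
{
    // Returns true if it removed at least one candidate.
    bool Apply(HashSet<(string suspect, string weapon)> candidates);
}

// Example clue type: "this suspect did not have this weapon".
class NotWeaponClue : IClue
{
    public string Suspect;
    public string Weapon;

    public bool Apply(HashSet<(string suspect, string weapon)> candidates)
        => candidates.RemoveWhere(c => c.suspect == Suspect && c.weapon == Weapon) > 0;
}

static class Solvability
{
    // Apply clues until nothing changes any more; the puzzle counts as solvable
    // by straightforward deduction when exactly one candidate pair is left.
    public static bool IsSolvable(IEnumerable<string> suspects, IEnumerable<string> weapons, List<IClue> clues)
    {
        var candidates = new HashSet<(string suspect, string weapon)>(
            suspects.SelectMany(s => weapons.Select(w => (s, w))));

        bool changed = true;
        while (changed)
        {
            changed = false;
            foreach (var clue in clues)
                changed |= clue.Apply(candidates);
        }
        return candidates.Count == 1;
    }
}

The plan would be to generate candidate clues one at a time and stop as soon as IsSolvable returns true; the number of clues it took could also feed into the scoring.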
I found this very recent article about the subject, but unfortunately it's behind a paywall.
If anyone has experience doing anything similar, I'd be happy to get any pointers, links to related articles, and so on!
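To make the second point a bit more concrete, the kind of indirect clue I'm imagining splits one deduction across two observations instead of stating it outright. Everything here is made up just to show the shape of it:

using System.Collections.Generic;

// The ground truth the generator already knows (fields trimmed down for the example).
class Solution
{
    public string Weapon;
    public string Car;
}

static class ClueText
{
    // Instead of saying "X did not have the knife", emit two observations that
    // only imply it once the player combines them.
    public static IEnumerable<string> RuleOutWeapon(Solution truth, string innocentSuspect)
    {
        yield return $"The {truth.Weapon} was kept in the {truth.Car} all evening.";
        yield return $"{innocentSuspect} never set foot in the {truth.Car}.";
    }
}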
Experiment with reactive metaballs in TouchDesigner
Track is Farther and Fainter by John Tejada
I'm looking to generate dungeons, buildings, etc. Not landscapes or meshes.
My current plan centers around Reagents and Recipes. A Reagent is something that can be placed in the level, and it might contain a Recipe for placing more Reagents. I also have an AssetManager that lets me look up Reagents by tag. I want breadth-first generation, and I want to have many different simple generation algorithms.
I'm currently struggling with how to organize my classes. Might you be interested in talking to me about it?
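Here's roughly the shape I have in mind so far; Reagent, Recipe, and AssetManager are my names, everything else is provisional and just a sketch:

using System.Collections.Generic;

// Something that can be placed in the level.
class Reagent
{
    public string Tag;
    public Recipe Recipe;   // may be null if this reagent places nothing further
}

// A recipe asks for more reagents by tag.
class Recipe
{
    public List<string> ChildTags = new List<string>();
}

// Hands out reagents by tag (loading/registration omitted).
class AssetManager
{
    readonly Dictionary<string, List<Reagent>> byTag = new Dictionary<string, List<Reagent>>();
    readonly System.Random rng = new System.Random();

    public void Register(Reagent reagent)
    {
        if (!byTag.TryGetValue(reagent.Tag, out var list))
            byTag[reagent.Tag] = list = new List<Reagent>();
        list.Add(reagent);
    }

    public Reagent GetByTag(string tag)
    {
        var list = byTag[tag];
        return list[rng.Next(list.Count)];
    }
}

// Breadth-first expansion: place the root reagent, then expand recipes level by
// level with a queue. Each simple generation algorithm could be its own strategy
// that decides where a dequeued reagent actually goes.
class Generator
{
    public List<Reagent> Generate(AssetManager assets, string rootTag, int maxPlacements)
    {
        var placed = new List<Reagent>();
        var queue = new Queue<Reagent>();
        queue.Enqueue(assets.GetByTag(rootTag));

        while (queue.Count > 0 && placed.Count < maxPlacements)
        {
            var current = queue.Dequeue();
            placed.Add(current);   // a real version would also pick a position here

            if (current.Recipe == null) continue;
            foreach (var tag in current.Recipe.ChildTags)
                queue.Enqueue(assets.GetByTag(tag));
        }
        return placed;
    }
}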
Hi! I have procedurally generated terrain in Unity. I've been working on foliage, and I would like to use Unity's detail objects to do so. I'd like to use these because they auto-place and auto-cull, which means I can have millions in the game but only a couple thousand on screen at a time. My project uses runtime generation and the world is infinite, so the ability to cull meshes beyond view distance is crucial. I'm going to need literally millions of foliage instances for grass, trees, etc. It's a big world.
I understand that I could probably do this with prefabs (and that's actually what my project looks like right now), but I would like to use Unity's detail painting to streamline everything (plus it lets me control a lot of factors, such as density and align-to-slope, that would be tedious to code myself). Right now I generate detail prototypes and detail layers, but when the time comes to generate, I just pick random points and instantiate the prefab of the detail prototype, not the prototype itself.
Currently, in the editor, I can paint detail objects (the prototype itself) on terrain manually, but is there a way for me to do this from C# at runtime?
Note: I have functioning detail layers, just no way to populate them with detail objects. Currently, I generate prefabs at random points within the detail layers.
Update: Thanks for the comments guys, but I managed to figure it out by sheer luck. It turns out the way I was instantiating my prototypes was the issue. For those of you who come across this post later, I'll give a quick explanation:
Like wRhm013 said below, you need to assign each detail layer programmatically to the terrain. That should be the easy part; there's lots of material out there for that. Then you need to instantiate a few detail prototypes. Loop through your detail layers and create a new prototype for each one. Creating the prototypes is tricky; you have to have a bunch of variables assigned or they won't show up. Below is my code for the instantiation, so you can see what worked for me.
DetailPrototype temp = new DetailPrototype
{
    // Mesh-based detail: use the biome's foliage prefab rather than a billboard texture
    prototype = config.biomes[h].biomeData.foliageTypes[0].prefab,
    density = 10f,
    usePrototypeMesh = true,
    renderMode = DetailRenderMode.VertexLit,

    // Random per-instance size range
    minWidth = 0.5f,
    maxWidth = 1.5f,
    minHeight = 0.5f,
    maxHeight = 1.5f,

    // Tint range and the noise that distributes it
    healthyColor = Color.green,
    dryColor = Color.black,
    noiseSpread = 0.1f,

    alignToGround = 1f,   // fully align instances to the terrain slope
    prototypeTexture = ProtoTex,
};
After that, call terrain.Flush() to force the terrain to adopt the new details, and detail instances should spawn across your terrain.
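For completeness, here's roughly what the whole assignment and population step looks like. This is a sketch against Unity's standard TerrainData API, and the density rule at the bottom is just a placeholder for your own placement logic:

using UnityEngine;

// Assign the prototypes to the terrain, then fill each detail layer with counts.
// `terrain` is your Terrain and `prototypes` is the DetailPrototype[] built as above.
void PopulateDetails(Terrain terrain, DetailPrototype[] prototypes)
{
    TerrainData data = terrain.terrainData;

    // Resolution of the detail map; the second argument is the resolution per patch.
    data.SetDetailResolution(512, 32);
    data.detailPrototypes = prototypes;

    // One int[,] per layer: each cell holds how many instances to spawn there.
    for (int layer = 0; layer < prototypes.Length; layer++)
    {
        int[,] map = new int[data.detailWidth, data.detailHeight];
        for (int y = 0; y < data.detailHeight; y++)
            for (int x = 0; x < data.detailWidth; x++)
                map[x, y] = Random.value < 0.3f ? 1 : 0;   // placeholder density rule

        data.SetDetailLayer(0, 0, layer, map);
    }

    terrain.Flush();   // make the terrain pick up the new prototypes and layers
}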
Hi folks, some time ago (a couple of years ago) I found a website where the author described in detail how various games tackle world generation. Unfortunately I'm not able to find it again myself, but maybe someone has a link to it.
I remember there were articles about Blasphemous and its node-based world generation, Spelunky, and much more.
Trying something a bit different with this one - 3D shapes with reactive layers of noise
Track is Pulse XIV by Peverelist
Visual made in TouchDesigner
🎵 // Eiko Ishibashi - Evil Does Not Exist
https://www.instagram.com/gi__o.h
✌🏼🖤
Cross-posting from r/voxelgamedev because the community here probably has good experience with this. I'm working on a Minecraft-style clone and have three noises: one for land/sea (medium frequency), one for erosion (low frequency), and one for mountains/rivers (high frequency). All three noise values are sampled from their own configured splines. I then take the land noise sample, say it represents a max terrain height, and use the erosion and mountain noise samples as multipliers for that terrain height. For example,
continentalness (land) noise sample = 150 terrain height
erosion multiplier = 0.7
mountains multiplier = 0.5
final terrain height at this point = 150 * 0.7 * 0.5 = 52.5
This is a simplified version, but it's the basic idea. I do a few things to modify the values, like applying an ease-in-out to the mountain sample based on erosion ranges, and I also interpolate over a 5x5 lower-resolution grid so jagged edges aren't all over the place where the terrain height changes quickly.
Basically my question is: is there a more intuitive way to combine three spline-sampled noise maps? My results aren't bad, I just feel like I'm missing something. Screenshot attached of a better-looking area generated with my current method.
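To make that concrete, here's a stripped-down version of the combination step, with Unity's AnimationCurve and PerlinNoise standing in for my actual splines and noise, and the frequencies just illustrative:

using UnityEngine;

public class HeightSampler
{
    public AnimationCurve continentalSpline;   // noise 0..1 -> max terrain height (up to ~150)
    public AnimationCurve erosionSpline;       // noise 0..1 -> 0..1 multiplier
    public AnimationCurve mountainSpline;      // noise 0..1 -> 0..1 multiplier

    public float TerrainHeight(float x, float z)
    {
        // Each map is sampled at its own frequency, then remapped through its spline.
        float continental = continentalSpline.Evaluate(Mathf.PerlinNoise(x * 0.002f, z * 0.002f));
        float erosion     = erosionSpline.Evaluate(Mathf.PerlinNoise(x * 0.0005f, z * 0.0005f));
        float mountains   = mountainSpline.Evaluate(Mathf.PerlinNoise(x * 0.01f, z * 0.01f));

        // Current approach: continentalness sets a max height, the other two scale it down.
        return continental * erosion * mountains;

        // A common alternative is additive: a base height from continentalness plus
        // mountain relief whose amplitude is limited by erosion, e.g.
        // return continental + mountains * 80f * (1f - erosion);
    }
}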
After a full day of tinkering to create a roof generator algorithm, I now present badly textured, procedural houses! 😉
There is still a lot further to go. The missing chunks in the roof I want to fill in with balconies or smaller roof pieces. The houses themselves definitely need better UVs. But once I have that, I can add the fun stuff like doors, windows, and beams.
Here is a breakdown of my roof algorithm...
Take the ground points (the yellow dots 👇) and draw a line to their second nearest neighbors (red lines)
Find the intersections of these lines (the blue and red dots)
Filter the intersections by keeping the most square intersections (good intersections in blue and bad intersections in red)
Then use these good intersections to create an arched roof piece
Rotate the roof pieces with their intersection crux
Then ta-da, drawable houses with roofs! 🙂 ...I forgot to mention, the walls are just the yellow ground points extruded upwards. Hope you find this useful. Happy Unitying!
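For anyone who wants to try it, here's a rough sketch of the intersection and squareness-filter steps using Unity's Vector2 math; the squareness threshold you keep is whatever works for your points:

using UnityEngine;

public static class RoofIntersections
{
    // Returns true if segments (a1,a2) and (b1,b2) cross, and outputs where.
    public static bool SegmentIntersection(Vector2 a1, Vector2 a2, Vector2 b1, Vector2 b2, out Vector2 point)
    {
        point = Vector2.zero;
        Vector2 r = a2 - a1;
        Vector2 s = b2 - b1;
        float denom = r.x * s.y - r.y * s.x;
        if (Mathf.Approximately(denom, 0f)) return false;        // parallel

        Vector2 d = b1 - a1;
        float t = (d.x * s.y - d.y * s.x) / denom;
        float u = (d.x * r.y - d.y * r.x) / denom;
        if (t < 0f || t > 1f || u < 0f || u > 1f) return false;  // crossing lies off the segments

        point = a1 + t * r;
        return true;
    }

    // "Squareness" of an intersection: 1 means the segments meet at a right angle,
    // 0 means they are parallel. Keep intersections above some threshold.
    public static float Squareness(Vector2 a1, Vector2 a2, Vector2 b1, Vector2 b2)
    {
        Vector2 dirA = (a2 - a1).normalized;
        Vector2 dirB = (b2 - b1).normalized;
        return 1f - Mathf.Abs(Vector2.Dot(dirA, dirB));
    }
}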
My first try was putting a bunch of pre-made blocks together. It worked quite well for the walls, but it did not give me as much control as I wanted. The roof was really difficult to put together as well, and the combination of these factors made me try a different solution. So I am now using a point-based system: I can click on the ground to define my walls, and these are then extruded upwards to create the mesh. I ended the day working on an algorithm for defining the roof peaks. In the picture below, my walls are in green and the white lines are the roof peaks. Hopefully I will have some meshes tomorrow. 😌
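The extrusion itself is just turning each pair of consecutive ground points into a vertical quad. Roughly like this (a sketch, not my exact code, and it assumes the points are in order around a closed loop):

using System.Collections.Generic;
using UnityEngine;

public static class WallBuilder
{
    // Each consecutive pair of ground points becomes one vertical quad of the given height.
    public static Mesh Extrude(List<Vector3> groundPoints, float height)
    {
        var vertices = new List<Vector3>();
        var triangles = new List<int>();

        for (int i = 0; i < groundPoints.Count; i++)
        {
            Vector3 a = groundPoints[i];
            Vector3 b = groundPoints[(i + 1) % groundPoints.Count];   // wrap around to close the loop

            int v = vertices.Count;
            vertices.Add(a);
            vertices.Add(b);
            vertices.Add(a + Vector3.up * height);
            vertices.Add(b + Vector3.up * height);

            // Two triangles per quad; flip the order if your walls end up facing inward.
            triangles.AddRange(new[] { v, v + 2, v + 1, v + 1, v + 2, v + 3 });
        }

        var mesh = new Mesh { vertices = vertices.ToArray(), triangles = triangles.ToArray() };
        mesh.RecalculateNormals();
        return mesh;
    }
}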
I thought it would help me add more variety to my villages and speed up the modeling process. I still need to work on the roof and add some doors and windows, but so far I am quite happy with how it's going. I made several 2x2 pieces in Blender and used GPT to help me script the placement algorithms. I would like to make it as high quality as the buildings I modelled by hand in the back.
So I've been trying to find something, but no luck in my Google searches. I want to make a maze that is generated in real time as the player moves, with a range around the player where it creates new parts and deletes prior parts of the maze. Something that would make it feel infinite and always changing, so even if you backtrack, the maze isn't the same as when you passed by it before. Is this even possible?
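What I'm picturing is something like keeping a dictionary of maze cells around the player: spawn the ones inside a radius, delete the ones outside it, and since nothing is stored for deleted cells, a revisited area gets re-rolled. A rough sketch of the idea (all names made up, and SpawnCell would build the actual walls):

using System.Collections.Generic;
using UnityEngine;

public class RollingMaze : MonoBehaviour
{
    public Transform player;
    public int radius = 6;        // in cells
    public float cellSize = 4f;

    readonly Dictionary<Vector2Int, GameObject> cells = new Dictionary<Vector2Int, GameObject>();

    void Update()
    {
        Vector2Int center = new Vector2Int(
            Mathf.RoundToInt(player.position.x / cellSize),
            Mathf.RoundToInt(player.position.z / cellSize));

        // Generate any missing cells inside the radius.
        for (int dx = -radius; dx <= radius; dx++)
            for (int dz = -radius; dz <= radius; dz++)
            {
                var coord = center + new Vector2Int(dx, dz);
                if (!cells.ContainsKey(coord))
                    cells[coord] = SpawnCell(coord);
            }

        // Delete cells that fell outside the radius; when the player backtracks,
        // they get regenerated differently because no per-cell seed is kept.
        var toRemove = new List<Vector2Int>();
        foreach (var kv in cells)
            if (Mathf.Abs(kv.Key.x - center.x) > radius || Mathf.Abs(kv.Key.y - center.y) > radius)
                toRemove.Add(kv.Key);
        foreach (var coord in toRemove)
        {
            Destroy(cells[coord]);
            cells.Remove(coord);
        }
    }

    GameObject SpawnCell(Vector2Int coord)
    {
        // Placeholder: a real version would instantiate wall pieces here, e.g.
        // randomly opening the north/east sides so corridors connect to neighbours.
        var cell = new GameObject($"Cell {coord}");
        cell.transform.position = new Vector3(coord.x * cellSize, 0f, coord.y * cellSize);
        return cell;
    }
}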