/r/photogrammetry
This is a community to share and discuss 3D photogrammetry modeling.
Links to different 3D models, images, articles, and videos related to 3D photogrammetry are highly encouraged, e.g. articles on new photogrammetry software or techniques.
Feel free to post questions or opinions on anything that has to do with 3D photogrammetry. The point is to have a place where we can help each other out.
Photogrammetry is the process of converting a series of photographs into a textured 3D model.
Posting Rules
* Images: include the software used in [brackets]
* Articles: must be related to 3D photogrammetry
I'm looking to get into photogrammetry to scan small (ú inch x 7 inch) objects. Would a Nikon D3400 be up to the job in 2024?
Thanks
I need to mask a set of images for processing in PostShot. This tutorial page from early 2023 says "the information on your selection can be added to the originals (saved as tiffs)" and there are some posts out there which indicate it was possible to do.
With the depth and mask export in versions 1.4 and 1.5 it doesn't seem to be possible to change the mask export format from PNG, and there's nothing in the application settings for it either. Was this functionality removed?
I'm having trouble with building textures for my scanned assets. My low poly and original mesh both end up having this diffuse map when I build textures. I went through the entire pipeline twice with the same results. This happens on all my assets, on both version 2.1.3 and 2.0.4. The point cloud aligns with the model.
The vertex colours look perfectly fine in Agisoft and ZBrush.
Hey Everyone! Question - can anyone recommend best practices for scanning flat objects, say, 3mm wide? I have Kiri, and this sub seemed to make the most sense. If anyone's interested, I am scanning ancient artifacts. Thanks! - E
https://www.kiriengine.app/share/ShareModel?code=WSNMKI&serialize=c6923e87d0ae468ab872e90a5d2e2829
I recently scanned a skatepark with a Faro lidar scanner. I took 11 scans and exported the point clouds through the Scene software into E57 files: one for each camera position, plus one for the merged point cloud, which is 3.1 GB.
I am unable to import the merged point cloud into Reality Capture, and when I import the 11 separate files they don't align well. The images that were exported with the point cloud are really hard to work with when creating control points, and I don't seem to be able to use the 3D scene to place control points. My goal is to make a mesh from this and project photos I took on a DSLR onto it.
I would appreciate any articles on this workflow, and any general advice :)
How can I place points at X, Y, Z coordinates in a 3D map using Three.js? I'm working with Gaussian Splatting representations, and right now Three.js's raycaster doesn't work against a splat mesh because raycasting works differently there (it returns intersections with splats, not Three.js meshes). How can I place the points correctly in the 3D map when using the .splat file format?
So, I'm very into genealogy and have been visiting the local cemeteries where I have relatives. I take photos of the tombstones, but was wondering if it would be possible to use photogrammetry to get 3D versions of the tombstones instead.
A few things I'm wondering are: 1) whether there would be an easy way to store the results for future generations to be able to view, 2) whether it would take too much time to photograph, load, and build models for a couple hundred tombstones, 3) whether there's any real benefit versus just taking photographs. I like the idea of being able to see the tombstone in three dimensions versus a photograph that only includes the front of the tombstone.
What do you think? Thanks.
I know this has been asked a million times, so sorry. I’m making models of ancient wooden objects with photos taken in galleries and museum study rooms. I’ve been using an iPhone 14 Pro but would like to move up to something with better low light capabilities and hopefully better low light auto-focussing. Towards the compact end of the spectrum would be great. $2000 would be my upper limit.
As the title says, which PC components should I prioritize when building a workstation to reconstruct point cloud data and orthomosaics for survey purposes? I realize RAM is important, and that more RAM makes larger projects (more photos) workable.
Usually I have around 60 to 100 photos per project for small sites with houses, and around 800ish photos for oblique flights when larger building modeling is required.
Is the CPU more important than the GPU? Should the CPU have integrated graphics?
I'm hoping someone here might be able to help.
I have a stationary rig of three cameras to take images during an experiment. I’m trying to find a way to batch process the workflow from alignment right through to exporting the 3D model as a .xyz file. The images are labelled ‘DSC0001, DSC0002 … DSC3600’, and I want to create a model for every set of three images (1200 models total). I know that Reality Capture can make models of sufficient quality from these three images as I have tested it, but I need to be able to automate this so I’m not doing it manually for four experiments (4800 models, 14,400 images). I have a set of ground control points that I generated when testing, so I’m hoping to be able to use these to scale each model as the camera settings and positions don’t vary during each experiment. The end goal is to difference pairs of point clouds to get change between consecutive timesteps.
I'm also open to trying a different software if you think it will be better suited to the job (free software if possible!), so feel free to throw out some suggestions. Thanks!!
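Reality Capture does ship a command-line interface, so the triplet workflow above can in principle be scripted. Below is a minimal Python sketch of the batching side; note that the executable path and the CLI flag names (`-headless`, `-addImage`, `-align`, `-exportModel`, `-quit`) are assumptions from memory of Reality Capture's CLI and must be verified against the CLI reference for your version. The script only assembles the commands unless `dry_run=False`.

```python
# Sketch: batch Reality Capture over consecutive image triplets via its CLI.
# CAUTION: RC_EXE and the CLI flags below are assumptions -- check them
# against your Reality Capture version's command-line reference.
import subprocess
from pathlib import Path

RC_EXE = r"C:\Program Files\Capturing Reality\RealityCapture\RealityCapture.exe"

def triplets(image_dir: str):
    """Group DSC0001..DSC3600 into consecutive sets of three images."""
    images = sorted(Path(image_dir).glob("DSC*.JPG"))
    return [images[i:i + 3] for i in range(0, len(images), 3)]

def build_command(imgs, out_xyz):
    """Assemble one headless Reality Capture run for a single triplet."""
    cmd = [RC_EXE, "-headless"]
    for img in imgs:
        cmd += ["-addImage", str(img)]
    cmd += ["-align", "-exportModel", "Model", str(out_xyz), "-quit"]
    return cmd

def run_all(image_dir, out_dir, dry_run=True):
    """Build (and optionally execute) one command per triplet."""
    cmds = []
    for i, t in enumerate(triplets(image_dir), start=1):
        cmd = build_command(t, Path(out_dir) / f"model_{i:04d}.xyz")
        cmds.append(cmd)
        if not dry_run:
            subprocess.run(cmd, check=True)
    return cmds
```

Scaling via the ground control points would still need to happen inside each run (RC supports importing them), and the point-cloud differencing afterwards is a separate step, e.g. cloud-to-cloud distance in CloudCompare.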
Doesn't need to be free... but something that I don't have to have a subscription for would be best. We got a highend PC with a modern Nvidia GPU for processing and 3d modeling and stuff, so I'm quite happy to process on my own hardware instead of the cloud.
At first I was trying WebODM, but it seems to really suck at doing telecom towers.
So next I tried Reality Capture. It spits out a really nice point cloud compared to anything I got with WebODM, but when I convert it to a mesh model it's either complete trash or a 16 GB OBJ file that's completely unusable and crashes every program I try to import it into.
So now I'm trialing 3DF Zephyr and currently waiting on its attempt at processing my photoset. I noticed it has a preset for telecommunication towers, so I'm optimistic this could be the one.
Otherwise I'm open to suggestions. I know software like DroneDeploy and Pix4D seems to be highly recommended in my field, but my main mission right now is getting a usable model that I could potentially import into other software like AutoCAD and Google Earth. Mapping isn't my goal here.
EDIT: Cool, thanks for everyone's help and suggestions, you all gave me a lot to think about.
I have a digital twin of the entrance area of a house and I'm trying to get an orthophoto of one of the walls, but the rendering time is insane. I also need one for the ceiling and the floor, so I can't be rendering for three days straight. How can I make this shorter? I'm on a laptop with an RTX 4070, an i9, and 32 GB of RAM. It took around 8 hours to build the point cloud and the model on high settings, so I don't understand how this can be taking this long.
I've been looking to 3D scan horses for over 10 years now. I was thinking of a handheld "bar" about 5 feet tall that I would hold from the middle, which could hold about 5 cameras, each angled slightly to overlap, so that I can capture the top as well as the underside of the animal in one sweep. I was thinking of using video for capture and walking around the horse to get the shot. I know there are issues with the horse moving, but I was thinking I could fix that in the CAD model.
Hello folks.
I use Agisoft to align photos and create a tie point cloud for Gaussian splatting training.
What I wonder is: is there another way to get a proper orientation of the cloud than manually rotating and moving it, or is that really the only option?
I scan a lot of rooms and interiors, and it kind of sucks to always orient the point cloud by eye :-)
I would appreciate any ideas here!
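One programmatic alternative to eyeballing the rotation (a generic NumPy sketch, not Agisoft's API): for room scans, the dominant plane of the cloud is usually the floor, so you can fit it with PCA and rotate its normal onto +Z. The resulting rotation could then be applied to the chunk transform in Metashape or to the exported cloud.

```python
# Sketch: level a point cloud by rotating its dominant plane (e.g. the
# floor of a scanned room) into the XY plane. Generic NumPy; applying the
# rotation back inside Agisoft is left to the reader.
import numpy as np

def level_cloud(points: np.ndarray) -> np.ndarray:
    """Rotate Nx3 points so their least-variance axis aligns with +Z."""
    centered = points - points.mean(axis=0)
    # Principal axes via SVD; the last right-singular vector is the
    # direction of least variance, i.e. the dominant plane's normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[2]
    if normal[2] < 0:               # keep the normal pointing "up"
        normal = -normal
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(normal, z)
    c = float(np.dot(normal, z))
    if np.linalg.norm(v) < 1e-12:   # already level
        return centered
    # Rodrigues' formula: rotation taking `normal` onto +Z.
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    r = np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))
    return centered @ r.T
```

This only fixes the tilt, not the heading; rotating walls to face the axes would need a second PCA step in the XY plane.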
What free software do you guys suggest for photogrammetry?
I am a bookmaker and artist who has used Metascan and other apps to create experimental models in the past. I am not a coder and cannot figure out how to get a functioning NeRF application running on my computer.
If you think you’re up to the task reach out and we can discuss price or some kind of work trade.
Thanks in advance
Hey everyone! Absolute beginner here. I mostly want to start 3D-scanning rocks I find in Iceland, so objects, not spaces.
I was thinking to start by creating a scan with an iPhone app and then exporting the file to my computer so I can edit and refine it with CloudCompare.
Is that a valid workflow? I am open to any suggestions or corrections you can share :)
Hi,
I'm particularly interested in using 3D models for inspection purposes, especially focusing on the blades. How much detail can typically be captured in such models?
I’d love to hear about your experiences and see examples of your work if you’re open to sharing.
Thanks in advance!
Hello, I am wondering if and how I should approach using photogrammetry to create a 3D model from a 2D animation. I've tried Meshroom, Zephyr, and poly.cam with no quick success. Is this sort of thing possible? Thanks in advance.
Can somebody please help me? I have a problem with the gradual selection tool in my dense cloud. I'm able to run the process on my tie points, but when I produce the dense cloud I cannot clean it up using the gradual selection tool; the option always stays light grey, so it's not possible to choose it. I'd be really thankful for your help.
Tearing my hair out with this one and any pointers for a solution would be much appreciated. I have two sets of drone photos (Mini 3 pro) for the same part of a site taken a month apart. The metadata across both sets, quality of the images and everything else I can see looks equivalent between the two sets. They both contain a combination of multiple views e.g. orbits/ortho. Overlap between the photos is good.
The first month's photos align perfectly, as do all the other sets for the other parts of this site that I've photographed.
The second month's set refuses to align. At all. No matter what I do I get no components, even using the same alignment settings as the apparently nearly identical first month's set. I've tried clearing the cache, changing all the regular and advanced alignment settings, feeding in individual orbits separately, deleting old crmeta.db files, nada. After alignment each image says it has 40,000 features, but nothing is being matched.
If I create control points then RC will align only the photos that I manually put a control point on and no others. If I create multiple control points it creates separate components with each containing only the photos that were manually tagged with one of the control points.
This happened for this image set in 1.42 and is also happening in 1.5, I've not had it with any other input set. I'm thinking there must be something wrong with the image files but I can't think what it could be. They look fine in editors and within RC, and it is managing to feature detect them. I've even been able to match the first month's images with a set taken the month after this problematic set with no problems, despite things like vegetation changes.
Edit - of course, you post and then you find the fix! Turned off "Use camera priors for georeferencing" and all images aligned perfectly. Suspect this means the drone GPS was having a bad day somehow, but not sure.
New to photogrammetry, hoping to get some direction on which 3D modeling software is best.
My goal is to help clients design large-scale landscaping of their properties using accurate 3D models captured via drone.
Hello,
The Godox AR400 is a tool that is widely used in photogrammetry.
I have several questions about its use:
Thank you for any future replies.