/r/depthMaps
Got something depth map related? Post away! I use mine to make magic eyes; do you use them differently?
Hello, how do you suggest I use the depth map outputs from the LeiaPix AI converter (which turns a 2D image into a depth map) to create a 3D model in FBX format?
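For reference, one minimal route (not LeiaPix-specific) is to displace a grid mesh by the depth values and export it as OBJ, which Blender can then re-export as FBX. A rough Python sketch, assuming an 8-bit depth PNG with bright = near; file names and the relief amount are placeholders:

```python
# Minimal sketch: turn a greyscale depth map into a displaced grid mesh and
# write it as OBJ (import into Blender to convert to FBX). Assumes the depth
# PNG uses bright = near; negate `relief` if your map is inverted.
# The naive face loop is O(h*w), so downsample large maps first.
import numpy as np
from PIL import Image

depth = np.asarray(Image.open("depth.png").convert("L"), dtype=np.float32) / 255.0
h, w = depth.shape
relief = 0.2  # how far (in mesh units) the brightest pixel is pushed forward

# One vertex per pixel: x/y on a unit-ish plane, z displaced by depth.
ys, xs = np.mgrid[0:h, 0:w]
verts = np.stack([xs / w, 1.0 - ys / h, depth * relief], axis=-1).reshape(-1, 3)

# Two triangles per pixel quad (OBJ face indices are 1-based).
faces = []
for y in range(h - 1):
    for x in range(w - 1):
        i = y * w + x
        faces.append((i + 1, i + 2, i + w + 1))
        faces.append((i + 2, i + w + 2, i + w + 1))

with open("relief.obj", "w") as f:
    for v in verts:
        f.write(f"v {v[0]:.5f} {v[1]:.5f} {v[2]:.5f}\n")
    for a, b, c in faces:
        f.write(f"f {a} {b} {c}\n")
```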
Hello (@3Dsf), hopefully this is the place for this post (there is also a duplicate of it, posted as a comment on your Reddit post explaining inverse vs. normal depth maps). Please direct me to wherever might be appropriate and I can delete it here and post there. This post is copied from the comment I made on your YouTube tutorial, where you mentioned in your reply that you had been adjusting variables to target different depth regions.
"Hello, I had a question for you. I noticed that the left ear, and a part of the snout is still a bit dark... For my type of application when using depth maps. I would like to be able to dial down the ear and parts of the snout, so that they're not quite as dark as they appear in your finished depth map. The problem I'm having is that I don't seem to find any control that I can latch onto for being more specific in affecting the darkness and light of specific areas of the depth map. I was wondering if it's possible (for example) to have a depth map (for example) for the head... And another for the tail Etc.? In this scenario, maybe I would keep my 3-D model in separate parts and not join them all together or, for example, I would join all the parts of the body and not join the head to the body. Whereby I would create a depth map for the head, and then a separate depth map for the body. And then try to make adjustments to the head so that it's not quite as contrasty and more harmony with the lightness and brightness of the overall body. I do not make stereograms, but I work with lenticular Art, and having an accurate depth is something that's very useful for me. Any ideas you could share I would appreciate.
Lastly, I don't know how Blender chooses the brightness for specific areas, but in general the contrast between the closest and furthest areas seems fixed: whatever is closest (in a traditional Blender depth map) is VERY bright, and whatever is furthest is close to black. Getting some control over the default brightness and darkness Blender assigns to these extremes of the 3D scene containing my model would be helpful. Would you know anything about how to control these defaults?"
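For what it's worth, one way to take control of those defaults is to bypass Blender's automatic depth normalization and map the Z pass to greyscale yourself with a Map Range node in the compositor. A rough bpy sketch (the view layer name and the near/far distances are placeholders for your scene); for per-part maps you could render the head and body on separate view layers and feed each through its own Map Range:

```python
# Rough Blender (bpy) sketch: map the Z pass to greyscale yourself so you,
# not Blender, choose which distances become white and black.
import bpy

scene = bpy.context.scene
scene.view_layers["ViewLayer"].use_pass_z = True  # enable the depth pass
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl = tree.nodes.new("CompositorNodeRLayers")
map_range = tree.nodes.new("CompositorNodeMapRange")
comp = tree.nodes.new("CompositorNodeComposite")

# Distances (in Blender units) that should map to white/black; placeholders.
map_range.inputs["From Min"].default_value = 2.0   # nearest point of interest
map_range.inputs["From Max"].default_value = 10.0  # farthest point of interest
map_range.inputs["To Min"].default_value = 1.0     # near -> white
map_range.inputs["To Max"].default_value = 0.0     # far  -> black (swap to invert)
map_range.use_clamp = True

tree.links.new(rl.outputs["Depth"], map_range.inputs["Value"])
tree.links.new(map_range.outputs["Value"], comp.inputs["Image"])
```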
I need some help with my project. I am trying to detect movements (basically drifting) of a blind person walking on a sidewalk. I have tried segmentation, then extracted the edge points of the sidewalk and mapped them to depth maps, but this approach doesn't seem to work: if any object appears in the frame, the depth values change drastically on that side. I am not sure what other methods to try. Am I missing something? ANY HELP IS APPRECIATED!
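One idea worth trying (a sketch, not a tested solution): make the sidewalk-edge estimate robust to foreground objects by fitting a ground plane with RANSAC and rejecting depth points that sit above it, so a pedestrian or pole doesn't yank the estimate sideways. Assuming `points` is an (N, 3) array back-projected from your depth map:

```python
# Sketch: RANSAC ground-plane fit to filter sidewalk-edge depth points.
import numpy as np

def ransac_ground_plane(points, iters=200, threshold=0.05, rng=None):
    """Return (normal, d) of the plane n.x + d = 0 with the most inliers."""
    rng = rng or np.random.default_rng(0)
    best_count, best_plane = 0, None
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        n /= norm
        d = -n @ p[0]
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best_count:
            best_count, best_plane = inliers.sum(), (n, d)
    return best_plane

# Keep only edge points that actually lie on the ground; depth readings from
# a foreground object get rejected instead of skewing the drift estimate.
n, d = ransac_ground_plane(points)
edge_points = points[np.abs(points @ n + d) < 0.05]
```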
This action is not meant to deprive anyone of content, so please DM me if you want access to it.
This action is done with humility, and to show solidarity.
This "under review" article seems to present a considerable advance in depth map detail rendering and semantic scene segmentation from spherical 360 panoramas (eg. Matterport dataset) https://arxiv.org/pdf/2209.13603.pdf
There's good progress on implementing depth maps: https://forum.sexlikereal.com/d/2656-updated-videos-slr-depthmaps-in-the-making
Download the SLR app dev build and a few videos with embedded depth maps. The best way to see the difference is to pause a video where a girl gets right up to the camera and toggle autofocus on and off. We will add a binding to the controller mapping in the next release.
You can expect much better visuals, fewer distortions and much less eye strain.
We are fundamentally working in that direction to bring the VR video experience to a whole new level.
We are also going to bring much more realistic scale with a dynamic shader projection mesh.
We are actively hiring depth map engineers to move things to the next level.
LifecastVR makes hemispherical bas-reliefs with inpainting; it can even be fed VR180 footage. There is a free version with watermarks. They have a very nice JS player for all of this: it works on flat screens and in VR, and I guess it could even be viewed on a lenticular lens array display.
I am able to create a depth map for you: send me the file you would like converted, I will create a proof, and if you like it you can pay me and I will send the depth map to you.
(No payment for a proof version)
Email: depthmap@outlook.com
Hey guys, apologies if this question is slightly naive, but I just can't seem to find much information out there (at least at my knowledge level) about depth maps. I've been trying to find a way to take a non-static video and turn it into a depth-mapped sequence. My ultimate goal is to use it in After Effects with the multitude of plug-ins and effects that accept depth map inputs, so effects can be applied accurately to key areas. This seems quite revolutionary for the VFX world, and I am quite shocked at how little information I can find.
I did find a program, PFDepth, which looks like it may be capable of doing what I need, but it doesn't seem to be a very actively updated project for what I would have thought would be the industry-standard approach to VFX work. This leads me to believe I'm either looking at the problem wrong or just a few years early with my requests. Can anyone shed some light on my admittedly novice understanding of how to create a depth map from 2D video footage? Thanks in advance, any tips are much appreciated!!
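For what it's worth, one accessible route today is to run a monocular depth model on every frame and write out a greyscale PNG sequence that After Effects can use as a depth matte. A hedged sketch using MiDaS via PyTorch Hub (the model choice and file names are placeholders, and per-frame normalization will flicker between frames, so for real work you would fix the min/max range or smooth it temporally):

```python
# Sketch: per-frame monocular depth (MiDaS via PyTorch Hub) -> PNG sequence.
import cv2
import torch
import numpy as np

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

cap = cv2.VideoCapture("input.mp4")
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        pred = midas(transform(rgb))
        pred = torch.nn.functional.interpolate(
            pred.unsqueeze(1), size=rgb.shape[:2],
            mode="bicubic", align_corners=False,
        ).squeeze()
    depth = pred.cpu().numpy()
    # MiDaS outputs relative inverse depth (bright = near after this scaling).
    depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    cv2.imwrite(f"depth_{frame_idx:05d}.png", (depth * 255).astype(np.uint8))
    frame_idx += 1
cap.release()
```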
I wrote this for another group -- but maybe it is of interest here ...
https://drive.google.com/file/d/1jOyfOZH7iswuaPo_twiJQy0c_b-_ULd5/view?usp=sharing
https://drive.google.com/file/d/1EnvjgJ-4-TKusky8IgDEtx6v7knyI6eU/view?usp=sharing
I was looking for Helmut Dersch's image/depth pano viewer "PTViewer3d-rt" version 0.1 on my computer the other day and had an anxious moment before I found it was still online --- https://webuser.hs-furtwangen.de/~dersch/PTViewer3d/PTViewer3d_rt_v0.1.zip
This is a really cool viewer and I recommend you download it in case the link disappears. The zip file has a sample panorama + depth map pano image pair, the viewer files, and a readme. To make it work you just drag the panorama.jpg image onto the PTviewer3d.exe file or a shortcut. (There is also a MacOS version but I haven't tried it.) The pano and the corresponding depth map have to be 360/180 equi images. The depth map has to be an 8-bit greyscale PNG, coded with close being dark and far being light. You can also use the player via the command line.
It has quite a few stereo viewing modes, settable via R click or the command line: anaglyph, LR SBS, RL SBS, squeezed LR SBS (great for 3D TVs), a couple of interlaced modes, and quad buffered (shutter glasses). There is also R click access to a "stereobase" setting for controlling the amount of depth effect.
You can zoom in and out with +/- keys on the number pad (or t/T). F is fullscreen. You can use very large files with the player and it is still smooth to navigate with any sort of decent graphics card.
There is excellent stereo impression in any direction, including nadir and zenith. If you want to use it with an Oculus Rift, Vive, or Quest 2, you could set the mode to fullscreen SBS and view the desktop on a virtual screen in VR with Virtual Desktop.
I have used it before to help me visualize my retouching of panoramic depth maps -- which is why I wanted to get it working again. The image/depth pano links above on Google Drive (of a dance party) will work with it. If you want to see the same scene with the Pseudoscience 6DOF Player on desktop VR, you could try this link: https://www.reddit.com/r/6DoF/comments/ocsjzh/boosting_monocular_depth_adobe_sponsored_research/
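One practical note on the depth format mentioned above: many monocular depth tools output the opposite convention (bright = near), so you may need to normalize and invert before PTViewer3d will read your map correctly. A tiny sketch (file names are placeholders):

```python
# Convert a depth map to PTViewer3d's convention: 8-bit greyscale,
# close = dark, far = light. Assumes the input has bright = near.
import numpy as np
from PIL import Image

d = np.asarray(Image.open("depth_in.png").convert("L"), dtype=np.float32)
d = (d - d.min()) / (d.max() - d.min() + 1e-8)  # normalize to 0..1
Image.fromarray((255 * (1.0 - d)).astype(np.uint8)).save("pano_depth.png")
```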
There is a new tutorial by ugocapeto3d on using the Google Colab code from the Boosting Monocular Depth paper -- the depth maps it produces are fantastically detailed (compared with what was available before) -- and I actually managed to make it work after the tutorial: https://www.youtube.com/watch?v=SCbfV80bZeE
Here is a Facebook 3D photo I made with it from an old panorama of mine: https://www.facebook.com/photo?fbid=4464281770334865
I figured out a way to composite detail areas (with depth maps from much smaller, overlapping image crops) into Adobe Depth Blur depth maps, using the panorama software PTGui. PTGui can stitch depth map crops together if you tell it they were taken with an extreme telephoto lens (e.g. 2000mm) and shot with automatic exposure (which gets it to compensate for the "exposure" variations between crops). You make a stitching pattern by feeding it the source image crops to work out the layout, and then you replace the source crops with the depth maps. https://bit.ly/3ghxDeV
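For anyone curious why the automatic-exposure trick works: each depth crop differs from the base map by roughly a gain and an offset, which is exactly what exposure compensation ends up solving for. The same idea in a few lines of Python (array names are hypothetical; `crop` and `base` would be the flattened overlapping pixels of the detail map and the Adobe Depth Blur map):

```python
# Sketch: match a depth crop to a base depth map via gain/offset in the
# overlap region, the same correction PTGui's exposure compensation applies.
import numpy as np

def match_gain_offset(crop, base):
    """Solve a * crop + b ~= base by least squares; return corrected crop."""
    A = np.stack([crop, np.ones_like(crop)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, base, rcond=None)
    return a * crop + b
```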