/r/virtualproduction
A community for the growing world of virtual production. Virtual production combines physical and virtual elements in real-time (often using game engines) to produce media such as films, TV shows, live events, and AR/VR content.
Virtual Production
CAD, CAM, CAE
RULES
Please do not post memes. A meme is a repeated joke involving a template photo with a caption.
Please do not trade pirated materials. Talking about the subject is fine, but do not actually share any links.
Racism, sexism or any other kind of intolerance or discrimination will not be tolerated.
Trolling, posts intentionally inciting conflict, personal attacks, and spam will be removed.
Avoid posting blogspam or personally monetized links.
Breaking the rules will result in your account being temporarily silenced or banned.
RESOURCES
CosmoLearning
MIT OpenCourseWare
LearningSpace
Math
WolframAlpha
Khan Academy
Paul's Online Math Notes
Math Insight
PatrickJMT Video Math Tutorials
Math24
Electronics
All About Circuits
Circuit Lab
Programming
C++.com
StackOverflow
Mechanics and Materials
MatWeb
MecMovies
Thermodynamics and Related
Cambridge Materials Science DB
Cambridge Materials Science Videos
Cal Poly Pomona ME Videos
ChemEng
LearnChemE Screencasts
Other Subreddits
r/AerospaceEngineering
r/AskElectronics
r/AskEngineers
r/CAD
r/CAM
r/CAE
r/ComputerScience
r/Engineering
r/EngineeringMemes (Memes can be found here)
r/EngineeringTechnology
r/ECE
r/LaTeX
r/MatLab
r/STEMdents
r/WomenEngineers
r/FE_Exam
It’s been almost three months since I started building my VP studio, which is broadcast-oriented. The choices we made are Aximmetry, Vive Mars (+ FIZtrack), and vMix, and we’ve just received a JibPlus from Edelkrone for those nice, smooth floating jib movements.
It’s been a pretty hard journey, and learning how to use Aximmetry is a daily struggle. Some days are a 100% dedicated fight with your gear, software, video signals, network… and on others the planets align and it’s a kind of magic.
The zoom movement still needs to be improved; we have some stuttering on the focus motor. It’s almost nothing, but the FIZtrack loses some data, enough to be noticed on screen. That’s probably the most difficult part of VP: calibration and sync must be so, so precise, or you just throw your recordings in the bin.
Hello, I'm unsure how to word this, but I'll try my best.
For some reason, the picture on my studio's nDisplay seems to be at a weird angle, which gives it a curved appearance, as if the environment is being distorted strangely.
It happens in every environment.
If I have the frustum activated (using Mo-Sys for camera tracking), it fixes the issue and shows the correct view, keeping objects straight, etc.
By the way, we already re-made the wall mesh in Blender to make sure it was accurate, and it is :/
How the actual environment looks: all walls horizontally straight.
Again, this temple in the background is completely straight, yet it distorts on nDisplay.
When lighting your environments for LED walls, do you use real-world lighting levels (for example, 1700 lumens for a 100-watt light bulb), or do you just use whatever looks good while building the environment, before putting it on the wall?
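As a rough sanity check on numbers like the one above, here is a minimal sketch in plain Python (all values are assumptions, not from the post) of where the 1,700-lumen figure comes from, plus the lumen-to-candela conversion for the case where a renderer wants spot-light intensity per steradian:

```python
import math

# Rough luminous efficacy of a tungsten incandescent bulb: ~17 lm/W,
# so a 100 W bulb emits on the order of 1,700 lumens.
watts = 100
efficacy_lm_per_w = 17          # approximate; varies by bulb type
lumens = watts * efficacy_lm_per_w

# If intensity is needed in candela for a spot light, divide the lumens
# by the solid angle of the cone: omega = 2*pi*(1 - cos(half_angle)).
cone_angle_deg = 60             # assumed cone angle for the example
half_angle = math.radians(cone_angle_deg / 2)
solid_angle_sr = 2 * math.pi * (1 - math.cos(half_angle))
candela = lumens / solid_angle_sr

print(f"{watts} W bulb ~ {lumens} lm ~ {candela:.0f} cd over a {cone_angle_deg}-degree cone")
```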
Hi everyone,
We are presently testing new possible LED panels for our studio and we have encountered this really strange phenomenon. Some details:
Any ideas what we are witnessing here, and maybe also how we can tackle it?
Thanks everyone!! Appreciated.
Has anyone been able to get Unreal Engine 5.5 running on their stage? I have tried to get it working, but it keeps crashing or freezing.
Hi,
I just watched some clips from the Mike Tyson vs. Jake Paul boxing fight, and curiosity has gotten the best of me. Does anyone know who or which company did the virtual graphics for the program? Or, even better, some software/hardware specs? I'm assuming at least one Spidercam and probably an assortment of other Stype gear.
In following along with the “Autoshot Unreal Round Trip” tutorial, I’m attempting to replicate this specific step in the process (but within Unity), beginning at this timestamp: https://youtu.be/XK_FpXXBU7w?si=rUbzpH_ERUbTq-By&t=2299
My Jetset track is solid. My recorded Jetset comp is solid. I seem to be facing the same problem demonstrated in the video, caused by the inaccuracies of the iPhone LiDAR system.
In my efforts to replicate the outlined solution (from within Unity) as provided by the video for Unreal, I’m not achieving a similar result.
Attached are two screengrabs: https://imgur.com/a/DlyCQUh
Replication notes: I’ve tried deleting keyframes and manually entering a position for the Image Plane gameObject’s Z-axis position (the Unity equivalent). I’ve also tried deleting keyframes and manually entering a scale for the Image Plane gameObject’s Z scale. Neither approach succeeds in replicating the process outlined in the linked video tutorial.
My three questions:
Besides building a studio, do you just need to get a foot in the door with a working team?
Doesn't seem like there's an easy inroad besides spec work.
Looking for an artist to assist with our Unreal scenes. We have modeled the majority of the geometry in 3ds Max (mainly architectural imagery such as exteriors, a boardwalk, retail, and office interiors). The tasks would be to translate the finished 3ds Max files, replace some assets (plants), redo the lighting, etc. Looking for help next week. I can send a brief via email if you have a portfolio. We can sort out pricing afterwards.
Freelance, remote work, paid per scene.
How common is it for proper techviz to be implemented in virtual production for commercial productions? Is it typically offered but omitted due to budget or timeline constraints?
Hello everyone!
I’m a final-year filmmaking student, and I’m currently writing a dissertation on how advancements in technology and software have made advanced filmmaking more accessible. To get a range of personal insights, I’ve created a short questionnaire on how these tools have impacted people’s careers. If this topic resonates with you, I’d be grateful if you could take a few minutes to share your thoughts: https://forms.office.com/e/2t5LSGrZyt
Thank you for helping with my research!
We're a small VP studio with a 30'x12' LED wall. We're trying to make sure our render node is running as well as it can. Over the last year we've had some questions come up about best practices, specifically relating to performance. We have two A6000 cards in the machine, but we often find that levels run at unusable frame rates for ICVFX until a level is really pared down to the bare bones. Is this to be expected?
We're also just looking for ways to test and get benchmarks. We've sometimes wondered whether we are indeed using both GPUs, and whether we're using them in the most effective way. I haven't been able to find definitive answers on NVLink, SLI, multi-GPU, etc., so I'm wondering if anyone can weigh in (see the sketch after the specs below).
Specs:
AMD Ryzen Threadripper PRO 5995WX, 2.7 GHz, 64 cores; 256 GB RAM; 2x NVIDIA RTX A6000 48 GB
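One way to answer the "are we actually using both GPUs" question is simply to watch per-GPU load while a level is running on the wall. A minimal sketch using NVIDIA's standard nvidia-smi query interface (the sampling loop and the one-second interval are my assumptions, not a recommended benchmark):

```python
# Poll both GPUs while a level is running to see whether the second A6000
# is actually doing work. Assumes the NVIDIA driver's nvidia-smi tool is on
# PATH; the query fields below are standard nvidia-smi options.
import subprocess
import time

QUERY = "index,name,utilization.gpu,memory.used,memory.total"

def sample_gpus():
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.split(", ") for line in out.strip().splitlines()]

if __name__ == "__main__":
    for _ in range(10):                      # ten samples, one per second
        for idx, name, util, used, total in sample_gpus():
            print(f"GPU {idx} ({name}): {util}% util, {used}/{total} MiB")
        time.sleep(1)
```

If the second card sits near 0% while the wall is live, the level is effectively running single-GPU regardless of what the hardware allows.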
Looking for resources like:
- a pixel-counting ruler
- PNGs for seeing dead pixels
- anything else (a quick generator for solid-color test frames is sketched below)
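For the dead-pixel PNGs, a short script is usually enough. A minimal sketch using the Pillow imaging library, assuming a hypothetical 3840x2160 wall canvas (swap in your processor's actual canvas size):

```python
# Generate full-frame solid-color PNGs for dead/stuck pixel checks on an LED
# wall. Requires Pillow (pip install Pillow); the 3840x2160 canvas is an
# assumed placeholder, not a real wall spec.
from PIL import Image

WIDTH, HEIGHT = 3840, 2160
COLORS = {
    "white": (255, 255, 255),   # dead (dark) pixels stand out
    "black": (0, 0, 0),         # stuck (lit) pixels stand out
    "red":   (255, 0, 0),       # per-channel faults
    "green": (0, 255, 0),
    "blue":  (0, 0, 255),
}

for name, rgb in COLORS.items():
    Image.new("RGB", (WIDTH, HEIGHT), rgb).save(f"test_{name}.png")
```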
Hi everyone,
I'm not 100% sure whether my question covers the standard virtual production method/workflow, since my interest is specifically in the Lens File and Lens Component setups only, without relying on additional live-action plates or LED wall panels.
I've been wondering if anyone is familiar with the process of transferring raw static and/or dynamic solved lens data from 3DEqualizer into Unreal's Lens File setup. I've found very little information about this topic online, since it isn't a real-time Live Link workflow directly within Unreal.
The goal I have in mind is to investigate which distortion parameters are transferable, especially if the data is recorded per frame for an image sequence, and whether that can cover lenses that are animating dynamically over time due to focus pulls, focal-length changes, lens breathing, and/or re-racks when using anamorphic lenses.
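Nothing below is specific to 3DEqualizer's export format or Unreal's Lens File API; it is only a rough sketch of the intermediate step, with assumed column names and a simple two-term radial model, showing how per-frame solved coefficients could be carried as a table and evaluated before being keyed into whatever the target setup expects:

```python
# Illustrative only: read per-frame solved lens data exported from a tracker
# (the CSV columns here are assumptions, not a real 3DE/Unreal schema) and
# evaluate a simple radial model r' = r * (1 + k1*r^2 + k2*r^4) per frame.
import csv

def load_per_frame_coeffs(path):
    """Return {frame: (focal_mm, focus_m, k1, k2)} from a CSV export."""
    table = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            table[int(row["frame"])] = (
                float(row["focal_mm"]), float(row["focus_m"]),
                float(row["k1"]), float(row["k2"]),
            )
    return table

def distort_radius(r, k1, k2):
    """Apply a two-term radial distortion model to a normalized radius."""
    r2 = r * r
    return r * (1.0 + k1 * r2 + k2 * r2 * r2)

# Example: how far the image corner moves on a given frame of a focus pull.
# coeffs = load_per_frame_coeffs("lens_solve.csv")   # hypothetical export
# focal, focus, k1, k2 = coeffs[101]
# print(distort_radius(1.0, k1, k2))
```

Whether those per-frame values can then drive a dynamically animating lens inside Unreal depends on the Lens File tooling itself, which is exactly the part the post is asking about.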
Can someone please explain what these are used for?
Hi folks, I'm looking for any free trainings, videos, and documentation that give a broad overview of how a virtual production studio "works": basics like genlock, video processors, LED arrays, etc., and how they all work together. I've been watching YouTube videos trying to learn what I can, but I'm wondering if anyone has recommendations. Is there anything that covers the basics, a VPS-101 type of thing?
A little background: my company's marketing department is setting up a VPS, and my team (internal IT/AV) will be supporting them from time to time. I'd like my team and me to learn some of the basics so we're all on the same page when we help out: basics on motion tracking systems (Mo-Sys), how the signal flows from the camera to Unreal to the video wall, how video processors (Brompton) work, etc. I'm not expecting us to walk away from watching a few videos as experts, but I want us to have a good feel for the process.
I would also like some of the managers and directors to go through these trainings so they have a better understanding of how this whole process works.
This image shows our current studio setup.
One PC is connected to an LED processor via HDMI, and the background is displayed on the LED wall through nDisplay.
We are currently using a GH4 as our test camera.
And this is the genlock configuration diagram that I've studied and put together.
Is this the correct way to configure genlock for an LED wall?
And I have another question.
Is a Quadro graphics card absolutely necessary for genlock between the camera and the LED wall?
I understand that a Quadro is needed when running nDisplay across multiple computers.
However, since our studio runs nDisplay on just one computer, we decided we don't need a Quadro and built our machine with an RTX 4090 instead (a Quadro is also too expensive).
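As a rough illustration of why a shared sync reference matters even on a single render node (the 0.001 fps clock error below is an assumed value, not a measurement): two nominally 25 fps devices that are not genlocked slip a full frame apart surprisingly quickly.

```python
# Illustrative arithmetic: how quickly two nominally 25 fps devices drift
# apart when they are not locked to a common reference.
nominal_fps = 25.0
actual_fps = 25.001            # a tiny 0.004% clock error (assumed)

frame_period = 1.0 / nominal_fps
drift_per_frame = abs(1.0 / nominal_fps - 1.0 / actual_fps)   # seconds per frame
frames_to_full_frame_slip = frame_period / drift_per_frame
seconds = frames_to_full_frame_slip / nominal_fps

print(f"Drift per frame: {drift_per_frame * 1e6:.2f} microseconds")
print(f"One full frame of slip after ~{frames_to_full_frame_slip:.0f} frames "
      f"(~{seconds / 60:.1f} minutes)")
```

Even well before a full frame of slip, the partial offset shows up as tearing or rolling bars between the camera shutter, the LED refresh, and the GPU's frame presentation, which is the behavior genlock is meant to prevent.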
I am preparing to open a virtual production studio in Korea.
We are currently testing with a Panasonic GH4 camera, and the results are absolutely terrible.
The footage is so bad that we can't even tell if we're doing things correctly.
When we get even slightly closer to the wall, there's severe moiré, the colors look strange, and overall it's just terrible.
However, when some clients came to our studio and shot with Sony cameras, the results were decent (though that was 2D video played back on the LED wall, not Unreal Engine content).
Therefore, we feel it's urgent to establish what the standard specifications should be for cameras suitable for virtual production.
I don't think it's possible to get detailed camera recommendations from this Reddit post.
I would be grateful even if you could just give me a rough estimate of what level of camera would be suitable.
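One thing worth quantifying before blaming the camera body alone is the moiré geometry. The sketch below is a back-of-the-envelope approximation with placeholder values (the pitch, distance, focal length, and sensor figures are assumptions, not measurements from this studio):

```python
# Rough moire-risk check. Moire tends to be worst when the image of one LED
# pixel on the sensor is close in size to one sensor photosite, so the two
# grids beat against each other. All numbers are placeholder assumptions.
led_pitch_mm = 2.6        # LED wall pixel pitch
distance_mm = 3000.0      # camera-to-wall distance (3 m)
focal_mm = 35.0           # lens focal length
sensor_width_mm = 17.3    # Micro Four Thirds width, roughly a GH4
sensor_width_px = 3840    # horizontal photosites used for UHD capture

photosite_mm = sensor_width_mm / sensor_width_px
led_pixel_on_sensor_mm = led_pitch_mm * focal_mm / distance_mm  # thin-lens approximation
ratio = led_pixel_on_sensor_mm / photosite_mm

print(f"One LED pixel spans ~{ratio:.1f} photosites at {distance_mm / 1000:.1f} m")
# Ratios near 1 are the danger zone for moire; shooting from further away,
# using a finer-pitch wall, or letting the wall fall slightly out of focus
# all move the setup away from that beat condition.
```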
Hey there. In order to create a few “simpler” setups on our LED wall, we've been doing some UE 5.4 renders to put on the wall instead of doing live tracking. (This of course means a fixed view without parallax, and that's fine for this purpose.) Instead of rendering one specific CineCamera, is it possible to render the (curved) LED wall projection that's used for the outer frustum, at the high quality the Movie Render Queue allows? That would probably work better in terms of a more accurate display of the world...
Thanks for any advice!
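On the fixed-view case above, a flat-wall approximation at least gives the horizontal FOV a single static render needs in order to cover the wall from the intended viewpoint; a curved wall really wants a cylindrical projection, so treat this as a first-order sketch with placeholder dimensions:

```python
# Simplified sketch (flat-wall approximation, placeholder dimensions): the
# horizontal FOV a fixed camera needs so its frame exactly spans the wall
# when rendered from the intended viewing position.
import math

wall_width_m = 9.0          # assumed wall width
view_distance_m = 4.0       # assumed distance from viewpoint to wall

h_fov_deg = math.degrees(2 * math.atan((wall_width_m / 2) / view_distance_m))
print(f"Horizontal FOV to cover the wall: {h_fov_deg:.1f} degrees")
```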
This might not be the smartest question, but I'm serious here.
I've set up a virtual production with a green-screen room. I'm using the Vive Mars setup, the BMD Ultimatte 4K, and an otherwise all-in-UE5.4 pipeline, which gets me all the way to a final composite over SDI outs to the preview screen, and I record takes to render out with path tracing afterwards.
What exactly does Aximmetry do to lighten/ease the load? I see that it manages hardware and tracking, can load scenes, and can key out the green, but is it currently still beneficial enough to justify the hefty price?
We're currently looking to optimize our studio to be more reliable, although we're already in a pretty good spot: we get 50 fps with scenes that are all Megascans and that have foreground elements in front of the person recorded on the green screen, too.
I'm genuinely asking this because I can't find anything about Aximmetry use for VP that's less than two years old, and two years ago UE was wildly different when it comes to VP...
Hello.
As the title says, we offer this service worldwide.
We are based in France and we have teams so we can scale and deliver pretty much anything remotely.
This allows us to collaborate with studios outside of France.
Quality is always photorealistic, but how much you need really depends on your requirements. We recreated the Eiffel Tower (our own dataset), but we can also give you a soccer field or the Moon.
The video is a BTS of this clip, for which we delivered 6 environments in 5 days: https://youtu.be/YDBIxhq6pH4
Since then, it's been wonderful times and happiness mixed together. The last two jobs were 4 environments in about 48 hours, optimization included, and 2 (pretty complex) environments in 72 hours.
We can definitely deliver to any standard, but please give us more time when you call us; the results will always be better.