/r/AudioPost
We are sound for picture: the subreddit for post sound in Games, TV, Film, Broadcast, and other types of production.
See subreddit rules before posting.
This is the subreddit for post-production sound geeks in Games, TV, Film, and Broadcast.
If your questions are about which lavalier mics to use on set, how to make your shotgun mic look like a dead opossum, boom pole preferences, which cable actually connects to the camera, etc., ask them over in r/LocationSound.
Please Note
Use the AudioPost Community Corner FAQ post for the following:
Use the comments section of the AudioPost Mine feature post at the top of the subreddit to link to or tell us about things you are affiliated with.
Check out the AudioPost wiki
/r/GameAudio
/r/LocationSound
/r/RateMyAudio
/r/ProTools
/r/Music
/r/StudioPorn
/r/SFXLibraries
/r/SoundEffectSwap
/r/FilmMakers
/r/VideoProduction
/r/Videography
/r/VideoEngineering
/r/Editors
/r/DocProduction
/r/Freelance
/r/ProduceMyScript
Hello there,
I am nearly done mixing a feature film in 5.1 and have a question about the target loudness. I'm well versed in most of the technical details of the mixing process, but this is my first feature film that might end up on Netflix.
As per the Netflix spec, the integrated loudness should be -27 LUFS measured over the entire programme. My mix is hitting an integrated -28.1 LUFS. It's a social drama, almost entirely dialogue, with long gaps between lines.
If I try to raise the loudness of one act of the film, that act sounds way too loud compared to the others. I mixed the first few lines of the film with the target level in mind and kept that as my anchor point, but doing so leaves my mix just short of the target loudness.
What are the ways in which I can fix this? Any help would be appreciated.
Please feel free to correct me if I'm wrong on any details.
Thanks!
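For what it's worth, the arithmetic for a whole-mix trim is simple: integrated loudness under BS.1770 moves 1:1 with broadband gain, so raising the entire mix by the shortfall hits the target without touching the internal balance between acts. A minimal sketch, using the numbers from the post:

```python
# Toy calculation: how much broadband gain closes the loudness gap.
# Integrated loudness (BS.1770) scales 1:1 with gain applied to the
# whole programme, so the required trim is just the difference.
target_lufs = -27.0     # Netflix spec for the full programme
measured_lufs = -28.1   # current integrated reading

gain_db = target_lufs - measured_lufs    # +1.1 dB across the entire mix
gain_linear = 10 ** (gain_db / 20)       # multiply every sample by this

print(f"apply {gain_db:+.1f} dB (x{gain_linear:.4f} linear)")
```

A +1.1 dB trim on the final mix bus (or on the printed stems equally) is usually inaudible as a balance change, and most specs also allow a small tolerance around the target, so check whether -28.1 already falls inside the allowed window.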
Hi everyone, it's my first time doing audio post-production for a film and I have a question. The video editor said he would send me the AAF, but I work in Ableton, which I've found doesn't support that format. My question is: what's the difference between working from the AAF versus having him send me the video and the various audio files exported individually? Is that workable, or do I lose some advantage? If so, I'd consider subscribing to a month of Pro Tools, which I do know how to use.
Is it possible to keep the original fades between clips that are in the AAF files? I managed to relink all the dialogue audio files and get a Pro Tools session, but none of the fades are there anymore.
I recently learned about the ADR workflow in Nuendo 13 and it felt awesome for dubbing. It's not a widely covered topic, I know; it's a pain to look up any info about it. I decided to write to this sub because the Nuendo, Cubase and Steinberg subreddits seem almost dead, so sorry for the slightly off-topic post.
So, I have a nice subtitle file from Aegisub in ASS format (.ass, a funny extension), which seems to have everything needed: actors, text, start/end times. How do I convert it to work with the ADR feature? Netflix's TTAL format seems perfect and compatible, but the conversion tool is limited to Netflix partners.
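Since the .ass format is plain text, one practical route is to extract the cue data yourself and reshape it into whatever CSV layout your ADR import expects (check your Nuendo version's documentation for the exact column headers it wants). A hedged sketch of the extraction step; the field order follows the default `Format:` line in an .ass `[Events]` section, and the sample line is made up:

```python
# Hedged sketch: pull actor, start, end, and text out of an Aegisub
# .ass file. Default [Events] field order is:
# Layer,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text
# Text comes last and may itself contain commas, hence maxsplit=9.
def parse_ass_events(lines):
    cues = []
    for line in lines:
        if not line.startswith("Dialogue:"):
            continue
        body = line.split(":", 1)[1].strip()
        fields = body.split(",", 9)
        cues.append({
            "start": fields[1],   # e.g. 0:00:01.50
            "end": fields[2],
            "actor": fields[4],   # the Name field
            "text": fields[9],
        })
    return cues

# Hypothetical example line, as Aegisub typically writes it:
demo = ['Dialogue: 0,0:00:01.50,0:00:03.20,Default,Anna,0,0,0,,Hello, world!']
print(parse_ass_events(demo))
```

From the resulting list you can write a CSV or tab-separated cue sheet; the remaining work is matching your ADR tool's expected timecode format (centiseconds here versus frames there).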
What part of the broadcast chain causes ducking on some streaming shows? I just watched A Man in Full on Netflix, and the backgrounds duck like crazy before every bit of dialog. It certainly wasn't mixed like this. And other shows don't do it. Is it an auto companding thing at the broadcaster?
I love using the preamp on the Scheps Omni Channel 2 for dialogue. I'm curious if anyone knows what the Preamp/saturator is based on? And does anyone know a plugin with the same saturation character?
I am newish to mixing in surround and working with a fairly limited setup. I've basically just exported the multitracks and muxed them into the video with ffmpeg. It works fine on a 5.1 system, but when played on a 7.1 system I'm hearing what should be the rear speakers play out of the middle side speakers. I played another movie that also seems to be 5.1, and its surround plays as I'd expect out of the farthest-back pair (with no sound out of the middle side pair). Is there something special I need to do to the final video file to tell it to always put the rear surround tracks in the farthest-back pair of speakers, regardless of how many speakers there are or how far back they sit?
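One likely culprit is the channel-layout tag rather than the audio itself: ffmpeg distinguishes "5.1" (FL FR FC LFE BL BR, back surrounds) from "5.1(side)" (FL FR FC LFE SL SR), and a 7.1 playback system will honor that tag when mapping to speakers. Forcing the back-surround layout with the `aformat` filter on a re-encode may fix the mapping. A hedged sketch that just assembles the command; file names and the AC-3 codec choice are placeholders, so substitute your actual delivery spec:

```python
# Hedged sketch: re-tag/convert the audio to an explicit 5.1 (back)
# layout so players send the surrounds to the rear pair. The video
# stream is copied untouched; only the audio is re-encoded.
import subprocess

cmd = [
    "ffmpeg", "-i", "movie_in.mov",          # placeholder input name
    "-c:v", "copy",                          # leave the picture alone
    "-af", "aformat=channel_layouts=5.1",    # force FL+FR+FC+LFE+BL+BR
    "-c:a", "ac3", "-b:a", "640k",           # placeholder codec/bitrate
    "movie_out.mov",
]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run it
```

You can check what layout a file currently carries with `ffprobe` (the stream info line will say `5.1` or `5.1(side)`), which tells you whether this is actually the problem before re-encoding.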
Hello, I wanted to ask if anyone could share some information about mixing and sound design in Ambisonics for VR video in Pro Tools. I'm having trouble routing the audio properly: I need mono and stereo sound sources to be encoded into Ambisonic space.
I tried the AudioBrewers encoder plugin on the stereo tracks and their decoder plugin (on a quad-channel master) for monitoring, but when I insert the encoder it says "this track isn't wide enough to output Ambisonics". I then tried Waves B360 and Waves NX for the same purpose, and that worked. I exported an Ambisonic quad track, but the spatial metadata injector won't let me inject the spatial metadata.
Am I doing something wrong?
Most tutorials on mono-to-Ambisonics encoding are seven years old and outdated, and they use plugins that don't even exist anymore.
Does anybody have a simple way to encode mono sources to Ambisonics?
Thanks.
I'm mixing one of my first sound design jobs: a 30-second to 1-minute advert with just sound design (mainly water and yacht SFX) and background music. I have some questions about how I should manage the stereo image of the two stems and the different sounds:
Considering that the sound design's job here is to give the images impact, and that most of it lives in the low mids since we're talking about deep water sounds, I would put the SFX in the centre and give the music a somewhat wider stereo image (maybe splitting bands with a stereo imager to keep everything below 200 Hz in the centre and gradually widening the upper bands) to leave enough space for the SFX.
However, if I were mixing this for a feature-type project, I would make sure to give the SFX enough width so they somehow "surround" the listener/viewer. In that case I would leave the music's stereo image as it is, without looking for extra width.
I'd be curious to get feedback from you and discuss the options: I'm sure there are aspects I'm not taking into account that might determine which path I should follow. Open to all opinions and discussion!
Thank you
Hi everyone, I started working as a sound recordist for the national news in my country. Our basic setup when going out is a Sound Devices 302/633/833 with batteries, a Wisycom two-channel receiver, and a Wisycom transmitter for the camera (plus some backup mics I carry in my backpack). Long story short, my back is killing me. We stay standing for long periods and I constantly have to bend my back (to get in the car, etc.). We have very basic Orca bags and harnesses, but I feel I should buy my own harness because I gave myself lumbago within three months. What are your best recommendations for a harness? And a bag? Thanks in advance.
Please don't criticize me for my ignorance... everyone has to start somewhere.
I set a bunch of markers for a project that was being worked on at a 1-hour timecode start, but now we're switching to a 10-hour start, and I'd like to not have to redo all these markers...
Is there a way to shift a bunch of ruler markers from 01:00:00:00 to 10:00:00:00?
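If your DAW lets you export the marker list as text and re-import it, the offset itself is trivial: 01:00:00:00 to 10:00:00:00 is just +9 hours on every marker. A minimal sketch, assuming non-drop-frame HH:MM:SS:FF strings (for drop-frame material you'd want a proper timecode library, since the hours carry pull-down semantics):

```python
# Hedged sketch: shift marker timecodes by a whole number of hours.
# Assumes non-drop-frame HH:MM:SS:FF; minutes/seconds/frames are
# untouched because the offset is an exact hour count.
def shift_hours(tc, hours):
    hh, mm, ss, ff = tc.split(":")
    return f"{int(hh) + hours:02d}:{mm}:{ss}:{ff}"

markers = ["01:00:10:12", "01:23:45:00"]  # placeholder marker list
print([shift_hours(tc, 9) for tc in markers])
# -> ['10:00:10:12', '10:23:45:00']
```

Also worth checking first: in some DAWs simply changing the session start timecode re-anchors all markers relative to the new start, which would avoid the export/re-import round trip entirely.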
Film instructor here. My school is looking to expand into teaching post in a more comprehensive way, and I’m looking for some advice from the trenches to make sure we get the right gear for an industry-standard post audio workflow. I’ll be asking local post audio pros as well, but I figured I’d cast a wide net here first to make sure I’m headed in the right direction. We’ve already figured out speakers, room treatment, microphones, etc via a consultant, but I’m having a hard time figuring out what audio interface and controls would be best (other than they should be Pro Tools based). I’m looking for what would give students hands-on time with gear they’d likely be using in film/TV post jobs, as well as something that could last for a while without being immediately outdated right after buying.
Here’s the rough parameters for two potential spaces we might be using:
For the smaller space, I’m currently thinking of asking for either an S1+Avid Dock combo or an S4, either of which would be hooked up to an MTRX Studio interface.
For the screening room, I’m leaning towards an S4 or (budget permitting) an S6, then an MTRX II as the interface. My reasoning for the latter is to allow for future expansion or reconfiguration without having to buy a whole new interface.
In addition to rating the above-listed hypothetical setups generally, I’d love it if you could help me answer the following questions:
And yes, I will also talk to Avid for advice as well. Just looking to see if I’m in the right ballpark, or if there’s some crucial stuff I’m missing before going forward.
Welcome to the AudioPost Community Corner Post for FAQ discussion. Based on community feedback, the following types of FAQ posts are no longer allowed on the subreddit front page. Those conversations must instead use the comments section of this post:
If you are submitting something for evaluation here in the comments, be sure to leave feedback on other evaluation requests. This is karma in action. For evaluations of audio work, you can also submit to the /r/RateMyAudio subreddit
If you are wanting to discuss audio being fixed, repaired, removed, isolated, or tools or techniques related to it, then the discussion goes here.
If you are looking for free or very low pay help for your AudioPost needs then ask here. While this post allows low/no work requests, please note that we strongly discourage this kind of thing as it rarely proves to be the benefit claimed or desired. DO NOT put personal info in the comments including work history. Instead, use PMs to pass things like contact info.
Questions about schools, getting started in your career, and other newcomer FAQs go in the comments here. Before asking, be sure the topic is not already covered in the subreddit. The FAQ section of the AudioPost wiki offers shortcuts for searches of common topics.
You are invited to join us in the Reddit Pro Audio Network AudioPost Channel on Discord
If it's yours, by you, for you, about you, or something you are otherwise affiliated with, tell us about it here in the AudioPost Mine
This post is the only place in the sub for discussion about your latest site/works/product/app/content/business related to Audio Post. Have a new SFX library? Tell us about it here!
This venue allows you to get your info to our readers while keeping our front page free from billboarding. It's an opportunity for you and our readers to hear about your latest news/info. Please keep in mind the following when using this post:
Anything added MUST pertain to Audio Post. Tangential content will be removed
Accounts which are predominantly or solely promotional or spam may not submit here and will be banned.
Download and document links are NOT allowed but you MAY link to your site or video.
Content evaluation requests go in the Audio Post mine
NO sharing of personal / identifying info. Posters and responders in this thread MAY NOT include an email address, phone number, Facebook page, or any other personal information. Use PMs to pass that kind of info along.
Welcome to the AudioPost Mine. There's going to be a lot of dirt but we hope for some gold too.
I've been kinda learning as I go with mixing/editing sound for my short film (although I knew a little when I started). I've been mixing everything in Adobe Audition.
One small thing I've run into: Each clip of dialogue from the original shoot is mono (as in, each clip shows a single waveform) which is how dialogue ought to be, to my understanding. However, I recorded some ADR for some lines, and these clips seem to be in stereo; the clips show two waveforms instead of just one. Should I bother converting the clips to mono or is stereo fine/no difference?
And, should anything in particular be in stereo (SFX, music, etc.) or should everything be mono?
Sorry if I messed up some terminology here... but thanks for any help!
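On the mono/stereo question: "converting to mono" just means combining the two channels into one, typically by averaging them. If the ADR was recorded on a single mic, the two channels of those "stereo" clips are almost certainly identical, so the conversion is lossless; the main reasons to do it are tidier panning behavior and consistency with the rest of the dialogue. A hedged sketch on plain sample lists just to show the arithmetic (in Audition the channel mixer or a sample-type conversion does this without any code):

```python
# Hedged sketch: fold a stereo pair down to mono by averaging the
# channels sample-by-sample. Identical channels come back unchanged;
# a signal on only one channel drops by 6 dB (halved amplitude).
def stereo_to_mono(left, right):
    return [(l + r) / 2 for l, r in zip(left, right)]

# Dual-mono ADR (same mic on both channels) survives intact:
print(stereo_to_mono([0.1, 0.2, -0.3], [0.1, 0.2, -0.3]))
```

As a rule of thumb, dialogue stays mono (panned center), while music and ambiences are commonly stereo; hard SFX can be either depending on how wide you want them to feel.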
Hello!
Asking some professionals here regarding my subject.
Sorry, and thank you in advance. I'm really in the dark when it comes to pricing different tasks.
I recently reopened a project I did some time ago, but some audio files are missing because I remember erasing them a long time ago. I tried a file recovery program but failed to find them. I must've downloaded some kind of bundle pack with the audio files inside, because nothing came up when I searched the net. Is there any way to find them somehow? I would really appreciate the help!
In case anybody tries to find them, the file names are exactly these:
"[ETP Bonus Loops] - Dipset- (D#min) (140BPM).wav"
"Evil Steve (Emin) (160BPM).wav"
"Xen (F Lydian) (86BPM) - FULL LOOP.wav"
When working on fiction, if you organize the tracks by character, do you keep the boom and lav on the same track, or do you separate them so you can potentially use both at the same time?
Or is it better to organize by shot and keep the lavs separate? Or does it depend on what you're working on? Is one approach better than the other?
I have a Source Connect Pro license that I no longer require. I’m open to offers if anyone is interested in buying it.
Hi all, I'm part of a university animated film team. I'm the producer, not the sound designer, but I'm writing this post so I can send some advice his way.
We are screening our short film in a 200-seat movie theater with some kind of Dolby setup (our university owns the venue for its film school). We weren't told exactly what they mean by Dolby.
We are being asked to submit a stereo mix for the screening, so no need to mix for surround, phew.
Unfortunately, we cannot get access to the theater to test our mix before the screening. Everyone's short film submissions are being played in one huge supercut, so no one gets to check or customize their settings.
With this in mind, how should we guesstimate our mixing to fit the theater environment?
Neither of us are super experienced with this so we'd appreciate some good rules of thumb to make our mix sound the best it can. Thanks!
(crosspost as suggested by r/audioengineering user)
***Disclaimer: I know there's only so much plugins can do, and having a deep, gravelly voice like this certainly does some heavy lifting***
A few months ago I saw a post about techniques/plugins that could help with getting this type of sound on vocals, but I can't, for the life of me, find the post. It had something to do with phase shifting in the low frequencies combined with saturation, but I can't remember. Please help.
There was supposedly a town hall on Sunday to finally disclose to us what our local's specific deal with the AMPTP entailed. All the emails leading up to it said there would be a Zoom option, but I never got a link.
I was going to make the drive out to Hollywood for it but then I ended up having to take my kid to urgent care instead. I was hoping there would be some kind of update communication afterward, or maybe even a recording of the zoom portion somewhere, but I can’t find anything.
Did anyone go and have any updates about what’s going on?
So, like all of us I think, I'm feeling the pinch a bit at the moment, so I'm looking to expand out of my little local Australian market but have no idea how. I've got 10 years of experience under my belt, a bunch of mostly reality-TV credits, and an idea to do dialogue or FX editing for North American productions overnight, since my timezone is offset about 12 hours from the US and Canada (unless that seems crazy, or is already being done?).
I have no idea how to get in touch with people who might need my services other than cold emailing, so I'm wondering: has anyone here been able to make that jump? Or is it, as my self-doubt tells me, that no one will want to work with someone on the other side of the world?
Someone please enlighten me on this insanity: Disney+ keeps rejecting my 5.1 mixes for measuring -27.7, while Nugen LMCorrect 2 (which is what Disney+ itself recommends, AS SUGGESTED BY THEIR OWN WEBSITE) reads -27.2 from first frame of action to last frame of action, per ITU-R BS.1770-4. Is it possible that someone on their team is using a different loudness meter, or measuring from silent points in the picture, to get that figure? How can there be such a discrepancy between -27.2 and -27.7? Am I missing something?
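Measurement range is the usual suspect here: BS.1770-4's gating drops 400 ms blocks below the -70 LUFS absolute gate, then drops blocks more than 10 LU below the power mean of what survived, so what counts as "quiet enough to be gated out" depends on where the measurement starts and stops. A hedged toy model of just the gating stage (real meters work on overlapping K-weighted 400 ms blocks; this takes per-block loudness values as given) to show that moderately quiet material can survive the gate and pull the integrated number down:

```python
# Hedged toy model of BS.1770-4 integrated gating.
import math

def integrated(blocks_lufs):
    # Stage 1: absolute gate at -70 LUFS.
    abs_gated = [b for b in blocks_lufs if b >= -70.0]
    # Stage 2: relative gate, 10 LU below the power mean of stage 1.
    mean_pow = sum(10 ** (b / 10) for b in abs_gated) / len(abs_gated)
    threshold = 10 * math.log10(mean_pow) - 10.0
    rel_gated = [b for b in abs_gated if b >= threshold]
    mean_pow = sum(10 ** (b / 10) for b in rel_gated) / len(rel_gated)
    return 10 * math.log10(mean_pow)

# Hypothetical programme: mostly dialogue blocks with some quiet air.
dialog = [-26.0] * 80 + [-45.0] * 20
print(round(integrated(dialog), 2))   # -> -26.0 (the -45 LU air is gated out)
```

If the air were louder (say -33 instead of -45), it would pass the relative gate and drag the integrated value down, which is exactly the kind of half-LU discrepancy two measurements over different in/out points can produce. Worth confirming with the QC team what range and meter they used before touching the mix.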
(Hopefully this is an appropriate channel for this question--let me know if there's a better one to use.)
Does anyone know how to play system audio from their Mac out of the L-R speakers of a 5.1 setup using Pro Tools Carbon as the interface? I'd like to avoid the redundancy and cost of setting up another pair of speakers for stereo monitoring when my main L-R speakers in the surround setup work great.
When I was working in stereo only, using the Main L-R TRS outputs of the Carbon, this worked fine--both Pro Tools and system audio were routed to those outputs. Now I'm unable to get my system audio to route to any of the Line Outs 3-8 (DB-25 connector) that feed the 5.1 speakers.
In Audio MIDI Setup Speaker Configuration for the Carbon interface, I'm able to hear a test tone through all 6 speakers by setting the channels correctly. I believe the Main L-R TRS outs are fed by channels 1-2, Alt monitors by channels 3-4 (which come out Line Outs 1-2 of the DB-25), and six line outs for surround fed by channels 5-10 (which come out Line Outs 3-8 of the DB-25).
Oh, and I can feed system audio to the L-R speakers of the 5.1 setup if I pass system audio through Pro Tools using Aux IO--I just don't want to have to have Pro Tools open and routed correctly to hear system audio.
I also have a case open with Avid support about this, but I'm not expecting great things. I'm paying for a 4-hour response time support plan, and they just got back to me with generic, unhelpful FAQs today after 10 days.
MacOS Ventura 13.6.6, M2 Max
Pro Tools Ultimate 2024.3.1
I'm currently in the Netherlands and, looking at the market here, I'm wondering which are the biggest and most established post-production companies to look into?
Thanks a bunch for any lead!
So, I've already done the sound design for a few small projects in Logic, but this time I'll be doing the whole audio post. Since many people lament that this workflow doesn't work that well (I tried it once just to see what errors it would create, and it was a mess), can somebody who regularly does this tell me the best practices for making the transition between apps as smooth as possible?