/r/WeAreTheMusicMakers
WATMM is a place for music makers to discuss the music-making process (and a few closely related endeavors)
WeAreTheMusicMakers (WatMM) is a subreddit for hobbyists, professional musicians, and enthusiasts to discuss making music. Welcome and enjoy the community!
RULES FOR POSTING:
We expect that all posters read the rules before posting or commenting. Click here for a full list of rules.
This subreddit has weekly threads for various things like Promotion, Feedback, Collaboration, etc. Each thread lasts for 1 week. If you post a new thread for promotion, feedback, or collaboration, you will be banned without warning. You must place these posts in the relevant recurring thread only.
Just asking… in the past 3 or so months, the quantity of music tracks on YouTube that swap vocals has exploded. I’m talking about “Michael Jackson sings a Weeknd song” type of tracks.
What software is the most popular for this right now?
And is something like Melodyne being used to isolate the original vocal track before an AI software changes it to another voice?
For anyone interested, I'm selling Bitwig Studio 4 on Knobcloud. Great DAW, but I don't have time to make music anymore. https://knobcloud.com/item/89648
I use Amuse.io to distribute my music across over 20 stores, and I just submitted an album set to release in October. I also have a bunch of clean versions of my music that I'm planning to submit, but I want to make sure I do it the right way. 4 of the 12 tracks on that album are clean/non-explicit; 8 of the 12 are explicit. I want to release the clean album on the same day, but I don't have ISRCs for any of the tracks, since they're not out yet. Do I just release these as all-new songs? Help me out, never done this before lol!
I want to make prog metal. I use Neural DSP for guitars, have good drum samples for programming, and have an AT2020 for vocals, which I'll eventually record. I know GGD sounds pretty good (once you get good at mixing them), but I can't get great results out of the Neural DSP plugins. Is that related to having good guitar pickups? Also, the AT2020 seems to sound pretty good for background vocals, harmonies and such, but I haven't tested it much for lead vocals (I suck lol).
Okay so, I've been producing/making music for about 7-8 years now. I have a decent home studio setup and some half-decent gear (Maschine, Komplete Kontrol, Adam monitors, etc.). I started off making hip-hop (boom-bap style stuff) but have really gotten nowhere in terms of getting fans, listens, engagement, etc., BUT I have been levelling up in terms of quality, ear, and the like. Around 4 years ago I started experimenting with making electronic music, mainly Drum & Bass, some Halftime and 140 stuff. I had some positive reactions and found it a lot easier to build traction when putting out this style of music, but it's out of my comfort zone and requires a lot more time and dedication, mainly because I'm not too well versed in synthesis and sound design. I also often get the dreaded imposter syndrome when making this music and feel like I'm not doing it the right way, probably because I use loops/sample packs a lot.
I find Hip-Hop a lot easier to make and am very well versed in the workflow and process to making this stuff, but am feeling stagnant at the moment.
My question (or need for advice) is: do I continue making hip-hop, not caring about going anywhere in music and keeping it as pretty much just a hobby, or do I dive headfirst into the world of dance music? Do I stay comfy, or graft to try and actually build a solid rep/fan base in order to step closer to fulfilling my dream of doing this full-time? Side note - I know that making music isn't about worrying about fans etc., but I do want to do this full-time, and I feel like dance music (at least the genres I'm making) is less saturated and therefore easier to build a fanbase around.
TL;DR - I make hip hop and dance music and can't choose which one to pursue.
Kickstart is a quick and outstanding sidechain ducking plugin. The one thing I don't like about it is that it ducks the volume by the same amount no matter how loud the triggering audio is. Say, for example, a kick channel is the sidechain trigger, and the kick has varying volumes. On the softer, quieter kicks, I do NOT want Kickstart to react the same way it would to a louder kick.
I know I could set up a volume controller on the kick channel and link that to the mix knob in Kickstart, but is there another option that I am missing?
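The behavior described above - ducking depth that scales with the trigger's level - can be sketched in a few lines. This is a minimal per-block illustration of the idea, not Kickstart's actual algorithm; the function name and parameters (`max_duck_db`, `threshold`) are made up for the example:

```python
import numpy as np

def level_dependent_duck(signal, trigger, max_duck_db=-12.0, threshold=0.1):
    """Duck `signal` by an amount proportional to the trigger's peak level.

    A trigger peaking at `threshold` applies no ducking; a full-scale
    trigger applies the full `max_duck_db` reduction. Illustrative only.
    """
    peak = np.max(np.abs(trigger))
    # Scale ducking depth linearly with how far the peak sits above threshold.
    amount = np.clip((peak - threshold) / (1.0 - threshold), 0.0, 1.0)
    gain = 10 ** (amount * max_duck_db / 20.0)
    return signal * gain
```

A real plugin would apply a time-varying gain curve per trigger hit rather than one gain per block, but the level-to-depth scaling idea is the same - which is essentially what linking a volume controller to the mix knob achieves manually.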
So over the last week I've had 2 songs get over 2500 plays in a day on Spotify radio (my high is usually about 100 per day from radio). Song A was over 2500 on back-to-back days, and 3 days later Song B was over 2500 for one day. My question is: is this the Spotify algorithm testing out my music, and should I expect more exposure soon? Or is it possibly just a fluke, and my numbers will go back to normal?
Ok, I know questions like this get asked a lot re: removing/keeping old material up on streaming platforms (and the usual answer is just: it’s your music, do what you want… which I totally agree with). However - in this case I’m looking for what you’d do in this situation.
I wrote a bunch of songs over the Covid-19 quarantine and recorded them at home. I used Logic's Drummer in place of a drummer because I didn't know one or have any available to me at the time. I liked the songs so much I put them out, software drums and all. Now, some time later, I've decided to re-record everything from the ground up (including real drums). The song STRUCTURES are exactly the same, but... they're different recordings.
Do I kill the EPs in favour of the “new” considering they’re the “same songs”? What would you do? Obviously I love the originals but I’m tired of having to preface every convo with “ignore the drums, they’re fake” lol.
I was quoted $1,000 per single (mixed and mastered).
Engineer also does $500 a day for tracking.
Album (12-14 tracks) would most likely end up being $10-15k
My friend has a basement his dad turned into a studio in the 80s. So decent isolation and rooms to record are there for us to use. He’s willing to work out a really good deal for us to track there.
My band is proficient at tracking, and we've recorded a 5-track EP before with decent/good results. We'd use my friend's 6-unit rack, mic the guitar amps, and DI the bass. I would then want to send all the tracks out to be mixed and mastered professionally.
My budget for this is currently 2 grand. I want to allot more money for marketing. Of course as I save more money this budget can increase.
We’re a 4 piece band (bass, 2 guitars, drums)
Is the DIY tracking route and then sending to an engineer comparable to shelling out loads of money to a reputable engineer?
The current music industry caters to singles and EPs. So maybe we even track the album and break it up into 2-3 DIY EPs to appeal to a publishing agency that could help us further our goals.
I'm just wondering how bands shell out so much money to record professionally when the success doesn't warrant that price.
Basically, I was going through the songs I made in my very first week, which I was sure were garbage, but I quickly found a lot of salvageable material to turn into better songs with the knowledge I have now. That material included chord progressions and melodies that only needed a little tweaking to sound good.
Take my advice with a grain of salt though, as I’ve only been making music for 6 months.
I found somewhere on reddit a site with a GeoCities/Lycos type of vibe that had a huge backlog of old samples (some ZIP, some in ISO format). I downloaded a bunch of discs from the site but forgot to bookmark it, and now I can't seem to find it.
Some of the samples I got from there:
Back In Time Records - Korg Universe VOL-1
Best Service ProSamples vol.12 - Dance Vocals [AKAI] 1CD
ILIO - Vocal Planet VOL-1 - Gospel
There were tons of AKAI and Korg sample CDs, appreciate any help!
In the heyday of vinyl the idea of releasing without high quality mastering by a professional would have been ridiculous to suggest, however I've heard several people talk about how this could be changing, especially for smaller artists.
If you aren't going to have a vinyl pressed and just intend to upload to Spotify or the like, those platforms have LUFS targets and a process in place that will boost or reduce the volume of your tracks if they are too quiet or loud. And as for the tonal colour across an album, there are now programs like iZotope's Ozone, which claims to be able to master multiple songs to a profile using AI.
So here are the questions:
Do you guys think professional mastering is still as important as it once was for budding musicians releasing their music on streaming platforms?
Can AI programs do the job as well or well enough?
Do the LUFS targets and associated adjustments at the point of upload affect this further?
Would be really interested to hear your opinions on the subject.
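For context on the LUFS question: the adjustment platforms apply at upload is essentially simple gain arithmetic against a loudness target. This sketch assumes Spotify's published default target of -14 LUFS; the function name is illustrative, and per-platform behavior for quiet tracks varies (some only attenuate on certain playback settings):

```python
def normalization_gain_db(measured_lufs, target_lufs=-14.0, allow_boost=True):
    """Gain (in dB) a streaming service would apply to hit its loudness target.

    Assumes Spotify's default -14 LUFS target; set allow_boost=False to
    model platforms/settings that only turn tracks down, never up.
    """
    gain = target_lufs - measured_lufs
    if not allow_boost:
        gain = min(gain, 0.0)
    return gain
```

So a master at -8 LUFS would be turned down 6 dB, while a -16 LUFS master would get a 2 dB boost on settings that allow boosting - which is why slamming a master louder than the target buys nothing on these platforms.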
When you are recording live sounds to make a loop or multisample, do you master or otherwise process your recorded sound?
Hi guys! I'm currently working with my band on original songs. I'm the singer, but I'm also fairly proficient on bass and guitar. Before I joined the guys, I used to make all my songs from start to finish, with absolute control over the creative process. Now we need to find a process that works for everybody, but I don't know what the "common practice" is in bands for writing a song. Do you develop the idea and then show it to the band so they "fill in the blanks"? Do you show the song already finished and they learn to cover it? Do you have one person responsible for the instrumental and another for lyrics? Do you come to rehearsal with a simple concept and everybody builds from there?
I would like to know how you guys do it and what the pros and cons of each approach are!
I have a Presonus FireStudio that I'd like to be able to use for live performance running through Studio One. I could buy a used MacBook for running the DAW and effects, but modern Macbooks don't have Firewire ports. I've seen Thunderbolt to Firewire adapters, but I'm always skeptical of adapters since there's such a high percentage of them that are just garbage or straight up fake. Anyone have experience with these?
If the adapter will work, any recommendations on a Macbook model to buy used for running 3-5 live inputs with effects including Amp modeling?
Hello. Most mixing/mastering in my area seemingly sits around 200 bucks, but I was wondering if pricing is relative to the number of tracks/instruments in the song? Like, if it's only guitar and vocals, should I seek something cheaper? What would you pay/charge for something like that? Cheers
From a performer's perspective, I get it. But from an audience perspective, it really just is so dumb. It seems a lot more standard with guitar based music, with the hip hop artists not dealing with it as often.
It's just a really goofy holdover. The fact that it's all planned out. If there was a legitimate "We aren't leaving unless you play another song or leave on the tour bus", I'd understand. Pull out a weird one and then a show stopper. But just doing it night after night is so funny, when you look at the big picture.
I went to a show tonight, and I knew there was an encore, and I knew what the songs would be. It's basically like a Marvel post-credits scene at this point
I'm sure some people get something out of it. But I do feel like at some point there will be a group of extremely popular young kids in a rock band that just say "No, these are very corny", and that'll be the end of it for good outside of older acts
Maybe I'm wrong though. Do you think they're around for good, even now?
So I have this weird problem in Ableton I've never had before. Whenever I record something like bass/guitar, the levels are just fine and I can hear them all right; I'm getting a good, normal signal as always and not clipping or anything. However, when I play back the clip (which has perfectly normal waveforms and everything), the volume is extremely quiet, like I can barely hear it.
I'm getting a good signal level on my master too (green line) when playing back, so I don't know what could be happening. I tried old projects and the same thing happens, but only on some of the tracks... I have a Roland Rubix 22 and have been using it for a year without problems. Any ideas?
Welcome to the /r/WeAreTheMusicMakers Weekly Quick Questions Thread! If you have general questions (e.g. "How do I make this specific sound?"), questions with a Yes/No answer, questions that have only one correct answer (e.g. "What kind of cable connects this mic to this interface?"), or very open-ended questions (e.g. "Someone tell me what item I want."), then this is the place!
This thread is active for one week after it's posted, at which point it will be automatically replaced.
###Do not post links to promote music in this thread. You can promote your music in the weekly Promotion thread, and you can get feedback in the weekly Feedback thread. Music can only be posted in this thread if you have a question or response about/containing a particular example in someone else's song.
#Other Weekly Threads (most recent at the top):
Hey everyone, I was trying to record some music onto a cassette and realized afterward that the right channel was very crackly and degraded. I've checked cables, speakers, and cassettes, and I've narrowed it down to a problem with the actual recorder. I believe it's a problem with both the recording and playback heads; I also tried cleaning them with 70 percent isopropyl alcohol. I was looking into degaussing, but I don't know if that will change anything. Does anyone have any other techniques I could try?
I'm in the process of recording a bunch of guitar cover arrangements. Very Sungha Jung-y, or "stuff I would hear in a coffee shop." I have about 6 songs ready but have more I could throw together. My target audience is similar to the one for lofi/ambient music - cafe playlists, potential background event/wedding music. It would also serve as a portfolio of wedding music I could play. I had some thoughts I could use a second opinion on:
Release strategy? My thought is to just release the roughly 6 songs as a compilation and regularly post IG reels, stories, etc. It didn't seem worthwhile to release any of them as singles, since covers don't really seem notable enough.
Is it correct that I should probably be finding playlists to apply or submit to? Do they exist at the same level for covers? In general, is the strategy for original music and covers roughly the same?
Recommendations for a distributor? I use DistroKid for my original music, but I'm not sure if I should use something different for covers.
I really want to start getting serious with recording music, and up until now, I've just used digital instruments for everything. Now I've been delving into actual singing and recording with real instruments, but it's been kinda rough. I'd like to say that some of it is because of my gear (literally just my phone and AirPods), but I can't say for sure. So I have a question: how much better would my song quality be if I invested in some good gear (laptop, microphone, audio interface, all that jazz)? The thing stopping me from getting all those things is that I don't feel my song quality warrants all of this gear. So I just wanted to know if investing in gear right off the bat would be a good decision or not.
Link to setup: https://imgur.com/a/nwgX7rE
Trying not to break rules 9 & 10, but I’m having great difficulty finding a solution that will fit my uniquely small space.
5” seems to be the "worth it" threshold for buying monitors, but my desk is too small to accommodate most decent monitors of that size.
The intent is to do more playing/amateur recording (piano/synth/guitar/VSTs), and I'd outsource mixing/mastering if I got that far. I say this to emphasize that I want a detailed but pleasant sound, because the space can't be easily treated to facilitate real mixing; that's a project for next decade.
With that in mind, I want to explore the use of wall brackets flanking either side of the laptop to accommodate larger monitors that’ll give adequate lower frequency response despite the limitations of the desk.
Does anyone have experience with wall mounting a solution in limited real estate? Or should I try combining 3” monitors (the only ones that would fit on the elevated desk tier) with a woofer? I’ve included a link to images of the setup and listening position to clarify the question.
It’s a tiny apartment, so what you see is what I get in terms of space.
Thanks, and again if this breaks rules 9 or 10 I’ll try to think of a more general way to inquire.
I’ve had a problem since I basically started producing music (10 years ago). Ears/hearing tests seem to be fine.
Whenever I'm working on my own or somebody else's track, it's hard for me to tell if the mids/highs (1k and above) are too harsh, bright, or present UNTIL I turn up my headphones/monitors to significantly loud levels. I don't like doing this, because it often turns into a situation where I briefly turn the volume up loud and test: "does the mid/high end hurt my ears?" This worries me, as I don't want to have to hurt my ears to tell that something is too much.
To put it another way, I can’t tell if the mids/highs are too much until the music is too loud. Anyone else have this problem? Maybe I need to train my ear more, but this has been a constant struggle for several years for me.
Also, on another note, I find it interesting that there's a trend for music to have fewer mids/highs than it used to. In my opinion, a lot of older hip hop was brighter and had more presence. In general, I think modern production has darker, more shrouded mixes than, say, 10-25 years ago.
Getting ready to start releasing songs from my new project. Is it better to release the best song first, or a song that is still good but not what you consider your best? The reason I'm thinking about holding off on the best one is that I've heard you can't pitch your first release to Spotify playlists, and that your first release usually doesn't get listened to as much since there isn't much activity on your account yet. Does this make any sense?
I've made a few tracks recently that I've put on Soundcloud, however I am kind of self-conscious about releasing them through Bandcamp/Distrokid/etc.
When I listen to other people's music, I feel like even if the idea is simple, there is still so much nuance in the implementation, so many layers; meanwhile, all I have is a couple of guitars, a couple of synths playing kind of simplistic arpeggios, and programmed drums.
Is there a place on the major platforms for music whose production and composition isn't world-class, or should it stay on Soundcloud? Should I stop comparing myself to musicians whose production is way above my level and release stuff anyway?
EDIT:
Thanks everyone. Guess I had that thing a band encounters the day before their first gig, where they suddenly feel absolutely unprepared despite everything being okay a day ago.
It's definitely true that the worst thing that can happen is that I will get zero listens (which is probably the case since I don't yet have a following besides a few of my IRL friends). For some reason I felt bad about polluting the platforms even though they are already full of AI music and stuff like that, which makes my concerns ridiculous.
I'm using Shreddage 3 for the first time in a song. I'm just using my MIDI keyboard and the strumming buttons to add some basic strumming patterns (along with piano chord voicings in the left hand, which I know isn't very authentic, but Shreddage seems to revoice them nicely). When I listen back, almost random parts of the strumming seem louder than others, and I'm not sure why. I've noticed velocity controls the speed of the strum, not the volume, and I've checked all the other CC controls and can't find what's causing it. My best guess is that the way Shreddage revoices my piano chords makes certain chords naturally louder than others (I guess in the tessitura of the guitar). It could also be that the amp has some phaser-like effect that makes random parts of the playback seem louder, but I'm using a very basic amp. I'm at a loss and don't really want to automate the volume to even everything out.
I've always been completely bad at this. Melodies in general, but especially vocal melodies.
I can write lyrics all day. I can make beautiful tracks that people love. But I damn sure am not a singer.
Still a songwriter, though. And I've always wondered if it's completely an "I just think something up" thing, or if there's any method to it at all, or even just... vague instructions!
Does singing other people's songs help you make up your own melodies? I've always found that to be the case for me and writing out interesting chord switches or general feelings. But it's always kind of eluded me for melodies, again.
If you have any suggestions, I'd love to hear them. I'm not going to become an established singer-songwriter anytime soon, but I'd sure like to keep going with songwriting in general, and making those melodies comes with the territory.
Running Logic Pro and will be grabbing a new M2 Mac. Would it be wrong to use an external SSD for most of my sample-library plug-ins? I assume Thunderbolt 4 transfer speeds will make this possible without noticeable performance loss or lag? Apple is great, but the prices for additional storage or RAM are bonkers AF.
I noticed that when I closed my eyes and played the same parts, my ability to feel the beat and groove increased dramatically. It's amazing how distracted you can get with your eyes open.
A lot of us are recording while simultaneously producing/mixing/engineering and keeping our eyes glued to our DAW, and this can seriously distract us from PLAYING our instrument.
Anyway, food for thought. If anyone feels like this helps, that's great.