
Music Production and Mixing Tips Podcast for DIY Producers and Artists | Inside The Mix
If you're searching for answers on topics such as: How do I make my mixes sound professional? What equipment do I need to start producing music at home? What is the difference between mixing and mastering? What are some of your favourite production tools and techniques? How do I get my music noticed by record labels? Or what are the key elements of an effective music marketing strategy? Either way, you’re my kind of person, and there's something in this podcast for you!
I'm Marc Matthews, and I host the Inside The Mix Podcast. It's the ultimate serial podcast for music production and mixing enthusiasts. Say goodbye to generic interviews and tutorials, because I'm taking things to the next level. Join me as I feature listeners in round table music critiques and offer exclusive one-to-one coaching sessions to kickstart your music production and mixing journey. Prepare for cutting-edge music production tutorials and insightful interviews with Grammy Award-winning audio professionals like Dom Morley (Adele) and Mike Exeter (Black Sabbath). If you're passionate about music production and mixing like me, Inside The Mix is the podcast you can't afford to miss!
Start with this audience-favourite episode: #175: What's the Secret to Mixing Without Muddiness? Achieving Clarity and Dynamics in a Mix
Thanks for listening!
#203: What Actually Happens When You Choose a Sample Rate? (feat. Ian Stewart)
Confused about which audio sample rate to choose for your music? Whether you're recording your first track or mastering for Dolby Atmos, knowing the difference between 44.1kHz, 48kHz, and 96kHz is more than a technical detail: it's essential to how your music sounds and where it ends up.
In this episode of Inside the Mix, mastering engineer and Berklee professor Ian Stewart returns to demystify one of the most misunderstood elements in music production: digital audio sample rates.
We answer common questions like:
- What is a sample rate in audio production?
- Should I use 44.1kHz or 48kHz when mixing?
- Does a higher sample rate improve sound quality?
- What sample rate should I use for streaming or mastering?
Ian explains the Nyquist-Shannon theorem, how aliasing impacts your mix, and why 48kHz is the new practical standard, from YouTube to Bluetooth devices to Dolby Atmos. Plus, we discuss how higher sample rates give your plugins more headroom for harmonic generation and cleaner processing.
Don't miss this one if you're serious about choosing the best sample rate for your music. And stay tuned: next episode, we dive into dither.
Links mentioned in this episode:
Follow Ian Stewart
Ways to connect with Marc:
Radio-ready mixes start here - get the FREE weekly tips
Grab exclusive access to BONUS content
Book your FREE 20 Minute Discovery Call
Follow Marc's Socials:
Instagram | YouTube | Synth Music Mastering
Thanks for listening!
Does sample rate actually make your music sound better? You've probably heard arguments over whether you should record at 44.1, 48, or even 96, but here's the real question: does it really matter? In this episode, we're diving deep with someone who not only understands the theory but lives it every day in the mastering room.
Ian Stewart:You're listening to the Inside the Mix podcast with your host, Marc Matthews.
Marc Matthews:Welcome to Inside the Mix, your go-to podcast for music creation and production. Whether you're crafting your first track or refining your mixing skills, join me each week for expert interviews, practical tutorials and insights to help you level up your music and smash it in the music industry. Let's dive in. Hello folks, and welcome to Inside the Mix.
Marc Matthews:Today's episode is part one of a special two-part conversation. This might be the first time... no, I tell you what, it's the second time I've done a two-parter in all 200-plus episodes now. We're with mastering engineer, educator and all-round audio wizard Ian Stewart. Ian is a returning guest: check out the listener-favourite episode 165, What Is Mid-Side EQ?, one of the most popular episodes of the podcast, might I add. He's the founder of Flotown Mastering and an assistant professor at Berklee College of Music, where he teaches the science and art of audio mastering. His resume includes work with legends like KRS-One, and he's a regular contributor to iZotope's Learn blog and Wave Labs as well, recently on YouTube. In this episode we are talking all things sample rate: what it is, what it isn't, and why getting it right could make or break your project. That's enough from me. Ian, how are you? And welcome back.
Ian Stewart:I am excellent, Marc, thank you so much. I'm excited to be here. I am in lovely New England, where today it is a balmy 90-something degrees, which for the rest of the world is 30-some-odd, I don't know. So it's sticky up here. I've got the windows open, so if you hear a little extraneous noise, that's probably from my end, and I do apologize for any of that. But I am great. It's summer, we're doing it, life's good, working on cool music. So yeah, love it.
Marc Matthews:Yeah, that's the attitude, man. I think the audience, when they listen to the back catalogue of the podcast, can tell the seasons, because in certain episodes, or series of episodes, you just hear animals: birds, seagulls, all this stuff. So you can kind of place the podcast when you're listening to it.
Marc Matthews:You move with the seasons, which is quite cool. But I know it's been hot here in the UK, as we were saying before the episode, so it's a hot, hot time, especially when you're in a studio and you haven't got the HVAC going and stuff.
Ian Stewart:We're not quite there. It was, you know, cool and rainy until a week ago and then, all of a sudden, bam, summer. So yeah, it's caught me a little off guard.
Marc Matthews:It always catches us off guard, to be honest. It doesn't take much for everyone to start getting the barbecues out and the shorts and t-shirts. I mean, it has been hot here, but sometimes it's only in the late teens and people are still out like that, and you think, I don't think it's that hot to be dressed like that.
Marc Matthews:But there we go. So in this episode, folks, we're peeling back the layers of confusion around sample rates. This is almost a back-to-basics, really: from 44.1 to 96 and everything in between, Ian's going to help us separate technical truth from audio folklore. Think of this as your go-to guide for demystifying sample rates. But before we do that, if you want weekly tools and tips to make your music playlist-worthy, click the link in the episode description and get my free weekly tips direct to your inbox. No spam, no fluff: just once a week I'll send you tips and tricks, not just from me, but also from individuals who have been on the podcast, and stuff that I find out about as well. So, enough from me. Going back to sample rate, I think to start our discussion it's probably quite important just to outline what sample rate actually is. What are we referring to in audio?
Ian Stewart:Yeah. So when we record digital audio, right, when we want to get something from a microphone, for those of you that are just listening, I'm pointing to my microphone, and we want to get it into a file, we have to do a bunch of conversion. Back in the old days, when we used tape or other mediums, it was just a voltage that came out of the mic, went through a preamp, and that voltage would get stored magnetically on a tape. But when we go digital, we have to measure that voltage and store it as a value. And because we can't do that infinitely fast and continuously, we take a sample: we measure that voltage a bunch of times every second. We say, okay, the voltage is this, and we save it as a value, as a digital number, and we move on to the next sample and say, oh, now the voltage is this, and we save that. And so the sample rate just tells you how many times every second you're taking that sample.
Ian Stewart:So for 44.1 kilohertz, that's 44,100 times every second we're measuring the audio. Kilohertz is thousands of hertz, and a hertz is just a measure of cycles per second. So it's just how many thousands of times every second we're measuring the voltage of the audio and storing it as a digital number, and that's pretty much the root of it.
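To put numbers on what Ian describes here, this is a small editorial sketch (ours, not from the episode) of what "samples per second" means in code, assuming NumPy is available:

```python
import numpy as np

sample_rate = 44_100                       # samples measured per second (44.1 kHz)
duration = 1.0                             # one second of audio
t = np.arange(int(sample_rate * duration)) / sample_rate

# "Measure the voltage" of a 440 Hz tone at each sample instant and
# store it as a list of numbers, which is essentially what an ADC does.
voltage = np.sin(2 * np.pi * 440 * t)

print(len(voltage))  # 44100 stored values for one second of audio
```

Each entry in `voltage` is one snapshot of the signal, exactly the "measure it, save it, move on" loop Ian walks through.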
Marc Matthews:Yeah, yeah, yeah, definitely. So whilst we're on that, just unpacking the 44.1. So if we were to dive into sort of the frequency that we can hear from 20, let's say, 20 hertz to 20,000 hertz, can you maybe just explain to the audience why it is that we're using the number 44.1?
Ian Stewart:Yeah, absolutely. So if you've ever heard of the Shannon sampling theorem, or the Nyquist sampling theorem, or the Shannon-Nyquist sampling theorem, any combination of those: these were two engineers back in, oh, I should know a time range, but I want to say maybe even the 1930s. I should have been more prepared for this, but I think Nyquist worked for Bell Labs, and he kind of pioneered this digital sampling theory way before we even had the technology to do it. What he computed mathematically is that if we want to store any given frequency, say it's 100 hertz, pretty low, right, we need two sample points in that waveform. So if it goes from zero up to its peak, back down to zero, down to its negative peak and back, basically the idea is we at least need to store the maximum and minimum, those positive and negative peaks, to be able to accurately describe what that frequency is. So you need two samples for every period of a waveform. Now, commonly the range of human hearing is given as 20 hertz to 20,000 hertz, 20 kilohertz, and to be honest, that's pretty generous. Really, once you get out of your teens, maybe even a little younger, 20k is probably gone and not coming back, but we take that as the standard accepted range.
Ian Stewart:And so when digital audio was really being developed by Sony and Philips back in the 70s and 80s, they were trying to figure out the ideal sample rate to capture audio, and they knew they wanted to put the data on a disc, right, an optical disc: CDs. So they were trying to strike this balance of having good enough quality to capture everything the human range of hearing can appreciate, but not storing more data than is really necessary, because more data ultimately means you can fit less music on the disc. So they said, okay, well, if we can hear up to 20 kilohertz, that means our sampling rate needs to be at least 40 kilohertz, right? If we need two samples for every period, it's got to be double whatever frequency you want to capture. The sample rate needs to be double that top frequency.
Ian Stewart:So if we're trying to capture up to 20 kilohertz, we need at least a 40 kilohertz sampling rate. And at that stage, one of the things you have to do is remove frequencies above that. If you say I'm going to sample at 40 kilohertz, there is a requirement that you not let any frequencies above 20 kilohertz into the digital conversion chain. If you do, that results in something called aliasing, and maybe we can get to that a little bit later if you want; if you've heard of aliasing, that's where this idea comes from. And so you have to put a really steep low-pass filter on your input audio before it hits the converter. All right, Stewart, can we talk about digital audio, please?
Ian Stewart:So back in the 70s and 80s, making a really steep filter was tough. There were limits to doing it digitally, and doing a really steep analog filter was tough. So basically they said, okay, you know what, we'll have the filter start rolling off at 20 kilohertz, but we need a little bandwidth for it to get the signal low enough that it doesn't create aliasing. So they add a little bit more: they go up to 22.05 kilohertz, that's 2,050 hertz of extra bandwidth, to allow the filter to do its job. Now you double that and you get 44,100. So that's where that 44.1k sample rate comes from: we want the 20 kilohertz bandwidth of audible frequency, then we need a little extra room for the filter to knock out stuff above it, and double that is what our sample rate's going to be.
Marc Matthews:Love it. This is great because it takes me back many, many years now, to when I was doing my studies and we did a whole thing on sampling and synthesis. We dived deep into this, and a lot of it I have forgotten, because it's not something I use day to day; I mean, we look at sample rates and we're setting sample rates, but beyond that I don't really dive too deep into it.
Marc Matthews:This is a great refresher for me. And exactly, you usually don't have to think about it too much. But when you say things like aliasing and Nyquist, I'm like, oh yeah, that does ring a bell.
Marc Matthews:Now I remember going through this, and what you described there about taking those two samples and then doubling it, and then you've got that buffer when we're going up to 44.1. But it leads me on nicely to my next question, which is: when we start to enter the realm of 48, 96 and beyond, and everything in between, what would you say is the standard sample rate for music production? If someone were to ask you, okay, well, I'm recording a band, I'm recording an artist, whatever, I'm just doing a recording, what would your advice be?
Ian Stewart:With regards to sample rate? That's an excellent question. I think not that many years ago the answer might have been a little trickier. Definitely when CDs dominated, you knew you were going to have to end up at 44.1 at some point, and sample rate conversion technology was not as good, so there was maybe an argument to be made for recording at 44.1 some years ago. These days, though, I think it's pretty clear cut that we can say a really good standard is 48k, for a number of reasons. One, we know Atmos is still being pushed, and for me the jury's still out on how long-lived Atmos is going to be as a format and whether everyone's really going to start doing it.
Ian Stewart:That's a topic for another day. But Atmos can be at 96k, though it's kind of a pain in the butt, so practically Atmos mixes need to be at 48k; it's one or the other, and really the vast, vast majority of them are at 48k. YouTube recently updated their audio recommendations: for the longest time, funnily enough, they recommended 44.1, but they recently updated their recommendation to 48k. And 48k has been the standard for audio that accompanies video pretty much forever.
Marc Matthews:I was just going to mention that. Is that for YouTube Music, or what is that recommendation for?
Ian Stewart:Yes, for all of YouTube, not just YouTube Music, for all of it, even the main video site. For a long time they recommended 44.1 because they encode it to other codecs or whatever, right, but now they're saying 48. So there really has been a standardization in the last handful of years of almost all deliverables moving to 48k as a minimum. So I would say 48k is a great sample rate to record at, to work at, and certainly to deliver at. As a mastering engineer, I still give people 16-bit 44.1k files for CDs in case they want to do that, but the main deliverable I send out is 24-bit 48k. So I think there's a benefit there, just in terms of standardization and not having to go through extra rounds of sample rate conversion.
Ian Stewart:And then the other thing. I think the natural next question is: well, if 44.1 is all we need to capture our range of hearing, and really a little bit more, what's the benefit of going higher than that? And there are a few. If I'm jumping the gun here, feel free to bring me in.
Marc Matthews:I was going to say, that was going to be my next line of questioning. We're getting in sync.
Ian Stewart:We're starting to know each other, Marc. Please carry on.
Ian Stewart:So, yeah, the nice thing about 48k is that basically it allows that filter that rolls off the high end to be a little gentler. And while I'm not a particularly woo-woo kind of guy, give me facts and good, hard evidence or it just doesn't really matter, and this extends to many areas of my life, in audio there's no good evidence that audio content above 20 kilohertz makes any difference to anyone. There's never been a test that has conclusively shown that there are some people who can hear to 25 kilohertz and that it changes anything. There's just no data on that. It doesn't mean that those people don't exist.
Ian Stewart:It just means that in all the studies that have been done, we've never found one. So, starting with that as the baseline, what filters do is change the phase response. Some of these filters can be linear phase; a lot of them are not, they're minimum phase, because linear phase filters add latency and a lot of the time we're trying to get fast performance. So there are minimum phase filters that do this roll-off, and those do have a phase shift associated with them. A phase shift is not exactly the same as a time shift, but you can kind of think of it like that.
Ian Stewart:And one thing that we do have evidence to say is that human ears are unbelievably sensitive to timing, like crazy, and there's a good evolutionary precedent for this: if there's a tiger that's half a meter closer to you, you want to hear that one first, pay attention, and get away from it, not the one that's a little further. So we're really sensitive to timing, and so these phase shifts: a lot of times people will hear differences between different sample rates, and I don't know that we can say this conclusively, but I think there's enough strong evidence pointing at this conclusion that really what you're hearing is the phase response of those filters. By going to 48k or 88.2 or 96, you allow that filter to be a little gentler and not actually get in the way and impact the frequencies you can hear, which may be down at even 16, 18 kilohertz; that phase shift may still impact them a little bit. So that's one of the benefits, right, it can sound a little more natural.
Ian Stewart:Then there's nonlinear processing, which is processing that's dependent on the input level, right, like a compressor or a limiter or a clipper or a saturator. One of the characteristics all of these have in common is that they create overtones. So if you feed them a one kilohertz tone, depending on the type of processing, they may create overtones at 2k and 3k and 4k and 5k, or it may just be odd harmonics, at 3 and 5 and 7 and 9 and so on. So when you feed a nonlinear processor a very high frequency, there's a chance it starts creating content that is above our Nyquist frequency, and the Nyquist frequency is half the sampling rate.
Ian Stewart:So if we're at 48K, the numbers are a little easier.
Ian Stewart:So let's just say, for 48k, the Nyquist frequency is 24 kilohertz, the highest frequency that we can capture. If we start generating content that's over that, that breaks the band-limiting requirement; remember, I said earlier that we have to have this filter because we can't allow stuff into digital audio that's above half the sampling rate. So now we have to deal with that. You can either not deal with it, which is what some older plugins did, that was kind of the naive way, when we didn't know better: you didn't deal with it and it turned into aliasing. The other common way to deal with it is to oversample the plugin: you internally switch to a higher sample rate in the plugin, you allow for that higher frequency content, then you filter it back out and re-inject that into your main audio stream. And so, theoretically, there's an argument that if you work at 96k you have to worry about that a little less.
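As a rough sketch of the risk Ian describes, here's a hypothetical helper of ours (real plugins handle this internally; this just counts which overtones would land above Nyquist):

```python
def harmonics_above_nyquist(fundamental_hz, sample_rate_hz, n_harmonics=10):
    """List the overtones of a nonlinear process that would exceed the
    Nyquist frequency and so fold back as aliasing if nothing removes them."""
    nyquist = sample_rate_hz / 2
    return [k * fundamental_hz
            for k in range(2, n_harmonics + 1)
            if k * fundamental_hz > nyquist]

# Drive a saturator with an 18 kHz tone while running at 48 kHz:
print(harmonics_above_nyquist(18_000, 48_000, 5))   # [36000, 54000, 72000, 90000]

# Oversampled internally to 192 kHz, those same overtones stay below Nyquist:
print(harmonics_above_nyquist(18_000, 192_000, 5))  # []
```

The first call shows why an un-oversampled plugin fed bright material generates aliases; the second shows the headroom oversampling buys before the content is filtered out and folded back down.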
Marc Matthews:So with regards to the overtones being created by nonlinear processing: how are those overtones actually being created?
Ian Stewart:Basically, what's happening is the shape of the waveform is changing. So let's think clipping. Clipping is kind of the most extreme example and also creates a lot of overtones. Say you clip a sine wave, right: you've just got your regular sine wave, and then you push it up into clipping, and the top of it gets flattened off.
Marc Matthews:Yeah.
Ian Stewart:And you can. There are kind of two common ways.
Marc Matthews:We're getting a little deep, and again, call me back at any point if you need to, but I'll go for it. Two common ways that we think about audio.
Ian Stewart:We talk about the time domain and we can talk about the frequency domain, and they're interrelated; you can't really disentangle them. The time domain version of a clipped sine wave is that it goes up and then flattens off at the top, and then it goes down and flattens off at the bottom. For the frequency domain version, think about what happens to the frequency content of that sine wave: in order to flatten the top off, you basically have to add higher frequencies, with different phase relationships, back into that sine wave. That is mathematically what's happening, right. And so clipping, or any other sort of waveshaping where you're really changing the amplitude shape of the waveform, changes the shape of it, and the change of the shape makes these extra harmonics and frequencies pop out.
Ian Stewart:If you try and take those frequencies out, the shape goes back to what it originally was. Like a square wave in a synthesizer, right: a square wave is the fundamental plus odd harmonics at a specified ratio. That's what a square wave is. Same thing: if you start filtering out those harmonics, it just slowly turns into a sine wave as you get down to the fundamental.
Ian Stewart:So this is one of those things where, if I were better prepared and thought we were going here, I would have images for this too. It's a little complicated, for sure, and a little hard to just wrap your head around and envision, but they're really two sides of the same coin. As you start changing that waveform shape by clipping or limiting, and limiting does this too, limiting will add high-frequency content, any compression, saturation, even an EQ that models saturation, all these things will add these higher harmonics. And so if you're feeding in a signal that has frequencies up to 20 kilohertz, it's going to start generating stuff up to 40 kilohertz at least, if not more, and so you need a strategy to deal with that, otherwise you get aliasing.
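Ian's point about clipping creating harmonics can be checked numerically. This editorial sketch (ours, using NumPy's FFT) hard-clips a 1 kHz sine and inspects the spectrum of the reshaped waveform:

```python
import numpy as np

fs = 48_000                                # sample rate
t = np.arange(fs) / fs                     # one second of audio
sine = np.sin(2 * np.pi * 1_000 * t)       # clean 1 kHz sine wave

# Hard clipping: drive the sine into the rails, flattening top and bottom
clipped = np.clip(3.0 * sine, -1.0, 1.0)

# Frequency-domain view of the reshaped waveform
spectrum = np.abs(np.fft.rfft(clipped)) / len(clipped)
freqs = np.fft.rfftfreq(len(clipped), d=1 / fs)

def level(f_hz):
    """Spectrum magnitude at the bin nearest f_hz."""
    return spectrum[np.argmin(np.abs(freqs - f_hz))]

# Symmetric clipping adds odd harmonics (3 kHz, 5 kHz, ...) and
# essentially nothing at the even ones (2 kHz, 4 kHz).
print(level(3_000) > 100 * level(2_000))  # True
print(level(5_000) > 100 * level(2_000))  # True
```

The flattened time-domain shape and the new odd harmonics in the spectrum are the "two sides of the same coin" from the conversation.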
Marc Matthews:Yeah, interesting. I think what we'll have to do, and I say this every time we chat, is another episode for which I prepare, because when you say these things I'm like, oh, I'm going to dive deeper into that and get to the nuts and bolts of it myself.
Marc Matthews:Maybe we'll come back to it, but really interesting stuff. And the oversampling: you see that now more and more in plugins, in particular with the clipper you mentioned there; the clipper I use myself has got the oversampling feature in it for that reason. So, really interesting. You mentioned aliasing there, so maybe, if we change tack a bit, could we have a quick overview of aliasing and basically what it is?
Ian Stewart:Yeah. So I'm going to make up some numbers to try and make this a little bit easier, if I can. Let's say that we're playing a 10 kilohertz... no, sorry, I'm thinking through this in real time, you're seeing the gears turning here. Let's start with sample rate. Let's say our sample rate is 20 kilohertz. Right, that's lower than usual. That means that our Nyquist frequency, the maximum frequency that we can capture, is, what would you say it is?
Ian Stewart:If our sample rate is 20 kilohertz, it's half of that: 10 kilohertz, 10k, right. So 10k is the maximum frequency that we can capture. So what is aliasing? The easy-ish, not easy, but easy-ish, way to think of it visually is that it's almost like a mirror for frequencies, and you put this mirror at the Nyquist frequency, so half of your sample rate. Okay, so here our mirror is at 10 kilohertz. So if we do a sine sweep, starting down at 20 hertz, that ramps up to 20 kilohertz, right, it goes past our Nyquist frequency, past that halfway point. When it hits 10 kilohertz, when it hits that mirror, it bounces back down into the audible range, down below 10k.
Ian Stewart:So up to 10k, everything's as expected. Once the frequency goes above 10k, when it gets to 11 kilohertz at the input, the digital output is 9 kilohertz. The input has gone 1 kilohertz above the maximum, so the output ends up 1 kilohertz below. So you start to hear this tone sweeping back down: rather than continuing on up to 20k, you hear it sweeping back down in the opposite direction. And if the input gets up to 15k, now that's 5k above Nyquist, so the output ends up 5k below.
Ian Stewart:So now the output is back down at 5 kilohertz, and as the input gets up to 20 kilohertz, the output goes right down to DC: it goes down through 1k, 500, 120, and turns into this really low rumble. So aliasing creates these inharmonic frequencies; there's no musical relationship. The relationship of the input to the output is a mathematical relationship between the input and the Nyquist frequency. It's not harmonic; it has nothing to do with the musical timbre or quality of the sound. And so it's a very digital type of distortion, and it just sounds weird and hashy, and usually, since the frequencies are much higher, it ends up sounding brittle. It kind of adds some high end, but in a very brittle way that's not musically related.
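The mirror analogy can be captured in a tiny function. This is our illustrative sketch of the folding math, not production converter code, using the deliberately low 20 kHz sample rate from the conversation:

```python
def alias_frequency(f_in_hz, sample_rate_hz):
    """Where an input frequency lands after sampling: frequencies above
    the Nyquist frequency 'mirror' back down into the representable band."""
    nyquist = sample_rate_hz / 2
    f = f_in_hz % sample_rate_hz     # sampling can't tell f apart from f + fs
    return sample_rate_hz - f if f > nyquist else f

# Ian's example: 20 kHz sample rate, so the mirror sits at 10 kHz
print(alias_frequency(9_000, 20_000))   # 9000  - below Nyquist, unchanged
print(alias_frequency(11_000, 20_000))  # 9000  - 1 kHz over mirrors to 1 kHz under
print(alias_frequency(15_000, 20_000))  # 5000  - 5k over, 5k under
print(alias_frequency(20_000, 20_000))  # 0     - the sweep lands right at DC
```

Running a rising input through this function reproduces the sweep Ian narrates: up to the mirror, then back down toward DC.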
Ian Stewart:Yeah, so that's aliasing. And while we're talking about that, I should just mention there's also something called imaging, which is basically the opposite, at the output. Just like we have to filter the input to make sure frequencies above our Nyquist frequency don't make it in, at the output of a converter we're basically sending out these pulses, 44,100 times a second or 48,000 times a second or whatever, at the specified value, amplitude, voltage, whatever, and if you don't then filter those, they actually create a whole bunch of extra frequencies above and below. So you've got to filter the output too, in the same way that you filter the input: same frequency, same roll-off, same phase response.
Ian Stewart:So really you're hearing two filters, right? You're hearing the filter that was used to record the sound, and then you're hearing the filter that's being used to play it back. So aliasing and imaging are very similar, again kind of two sides of the same coin: one happens at the input, when you're going from analog to digital, and the other happens at the output, when you're going from digital back to analog.
Marc Matthews:Yeah, so just to clarify that: when we go in, it's aliasing, and at the output it's imaging. And we used the analogy there of the 20 kilohertz sample rate: half of that is 10 kilohertz, so an 11k input then bounces back to 9k, and so forth, and if we go all the way up to 20k, that bounces back to zero.
Marc Matthews:That's when we get the low rumble. And then with the imaging at the output, it's another filter on there, so we've got two filters, as you say: a filter going in and a filter going out as well. It's really interesting stuff. Like I said right at the beginning, we just look at sample rates and think, well, I'll just go for 44.1 or 48, whatever it may be. It's not often that we really dive deep into, okay, well, why?
Ian Stewart:I think it's worth saying here, too, that a lot of people hear about this and all of a sudden they start thinking, oh my God, I have to set some high-pass or low-pass filters. So I do want to pause and reinforce: all this filtering that we're talking about is not something you need to take care of, as long as your gear is properly designed, which, to be totally honest, in 2025 even the most consumer-level stuff is. We've gotten pretty good at this kind of thing; even very cheap ADC and DAC chips can do a very good job of it. It's not something you need to worry about. It's just understanding what's happening in the background, and why the sample rates are what they are in relation to the frequency content that we can capture.
Marc Matthews:Really, really interesting stuff.
Marc Matthews:I realize we're coming towards half an hour now; this always happens, probably because I go off on tangents, like when we were talking about linear phase earlier. But I think the final question for the audience listening, with regards to sample rate, and you touched on it earlier when I asked what sample rate you would advise and we went through 48 kilohertz, because the majority of platforms now are requesting 48 kilohertz, you mentioned YouTube, for example: would that also be your advice for streaming? I don't know how much experience you have with actually uploading to a digital distributor, something like DistroKid (others are available). In that scenario, would you also recommend 48 if they allow it?
Ian Stewart:Yeah. So usually what I recommend these days, for my mastering clients, is this: if they send me mixes at 96k, I'm going to master at 96k and I will give them back 96k 24-bit files as their high-res master, and basically what I recommend is that they upload that high-res master to DistroKid, TuneCore, CD Baby, whatever. Most services have a lossless option these days: Apple Music has a lossless tier, Tidal, Amazon Music HD, Qobuz, a lot of them do. So people are going to be able to listen at 24-bit 96k, or whatever it is, in certain scenarios. The other thing I didn't mention earlier is that for pretty much any Bluetooth headphones, AirPods, AirPods Pro, AirPods Max, even other ones, 48k is a very standard sample rate for Bluetooth devices. I didn't know that. And when I say standard, you can't switch it to anything else; that's what they run at.
Ian Stewart: So even if you're listening to something higher or lower, it's going to get converted to that on playback. So there's another good reason to deliver at 48k, right? One less conversion step. But I generally advise my clients to upload 24-bit at whatever sample rate they sent me their mixes at, which is what I'm going to master at, and conform everything to that. As for the few platforms that don't support it, like Spotify, who for five years has been saying that a lossless, at least CD-quality, tier is coming, people keep promising, oh no, it's happening soon. I will believe it when I see it. But my feeling is that if someone is content to listen on Spotify, or they don't have the high-res tier of whatever service, they're not listening to music because they want the ultimate in fidelity; they just want some good music to listen to.
Ian Stewart: And they probably aren't going to notice that it's streaming through some lossy codec at a lower bitrate anyway. There are plenty of other things happening there that change the signal a little bit, the imaging, the high-frequency stuff, so the sample rate conversion is the least of the worries at that point. And then if you do upload your 96k, 24-bit masters, the people that have the setup to appreciate it and want to are getting the absolute best experience that they can. That said, I think 48k is great. It shallows out the filter enough that it sounds a little more open; you get a little more natural top end. And, we can't go there today, this will have to wait for another one, but there are other negative trade-offs as you start to increase your sample rate, certainly above 96 kilohertz, as you go up to 176 and 192 and 384 and crazy shenanigans like that. Most people in the audiophile community will gladly tell you you're wrong about that and why they think higher is so much better.
Ian Stewart: But there's good science to show that it can actually create other nasty artifacts.

Marc Matthews: Yeah, I think that would be a fantastic, very thought-provoking discussion, the negative trade-offs of 96k and above, as you say. I think we'll save that for another episode and hopefully get some examples and some other bits and pieces in there as well, because I think that would be super interesting. But speaking of super interesting, I didn't realize Bluetooth was 48 kilohertz. I would have gone with the assumption that it was 44.1.
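Ian's remark that 48k "shallows out the filter" follows from simple arithmetic: the anti-alias/reconstruction filter has to pass audio up to roughly the top of human hearing yet be fully attenuated by Nyquist (fs / 2), so the room it has to roll off in grows with the sample rate. A rough editorial sketch, assuming a 20 kHz audible top:

```python
AUDIBLE_TOP_HZ = 20_000  # assumed top of the band the filter must pass

def transition_band_hz(fs_hz):
    """Room the anti-alias/reconstruction filter has to roll off in:
    from the top of the audible band up to Nyquist (fs / 2)."""
    return fs_hz / 2 - AUDIBLE_TOP_HZ

for fs in (44_100, 48_000, 96_000):
    print(f"{fs} Hz: {transition_band_hz(fs):,.0f} Hz of transition band")
```

At 44.1k the filter has only about 2 kHz to get from full pass to full stop; at 48k it has nearly twice that, which is the shallower, more open-sounding filter Ian alludes to.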
Ian Stewart: And it's weird because Bluetooth is already a lossy codec, although it's getting better; aptX and a few other iterations on Bluetooth have really made it sound a lot better than it used to. There may be Bluetooth chips that can run at other sample rates, but usually, for whatever the device is, it's fixed. You can't change it.
Ian Stewart: I know pretty much all of Apple's stuff is 48k. My older Sony headphones, whatever the Sony model number is, you've really got to love Sony's model numbers, super memorable, I can't remember what it is, some string of characters, those were 48k, and you couldn't change it. So it's pretty standard. There may be some exceptions, but again, those are going to be fixed at 44.1 or whatever it is.
Ian Stewart: So yeah, I really think 48k is a great deliverable. If you want to record at 88.2 or 96, there's no harm in that. It may make something sound a little better while you're mixing, right? It gives you a little more room for those overtones and the oversampling. But as a deliverable, 48k is great. And I should also say quickly that sample rate conversion has gotten unbelievably good, like really, really, really good. So if you have a mastering engineer who has the right tools, and anyone with RX can do this, WaveLab's built-in sample rate conversion is also excellent, other DAWs not so much, but with the tools that pay attention and get it right, it can be almost completely invisible.
Ian Stewart: So if you need to get down to 48k, you can do it. And if, recording at 88.2 or 96, you feel like you get more out of the recording and your plugins, and some things just work better, great. You can easily come down to 48 at the end of the mastering stage if you want that as your deliverable.
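To see why the quality of sample rate conversion matters, here is a deliberately naive 2:1 downsample (96k to 48k) in plain Python: it just drops every other sample with no anti-alias filtering, which is exactly what well-designed converters like the ones Ian mentions avoid. A 1 kHz tone survives, while a 30 kHz ultrasonic tone folds down to an audible 18 kHz. This is an editorial illustration, not how any real converter works:

```python
import math

FS_IN, FS_OUT, N = 96_000, 48_000, 9_600  # 0.1 s of audio

def tone(f_hz, fs_hz, n):
    """n samples of a unit-amplitude sine at f_hz, sampled at fs_hz."""
    return [math.sin(2 * math.pi * f_hz * i / fs_hz) for i in range(n)]

def decimate2(x):
    """Naive 2:1 downsample: drop every other sample, no filtering."""
    return x[::2]

def level_at(x, f_hz, fs_hz):
    """Magnitude at one DFT frequency (about 1.0 for a unit sine)."""
    n = len(x)
    re = sum(v * math.cos(2 * math.pi * f_hz * i / fs_hz) for i, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * f_hz * i / fs_hz) for i, v in enumerate(x))
    return 2 * math.hypot(re, im) / n

# A 1 kHz tone sits below the new Nyquist (24 kHz) and survives intact:
y = decimate2(tone(1_000, FS_IN, N))
print(round(level_at(y, 1_000, FS_OUT), 3))   # ~1.0

# A 30 kHz ultrasonic tone does not: it folds down to 48 - 30 = 18 kHz:
z = decimate2(tone(30_000, FS_IN, N))
print(round(level_at(z, 18_000, FS_OUT), 3))  # ~1.0, now an audible artifact
```

A proper converter low-pass filters below the new Nyquist before discarding samples, so that folded energy never appears; the differences between good and bad converters come down to how transparently that filtering is done.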
Marc Matthews: So yeah, essentially the option's there now, isn't it, with the technology we have?

Ian Stewart: Yeah, the option is there. But as for 44.1, I'm not going to say it's dead, I'm not ready to declare that, but there are fewer and fewer cases where you really need it, right? Pretty much the only case these days is if you are actually making CDs; then you've got to get it down to 44.1.
Marc Matthews: Yeah, and I suppose as well, while we're talking about technology, we also have greater accessibility of storage, because obviously these larger sample rates are going to take up more storage. But now we're in a position whereby we can have more and more of it.
Ian Stewart: Yeah, if you've got it, why not use it? You can go get a 5-terabyte hard drive for $100 or something, right?
Marc Matthews:yeah.
Ian Stewart:These days, why not? It's wild.
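The storage point is easy to put numbers on: uncompressed PCM size is sample rate times bytes per sample times channels times duration. A quick editorial sketch for a 4-minute stereo track (ignoring WAV header overhead):

```python
def wav_megabytes(fs_hz, bit_depth, channels, seconds):
    """Uncompressed PCM size in MB: rate x bytes/sample x channels x time."""
    return fs_hz * (bit_depth // 8) * channels * seconds / 1_000_000

# A 4-minute (240 s) stereo track at common formats:
for fs, bits in ((44_100, 16), (48_000, 24), (96_000, 24)):
    print(f"{fs / 1000:g} kHz / {bits}-bit: {wav_megabytes(fs, bits, 2, 240):,.2f} MB")
```

Even the 96k/24-bit version comes out well under 150 MB per track, which is trivial against a multi-terabyte drive, hence "why not use it."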
Marc Matthews: It's mad. I was on a bit of a tangent discussing this the other day. I used to work for Staples, I think they're still going in the States, though I don't think they are in the UK anymore, and I remember selling hard drives. This was about 20 years ago. We were selling memory, and it was like 128 megabytes. I found one the other day and I was like, man, I remember when people would come into the store and buy 32-meg SD cards and stuff. Crazy, man. Crazy.

Ian Stewart: Yeah, I had a boss a long time ago who had a story he loved to tell about his first hard drive. It was 600 megs, basically a CD's worth of audio, and it cost him several thousand dollars. It's the complete opposite relationship now.
Marc Matthews: Yeah, I was watching something the other day, a film set in the early 90s, and they referenced a solid state drive, and I was thinking to myself, were solid state drives around in the early 90s? I'll get corrected on this. They may well have been, but a solid state drive in the early 90s?
Ian Stewart: If they were, that would have cost an insane amount of money, very esoteric. I think the technology and the idea for it were probably there, but it would have been large and very expensive and difficult to make.
Marc Matthews:Yeah, yeah, it was one of those ones where I was watching the film and I called it out and I was like that's not accurate, being one of those people. That's clearly not accurate that teenager's talking about having an SSD. There's no way they could afford that in the early 90s. Xst.
Ian Stewart:There's no way they could afford that in the early 90s.
Marc Matthews:There we go, Ian. Thank you so much for this this has been brilliant.
Marc Matthews:And I appreciate it. I put you on the spot with a few questions, so I appreciate your willingness to go into those subjects. So I know the audience will get loads out of this, in particular, because sample rate is one of those ones. You see questions all the time online. You see discussions and someone says one thing, someone says another. But it's always nice to when you're searching for those questions and someone says, do it at this reason, do it at that rate, and then explains and explains why, and this is this is what this episode is going to give.
Marc Matthews:We're going to give an explanation as to why you can make, and then you can make an informed choice. That's the sample rate that you want to use. So really, really good stuff. Before we wrap up and I go into what we're going to be going through in part two, is there anything you want to share with the audience? Alternatively, where can they? I say alternatively, in addition, where can they? Find you if they want to learn more.
Ian Stewart:Yeah, so my website is flotownmasteringcom. F-l-o-t-o-w-n. I've got links there. There's a page that has links to all the podcasts I've done, so the past episodes I've done with you Mark and some other shows I've been on.
Ian Stewart:If you want to hear me yammer on about other things, if you want to get in touch about mastering, there's a page with kind of a little about and some of the tools that I like to use, which you can see some of if you're watching the video, and an intake form where you can just submit all your stuff. Yeah, ian Stewart Music on Instagram. I don't post a ton, but you know, there's like pictures of the dog who left, like if we go hiking, you'll see pictures of me and the dog or sometimes a post-release stuff. Yeah, I write for the iope blog. If you, if you go look for mastering articles there, you'll find some of my stuff. I just did some videos for wave lab. It's there. It's wave labs 30th anniversary, so it's a great time to grab wave lab if you've been thinking about that. Um, if you are in need of a mastering dog certainly not for everyone yeah, I.
Marc Matthews: I don't know.
Ian Stewart:But really honestly, mastering is the main thing. I love mastering, I love talking about it and teaching it. So if you've got some music you want to make awesome, I would love to do that together.
Marc Matthews: Amazing stuff. Links in the episode description for the audience listening; please do go and check that out, and give Ian a follow as well. Whilst we are still in this episode, don't forget part two is going to be the next episode, episode 204. So if you're listening when this episode drops, you've got a bit of time, but if you're listening way forward in the future, make sure you check out episode 204, where we're going to be diving, I can't read my notes here, head first. Why did I put head first? Why not? Head first, I guess not literally, into another mysterious but crucial topic: dither. I don't think I've ever mentioned dither on the podcast before.
Ian Stewart:Look, if you're going to get involved with dither, no sense in dipping your toes in Just head first.
Marc Matthews:Yeah, just go all in All in there, no sense in dipping your toes in just head first. Yeah, just go all in all in hold your breath yeah fantastic and uh audience remember as well.
Marc Matthews:I mentioned it earlier. If you want to get weekly tools, tips etc and just this week I included uh ian's wave labs, uh features on youtube as well, so an example of what you might find in that newsletter click the link in the episode description and get those free weekly tips direct to your inbox. And until next time, stay inspired, keep creating and don't be afraid to experiment inside the mix.