Inside The Mix | Music Production and Mixing Tips for Music Producers and Artists

#139: The Future of AI-Assisted Music Production and Mastering with Ian Stewart

April 23, 2024 | Ian Stewart | Season 4, Episode 17

Have you ever wondered if AI can mix and master music? Maybe you're seeking answers to questions like: what does the term mastering refer to, what is normalisation, what does Spotify normalisation do, what's the best AI mastering service, is there AI for music production, is there an AI bot for music, can AI make music, or maybe even can AI make my song sound better? Then check out EP 139 of the Inside The Mix podcast.

Prepare to be enthralled as the exceptionally skilled Ian Stewart and I lift the veil on the future of music production with the advent of AI-assisted mastering. We venture into the world where artificial intelligence shapes the sounds that move us, touching on the nuances of genre recognition and the potential shifts in the music industry's landscape. As we converse, Ian's expertise shines, offering an intriguing perspective on the complex relationship between AI advancements and human creativity, while also sharing indispensable tips for mastering with the aid of AI, ensuring audio engineers and enthusiasts alike can harness the full potential of these cutting-edge tools.

The ethical conundrum of AI-generated music on streaming platforms takes centre stage as we confront the implications for artist royalties and the inherent human desire to create. We navigate the delicate balance between technology's role in automating routine tasks and the potential for AI to introduce innovative forms of ambience and engagement—imagine your very own personalised concentration playlist, crafted by an intelligent algorithm. This thought-provoking discussion not only dissects the ethical implications but also delves into how AI might support the undercurrent of artists fighting for visibility in the streaming era.

In our final act, we unravel the intricacies of mastering for optimal sound quality, where my revelation about the impact of subtle gain adjustments sets the stage for a broader conversation on the art of mastering. We challenge the current paradigms of loudness normalisation and debate the merits of album versus track normalisation, all in service of preserving the artist's vision. Ian and I also demystify the technicalities behind using reference tracks and navigating loudness measurements, ultimately guiding you towards making your music resonate powerfully in the tangible world. Join us for this illuminating journey through the harmonious blend of technology and artistry in audio mastering.




► ► ► WAYS TO CONNECT ► ► ►

Grab your FREE Test Master at Synth Music Mastering TODAY!
✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸
Are you READY to enhance your music with my steadfast dedication to quality and personal touch?
Bag your FREE Test Master at Synth Music Mastering: https://www.synthmusicmastering.com/mastering

Buy me a COFFEE
✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸✸
If you like what I do, buy me a coffee so I can create more amazing content for you: https://www.buymeacoffee.com/marcjmatthews

Send a DM through IG @insidethemixpodcast
Email me at marc@synthmusicmastering.com

Thanks for listening & happy producing!

Worlds Collide Podcast:

Hi, Inside the Mix podcast fans. This is Victoria, the host of Worlds Collide, the podcast about moving abroad, with guests from all over the world. Follow me on Apple Podcasts, Spotify or wherever you listen to podcasts and search for Worlds Collide, the podcast about moving abroad. And now you're listening to Inside the Mix with your host, Marc.

Marc Matthews:

Hello and welcome to the Inside the Mix podcast. I'm Marc Matthews, your host, musician, producer and mix and mastering engineer. You've come to the right place if you want to know more about your favorite synth music artists, music engineering and production, songwriting and the music industry. I've been writing, producing, mixing and mastering music for over 15 years and I want to share what I've learned with you. Hey folks, welcome to the Inside the Mix podcast. If you are a new listener, make sure you hit that follow button on your podcast player of choice, and to the returning listeners, as always, a huge welcome back. So I've got a really exciting episode in store for you today. You're going to love this one. But before we do that, I just want to let you know about another podcast that I featured on just this week, so I'm recording this on the 19th of April, and it was the Master Your Mix podcast with Mike Indovina, who has been on the Inside the Mix podcast. So if you want to learn more about mastering synth music, it's episode 194, Marc Matthews: Mastering Synth-Based Music, and we chat about finding your niche in the music industry, the dark art of mastering, the financial side of getting into mastering, hearing low-end, why people struggle with mixing low-end, the importance of volume matching, human versus AI mastering, and the challenges of mastering synth-based music. I'll put a link to this in the episode description, but please do go check out the Master Your Mix podcast with Mike Indovina, episode 194. And just follow and subscribe to it anyway, because it's a fantastic podcast.

Marc Matthews:

So in this episode I'm joined by Ian Stewart, a mastering engineer out of Western Massachusetts and also an educator. On the episode, or rather in the chat at the beginning, I go through Ian's bio, so I'll leave it to my past self to introduce Ian. In this episode we talk about AI-assisted mastering: how can we produce pro sounds using an AI mastering assistant, and what are the limitations? We also dive into the importance of genre recognition in AI-assisted mastering and what you need to be aware of. We chat about the next breakthrough in AI-assisted mastering and music production in general. We also talk about scratching the creative itch and how AI can have an impact on that. Then we dive into AI and streaming platforms, the impact of AI on streaming platforms, and what the future might hold. Then we dive into streaming normalization, what you need to be aware of, and top tips when it comes to normalization and streaming platforms. Ian gives his top tips on mastering assistant plugins versus online mastering platforms. And finally, Ian gives his top tips, or tenets, on reference tracks and how to use reference tracks in mastering and also mixing.

Marc Matthews:

So before we dive into this episode, I just want to make you aware of a couple of free resources that may well help you and your mastering. If you head over to synthmusicmastering.com/free, there is a link in the episode description, I've got my 12 Steps to a Mastering-Ready Mix. It's a free checklist with 12 steps, and you'll be able to make the mastering process super smooth and exciting and also take your mastering up a notch. And then I also have my five essential free mastering plugins, so another freebie you can download and start taking advantage of. So that's enough from me in this introduction. Let's dive into my chat with mastering engineer Ian Stewart. Hello folks, and in this episode I am very excited to welcome my guest today, mastering engineer and audio educator from Western Massachusetts, Ian Stewart. Ian, thank you for joining me today. How are you today?

Ian Stewart:

I am excellent. I'm so excited to have this chat with you, and thank you so much for having me.

Marc Matthews:

Yeah, I'm looking forward to this one. So I'm very much looking forward to this, and it's going to be part of a nice curated group of episodes with regards to mastering that's being released at the moment. So really excited for this one. I'm just going to read a bit of your bio for the audience listening who may not be familiar with your work. So you've worked with many wonderful independent artists, and also artists like KRS-One, and you teach mastering at Berklee College of Music, and I've had Jonathan Wyner on a few episodes back, who is also at that particular college. And you co-developed the Basslane Pro plugin. Now that has been on my radar for a very, very long time, and I need to make that purchase or at least try it. And I was having this discussion with another mastering engineer the other day, totally off the podcast. So that's really exciting stuff. So that's Tone Projects. And you write for the iZotope Learn blog as well, and I frequently, and I've said this before in another episode, but I frequently signpost the listeners to that particular blog as well. There's so much good stuff in there, so much good stuff.

Marc Matthews:

So in this episode, we're looking at mastering in particular. As I say, it's part of a curated group or series of episodes, and we were chatting off air, folks, about the list of questions I have here. We're actually going to dive straight into AI-assisted mastering. Now, for the audience listening: if you are a returning listener, which I hope you are, you will have heard previous episodes with regards to AI mastering; if you're a new listener, welcome. So I think this is a really good place to start. AI mastering tools are becoming increasingly available, as we well know, right? How can these tools assist independent artists and producers in achieving a professional sound at home? Maybe you could talk a bit about that, and maybe some potential limitations or drawbacks, to get us started.

Ian Stewart:

Yeah, well, I think one of the interesting things to consider, you know, obviously AI is a big buzzword right now. Everyone's talking about it. We're seeing it in all these different areas, right? ChatGPT, and image stuff, and video; now there are AI video engines and all these things. So one of the things to understand first and foremost about AI in audio, and specifically mastering, is what most of the products out there actually do. It's easy to think, oh, AI and mastering, there's some sort of sentience inside my computer that's listening to it and figuring out how to process it on the fly. But really, most of the things out there that claim to use AI, or do use some form of AI, are using it for genre recognition, and that's kind of as far as it goes; from there you really don't need AI, right? So it's saying: what am I listening to? Am I listening to a folk kind of singer-songwriter thing with acoustic guitar and voice, or am I listening to a full-on rock song, or an EDM thing, or whatever? And there might be a lot of nuance in the type of genre classification that it does. But from there it's kind of saying, okay, this is my genre; here are the relative targets for loudness, width, dynamics, tonal balance, all these things that we tend to adjust in mastering. And from there it's almost like arithmetic: how do I use the processors that I have to push the song towards those targets? So I think that's the first thing to be a little bit aware of. That said, it's not to say they're not useful and they don't have a place, right? So for people that are interested in it, yeah, I know I write for iZotope, so there's an association there. But even before I was writing for them, and part of why I even started working with and collaborating with them in the first place, was the AI mastering assistant in Ozone. I think it is at least one of the more interesting ones, in that it shows you what's going on under the hood. So it does its Master Assistant thing, but then it says, hey, here's everything that I'm using, which then allows you, the end user, to go in and say, okay, maybe this has gone a little too far, or, this is interesting, what if I push this slider a little bit further? And so you can at least see where it's going, or if it's totally misidentified the genre, you could say, well, you characterized this as pop, but I think this is more like R&B, right? So you can switch stuff there, and you get a little more control than some of the other ones online that are maybe a little more ubiquitous, I guess. But I think it's certainly a valuable tool.
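To make that concrete, here is a minimal sketch of the genre-to-targets "arithmetic" Ian describes. Everything in it, the genre names, the target numbers and the plan_master helper, is a hypothetical illustration of the general approach, not the internals of Ozone or any other real product.

```python
# Hypothetical sketch of the "genre recognition -> targets -> arithmetic"
# pipeline described above. All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Targets:
    lufs: float      # integrated loudness target for the genre
    width: float     # stereo width, 0 = mono, 1 = full stereo
    crest_db: float  # rough dynamics proxy (peak-to-average distance)

GENRE_TARGETS = {  # made-up reference profiles per recognised genre
    "edm":  Targets(lufs=-8.0,  width=1.0, crest_db=6.0),
    "folk": Targets(lufs=-14.0, width=0.7, crest_db=12.0),
}

def plan_master(genre: str, measured_lufs: float, measured_crest_db: float) -> dict:
    """The 'arithmetic' step: how far to push the song toward the targets."""
    t = GENRE_TARGETS[genre]
    return {
        "gain_db": t.lufs - measured_lufs,                       # makeup gain
        "limiting_db": max(0.0, measured_crest_db - t.crest_db)  # dynamics reduction
    }

# A quiet, dynamic mix classified as EDM gets pushed up and limited:
print(plan_master("edm", measured_lufs=-16.0, measured_crest_db=10.0))
# -> {'gain_db': 8.0, 'limiting_db': 4.0}
```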

Ian Stewart:

You know, in terms of drawbacks and limitations, I guess the first thing I'll say is I try to be really careful. Never say never, right? Never say an AI will never do X, Y, Z that a human mastering engineer can do. I've heard people say, oh, but a human mastering engineer will send you an email or call you up and say, hey, I'm noticing this, should I fix that, or is this intentional? And while it's true that AI mastering platforms right now won't do that, I'm hesitant to say that they'll never do that.

Ian Stewart:

You know, I think we look at the rate that the technology is moving, not specifically in music necessarily, but, I don't know if you know the YouTube channel MKBHD, but he did a video recently about AI-generated video and how far it's come in one year: from this absolutely hilariously bad video of Will Smith eating pasta, where he's got like six arms and he's just shoveling pasta off a plate into his mouth, to now stuff that is very passable unless you know exactly the kind of flaws to look for; stuff that, if you didn't know it was AI-generated and you weren't looking for that, you probably wouldn't think twice about. So given that we've seen video develop that far in one year, I'm hesitant to say that AI couldn't be applied in audio to take over a lot of what humans do. And that's where I think the conversation actually starts to get a little more nuanced and interesting, for me anyway.

Marc Matthews:

Yeah, most definitely. And what you said there about genre recognition is really interesting, because I remember having that conversation with Jonathan; he mentioned a similar thing, and also about nuances as well. So, going into the nuances of that human interaction, that human touch, that human element: like you say, it can't do it at the moment, but it's like a growth mindset, isn't it? It can't do it yet. I don't know, this might have been fake news a while back, but I swear I saw something online where there was somebody having an actual interaction with an AI platform, and it was giving them real-time feedback. I guess it does anyway, doesn't it? The mastering assistant is real-time feedback as it is. But this was a conversation back and forth between two parties, well, a human and a computer, which I thought was very, very interesting.

Marc Matthews:

And, like you say there, it's come on leaps and bounds, particularly with video in the last year. I've said this before with regards to the platform I'm using for the podcast, and other bits and pieces I use to promote the podcast: the artificial intelligence that these platforms are using has just gone like a hockey-stick curve in the last year, and it certainly helps me promote the podcast. It's amazing stuff. And it'd be interesting to hear your predictions: what do you think is going to be the next big breakthrough in music production? Because at the moment, I know you can deconstruct a mix, can't you, and separate and isolate instruments in an already mixed track. What do you think would be the next big breakthrough in mastering with regards to AI?

Ian Stewart:

I mean, I almost can't help but wonder if it just bypasses mastering entirely. Right now, mastering is a separate discipline, but we're going down this path of generative AI. That's what we're seeing with video; ChatGPT will generate entire essays and whatever. So generative seems to be the direction that people are heading. So I can very easily see it getting to a place where: create me a song in the style of Taylor Swift, but with lyrics similar to, pick your favorite artist, that's about X and Y, and boom, finished song. It just generates it from thin air.

Ian Stewart:

And to me, that's where it gets a little bit, I don't want to say scary, I'm not scared of it, because here's the other part: there's a lot of potential behind AI and ways that I think it could help the music community. But I can very easily see, not even record labels, but the streaming platforms. If a streaming platform has access to something like that, and we know there are people that just want to put something on when they go to sleep, or when they're in the car, or when they're working on something. They're not really focused on the lyrics. They're not really consuming it as art. It's just some background noise to help them focus or sleep or whatever, and that's fine. I do that. I use that when I'm writing. If I don't have to be listening to stuff, I'll absolutely just have something on softly in the background. I think streaming platforms could very much capitalize on that.

Ian Stewart:

And now, instead of having to pay even a fraction of a percent of royalties to artists, which, that's a separate conversation, now they have to pay none, because they own it. And so that part is a little, you know: we've got this amazing new technology, so why are we potentially going in a direction where it's taking away something from humans that they love to do? None of us do this because it's the most financially viable option we have, right? I mean, it's funny.

Ian Stewart:

I was thinking about this this morning; I haven't really talked about this much. I was a senior design engineer for a commercial audiovisual integrator for 15 years. I did classrooms at Harvard Business School, I did a lot of that, and I made a pretty nice living doing it. It's why I have this house, why I was able to build the studio, and, I've been doing mastering the whole time, but it's why I was able to take all of that income and put it back into the studio and build out the room with the tools that I have and all of that. And then I left that, because I am so much happier making a little bit less money but getting to do music all the time.

Ian Stewart:

Yeah, right. And for artists out there: you don't make music because you're getting rich doing it, right? You do it because you need to, and it does something in you, whether you subscribe to the idea of a soul or whatever. We need to do this. It's this thing we have to get out. And so for AI to then just be taking that, it feels a little bit like low-hanging fruit, and if we get to the point of it being generative, that to me is the sad, dystopian future that it could be. But I don't think it has to be that, and I think that's the important thing, and I think there's some important stuff happening, some important conversations happening. My dog has entered the frame, for anyone watching the video.

Marc Matthews:

I just saw that. Beautiful dog.

Ian Stewart:

Yeah, that's Stella. And I think there are things that we can learn, too, from other disciplines that are using AI, and that, to me, also is where it becomes interesting. I know that doesn't really touch on mastering, but I guess that's almost where I see it going eventually. Sure, we could use it; you could create a virtual assistant, or I could have an AI that deals with emails and billing, which is maybe someone's job too. I mean, there are ethical conversations in literally any direction you turn with this topic.

Marc Matthews:

It's all really, really interesting stuff that you mentioned there, and in particular that last point about your calendar and to-do lists and things like that, because in my mind, I use AI to do the tasks that stop me from being creative and doing what I want to do. And it goes back to what you said earlier about, to paraphrase, scratching that creative itch, isn't it? That's what we do, ultimately.

Marc Matthews:

And I think, with regards to AI creating content: it's interesting, the thought process of a streaming platform, because I listen to, what are they called, a playlist that centers around deep concentration, and I can imagine playlists being created by generative AI for that exact purpose. But then I think you're still going to have, well, I still will, for example: you're still going to have artists that are creating music for, like, legacy, because it's something that they want and it represents them. So I still think you're going to have that, but I think AI is definitely going to grow in terms of creating content, like you say, for YouTube, or an advert, or the background music in something like Wicked Tuna. I was watching that earlier; that's why I mentioned it.

Marc Matthews:

Or something along those lines. You know, I think that's where you're going to.

Ian Stewart:

That AI is going to grow, and I think you're right in that respect. Again, we've never really done this because it makes us rich or famous or whatever, right? The vast majority of artists out there aren't that.

Ian Stewart:

They're still just doing it because it's scratching that itch. It's something that we have to get out, right? So, yeah, I don't think that's going to go away. But people already struggle so much to make an income from streaming or whatever. And I mean, I'm just thinking about it: it doesn't even have to be a playlist of stuff that's generated. It could literally be on the fly, based on other things that you're listening to. The streaming platform knows what kind of stuff you like, so it could have a playlist that it just keeps generating on the fly. Not even a playlist: a stream, a channel, a radio station, right?

Ian Stewart:

But I guess what I worry about is: does that then even further box out the small independent artist? And, like you mentioned, that's a lot of who I work with. I've gotten to work on a couple of big, cool projects, and that's awesome and I love it. But most of who I work with are these little independent artists who may have a hundred or a thousand listeners.

Ian Stewart:

But here's the other side of that: if the streaming platforms do go that direction and they're utilizing that, it could actually be a way to subsidize artists who are getting fewer streams. It just takes the streaming platform not being as greedy or as capitalistic, right? If you get a streaming platform that wants to actually give back to the music community and goes that route and says, hey, we're going to take all the earnings that we get from our AI generative stuff that people are using, and we're going to give a percentage of that to artists with lower play counts. I know we're getting totally away from mastering, and feel free to push me back, but I guess there's another place where I can see AI in streaming being beneficial for artists.

Ian Stewart:

So, Spotify has their recommendations, right? There's new music on Fridays, and then they do a playlist.

Marc Matthews:

Yeah, there's also the Spotify DJ. I don't know if it's still a feature, actually. I was using it about a month or so ago, but it seems to have dropped off. I've been listening to Spotify all day today in the studio, but I don't remember seeing it. I'm going to check that after the podcast. But there was a DJ feature.

Ian Stewart:

Yeah, I've heard of that. My wife had it on at one point in the car.

Ian Stewart:

It was kind of interesting. But one place where I can actually see it being really beneficial is this.

Ian Stewart:

So the way Spotify did recommendations for new music, for a while at least, my understanding of it very broadly, was this.

Ian Stewart:

They would look at a whole group of users, and, if we just simplify it down to, let's say, two people: person one likes artists A, B and C, and person two likes artists B, C and D. Okay, so these two people have like 66% overlap, right? So it's not unreasonable for Spotify to say, hey, person one, you might like artist D, and person two, you might like artist A. That's great, but it takes a significant portion of the population that's using Spotify to discover and find those artists on their own in the first place. Whereas, there's so much music that comes out every single day, right, and that's part of the difficulty: rising above the noise, just getting above that noise floor of how much is out there. It's frankly more than any one person could listen to and catalog and try and tag and whatever. So there's a really cool use for AI.
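As a toy illustration of that overlap idea, with entirely made-up data (the likes table, the overlap measure and the threshold are hypothetical, not Spotify's actual algorithm), the sketch below recommends whatever sufficiently similar users like that you haven't heard:

```python
# Toy collaborative-filtering sketch of the two-person example above.
# All data and thresholds are made up for illustration.
likes = {
    "person1": {"A", "B", "C"},
    "person2": {"B", "C", "D"},
}

def overlap(mine: set, theirs: set) -> float:
    """Fraction of my liked artists that the other user also likes."""
    return len(mine & theirs) / len(mine)

def recommend(user: str, min_overlap: float = 0.6) -> set:
    mine = likes[user]
    recs = set()
    for other, theirs in likes.items():
        if other != user and overlap(mine, theirs) >= min_overlap:
            recs |= theirs - mine   # what they like that I haven't heard
    return recs

print(recommend("person1"))  # {'D'}  (about 66% overlap with person2)
print(recommend("person2"))  # {'A'}
```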

Ian Stewart:

AI can now listen to every single new release and say, wow, here's this artist, it's their very first release, they have 50 fans that have been playing it, but this music is really similar to all these things that this group of, I don't know what the numbers are anymore, 3 million people or 300,000 people or whatever, likes; I'm going to recommend this person that almost no one has heard of to all these people, because the music is very aligned. And that to me, okay, that's cool, that's a way to help artists, right? So again, I think there are these very positive ways we could use some of this technology and its capabilities to do things that either humans just can't do, we can't listen to the sheer number of songs that are being released, or that we just get too tired doing, or whatever.
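And a sketch of that content-based side, where a model that has "listened" to every release surfaces unheard tracks close to a listener group's taste. The 128-dimensional embeddings here are random stand-ins, purely for illustration; how a real platform computes them is not something this example claims to know.

```python
# Content-based recommendation sketch: rank brand-new releases by how
# close their audio embeddings sit to a fan group's average taste.
import numpy as np

rng = np.random.default_rng(0)
catalog = {f"new_track_{i}": rng.normal(size=128) for i in range(1000)}
group_taste = rng.normal(size=128)  # e.g. mean embedding of a group's plays

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(catalog, key=lambda t: cosine(catalog[t], group_taste),
                reverse=True)
print(ranked[:5])  # unheard releases most aligned with what the group likes
```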

Ian Stewart:

So, yeah, it's not that I'm all doom and gloom about AI; really the opposite. I just think we need to start being careful about how we think about it, how we talk about it, and what we push for, not just as individual users, but with the bigger companies that are using it: how we give them our feedback and say, listen, if you use AI in this way, we're happy to support you; if you use it in this other way, we're going to take our money somewhere else. The idea of voting with your wallet is a big thing for me. In this household, we talk about it a lot.

Marc Matthews:

Well, you mentioned that, and it got me thinking, actually, when you were talking about the content and how it will churn through and parse all these songs, go through them and then categorize them, in the hope, like you mentioned, of promoting you to a wider audience. But it also made me think to myself: what if you were to go down the total, I guess you could call it the capitalist, route? This is going to be a total rabbit hole now.

Marc Matthews:

But do you think there will ever be a time whereby this comes down to computing power as well, whereby you could put in what you want to listen to, and it could create a bespoke song for you using a streaming platform? So, on a streaming platform, you put in your mood, where you are, what you want to experience, and it says, all right, I'm going to create you an entirely bespoke catalogue of music that no one's ever heard before, for you to listen to right now. I was just thinking of it then, when you were talking, and I thought, actually, that's quite powerful, but at the same time, I don't know if I like the idea of it.

Ian Stewart:

Right, exactly. And again, for those that aren't watching the video, I'm nodding my head vigorously. That's exactly what I was getting at with the idea of not even a playlist or a channel, but something that is just creating on the fly. And, yeah, maybe there's a prompt, right? Maybe you say, hey, I'm doing XYZ, I'm feeling a little down today, I want music that leans into that, or I want music to pick me up. Yeah, absolutely.

Marc Matthews:

To go back to the Spotify DJ feature: I did find that, I think it was in its beta phase, some of the music it promoted to me, I was kind of like, I'm not too sure where you got that from.

But then again, I think I've got quite an eclectic mix of music in my liked songs that I've accumulated over the years, so I guess it's testament to that as well. But the Friday release, I can't remember the name of it, and Release Radar, I regularly listen to those, and for the most part it gets it quite accurate. Although there was a Rod Stewart swing track that appeared one day, and I was like, I don't recall ever listening to any Rod Stewart swing, so I'm not entirely sure where that came in.

Marc Matthews:

But yeah, going back to AI-assisted mastering: if our listeners are using AI-assisted mastering, do you have a top tip for the user if it's the first time they're ever going to use it? Say, for example, they're going to use something like Ozone. Do you have a top tip for them?

Ian Stewart:

Well, yeah, for something like Ozone: first of all, I would say something like Ozone, if you can swing it, I know it costs a little bit more to get the plug-in versus just going to LANDR or Aria or eMastered or whatever, gives you all these meta-level controls on the top level, so you can dial in things like basic loudness and how much of the width processing you want and stuff like that, without even diving in super deep. So that's a really nice, useful feature, that you can customize stuff a little bit, to start there. And then also in Ozone there is, and honestly this goes beyond AI mastering, this is just a top mastering tip, period: there's a loudness match feature, and any time you're doing something like this, turn that on and match the loudness of the master to the mix, and it'll do that automatically in Ozone. And that's another thing that separates it a little bit from some of the online platforms, although I've seen more of them doing that, which is great to see. Frankly, I'm glad they're doing that, because it's so easy to just be fooled. I mean, here's a perfect example of this.

Ian Stewart:

In one of my first classes in the semester, we talk about level and loudness, and how those things interplay, and what is loudness even, and all that sort of stuff. And I play an example for my students where I take two files and I say, I've done something to these. They're thinking I've EQ'd it, or I've compressed it, or whatever, because that's what we've been talking about so far, and all I did was change the gain of them by 0.3 dB, about a third of a decibel. I play it and I ask them what they hear: first of all, do you hear a difference? And they go, yeah, I think so. I think it might be an EQ thing. One student this year said, I think I might just be hearing the sound of my own delusion, which is maybe one of my favorite quotes now. But just changing gain changes the way we perceive things. Right? Different level.

Ian Stewart:

So by by gain matching, you can get a much truer sense of what that processing is doing to your audio and if it's taking it in a direction that you like. And so you know again, ozone makes that really easy. You can run the master assistant, you can see what genre it has chosen and you can correct that if you want. Or you can just try playing with different ones and say, okay, this picked EDM, but what if I try pop? Or what if I try rock? How does that change how it's processing it? And you might get an interesting result that you like that way.

Ian Stewart:

And again, then just doing the loudness matching thing: hit bypass with that on and you'll hear your mix and then the master, and that makes it so much easier to figure out, have I pushed things too far? Have I made it too loud and just really crushed it? Or maybe I actually want to make it a little bit louder, to get a denser sound. You can start to judge those things objectively, whereas if you've got a built-in gain change, whether that's 0.3 dB or 10 dB, making those sorts of comparisons is kind of out the window.
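For anyone who wants to replicate that loudness-matched A/B outside of Ozone, here is a minimal sketch using the open-source pyloudnorm package, a BS.1770 loudness meter for Python. The file names are placeholders, and this is one way to do it, not how Ozone does it internally.

```python
# Loudness-matched A/B of a mix against its master, so any difference
# you hear is the processing rather than a built-in level change.
import soundfile as sf
import pyloudnorm as pyln

mix, rate = sf.read("mix.wav")        # placeholder file names
master, _ = sf.read("master.wav")

meter = pyln.Meter(rate)              # BS.1770 meter
mix_lufs = meter.integrated_loudness(mix)
master_lufs = meter.integrated_loudness(master)

# Gain the master to the mix's integrated loudness for a fair comparison.
matched = pyln.normalize.loudness(master, master_lufs, mix_lufs)
sf.write("master_matched_to_mix.wav", matched, rate)
print(f"mix: {mix_lufs:.1f} LUFS, master: {master_lufs:.1f} LUFS")
```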

Marc Matthews:

I think that the loudness match is really important, because when you're first starting out, it's quite easy to fall for that classic thing: you automatically think it's better because it's louder, right? And you get that false, I don't want to say false promise, I think that's my flu kicking in, that made no sense whatsoever. But yeah, you want to get it down to the same level so you can hear what differences you're actually making. So I think that's really, really good advice. Another thing I was going to ask you about was the use of reference tracks. In particular, what are your thoughts on AI-assisted mastering without a reference track? Should the audience be using reference tracks, if they have access to them? Is that a golden rule, you should be using reference tracks, or can we rely on what the AI-assisted mastering is telling us?

Ian Stewart:

I think, first of all, yes, if you can, especially if you can get high quality, right? So Spotify is lossy, but there are so many lossless and even high-res options out there now: Apple Music, or Amazon, which I have mixed feelings about because of how they do normalization, but that's a tangent, or Tidal. There are lots of good platforms out there. And actually, you know what?

Ian Stewart:

This is going to be a good kick in the butt for me. I did a whole bunch of research, because I switched streaming platforms recently, and I've been meaning to write up a blog about what's the best streaming platform for you, because it is a personalized thing. So I'm going to do that, I'm going to finish that freaking blog and get it up on my website, and for listeners, I think by the time this comes out, I will make an effort to do that. So go check that out for sure if you're interested. There's basically a whole spreadsheet where you can plug in what's important to you: discovery versus audio quality versus credits versus whatever.

Marc Matthews:

I'll put that in the episode notes as well for the audience.

Ian Stewart:

Yeah, so we'll be in touch; I'll get that written up soon and get you a link for it. But anyway, reference tracks, right?

Ian Stewart:

Sorry, I got totally off track. But if you can get a lossless, high-res stream, or you just buy lossless downloads, or you still have CDs or whatever, I think reference tracks can be great. They can also be a little misleading if you're not careful with them. So for me, the thing that I say to my clients and my students is: yes, use reference tracks, but here are some tenets to keep in mind.

Ian Stewart:

So, one: the reference track has got to be stylistically and arrangement-wise at least in the ballpark, right? I can't tell you the number of times someone sends me a track and says, hey, can you make this sound like Radiohead or whatever, and it's just nowhere near that in terms of arrangement or mix or anything. So, no, not really; we've got to be a little closer to start. So, first, picking references is really important, and just being honest with yourself: is my mix a good reflection of this reference, and vice versa? That's tenet number one.

Tenet number two, and this gets more specifically towards mastering: check against your reference both loudness normalized and not loudness normalized. These days you really have to do both. Twenty years ago, everyone was worried about, am I the loudest kid on the block? It was just, how loud are you in absolute terms compared to the next song? Everyone wanted to be louder, louder, louder. And it's still, I think, important to listen in that context, because it's part of the sound. Say you're working on some drum and bass stuff and you pick out a Noisia track; part of their sound may be a lot of limiting and compression and saturation and stuff. Fine, so you may need to incorporate that into your approach to get some of that sound. But the other part that's really important, because most people are listening in a loudness-normalized context these days, when we talk about casual listeners, not us engineers and creators, but just my wife, or your best friend who you send your mix to, like, hey man, check this out, they're listening predominantly loudness normalized. So it's really important to then go and take your master and turn it down. A lot of platforms use negative 14 LUFS; Apple uses negative 16; there are a few different ones. You've just got to understand what level the platform you're checking against uses, and then turn your master down to that level. And now, to me, the more interesting question these days is, rather than how loud is my master, it's: how loud does my master sound at negative 14 LUFS integrated? Because the LUFS measurement system is very imperfect. It's a lot better than anything we had in the past, but it's still very imperfect, and two things that both measure negative 14 integrated can have a very different perceived loudness. So understanding how your master will actually sound at negative 14, or negative 16 or whatever, next to all the other stuff out there, I think that's the more important way to use a reference today. Yes, absolutely use it in terms of figuring out tonality and overall level and density and width and all that stuff, very important, provided you have the right reference, but then: what's it going to sound like?
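A simple way to practise that second tenet at home is to turn both your master and your reference down to a platform-style reference level and listen to which one still sounds louder, even though they now measure the same. A minimal sketch with pyloudnorm, assuming placeholder file names and the commonly cited -14 LUFS figure:

```python
# Simulate streaming normalization: bring a master and a reference track
# to the same integrated loudness, then audition which one *sounds* louder.
import soundfile as sf
import pyloudnorm as pyln

REFERENCE_LUFS = -14.0  # e.g. Spotify; Apple Music uses roughly -16

for path in ["my_master.wav", "reference_track.wav"]:  # placeholders
    audio, rate = sf.read(path)
    lufs = pyln.Meter(rate).integrated_loudness(audio)
    normalized = pyln.normalize.loudness(audio, lufs, REFERENCE_LUFS)
    sf.write(path.replace(".wav", "_at_ref.wav"), normalized, rate)

# Both output files now measure -14 LUFS integrated, yet one can still be
# perceived as louder; that difference is what Ian is pointing at.
```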

Ian Stewart:

And actually, this is something that I end up helping artists with more than the other thing. More than "I can't get it loud enough", it's "I can't get it loud enough at negative 14", which is kind of a weird sentence to say.

Ian Stewart:

But it makes sense when you think about it, right? They'll come to me and say, hey, I did my master, or so-and-so mastered it for me, and it's at negative 12 integrated, so it's getting turned down at least 2 dB, but it still sounds quieter than other stuff. What's going on? And that's where this gets hard to give concrete tips and explain in a nutshell, frankly, and where the experience of doing this all day, every day, starts to kick in. But I guess if people wanted tips, there are a couple of things I can say. One: often it comes down to tonal balance. That's a really big part of it.

Ian Stewart:

You'll think that your EQ balance is close, and it's maybe not actually that close, or there's some part of it in the very low end or very high end that your monitors aren't really telling you a true story about; they're not giving you enough accurate information, and so it's not as close as you think it is. But then the other thing is the macro dynamics, the long-term dynamics of the song, and in particular LUFS measurements. To do an integrated measurement, the system uses a couple of gates, where basically, if a loudness value is below a gate, it doesn't get counted towards the final value. The first gate is at negative 70, so that one almost doesn't matter. But then there's one that floats; it moves depending on what the final integrated level is, and anything below it doesn't get counted. And you get into this weird situation where, by actually making the quiet stuff louder, you can make the measurement quieter, so the song gets turned down less, which is a little mind-bending right there.
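Here is a toy numeric demonstration of that gating quirk, using made-up block loudnesses instead of real audio. The simplified function follows the shape of BS.1770 gating (absolute gate at -70 LUFS, relative gate 10 LU below the level of the surviving blocks) but skips K-weighting and real windowing, so treat it as an illustration rather than a compliant meter:

```python
# Raising the quiet sections of a song can *lower* its integrated LUFS,
# because those sections start counting once they clear the relative gate.
import numpy as np

def integrated(block_lufs):
    blocks = np.array([b for b in block_lufs if b > -70.0])  # absolute gate
    energy = 10 ** (blocks / 10)                              # back to power
    gate = 10 * np.log10(energy.mean()) - 10.0                # relative gate
    kept = energy[blocks > gate]                              # gated blocks
    return 10 * np.log10(kept.mean())

quiet_verses = [-30, -30, -12, -12]   # verses fall below the relative gate
raised_verses = [-22, -22, -12, -12]  # same song, verses pushed up 8 dB

print(round(integrated(quiet_verses), 1))   # -12.0: verses are gated out
print(round(integrated(raised_verses), 1))  # -14.6: verses now count and
                                            # pull the integrated value DOWN
```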

Marc Matthews:

It's a riddle in itself, isn't it?

Ian Stewart:

Yeah. But then the takeaway from all of this is, I can't tell you the number of times I have said "context" in my classes this semester. Context is so important, right? We don't want to do this just for the sake of making it play back a little louder on streaming, if it's not the right thing for the music.

Ian Stewart:

Yeah, yeah, so like there are ways around this and there are ways to figure out why one thing might sound quieter than another at negative 14 LUFS integrated. But then you have to make the more important decision of OK, I can do X, Y, Z to manipulate it and make them more even, but then does the song still sound the way that I want it to and have the dynamic impact and emotional impact and all that sort of stuff?

Marc Matthews:

It's very interesting. As soon as you mentioned the streaming platforms and their targeted loudness, I immediately saw in my head these threads online, various discussions of people saying, X, Y, Z, you should do this, you shouldn't do that, and whatnot. So, with regards to that, and because we are at 35 minutes now, these episodes always fly by, I think this would be a good question: you've already mentioned minus 14 LUFS, and minus 16 for Apple.

Marc Matthews:

With regards to mastering, in my head, you want to make it sound as good as possible; ultimately, that's what you want to do. Should the audience listening, if they are using AI-assisted mastering, be mastering to a targeted loudness? Should they be aiming for minus 14, or should they just be aiming to make it sound as good as possible with what they have? What are your thoughts on that?

Ian Stewart:

Yeah, I think the goal is always to make it sound the best that it possibly can. Certainly don't aim for any number at all; definitely not negative 14, but really any number. You shouldn't be aiming for negative 6 integrated, or negative 10, or negative 14. There should be no number in your head that you have to hit.

Ian Stewart:

Now, I guess the other part of this is when you start thinking about a multi-song release, and this is where I'm going to go back to saying I have a little bit of beef with Amazon Music and the way they do normalization. They normalize everything to negative 14, even if you're listening to an album in full. So when I master an album, let's say it's a more dynamic one: one track might be at negative 10, one might be at negative 11 and a half, one might be at negative 9. They're going to move around, and that's what's going to feel artistically right for that collection of songs, based on their tonal balance, based on the arrangement, based on all these things. And if all of a sudden you start changing the relative levels between them, that's not what I did, and that's not what the artist signed off on. So that really frustrates me.

Ian Stewart:

About Amazon, I mean, it's been a couple of months since I looked at them; maybe they've changed in the last few months, but I kind of doubt it. I wish they would move away from that. And there's actually really good statistical data from lots of listening tests showing that untrained listeners, just casual listeners, prefer album normalization, where basically you take the loudest song on an album, set that to negative 14, and then let everything move up and down as it may. So if the loudest song was at negative 10 and there's another that was at negative 12, the one that was at negative 12 now ends up at negative 16 on the normalized version.
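In code, the difference between the two modes is essentially a one-line change: album normalization applies one shared offset anchored to the loudest track, while track normalization flattens every song to the reference. A small sketch using Ian's example figures (the track loudnesses and the -14 reference are illustrative):

```python
# Album vs. track normalization, with made-up per-track loudnesses.
REF = -14.0                                                  # reference level
album = {"track1": -10.0, "track2": -11.5, "track3": -9.0}   # LUFS integrated

# Album mode: one shared offset, relative levels preserved.
offset = REF - max(album.values())            # -14 - (-9) = -5 dB for all
album_mode = {t: lufs + offset for t, lufs in album.items()}

# Track mode (Ian's complaint about Amazon Music): per-track flattening.
track_mode = {t: REF for t in album}

print(album_mode)  # {'track1': -15.0, 'track2': -16.5, 'track3': -14.0}
print(track_mode)  # every song forced to -14.0; the gaps the mastering
                   # engineer set between songs are gone
```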

Ian Stewart:

But to get back to the question: no, I don't think you should be aiming for any specific number, and the main reason I will point to as evidence for this is that the numbers change. They've changed before, right? Spotify for a long time was negative 11, and not using LUFS; they used ReplayGain for a while, which is different, and then they changed to using LUFS and made it negative 14. And other things change and move; YouTube has changed in the past. There's nothing to say that the reference level they use won't change again. And I really prefer that word over target, because to me, a target is something that you're trying to hit.

Ian Stewart:

If we say reference level, that's the level that they're using as their reference. That language, I don't know, to me it makes a difference in my head. But their reference levels may change, and the way that we do normalization may change. Right now, everyone's kind of using integrated LUFS. Actually, when I set up a group of songs for mastering, total tangent here, I work in WaveLab. When I bring the songs in, WaveLab has a function called the meta-normalizer. It basically allows me to take a group of clips, songs, and normalize them, but it's much more flexible than just doing integrated.

Ian Stewart:

It works by using the loudness range, the LRA, which is another metric that LUFS spits out that people honestly don't talk or think about a lot. Loudness range is sort of more like musical dynamics, right? The difference between the loudest fortissimo section and the really soft pianissimo stuff; that's what loudness range is getting at. And by taking the top of the loudness range, basically the loudest average part in each song, and putting those all even, the integrated levels may vary a bit, but that actually creates a good starting spot. Then, when I run stuff through analog, I'm usually only tweaking things half a dB plus or minus once I capture them back in, to get them at their final level.
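As a rough sketch of that idea, lining songs up by the top of their loudness range rather than by integrated LUFS, here's one way it could look in Python. This is loosely in the spirit of what Ian describes, not WaveLab's actual algorithm: the non-overlapping 3-second windows and the 95th-percentile proxy for "top of the loudness range" are simplifying assumptions.

```python
# Align songs by their loudest sustained passages (top of the loudness
# range) instead of by integrated loudness. Simplified windowing.
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

def top_of_loudness_range(path: str, window_s: float = 3.0) -> float:
    audio, rate = sf.read(path)
    meter = pyln.Meter(rate)
    hop = int(window_s * rate)
    vals = [meter.integrated_loudness(audio[i:i + hop])
            for i in range(0, len(audio) - hop, hop)]
    return float(np.percentile(vals, 95))  # EBU LRA also uses the 95th pct

songs = ["song1.wav", "song2.wav", "song3.wav"]      # placeholder names
tops = {s: top_of_loudness_range(s) for s in songs}
anchor = max(tops.values())
trims_db = {s: anchor - t for s, t in tops.items()}  # gain to line tops up
print(trims_db)  # integrated LUFS may still differ per song; that's the
                 # point: the loud sections match, the rest can breathe
```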

Ian Stewart:

So we could change the method; it might not be LUFS at all, it could be something totally different. Say you've based everything around negative 14 integrated LUFS, whether that's all of the songs, or you put the loudest one there or whatever, and then in six months stuff changes. Now you've created this set of masters that, you know, do they still sound right?

Ian Stewart:

Did you make sacrifices in the process of creating those masters that weren't the best for the music? So, yeah, this is a very long way of saying: my goal is always to make the levels between songs, and each individual song, just sound the best they can at the right level. And again, if you use the loudness matching thing to understand when you're pushing stuff too far in terms of level, and when the limiting is doing too much, it really simplifies a lot, and you can get to a place where you find this optimal spot. I find, for a lot of music, that's above the reference level, but it doesn't have to be super loud and crushed; it's still punchy and exciting, and it will get turned down a little bit, and life's kind of good.

Marc Matthews:

I echo everything you said there, in particular what you said about reference rather than target; I think that's very important. But the bit you mentioned about how standards can change, and the reference can change as well, is important, because I've mentioned that a few times on the podcast. When people have approached me and said, do you provide a master for each particular platform? I've said, well, not really, because that platform could well change, and you don't know what's around the corner. So I think that's incredibly important, and I'd like to see more of it in threads online when I see these discussions, and in particular groups as well.

Ian Stewart:

I think the other part of that, too, in terms of providing them: I'm surprised that I actually haven't had that many artists or clients ask me for that. But it surprises me a little, because, do you, as an artist, want to pay for six different uploads, one for each platform?

Marc Matthews:

I wouldn't, if I were in their shoes.

Ian Stewart:

Right. So that's the other part of it. Sure, I can do you six masters or five masters or whatever, to different targeted loudnesses, I guess. Do you really want to pay for that? And also, I'm just going to get them sounding right and then turn them down anyway.

Ian Stewart:

Exactly. So yeah, the platforms do this in an automated way. They're good at it; let them do it, and that way, if it changes, you don't have to worry. I mean, I've even seen things like, DistroKid, I think, has a thing where it will pre-normalize stuff for you. So just be aware of that: there are entities out there trying to make a quick extra buck by doing things that the streaming platform is going to do anyway.

Marc Matthews:

Yeah, you've got to be careful, haven't you? I mentioned this on an episode a while back, and I don't know if I mentioned it on air, it might have been post-interview, but it was about how there is a platform, I'm not going to name it, that you put your video into. This is an audio podcast, but bear with me: you put in your video and, it's really good, it will select the best parts of the video, it will give you feedback on the video in terms of hook, trend, all that sort of stuff, which is amazing, and then it will also put the captions in there.

Marc Matthews:

And probably about a year ago, somebody messaged me on social media and said, oh, I've seen your content online, I can do this for you. And basically all they did was find my YouTube video, put it into this platform, send it back to me, and want to charge me for it. About a month later, I found out, okay, that's what they were using, so I'm just going to do it myself. I said no; I was happy with what I was doing. It's a bit of a roundabout way of saying it, but you've got to be careful, because people are out there to make a quick buck off these AI platforms, in audio, video and imagery.

Ian Stewart:

Yeah, you've got to be really careful, and I think that's a perfect place to circle back around to where we started with the AI thing. We do want to be careful about what the tools we're creating are, and not only how they can be used for good, but how they can be misused and exploited for financial gain or otherwise.

Marc Matthews:

Indeed, indeed, Ian. I think that's a fantastic place to end our chat today. It's been an absolute pleasure chatting with you on all things AI and mastering and everything else in between. So, for our audience listening: where can they find you online? Where can they learn a bit more about what you do? And have you got any events coming up, or releases or anything along those lines, that you'd like to push out to the audience listening?

Ian Stewart:

Yeah, absolutely. So, I'm not on Facebook that much, except for a group called Mastering Engineers Worldwide; I'm one of the moderators there. We've got like 10,000 or something members. If you want to come chat with mastering engineers about mastering stuff, that's definitely a great place to do it. There are a lot of big names that you've probably heard of hanging out in there, so it's a great place to get feedback.

Ian Stewart:

Other than that, Ian Stewart Music is me on Instagram, and then my website, flotownmastering.com; it's F-L-O-T-O-W-N mastering dot com, and that's where my blog will be. I mentioned that blog about streaming platforms; that's where I'll publish it. I've also got an FAQ about mastering, and if you just want to reach out and get in touch, that's a great place to do it. And then, in terms of events, just on the topic of AI since we're talking about it: Berklee College of Music is hosting an international symposium by AES, the Audio Engineering Society, June 6th through 8th, I believe, at Berklee. If anyone's going to be there and wants to meet up in person, find me on Instagram and shoot me a message, or get in touch through my website, and maybe we can link up there.

Marc Matthews:

Fantastic stuff. I remember I was chatting to Jonathan about this, I can't remember if it was on the podcast or not, and I was saying I was going to look at flights to go over in June. I've got to look at the bank balance, as it were, and see if it fits in, because that'd be amazing. It's in Boston, isn't it?

Ian Stewart:

Yeah, Berklee is right in Back Bay, which is like the heart of Boston, and you can get the T, which is our subway there, out to all the different neighborhoods. There's a lot of great stuff happening. It's funny, I actually live a couple of hours west of campus, but I was out there last Friday for a concert on campus that was absolutely amazing.

Marc Matthews:

Fantastic stuff. Ian, I won't keep you any longer today, but it's been an absolute pleasure, and, again, we were discussing this off air, but we'll get you back on to chat all things mid-side. So, audience, look out for that one, because I think that is going to be incredibly useful as well. It's been an absolute pleasure, and I will catch up with you soon. Thank you. Likewise, Marc, thank you so much. Cheers, buddy.

Chapter Markers

AI-Assisted Mastering in Music Production
Generative AI in Music Future
Impact of AI on Music Streaming
Audio Mastering With Gain and Tracks
Reference Tracks and Loudness Normalization
Mastering for Optimal Sound Quality
