Music Production and Mixing Tips for Beginner Producers and Artists | Inside The Mix

#226: How AI Is Changing Voices, Studios, And The Value Of Human Performance (Face Your Ears Podcast)

Season 5, Episode 51

A single take can now become a gospel run, a country croon, or even a convincing female lead, and it happens in seconds. Justin and Rich of the Face Your Ears podcast unpack how AI jumped from pitch correction to “auto-sing,” the cost breakthroughs behind engines like DeepSeek, and what tools such as ACE Studio mean when 80-plus virtual singers sit inside your DAW. It’s a fascinating leap for producers and a gut-check for vocalists whose instrument is their body.

They talk through real use cases: typing lyrics, drawing melodies, stacking instant harmonies, and round-tripping audio between ACE Studio and Logic or Ableton. Then they get honest about the trade-offs. If voices are trained on real singers, who gets credit and compensation? When sync teams can generate polished vocals in-house, how do independent artists compete? And as synthetic vocals become indistinguishable to casual listeners, does trust in what we hear erode, or do we simply recalibrate our norms as we did with autotune?

Beyond workflow, they go deeper into culture and craft. There’s a difference between pleasing audio and human expression shaped by effort, failure, and growth. The paradox of hedonism warns that chasing instant results can drain long-term meaning. They explore the risk of cultural flattening when machines remix the past at scale, and they argue for a practical middle path: use AI for drafts, demos, harmonies, and accessibility, while doubling down on live presence, story, and the messy soul of performance. That’s where artists can still shine brighter than any model.

Got thoughts on AI vocals—tool or takeover? Share your take.

Links mentioned in this episode:

Listen to Face Your Ears

Send me a message

Support the show

Ways to connect with Marc:

Listener Feedback Survey - tell me what YOU want in 2026

Radio-ready mixes start here - get the FREE weekly tips

Book your FREE Music Breakthrough Strategy Call

Follow Marc's Socials:

Instagram | YouTube | Synth Music Mastering

Thanks for listening!!

Try Riverside for FREE



Marc Matthews:

You're listening to the Inside the Mix podcast with your host, Marc Matthews. Welcome to Inside the Mix, your go-to podcast for music creation and production. Whether you're crafting your first track or refining your mixing skills, join me each week for expert interviews, practical tutorials, and insights to help you level up your music and smash it in the music industry. Let's dive in.

Ho ho ho folks. Happy Christmas Eve Eve from the Inside the Mix podcast. A big welcome to any new listeners, and of course a big welcome back to the returning listeners as well. If you're listening to this podcast on YouTube, you may have noticed something slightly different. There is no video; it is now audio only for the podcast, and that will be the same going forward.

This episode is the fourth and final edition of my favorite podcasts series, where I'm sharing episodes of my favorite podcasts with you. And this episode is taken from the Face Your Ears podcast. Now both Rich Bozic and Justin Hochella have featured on Inside the Mix. Rich featured on episode 194, titled I Asked a Pro Vocal Coach How to Prepare a Singer for a Recording Session, where we go over topics like how to prepare a singer for a studio session, best warm-up routines, microphone selection, and how to optimize the headphone mix for singers. Justin featured on episode 191, titled Every Logic Pro User Should Know These Hidden Tools, where we go through topics like breaking free from third-party plug-in dependency, essential Logic Pro hotkeys for speed, and built-in instruments and sound packs. And of course I'll put links to all those episodes in the episode description, as I always do.

And this is a nice segue to the episode description, where you'll find a link to a two-minute feedback survey for 2025. What I want to know from you folks is: what do you want to hear more of in 2026 from the Inside the Mix podcast? It will take you all of two minutes. You don't need to log in; it is just a Google form. I would love to get your feedback, and I want you to help shape the podcast in 2026. You've got just over a week to complete this, so please do click that link in the episode description. I really do value your input, as this podcast is made for you. So do it now, fill in that survey, and help me shape the podcast next year.

So, moving back to the podcast I'm sharing with you. It is the Face Your Ears podcast, episode 32, titled AI Vocals. In this episode, hosts Justin and Rich discuss the rapid advancements in AI technology, specifically focusing on AI's ability to generate and manipulate vocal performances. They debate the implications of AI-driven tools like DeepSeek and ACE Studio, which can now replicate and alter human vocals for a fraction of the cost previously required.

So that's enough from me, folks. It is December the 23rd. I want to wish you a fantastic Christmas in whatever it is you are doing, and I will see you on the other side for episode 227, where I'm going to be sharing your wins. I'm excited for that episode to drop, as I love celebrating what you've been up to over the previous year. So that's enough from me. This is episode 32 of Face Your Ears, and it's AI Vocals. Enjoy.

Justin Hochella:

Hello everyone, and welcome to another episode of the Face Your Ears podcast. My name is Justin Hochella, and I'm joined by Rich Bozic. Rich, hello.

Rich Bozic:

How are you? Hello. Good day to you, sir. Good day. It's a beautiful day today. It snowed slightly outside today. Ah, yes. It snowed here as well. Just a bit. Before we get into our topic, uh I will let everyone know that we do have a website, faceyourears.com, that has all the info about our podcast and even a podcast player if you don't have a way to listen. But I'm assuming you probably have a way to listen if you're listening to this podcast. Yes.

Justin Hochella:

If you like literally just dropped your phone in the toilet like five seconds ago, and as you were listening to this episode, run to your computer and go to faceyourears.com. You can check us out there.

Rich Bozic:

We're on social media. Look us up at faceyourears podcast. So, Justin, what brings us here today?

Justin Hochella:

The juggernaut that is AI continues to plow through our lives, whether we want it to or not. And it seems that no area of life is left untouched. And we on the Face Your Ears podcast have talked about AI. It was quite a while ago. And I think that AI has advanced something like a billion fold since that episode. It just seems like every day we're looking at some crazy new advancement in AI technology. One of the recent advancements was with DeepSeek, which is just an AI sort of engine, whatever it's called. And the big hubbub about that is it is on par with OpenAI, which is one of the premier AI engines on the planet. And this DeepSeek engine was able to perform at the same level as OpenAI, but they did it for like a fraction of the cost. And so I just mention that because the advancements are insane. To be able to do what OpenAI does, but for a fraction of the budget, has huge implications for all of AI and technology.

Rich Bozic:

That means more of it being produced faster in different ways.

Justin Hochella:

Just the advancement of it. If you can do something for a fraction of the cost, right? Look at what we do with music and technology, right? Like once upon a time, these ways of recording were so out of our reach and so expensive that we could only dream of being able to do it. And then one day somebody came along and made it far cheaper, or at least over the years, made it far cheaper. And here we are in a studio filled with a bunch of cool stuff, able to record far better quality than anyone was 20, 30 years ago. So it just advances, right? To be able to do it at that scale for that cheap. I think it was something like, I don't know. Um, yell at me in the comments, listeners, but I'm gonna get these numbers wrong, but it was like OpenAI took something like, holy, I'm just gonna make up a number, 200 billion dollars, and then DeepSeek cost something like 30 million or something like that. Just a fraction of what those guys spent. And so it's incredible. And it's also an app that you can download to your phone for free. Should mention that. Wow, so we have access to that right away.

Rich Bozic:

Oh man.

Justin Hochella:

I just again I just mentioned this to set the stage to be like this is where we're at in February of 2025 with AI, like it's just exploding.

Rich Bozic:

Actually, this is an episode that I've been dreading would come. Today, we talk about AI vocals.

Justin Hochella:

AI is advancing so much that we're now able to do something that I think, Rich, you and I both thought was at least way off, if not impossible altogether. But it's the ability to use AI to generate vocals. Hold on now.

Rich Bozic:

I gotta go on a little rant here for a second.

Justin Hochella:

Do it, all right. Warning, listener.

Rich Bozic:

Yeah. What the hell, man? Like, okay, so we developed this crazy technology that's amazing, to do stuff for us, to make life easier, to do all kinds of tedious stuff for us. We develop all of this stuff, and the first thing we do with it is, oh, let's wipe out the visual artists and all the musicians and the people who write stories and the people who do graphic design. You would think that we'd be trying to teach the thing to handle the laundry and do our taxes. Instead, oh no, all of the things that are innately human, the greatest things that humanity can do that are unique to human beings, let's get a robot, an artificial intelligence, to do that for us. It's just something weird about it, and also things on the physical front, we'll call it that. Like, what, you want to replace that stuff? I don't understand. I don't understand. Please help me understand, Justin. Help me understand.

Justin Hochella:

I think AI is very powerful as a utility to assist in time-saving tasks. That's probably a very narrow thing that I think AI can do and that I appreciate that it can do, but when I see it start to take over the things that make us human, so the creative arts like you described, I do start to get concerned. And one of the things that I feel like AI is doing is flattening culture. And so we've got billions of people who are able to produce something based off of centuries, really, of historical creation. That's what AI is doing: it's basically looking back on the centuries of what humans have done and regurgitating something based on those parameters. And I think it's just sad that people look at that and herald that as a good thing, because to me it does flatten culture, in that I can't imagine how a robot could help perpetuate and grow human culture, right? Yeah, I don't know. That's a deeper philosophical view, but what about vocals, though? To get to that topic specifically, talk a little bit before we rant about it, because I do have a lot of rants about AI and vocals. But what are some of the things, just so people are aware, what are we talking about? What are some of the things that you've seen in regards to AI vocals that you didn't think would be possible?

Rich Bozic:

Sure. The first thing that comes to mind, there was a video that I stumbled upon in my algorithm. The algorithm throws me all kinds of voice-related stuff, of course, listening to my conversations all day about singing. In that video, there was a guy singing and recorded a track, and then he's like, but I want my voice to sound more like a gospel singer. And so he was able to throw this AI plug-in on, and suddenly the melody that he sang got transformed and had all of this phrasing and tone that sounded more like a gospel singer. And then he's like, you know what? No, I'm not satisfied with this. I want it to sound more like a country singer, and so he switched it up to that. It was truly impressive; it was still his voice. And then he changed it up even more and was like, I want my voice to be a woman's voice, and then he was able to change it up to be a woman's voice, and then he was able to change more parameters of it to change the tone quality even more. It was truly impressive. If you didn't know what you were looking at, and the average person was just listening, they'd be like, oh yeah, that's an R&B singer, that's a woman singing. They wouldn't blink an eye; they wouldn't know the difference. And to imagine that's still in its early stages, because I know that some of the critiques of all that is, oh, it'll never capture the phrasing, it'll never be able to have that nuance, that spark that humans have. But I don't know, it was a really convincing start. Have you heard any of this?

Justin Hochella:

I don't know that I've heard that where people are changing their vocal style. I think I've heard of it. Like now that you talk about it, it sounds familiar. But yeah, that's the thing too with AI is like any critique, just give it a week. Yeah, it's just it's it's moving so quickly. Yeah. Wow. You were looking at something that changes the person's actual voice.

Rich Bozic:

It's crazy. So literally anyone can walk in and be like, okay, I have a song idea, I can't sing well. Let me sing it the best I can, and then you could literally use something to fix the pitch, and then you could throw this plug-in on it to make it sound like something else, to improve the tone quality and the delivery. Wow.

Justin Hochella:

So we've gone from like auto pitch to auto-sing.

Rich Bozic:

Yeah, it's what it sounds like. You know what though? When I see that kind of stuff, man, it starts to make me question what I'm hearing out there. Is everything that we're hearing what we think it is? Or have people already been utilizing this stuff? Because think about back in our day when Milli Vanilli got caught lip syncing. For those of you who don't know, tell them about Milli Vanilli.

Justin Hochella:

Oh yes, Milli Vanilli were a duo, a music duo back in the 80s. It's actually a very tragic story, and anyway, we won't go into that. But the big thing about Milli Vanilli is that they were basically these two guys that were hired because they looked a certain way, they had that cool kind of model look, but they couldn't sing, like, at all. And yeah, they would lip sync in their music videos, of course, and in their performances on stage, and then one day the jig was up and their backing track messed up and they were caught. I think it was like skipping live, and they were caught in the act of lip syncing, and it was a controversy at the time. You know, at that time everyone just assumed everybody was singing.

Rich Bozic:

I think they had to return their Grammy. I think so.

Justin Hochella:

I think they did, yeah.

Rich Bozic:

But yeah, it didn't end well for them ultimately. But Milli Vanilli, you're forgiven, because of all the stuff that's happening right now that just happens in plain sight and no one blinks an eye anymore. Like, I'm hearing now even rock and metal bands getting busted for using that kind of stuff live. But this is on a new level. AI vocals is on a new level. You could transform someone's voice completely, to where it's beyond lip syncing. You're basically on the spot doing a complete transformation of their voice.

Justin Hochella:

One of the tools that I've seen, just to kind of jump in on that topic, just to kind of flesh out what's out there, is ACE, A-C-E, ACE Studio. It's more or less like a DAW based on a whole bunch of different singers, a couple dozen, if not more. So the idea is that you go into this and you can play in a melody or something with your MIDI keyboard, or draw it in, either one. Then you can type in lyrics, whatever lyrics you want, and the program will automatically contour the lyrics to the notes in your melody, and from there you simply select a singer, one of those dozens of singers that they have, and that's it. These singing voices are based off of actual real human singers. What the AI is doing is taking their vocal qualities and nuances and creating a virtual singer out of it, so you can literally be someone who can't even make a sound. Yeah, I mean, you're right, you don't even have to sing at all. It's almost like a word processor crossed with a synthesizer, is how it comes across. And they actually just released a new feature where it now links to your DAW. So if you're using Logic Pro or Ableton Live, you can have a melody or something from your DAW talk to this ACE Studio application, because the way it was before is it was its own standalone thing. You would create your vocals, you would render it, and then put it into your DAW as a WAV file. But now the apps will actually talk to each other, and so it's basically like an instrument in your DAW. Crazy stuff.

Rich Bozic:

Singers should have seen this coming. Now singers know what drummers must have felt like when suddenly synthesized drums were introduced into the equation, or any instrument for that matter, that was starting to be replaced with the digital counterpart. I think what it is is because humans make the sound, um, there was this idea that oh, that can't be replicated, that can't be synthesized. But hey man, technology, technology, and look where we're at now. And now singers feel what all those other instruments felt.

Justin Hochella:

Yeah, so I'm on ACE Studio's website; they have over 80 royalty-free AI singers available. That's just the beginning, and so they have different styles of singing, right? So they have like a soul singer, pop, EDM singer, cinematic singer, opera singer. They even have a child voice; they have hip hop, ballad, R&B, Latin pop, R&B funk, soul funk, Latin folk, on and on. Crazy, like they have different styles now of all these different voices.

Rich Bozic:

Is there a plug-in that could add in a little bit of that singer ego into it?

Justin Hochella:

It's just like a knob on it: level of ego. Yeah, and there's like an icon of somebody's head, and as you turn the knob up, the head just gets bigger and bigger.

Rich Bozic:

And then you could have the level of room dryness that the singer takes into consideration. Like, this room is really dry, and the singer will be extra, like, edgy.

Justin Hochella:

Yeah, hydration level, like how sick they are.

Rich Bozic:

Yes, all singers walk around with a low-level sickness. It's like the easy out.

Justin Hochella:

It's true, it's true. So many singers that I've dealt with, it seems like that. It's like, are you ever not sick?

Rich Bozic:

Yeah. Speaking as a singer and one who deals with singers, yes, we are never not sick. We're always sick. We're always sick.

Justin Hochella:

Always a perpetual low-grade, like, head cold at all times.

Rich Bozic:

But if it goes well, then of course, this is what I do. This is how good I am.

Justin Hochella:

But if it goes bad, you know... There's another thing I just thought about. I remember seeing a YouTube video about this, where another aspect of AI and vocals is harmonies: a tool that allowed you to generate layers of harmony within, like, seconds. It was insane. So you would upload, I think in this instance, yourself singing, or you would upload a take or something, and then it would generate like three, five, seven-part harmonies and stuff. It was wild. You know, based off of just one recording. We say all of this just to really describe kind of what's out there today, and it's mind-boggling. And Rich, you brought it up earlier where you said, like, this is bleeding-edge technology. It's very new, hasn't been around for long at all, maybe a year-ish. I mean, I think I started seeing this really start to blow up like six, seven months ago. Tell me more about your view on this. Rich, you've dedicated your life to singing and being a vocal teacher. You've been doing this now for almost 20 years.

Rich Bozic:

25 years, actually. 25 years. 25. 25th anniversary. Right.

Justin Hochella:

I'm sorry. The Bozic Voice Studio is hitting 25 years. That's right. So yeah, like, how does this strike you? And I'm curious too: are any of your students talking about this or reacting to it?

Rich Bozic:

So far, I think that it hasn't hit the mainstream yet. Okay. Um, my students aren't talking about it really that much yet. Then again, I deal with people who really want to learn how to sing; they want to be able to do it themselves. They're not looking into this kind of technology. I bet if we were to talk to a lot of producers and people who are trying to build songs and music from the ground up themselves and are familiar with equipment, I bet more of those people probably have heard of these things and have dabbled. Yeah, I know pitch correction, auto-tune, all that business, Melodyne, all that stuff has already been in the zeitgeist with singers for a while, with varying opinions on that usage. But this is something different. I haven't had, even on the recording front, any client yet who's like, hey, can we try to change the sound of my voice completely with AI? Haven't had that come up yet. I'm waiting for that day to come, but nothing yet from the singer world that I'm hearing. I have been slowly putting out my feelers to some of my clients who are in other aspects of voice usage. So I have a client who does sync licensing stuff. And I asked her, what's going on in the sync world? How do they perceive AI in that world? Not only on the vocal front, but just song generation altogether. And she was saying that there are some distributors or agents or whoever handle that stuff who reject AI completely. But I know some of that infrastructure in that world is built on an old model. So I could see people resisting at first, but then eventually, as companies who are using sync-licensed music become aware of the possibility of, oh, we could just do this all in-house, we could just generate it, that could start to catch on. Just like in film, with all the AI stuff that's starting to come into making a film and all of the jobs that eliminates.

Justin Hochella:

It reminds me of, you mentioned auto-tune. I think we've talked about this on the podcast before, but when auto-tune first came out, people lost their minds. They thought it was the devil, and fast forward 25 or so years later, because I think it really started to hit around the late 90s, early 2000s, and now it's almost like, of course we're gonna have auto-tune on vocals. It's just part of vocal processing now; it's so standard. And it makes me cringe to kind of think about, like, where are we headed then with AI? I think that's the bigger fear with AI: people feeling like they're getting replaced or going to get replaced.

Rich Bozic:

Yeah, on a lot of fronts, not just the music world, obviously, in other areas as well. With music, I just keep coming back; I can't shake this idea that it's just such a human thing. It's an expression of human emotion and human thought and ideas, and it's just rough to think that one day all of that may be replaced with just derivative, AI-generated stuff. Because right now AI is all derivative from whatever knowledge base you give to it. I think a few weeks back there was a whistleblower who was on the ground floor, I guess, in helping develop generative AI, saying they're using intellectual property to build AI. Oh yeah. Kind of sounding the alarm bells, and the guy was found in his apartment dead, supposedly a suicide, after doing that. That's a whole nother thing, but conspiracy. Yeah, that's a whole nother thing. But if you listen to what he was saying, all kinds of things, you know, however you feel about copyright, intellectual property, all that stuff, all of that gets called into question as well. Where is all of the information coming from? How is it being used? But then with voices too, are we gonna see plug-in packages eventually where it's like, buy the Beyoncé and Steven Tyler vocal plug-in to sound like these people? Is that coming? Think about the people who had to record all the samples for that company you were talking about, ACE Studio.

Justin Hochella:

Yeah.

Rich Bozic:

They had to pay someone to generate the sound uh for all of that stuff to be able to be used.

Justin Hochella:

It strikes me as sort of a hedonistic pursuit. And what I mean by that, by hedonistic, is it's a really narrowly focused pursuit of pleasure. It's like someone just fixating on the end result or the product. And that's what AI is able to do. And it's really dangerous, I think, for human beings to have the capacity to exist in a hedonistic way, because it introduces an idea called the paradox of hedonism, also called the pleasure paradox, and it refers to the practical difficulties encountered in the pursuit of pleasure. For the hedonist, constant pleasure-seeking may not yield the most actual pleasure or happiness in the long term, when consciously pursuing pleasure interferes with experiencing it. So the more you pursue that thing, the less pleasurable it becomes. You know, AI vocals might seem really cool now, but the further we go with it, the less it's gonna hit. If that makes sense, it's almost like the drug isn't having the same effect anymore, and it kind of becomes hollow and meaningless. And I think the reason that, for musicians, whether you're playing an instrument or you're singing, the reason it's such a visceral experience is because you are doing it. You are literally feeling it in your body, especially with singing. It is your body. Your body is the instrument. Yeah. And there's something spiritual about that. It's almost like, you know, this is what my soul sounds like when I sing. When you remove that, you're sort of removing the soul from the music, maybe, for argument's sake; you know, maybe there's an argument against that. But my biggest concern, the thing that makes me sad, is that people are oftentimes short-sighted because they're hedonistic. They're like, I just want the end result, I want the pleasure out of it, the quick-fix kind of thing, to be happy in the short term. They don't realize that in the long term, they're basically going to feel hollow. And that's something I think AI can never replicate, which is the experience. AI can never have the experience.

Rich Bozic:

Yeah, we say that now.

Justin Hochella:

Until the brain chip comes out and can implant memories like in Blade Runner, or you know, until that date, you know, AI cannot have the experience for you.

Rich Bozic:

And why would we want it to? And there was something else you talked about, like as we were prepping for this episode, that I want you to talk about for our listeners. The concept of actively having the struggle and the work element of things. Can you talk a little bit about that?

Justin Hochella:

Yeah, I think that that is part of the experience: the experience of toil and failure and frustration and all of those things that shape who we are. So a granular part of the experience is the suffering. I think that ability to suffer is quintessentially human. Suffering in the pursuit of something that you love is a quintessentially human endeavor. I mean, think about your own experience as a music student. You know, I was a music student. There's a lot of suffering there, you know, late nights studying or practicing, having to do performances and stuff and be graded on it. It was a lot of pressure, a lot of difficulty. And those experiences shaped who we are as people and as musicians. And I think that if you remove that, to me, it goes back to that flattening of culture a little bit. Because if people aren't having experiences, which are very much random and unpredictable, that can generate just interesting things, and that to me is a huge tenet of building out cultures: people just having the human experience in their own unique way. And if you can just go to a computer and type up, like, I want this perfect thing, you completely cut out all of that toil and experience and so forth. And it's another element of humanity that we as human beings learn best through experience. And so one might argue then that if you're able to do all this stuff without learning any of it, you're not really growing as a person.

Rich Bozic:

This was an interesting thing to delve into again. I knew it, I knew AI would be coming back for us again, and I have a feeling we're gonna be back again, Justin, talking about this again.

Justin Hochella:

Yeah, I mean, there's gonna be something else. They've come for our instruments, and now they've come for our voice, and so what's left?

Rich Bozic:

Yeah, well, the music creation, they've come for the music creation with Suno and all those kinds of sites. Yeah, what is left? Maybe there's nothing left. I guess live performance.

Justin Hochella:

Yeah, robots performing. Who knows? Holographics. I don't know. It's always robots. This was a great conversation. Thank you, Rich. I know we've kicked this around, you know, catching up outside of the podcast. It was good to sort of capture some of this conversation on the podcast itself.

Rich Bozic:

Everyone has such varying opinions on these things and different ideas of things. What are some of your opinions, guys? Feel free to leave some comments on social media and let us know what you think, or shoot us a message, or just hang around and listen to the next episode. Thank you, Justin, for joining me today.

Justin Hochella:

Yeah, thank you, Rich. This was great. Really great to get your perspective as a singer and uh a voice teacher. Really good conversation, and I look forward to continuing this with our audience. So yeah, like you said, please let us know what you think of this episode. Let us know what your thoughts are on AI vocals. And with that, we'll wrap this up here. Thank you everyone for listening, and we'll catch you in the next episode.

Rich Bozic:

Thank you for listening to the Face Your Ears podcast. If you found this episode entertaining and informative, be sure to share and leave a review. Also, check us out on social media and be sure to like, comment, and subscribe. For more information related to our podcast, head on over to www.faceyourears.com. A link can be found in the show description. And remember, whether you're a musician just starting out or even a professional looking to...
