ACP: The Amazon Connect Podcast

15: Quality Monitoring

Episode 15


Deep Dive into Quality Monitoring with Amazon Connect. Join Alex and Tom as they delve into the latest updates in quality monitoring for contact centers.

Discussing Amazon Connect's new metrics, APIs, and advanced quality management tools, they explore the tech that can transform the evaluation process, ensuring better customer interactions.

Learn about the potential of AI in quality monitoring, from real-time feedback to full automation, and discover how these innovations can free up invaluable human resources for more impactful roles. Whether you’re managing a large team or a small group, this episode provides practical insights on improving your contact center's performance. 

 

00:00 Introduction and Catching Up

01:14 New Metrics in Amazon Connect

02:23 API Updates and Flow Analytics

03:41 Introduction to Quality Monitoring

04:24 Key Components of Quality Monitoring

10:13 Challenges and Solutions in Quality Monitoring

14:52 Building Post-Call Surveys

15:46 Leveraging Speech Analytics

17:45 AI-Powered Performance Evaluation

20:30 Automating Quality Management

23:46 Real-Time Coaching and Compliance

26:35 Cost Considerations and Future Outlook

28:51 Conclusion and Final Thoughts

Find out more about CloudInteract at cloudinteract.io.

Tom Morgan:

It's time for another ACP, and I'm joined in person by Alex. Alex, good to see you. It's been a little while since we've been in person, I feel like, but yeah, always good to get together. And I know you've got some interesting news items to go through, so without any further ado, let's get straight into the things that you've spotted.

Alex Baker:

Thank you. Yeah, I'm always slightly unsure now where we're at in terms of the release date of the podcast compared to when the news actually came out — our place in a well-oiled machine.

Tom Morgan:

Careering around the interwebs,

Alex Baker:

But, you know, I guess if we just cover them off at some point, then at least we've done that. The ones I was going to mention today: the first is that there are some new metrics available in the historical metrics page. There are probably too many to go through one by one, so I'd encourage everyone to take a look at the Amazon Connect admin documentation, but for a couple of examples we've got abandonment rate — if I can say it right — which is one of the first ones mentioned, and agent non-response without customer abandons. Those first couple, I'm sure, will be quite useful for a lot of people.

Tom Morgan:

Agent non-response without customer abandon. So why is that — I stayed on the line, waiting and waiting, and no one's picked it up?

Alex Baker:

That one, as I understand it, is trying to detect work avoidance, perhaps, from agents. So the agent hasn't responded and answered the call, but you can also tell that the customer hasn't abandoned the call, whereas I think before it was a bit of a grey area. But yeah, now it should be quite straightforward to tell that it was the agent who ignored a call they should have been answering.

Tom Morgan:

Got it. That's good. Oh, that's helpful.

Alex Baker:

We've also got some new APIs for Amazon Connect Cases, and they will allow you to upload attachments, check the details of attachments, and delete attachments on cases. And then we have one that has been in preview for a while, and it's nice to see it coming into general availability: some new metrics and reporting around flow analytics. That should allow you to measure things like time spent in contact flows and flow outcomes, and just keep an eye on trends and perhaps emerging issues within your contact flows.

Tom Morgan:

That's interesting. So almost like the performance of your contact center configuration, if you like — how your different flows are set up, whether some are way longer than others, and stuff like that.

Alex Baker:

Yeah, absolutely. Beforehand you could do it if you built something yourself, but it's quite nice that it's come into the Amazon Connect portal, so it's just there at your fingertips if you haven't yet developed a custom reporting solution.

Tom Morgan:

Well, that's a good roundup. It's good to see things progressing — it's always good to see the product evolving and new things being worked on, especially things like the APIs, because that really helps the ecosystem, right? It helps people build new things, especially for something like Cases. Yeah, that's good. All right. So this week we're going to be talking all about quality monitoring. But let's start off with: what is quality monitoring? Because I'm not sure I fully understand.

Alex Baker:

Okay. So I guess quality monitoring you might describe as the process of evaluating and assessing the performance of your agents, and how that relates to interactions with your customers. So effectively being able to put a quality score on how the conversations between your agents and your customers are going.

Tom Morgan:

Okay. So it's less about how efficient your agents are and more about the quality of the interaction they're having with your customers. Yeah. Okay.

Alex Baker:

The next thing I was going to mention was some key components of quality monitoring, and I was going to come on to KPIs and metrics. As you mentioned, you might have metrics like how long did it take for a call to be answered, which is important, but possibly not within the quality monitoring context we're talking about. Different companies will adhere to different KPIs — key performance indicators — and in the quality monitoring context these are the things you would measure the agent against. Traditionally things like average handle time, first call resolution. Although arguably, and I think we've spoken about this before, average handle time is a bit of a blunt instrument by which to measure your agents, because does a short call mean that it's a good call? Possibly not, in a lot of situations. Some of the key components of a quality monitoring process would be call recordings, and maybe also chat transcripts or other records of an interaction, maybe an email thread — you might run quality monitoring processes on all of those. QM could be done in real time, but typically you'd find that a supervisor or a quality analyst would be carrying this out after the interaction has happened.

Tom Morgan:

Okay. Almost like a sample — a sample of the day, or a sample of the week, or a sample of a person, or...

Alex Baker:

Yeah. What you might find is that the QA role has their own KPIs by which they're measured — they might have to listen to and score maybe four calls per agent per month. And that might vary depending on how many agents you have, how many QAs you have, and the types of KPIs you have against them. But yeah, the key thing you need is the record of the interaction to look or listen through, a call recording probably being the one most people associate with it. You might also have what you'd call a scorecard or an evaluation form, which is an important component of the process, and a business might have one or many different forms. The quality analyst type role would be able to listen to an interaction and, at the same time, work through the form, which would typically contain various sections with questions about aspects of the interaction.

Tom Morgan:

Right. So this is taking the fluffy conversation — the audio, the chat transcript — and codifying it, essentially putting metrics around it, turning it into a common set of key metrics. For things like comparison, or scoring, or...

Alex Baker:

Yeah, exactly that. Making it, as you've implied, structured. So if you're measuring a person, you want to measure all your other people by that same set of criteria, so it's a fair and uniform evaluation.

Tom Morgan:

Got it. And that helps with training, I guess, as well, and all those sorts of things.

Alex Baker:

Yeah, absolutely. And that was going to be something I was going to come on to. You're right — there are the technological aspects of it, things like your call recordings and the forms. But if you're going to bother to do this and score the forms, you would expect there to be some kind of feedback loop involved as well. You want to take the feedback and act upon it somehow. So perhaps you want to highlight some great interactions and praise agents who have driven a particularly good outcome for a customer, or — I've seen it where particularly high-scoring calls have then been used as a benchmark for training. So you might say that Tom has done a great call, and we're going to train some of our other agents on the responses he's given in a particular interaction. There were a couple of other points to talk about on the feedback loop, but in terms of the technological items for gathering feedback, you might want to consider things like post-call surveys. If you want to seek the feedback of the customer, rather than the opinion of the QA analyst doing the scoring, then you need to do that somehow, and that might be by offering your customer a survey at the end of a call — you know the kind of thing: from one to five, how happy were you with the interaction?

Tom Morgan:

Yep. That's just another set of data that feeds into the overall process, right? Because the customer might be perfectly happy, but they could have been told a load of rubbish.

Alex Baker:

Yeah, exactly. Around the feedback loop — we mentioned the highlighting of great interactions, but equally, if you identify some knowledge gaps, say an agent isn't saying a certain compliance phrase that they absolutely must say for legal reasons, you can highlight that kind of thing, give a bit of coaching, and make sure they're adhering to the right processes going forward. It would also be quite common within the quality team to have calibration sessions as well, where you get the quality team listening to a bunch of calls that have already been evaluated and checking the homework, effectively — making sure that Tom is marking calls in the same way that Alex is marking calls, so it's fair across your agent base.

Tom Morgan:

Yep, that makes sense. It all sounds important, and it sounds like a thing you should be doing, especially if you've got lots of different agents with different levels of experience and all the rest of it. But it also sounds like quite a lot of work — like a job for an entire team of extra people, a set of checkers, if you like. So yeah, it sounds like quite a lot.

Alex Baker:

Yeah. We're talking to customers at the moment that have entire teams of people dedicated to this, as you might imagine. Because you've got agents whose sole job, pretty much, is taking customer service calls throughout the day, and if you're going to evaluate, say, four or five calls per month for every single agent — you're right, it takes quite a sizeable base of personnel to do that.
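
To put rough numbers on that workload, here's a quick back-of-the-envelope calculation. The four evaluations per agent per month comes from the conversation; the team size and the time per evaluation are illustrative assumptions only.

```python
# Rough sizing of a manual QA workload (illustrative assumptions only).
agents = 300                      # assumed contact center size
evals_per_agent_per_month = 4     # figure mentioned in the conversation
minutes_per_evaluation = 30       # assumed: listen to the call plus complete the scorecard

total_minutes = agents * evals_per_agent_per_month * minutes_per_evaluation
qa_hours_per_month = total_minutes / 60
analysts_needed = qa_hours_per_month / 120   # assuming ~120 productive evaluation hours per QA per month

print(f"{qa_hours_per_month:.0f} QA hours/month -> roughly {analysts_needed:.1f} full-time analysts")
```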

Tom Morgan:

Actually, I suppose those people have to be quite good as well. They have to be quite knowledgeable — almost very good agents themselves. They have to know all about the process and the systems; they have to know when the answers being given are right or wrong, appropriate or inappropriate. So finding those people isn't easy either, I imagine.

Alex Baker:

Yeah, that's a really good point. And I think it's also a consideration when you're building out those evaluation forms we talked about: do you have some information against each question to explain what it is you want to get from it, just so that everyone can look at it and there's no ambiguity around what you're trying to discover and trying to score? The other thing to mention is that in legacy-type environments, quite often you might find that your call recordings are in one system and the evaluation form is in a different system, so the QA might be doing a bit of application switching, and possibly a bit of copying and pasting between different systems as well.

Tom Morgan:

Hunting down the call recording, all of that stuff. Yes. Okay, so this is the point at which we come in and say Amazon Connect solves all of this, somehow. So where are we at with Amazon Connect? Because they have some quality management stuff built in, don't they — or at least as a purchasable add-on. So yeah, what does Connect have?

Alex Baker:

Yep. So Connect — the fundamentals, those technological components that we mentioned — includes most of them out of the box, or makes them easy to build yourself. For example, call recording: we know that's native within Connect. You can just turn it on, it's there, and there's no extra cost for it. The call recording search portal in Connect gives you quite a flexible way of searching for calls — for example, you can search for calls by agent name, by time and date, by customer phone number, by call duration, and many others. We've seen it with a customer recently where the QAs are supposed to be evaluating calls that are over a certain duration. They figured that if a call is only 20 seconds long, for example, there's not really much worth listening to and evaluating — the reason it's so short is that there's no valuable interaction there. But what they can't do in their current system is filter for calls with a certain duration. So immediately they're pleased with the idea that in Connect you can search for calls between, say, 30 seconds and 10 minutes.
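
For anyone wanting to script that kind of search rather than use the portal, here's a rough sketch using the SearchContacts API via boto3, filtering on duration client-side. The instance ID is a placeholder, and the exact parameter and response field names should be checked against the current SDK documentation.

```python
# Minimal sketch: find voice contacts from the last day and keep only those whose
# duration falls between 30 seconds and 10 minutes. Field names assume the boto3
# SearchContacts API; verify against the SDK docs before relying on this.
from datetime import datetime, timedelta, timezone
import boto3

connect = boto3.client("connect")
INSTANCE_ID = "your-connect-instance-id"  # placeholder

end = datetime.now(timezone.utc)
start = end - timedelta(days=1)

response = connect.search_contacts(
    InstanceId=INSTANCE_ID,
    TimeRange={"Type": "INITIATION_TIMESTAMP", "StartTime": start, "EndTime": end},
    SearchCriteria={"Channels": ["VOICE"]},
    MaxResults=100,
)

for contact in response.get("Contacts", []):
    initiated = contact.get("InitiationTimestamp")
    disconnected = contact.get("DisconnectTimestamp")
    if not (initiated and disconnected):
        continue
    duration_seconds = (disconnected - initiated).total_seconds()
    if 30 <= duration_seconds <= 600:
        print(contact["Id"], f"{duration_seconds:.0f}s")
```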

Tom Morgan:

Yes.

Alex Baker:

The next point is that evaluation forms are native within Connect. So you can build out your own forms with your sections and your questions within them, plus the instructions to the evaluator that we mentioned — probably quite important for that uniform approach to the scoring. And the scoring itself: you can add scores to the questions, and you can have weighting around the scoring.
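
As a purely illustrative sketch of what weighted scoring with an automatic-fail question might look like — this is not Connect's own scoring engine, just the idea expressed in code; Alex explains the auto-fail weighting a little further on:

```python
# Illustrative only: weighted scoring with an auto-fail question.
def score_evaluation(answers):
    """answers: list of dicts with 'score' (0 or 1), 'weight' (percent), 'auto_fail' flag."""
    # Any failed auto-fail question (e.g. a missed compliance phrase) fails the whole form.
    if any(a["auto_fail"] and a["score"] == 0 for a in answers):
        return 0.0
    total_weight = sum(a["weight"] for a in answers)
    weighted = sum(a["score"] * a["weight"] for a in answers)
    return round(100 * weighted / total_weight, 1)

answers = [
    {"question": "Greeted the customer",         "score": 1, "weight": 20, "auto_fail": False},
    {"question": "Resolved the query",            "score": 1, "weight": 50, "auto_fail": False},
    {"question": "Read the compliance statement", "score": 1, "weight": 30, "auto_fail": True},
]
print(score_evaluation(answers))  # 100.0 - set the last score to 0 and the whole form fails
```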

Tom Morgan:

Okay, that's nice. So there's a place for evaluators to go and start a new form — start a new evaluation.

Alex Baker:

And it's right there in the UI. If you have evaluations enabled and you go and search for a call, it's right there in the call recording search portal, so you can just click straight in.

Tom Morgan:

So you can go from the recording and immediately go and evaluate it. That's nice.

Alex Baker:

And on the scoring — just to explain what I mean about weighted scoring — you might have one question which is particularly important, something around compliance that the agent absolutely must say. You can have it weighted such that if that question is a fail, it will fail the whole form. Despite the agent doing quite well on the rest of the form, because they didn't do that really key thing, it's a fail, and it's flagged immediately. The next thing is the post-call surveys we touched upon. If you want to go down the route of offering a post-call survey to customers, you can build it quite easily in Connect. Then, with a bit of integration — perhaps using some Lambda code — you can link the survey results together with the form output, maybe do a bit of custom reporting on it, and get a feel, all in one place, for how one affects the other. Just taking the quality form in isolation maybe doesn't give you that full picture of what the customer has fed back about the interaction afterwards.
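
A minimal sketch of the Lambda integration Alex describes: a function invoked from a post-call survey flow that stores the customer's score against the contact ID, so it can later be joined with that contact's evaluation results. The DynamoDB table name and the surveyScore parameter are hypothetical, and the exact Connect Lambda event shape should be verified in the documentation.

```python
# Minimal sketch: Lambda invoked from a Connect survey flow. Stores the survey
# score keyed by contact ID so it can later be joined with evaluation output.
# Table name and parameter names are assumptions, not part of Connect itself.
import boto3

table = boto3.resource("dynamodb").Table("post-call-surveys")  # hypothetical table

def handler(event, context):
    # Connect's "Invoke AWS Lambda function" block passes contact data plus any
    # parameters configured in the flow; check the documented event shape.
    contact_data = event["Details"]["ContactData"]
    params = event["Details"]["Parameters"]

    table.put_item(Item={
        "contactId": contact_data["ContactId"],
        "surveyScore": int(params.get("surveyScore", 0)),  # e.g. 1-5 captured via DTMF
    })
    # String values returned here can be read back in the flow as Lambda response attributes.
    return {"surveyStored": "true"}
```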

Tom Morgan:

Especially where they're very different. I suppose those are the interesting ones — where the thoughts of the evaluator are very different from the thoughts of the customer.

Alex Baker:

Yeah, or the agent thought it went well, but the customer then gave negative feedback. You definitely want to follow up on that kind of thing. The next thing to mention: because of the speech analytics capability that Amazon Connect Contact Lens brings, you might actually want to move away from the post-call survey approach to getting customer feedback, or at least supplement it with the speech analytics output. Because Contact Lens allows you to do things like sentiment analysis, you could include the overall customer sentiment throughout the call, or you could look at sentiment trends — so, for example, a call that started off with quite a negative sentiment, but the agent managed to turn it around to a positive sentiment. That sounds like a really good outcome. Or the reverse of that — obviously not so good.

Tom Morgan:

Yup.

Alex Baker:

You can also set up rules — so maybe the compliance phrases that we mentioned, or other specific words or phrases being mentioned in an interaction. Maybe a customer mentions that they want to make a complaint, for example. You could automatically flag that with a rule, and you can take an action off the back of the rule.
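
For the curious, here's a rough sketch of creating that kind of rule programmatically with the Connect Rules API via boto3 (rules can equally be built in the admin UI). The instance ID, rule name, and especially the Function expression are placeholders — the rules-function language is documented in the Connect API reference.

```python
# Rough sketch: a Contact Lens rule that categorises contacts where the customer
# asks to make a complaint and emits an EventBridge event for downstream handling.
# Names and the Function expression below are placeholders, not working values.
import boto3

connect = boto3.client("connect")

connect.create_rule(
    InstanceId="your-connect-instance-id",    # placeholder
    Name="customer-mentions-complaint",       # hypothetical rule name
    TriggerEventSource={"EventSourceName": "OnPostCallAnalysisAvailable"},
    # Function takes a rules-function expression describing the match (e.g. the
    # transcript containing "make a complaint"); see the Connect Rules docs for
    # the exact language - this string is a placeholder only.
    Function='<rules function expression matching "make a complaint">',
    Actions=[
        {"ActionType": "ASSIGN_CONTACT_CATEGORY", "AssignContactCategoryAction": {}},
        {"ActionType": "GENERATE_EVENTBRIDGE_EVENT",
         "EventBridgeAction": {"Name": "ComplaintMentioned"}},
    ],
    PublishStatus="PUBLISHED",
)
```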

Tom Morgan:

This is quite interesting, because it's the first step towards something quite enticing: we don't have to bother our customers any more, because we can use Contact Lens to get the essence of what they were feeling, what they were thinking, and what their general thoughts were. And I think that's interesting to lots of people. I wonder — I don't know what you think — but maybe customers are doing a little bit of both at the moment, doing a bit of comparison: continuing to ask the customer and then seeing what Contact Lens says, almost validating the Contact Lens output before they're ready to completely take away asking the customer and rely on Contact Lens.

Alex Baker:

Yeah, that's a really interesting point, isn't it — where do you see the demarcation between what the AI side of things can do versus what the agent can do? And of course, mentioning AI, you can do some other stuff, some of which has been introduced recently. There's a preview running at the moment, for example, of performance evaluation using gen AI to give you suggestions about what the answer to a question might be. So as you're running through your evaluation form, for some of the questions — and I think you can add this to up to five questions in a form — you get an "ask AI" button, and it will give you a gen-AI-powered suggestion for that particular answer, along with a bit of context and justification for why that answer has been suggested.

Tom Morgan:

So based on the transcript, and the sentiment, all those things. That's interesting.

Alex Baker:

And this leans on those evaluator instructions for the question that we mentioned — it's these that the gen AI suggestion is based on. So you need to be quite careful to include well-structured instructions there. But yeah, it should take a look at the instructions and evaluate, based on the transcript, what it thinks the answer to that question is, which is super cool.

Tom Morgan:

It is, and maybe that's the future. If you can codify your evaluator instructions really well, to the point where you don't need to be an experienced evaluator to undertake them, is this a thing you could hand off to AI, in combination with Contact Lens? The reason for saying that is not just that you don't have to maintain a bunch of evaluators, but that you could actually evaluate a hundred percent of your calls. The coverage at the moment, I imagine, is — I don't know — ten percent? I don't know what's realistic for a company, but it's not a hundred percent, is it? So that's an interesting idea.

Alex Baker:

The exciting application of it for me is that — well, it's quite a scattergun approach traditionally, isn't it? You're picking calls, quite often basically at random. Whereas with the Contact Lens analysis and some of the other parts of the toolset, you can be a lot more focused about what it is the human QA analysts actually spend their time looking at.

Tom Morgan:

And without any of the unconscious bias that I'm sure feeds into these systems. Unless they're really well controlled — unless the evaluators are given calls entirely at random by some algorithm — it's going to be very hard to avoid the unconscious bias of, you know, avoiding your friends, picking on people you don't like, all that stuff, whether consciously or unconsciously. So actually, having AI do it is good for those reasons as well.

Alex Baker:

Yeah, definitely. The other thing to mention, around the approach towards full automation, is that you can use your Contact Lens analytics output, combined with the set of rules that we mentioned, to have your form questions automatically completed. So, aside from the gen AI suggestion — which gives you an idea of what the gen AI engine thinks — if you set the rules carefully enough it will just automatically answer the questions. The way we've demoed it in the past is with something like the compliance phrase one. If you've got something that's really important — and we mentioned it in the context of the scoring weightings — one question that absolutely has to happen, like an FCA compliance phrase or something, you can set up a Contact Lens rule to detect whether that was read by the agent in a particular part of the call. And if it wasn't, you can automatically have that question scored. It's not even a suggestion: Contact Lens does it automatically, and you can of course override it if you want. But I think that's really neat for taking the heavy lifting out of those ones where it's very cut and dried — they didn't do this, and it's absolutely vital to have it done.

Tom Morgan:

Yeah, absolutely. And I think the combination of those two things together gives you a very comprehensive way of evaluating all your calls, which is really interesting for the coverage. I think there's value as well in taking these evaluators, who are very experienced in what good calls look like, and actually freeing them from what they're doing at the moment, so they can go and be really good trainers — they can be the kind of top-tier call handlers as well. And I don't know where this is going to end up, because at the moment this is post-call analysis: you're evaluating things that have already happened. Let's take it to the next level — the compliance phrase one is a really good example, right, because it's so cut and dried: either it was said or it wasn't. For things like the compliance phrase, let's put some rules around it and say not only is it required, it's required in the first five minutes. So then what happens after five minutes, where you've got a hundred percent coverage? Something could maybe just barge into the call and say, sorry, we've got to stop this call because we're missing a compliance phrase. And you definitely could do that now, if you've got real-time —

Alex Baker:

Contact Lens analysis. If you have your rules set up for that compliance phrase type thing, and it should be at the start of the call, then there's no reason — if the agent didn't say it at the start of the call — that you couldn't have a Lambda function or something send them an automated message to say, by the way, you haven't done your compliance phrase, make sure you mention it.
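
One way that could hang together, sketched under some assumptions: a real-time Contact Lens rule configured with a GENERATE_EVENTBRIDGE_EVENT action (as in the earlier rule example) triggers an EventBridge rule, which invokes a Lambda that publishes a nudge to an SNS topic that a supervisor dashboard or agent-notification integration subscribes to. The topic ARN and the event detail fields are assumptions — verify the actual event shape Connect emits.

```python
# Rough sketch: Lambda invoked via EventBridge when a real-time Contact Lens rule
# fires (e.g. compliance phrase not detected early in the call). Publishes a
# reminder to an SNS topic. Topic ARN and event detail fields are assumptions.
import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:eu-west-2:123456789012:qm-alerts"  # hypothetical topic

def handler(event, context):
    detail = event.get("detail", {})              # verify the shape Connect emits
    contact_id = detail.get("contactId", "unknown")
    rule_name = detail.get("ruleName", "unknown rule")

    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="Compliance phrase missing",
        Message=json.dumps({
            "contactId": contact_id,
            "rule": rule_name,
            "action": "Remind the agent to read the compliance statement.",
        }),
    )
```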

Tom Morgan:

Yeah. And for some industries it's so important that they would probably be okay with the hit to the customer experience of an automated barge-in — a robotic voice telling you, we're stopping this call because there's no compliance statement, so we're going to read it out to you now.

Alex Baker:

If it's financial services or pharma — all sorts of potential use cases for that, definitely.

Tom Morgan:

Yeah, it's really interesting. And there are other applications as well. We touched a little bit on training — training and coaching. This is a great way of doing it, getting that feedback early and fast. To have all of your calls evaluated and get the results in real time, before the next call, is a great way to do coaching. It's like having somebody experienced sitting with you and debriefing you after every call — and maybe during the call, maybe that's what's coming next, right? This real-time evaluation doesn't always need to be after the call. Real-time coaching — why not? Like you say, we've got the real-time transcription, we've got the set of evaluation rules, we know what good looks like. We could do it in real time. Probably not a solution we could build on the spot, but why not, right?

Alex Baker:

Maybe we'll come back with a follow-up podcast once we've figured something out. There are so many good use cases for it, there really are. I'm really excited about it. I think you mentioned it — some people might see it as "maybe we can get rid of some of our quality analysts", and unfortunately that might be something I could foresee. But I think the opposite is true, in that it gives your quality analysts the toolset to really use their time well.

Tom Morgan:

Yeah, I think it's about freeing them up from the drudgery of the work. Evaluating other people's calls doesn't sound like the most fun thing; it's not a great jumping-off point for an experienced agent to be told, now you get to evaluate less experienced people — that doesn't sound great, right? Whereas being given the challenge of dealing with the trickiest or most important customers, being able to train new people up, looking at the process and improving it — those feel like valuable things to do. So I think this is good, and it's the right use of AI: freeing people up from repeatable work.

Alex Baker:

I think you mentioned it earlier, but it's interesting — and we've got some insight into it from our customers — is that level of trust there in the full AI tools? To say, let's let it deal with a hundred percent of interactions?

Tom Morgan:

I think we're on that journey; I'm not sure we're there yet. What we're seeing, and what I think we'll continue to see, is people dipping their toe in by doing it alongside, using their current process to evaluate the new one and see if it's good enough. And it might not be, right now — it might be next year that it is, or the year after that it's better. At some point — and every company will be different — there's the trade-off of the spend on your non-AI way of doing it, but also the spend of the AI way of doing it. It's not free, of course.

Alex Baker:

Yeah, we haven't really mentioned cost, and of course there is some cost to it. We've mentioned a couple of components that are optional and have a cost associated. There's the Contact Lens analytics — that, like most of the Connect billing, is consumption based, so if you turn it on it costs a few cents per minute per transcribed and analysed contact. Then you have the evaluation forms themselves, and this is where you start to incur a little bit more cost — so we'd always suggest doing a smaller proof of concept around this kind of thing and really nailing down the business case for it. The quality forms are charged per evaluated agent: if you use a QM form on an agent in a month, it will cost $12. So that can start to mount up. It's similar to our chat around AI, where you've got things like Amazon Q in Connect — it slightly moves away from that consumption-based model. It's kind of consumption based, but it's more of a one-time hit of a bigger chunk of cash rather than just small per-minute amounts.
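
As a rough illustration of how those two charges combine — the $12 per evaluated agent per month comes from the conversation, while the per-minute rate and the volumes are placeholder assumptions; check current Amazon Connect pricing for real numbers:

```python
# Back-of-the-envelope monthly cost for Contact Lens analytics plus evaluations.
# Per-minute rate and volumes are assumptions; $12/evaluated agent/month is the
# figure mentioned in the conversation.
evaluated_agents = 100
analysed_minutes_per_month = evaluated_agents * 6_000   # assumed ~6,000 analysed minutes per agent
contact_lens_rate_per_min = 0.015                       # placeholder for "a few cents per minute"
evaluation_rate_per_agent = 12.00                       # per evaluated agent per month

contact_lens_cost = analysed_minutes_per_month * contact_lens_rate_per_min
evaluation_cost = evaluated_agents * evaluation_rate_per_agent

print(f"Contact Lens: ${contact_lens_cost:,.2f}/month")
print(f"Evaluations:  ${evaluation_cost:,.2f}/month")
print(f"Total:        ${contact_lens_cost + evaluation_cost:,.2f}/month")
```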

Tom Morgan:

The value gets better the more you use it, I suppose — it's one of those. Well, lots to think about for quality management people. But it's exciting as well, because I think for a lot of people quality management is a thing that big companies do — it's unfeasible to think about when you've got two or three agents, everybody's maxed out, and you haven't got the money to hire somebody just to listen to all their calls. I feel like some of this newer, AI-driven stuff is going to be really good for those people, because it allows them to do things they can't do today simply because they don't have the resource. So I think that's exciting for them as well.

Alex Baker:

Yeah, totally agree. Cool. Exciting area, and lots of scope to make improvements with the tools that you get with Connect.

Tom Morgan:

Absolutely. It'll be fascinating to come back in 12 months' time and see what the state of QM is versus today, where it's still quite manual but we've got the beginnings of automation. Give it 12 months and it'll be really interesting to see where it's at. Agreed. Well, we could talk all day, but I think it's time to bring this episode to an end. Thank you very much, Alex, for your expertise around quality management, which I knew nothing about. And thank you for listening. Today we discussed quality monitoring in the contact center. Be sure to subscribe in your favorite podcast player so you won't miss an episode, and whilst you're there, we'd love it if you'd rate and review us. And as a new podcast, if you have colleagues you think would benefit from this content, please let them know. To find out more about how CloudInteract can help you on your contact center journey, visit cloudinteract.io. We're wrapping this call up now, and we'll connect with you next time.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.


AWS Podcast

Amazon Web Services