ACP: The Amazon Connect Podcast
This is The Amazon Connect Podcast - the show that focuses on Amazon Connect and related technologies. Find out more about CloudInteract at cloudinteract.io.
On ACP our experts meet once every 2 weeks to discuss the latest news and deep dive into topics such as CRM integration, AI, Scheduling & Forecasting, Training & Development and lots more.
If you're a contact centre supervisor, a service owner, an IT admin, or an AWS developer, there's something for you here. Increase your knowledge and understanding of Amazon's popular customer service application.
We'd love to answer your Amazon Connect questions on the show so if you have something you'd like our experts to discuss email us at podcast@cloudinteract.io.
ACP: The Amazon Connect Podcast is created and produced by CloudInteract and is not affiliated in any way with Amazon Connect.
14: Nopaque
In this episode of ACP, we are joined by Phil Smith and Steve Mew from Nopaque. They dive deep into their journey through cloud computing, the inception of Nopaque, and how they're tackling challenges in telecommunication testing with innovative tools. They discuss the significance of IVR mapping, integration of AI, and the future of proactive customer experience testing. Tune in to hear about their unique approaches, challenges they've overcome, and their vision for the future.
00:00 Introduction and Guest Welcome
01:07 Phil's Journey to Nopaque
05:33 Steve's Career Path
09:35 The Birth of Nopaque
12:40 Nopaque's Unique Offering
15:45 IVR Mapping and Challenges
20:34 AI Integration and Future Plans
28:12 Closing Remarks and Future Episodes
Find out more about CloudInteract at cloudinteract.io.
It's time for another episode of ACP, and we are joined this week by Phil Smith and Steve Mew, both from Nopaque. Gentlemen, hello. Alex is also with me as usual. Alex, I'm very glad you're here, 'cause you know quite a bit more than I do at this point in time. So I'm hoping to learn, and I know you've already spoken with these fine folk a few times before.
Alex Baker: Yeah. Hi guys. I'm pleased to have you on. And I guess I don't know that much about Nopaque. So really looking forward to hearing a bit more in the next half an hour.
Tom Morgan: Let's dive in then. So Phil, should we start with you? What's the kind of background that's led you to being a part of Nopaque?
Phil Smith: So I guess, around the middle of my career, I started working in more senior roles. I went from networking into cloud with my good compadre Steve; he gave me my first real job in cloud computing. From there I progressed into a CTA position with ECS, who are now GlobalLogic. While we were there, the Amazon Connect platform was released, and everything fell into place at the right time. I was able to work with a team of about 30 or 40 people in the end to create a consultancy offering, and thanks to the great network they had across financial services, one of the first implementations we did was with RBS, now NatWest, implementing Amazon Connect for their credit card lines. That was about a thousand agents. Going on from there, we won business across multiple other financial services institutions, and it meant we got into design, build, deployment and that kind of stuff, so it was really good. Then we started talking to Barclays. They were already looking at the product; myself and the team went in and started to show them different types of architectures, different things they could achieve with it. And somewhere between then and me asking to be the engineering lead for them, COVID hit, and so they jumped onto the platform and migrated about 35,000 agents in the space of three weeks, because ultimately, with COVID and everybody having to stay at home, they couldn't really take calls and serve customers. In a roundabout way, having been involved in convincing them earlier that it was the right product, the head of contact center reached out and spoke to me, and I went and did around two years, I think it was, with the team. I was the head of engineering for the Amazon Connect implementation and the contact centers team in general. So a lot of winding down of old technology. While we were there, we implemented video using Chime, which was obviously quite related to what we were doing. We also implemented chat, and Voice ID across all of the calls and the voice imprints they had, and we migrated out call recordings and stuff like that. So a lot of the periphery things around Amazon Connect that are needed as well, which was quite good. While I was there, I discovered that the testing of anything you change is very important. There were a couple of minor outages; I wouldn't say they were careless changes, but it's a complex system, so things go amiss now and then. And we found we didn't really have the capacity to do as much testing as we needed, and I felt the products on offer weren't really very scalable. So it came about: right, I'm sure we could do this another way. And that's what brought me to Nopaque, which is now a company we've brought into existence to solve that challenge and make the testing of these platforms more flexible and on demand.
Alex Baker: Just quickly on Barclays, Phil. Do you feel like COVID accelerated the move to Connect? It did for loads of people, I'm sure. Were Barclays about to move to it pretty quickly anyway, or was it more of a longer-term thing? What are your thoughts?
Phil Smith: I think they would have been able to spend a little bit more time architecting if we'd not had that kind of emergency, having to get contact centers that could support remote workers. There wasn't the option to do that for many, many organizations, including Barclays; they just didn't have the technology. So I think they would have moved anyway, but it might have been done in a slightly more iterative way, rather than everything all at once and then having to go backwards and improve parts of it.
Alex Baker: Yeah.
Tom Morgan: Yes, such an interesting origin story. That chimes with what we've heard; we've obviously talked to other people as well, and there's a common theme of people doing other things, then getting a little bit involved, seeing what Amazon Connect is doing, and thinking: actually, this is different and exciting, I need to jump on this ship because it's going somewhere. So it's interesting that you had that same path. And I love what you said, that sentence about testing being important; it probably hides quite a few stories we should try not to get into. So Steve, is it the same? How about you, how have you ended up here?
Steve Mew: So my journey is quite a bit different. I actually started out as a pretty low-level engineer; by background, by degree, I'm an electronics engineer. So I started out in hardware and then moved up into the kernel. I started on the OpenVMS kernel team at Digital, spent a year there, and then moved into the OS team at Microsoft in Redmond. I spent time with the SQL Server team and, over the years, slowly moved up the stack from being a real-time C guy to .NET and all that good stuff; I was one of the first users of .NET. So some time in the US, and then I got itchy feet and went to Australia, doing general consultancy there, predominantly as a dev, general business solutions. In 2010 I moved to Malta and got involved with a private equity, high-speed, high-frequency, low-latency trading system project, and I built a few platforms there. Malta is interesting, right, because it's an island, and we had traditional data center infrastructure. When you're running a trading platform, private equity needs a constant velocity; it's up 24/7. And the problems we had: often project velocity would stop because you have to wait for an HP guy to come from the mainland in Italy, and that can take weeks. So the challenges around just development infrastructure really became apparent to me at that time, and the best we could do then was VMware. I was thinking there's got to be a better way than this. Anyway, I did my time there and went back to the UK, I think it must have been 2012, something like that, and my prayers were answered when I discovered the real cloud: AWS. That's when I got involved as an early joiner at Cloudreach, who were the poster child for AWS at the time. I did some pretty interesting work there as head of data and analytics and head of ops, so lots of managed services. One project that really sticks out in my mind, and for me these were the early seeds of what the Amazon AI story is now: I was working at Burberry and we built some of the early predictive models there, which was pretty good. So yeah, everything changed. Infrastructure was programmatic; everything was an app in my mind. This is great. Cloudreach opened up the opportunity at Nordcloud, where I met this good man here. I got a call out of the blue from somebody in Finland, we don't need to go into that, but they said, hey, how do you feel about building a cloud company from scratch? So I got handed a checkbook and off we went. Phil and I built out Nordcloud UK in London for the Nordcloud group, and that was a great adventure, lots of great projects there. I left, and Phil became CTO there. I got tired of the British summers again; three winters was enough for me, or was it four, I can't remember, but that's when you go back to Australia again. I turned an AWS partner around in Brisbane, did that for a while, and then I went to Hong Kong and built a bank from scratch on AWS. You mentioned COVID; that was during the COVID years. Ironically, in Hong Kong it was business as usual, you know?
Tom Morgan: Yeah.
Steve Mew: One of the highest population densities in the world, but we didn't have the lockdowns and all that kind of thing like you had here.
Tom Morgan: That's interesting.
Steve Mew: Yeah. So I did the project in Hong Kong, then came back to the UK with the Barclays opportunity. Thank you, Phil; it was Phil that set that one up for me. And here we are again, you know. I think we've had a conversation that's been running through the years. We've had a lot of experience building businesses from scratch for other people. We're pretty good, I think, at reading the technology landscape and how it evolves, and we're also pretty good at recognizing opportunity spaces. So we just thought, you know what, now's as good a time as any, let's jump in.
Tom Morgan: That's great. Yeah, absolutely. And you've found a partnership that clearly works through the years as well, which is a really important thing, isn't it? All right, so that leads us to Nopaque. Sorry, go on Alex.
Alex Baker: I was just going to say, yeah, leading on to Nopaque, it sounds like you guys have found a super interesting niche; I think it's fair to say there are not too many people doing what you're doing. So yeah, over to you: first of all, what does Nopaque do?
Phil Smith: It is a fascinating time. What we've done so far, we've bootstrapped ourselves; we haven't taken any investment yet, and we're on the cusp of being able to land customers onto the platform through AWS Marketplace. Both of us have studied startup culture and methodology, how to get there via lean principles, that kind of stuff, and for us it's really a case of: we have built an MVP. It does what it says on the tin. It is simpler than its competitors out in the market at the moment, but one thing it does differently is that you can buy on demand. You don't have to lay down big money to pay for stuff that you then don't really use in an optimal way, so it takes away the large budgeting you would otherwise have to do if you wanted to do the testing. And the reason I think it's vitally important: we see a lot of organizations that have absolutely epic telemetry. They can tell you every little thing that's going on inside their platform, they can show you the logs, they can do all of this, but they don't have a customer perspective. They can't tell when a customer can't do what they need to do, because that often doesn't come through in their telemetry. And the way I've seen things happen over time, you can be the most diligent engineer and still make mistakes, myself included. That was one of the reasons we put this in place: if you can trigger a test from a customer perspective after you've made your changes, it's like dotting the i's and crossing the t's; you can actually prove that the changes you have made haven't impacted anything else. In that sense, being able to point our testing tools at a telephone number, traverse the journey, and then assure that that journey is up and running and working is going to be a vital thing for regression testing. And if you think about a larger organization that has probably componentized their contact center: if they're making a change to a crucial part of the system, there's a massive chance that if there is an impact, it's a much, much wider impact. We felt that with the tools that were out there, you'd have to pay 10, 20, 30, 40 times as much as you're paying already to get that test coverage. So that was really what we did with that. And then the other side of it was mapping, which I think Steve needs to talk about, because he is the mastermind behind it.
Tom Morgan: The mapping king.
Phil Smith: The mapping king indeed. Emperor, wasn't it? Emperor of mapping.
Tom Morgan: That's so interesting. It's fascinating, actually, how you've differentiated by essentially copying the Amazon Connect playbook: the way they've disrupted contact centers, you're doing the same thing in your own niche. That's really interesting. Would you say Nopaque is a tool for development teams, or for contact center owners, or the business? Where do you see it fitting?
Phil Smith: I think right now, where it stands with its currently available tools and features, it really helps two types of teams. One would be consultancies doing a lot of migration work for customers, helping them move from A to B, that kind of stuff; it's about validating the changes you've made. I really want to leave the mapping discussion to Steve, so I'll let him answer that side of it. The other side of it is engineering. We heard only the other day from a potential customer that they actually want to use the tooling to help them with an acquisition they've made, because obviously the inbound organization they've acquired has already got a bunch of servicing. And when you think about engineering teams that don't really have the tools, because the tools are too expensive: when they're making changes, they're often having to do really, really lengthy regression tests. It could be, for example, that they've only got the capacity to do five calls at once, which means they've got to make the calls sequential. So if they wanted to do 100 tests, they're going to have to run them over those five lines, 100 divided by five batches, and it's going to take forever to actually get done. Which means what they then do is put further stress on the engineering teams to make manual telephone calls, and so on and so forth. And you know what? We still missed things, even when I was leading that engineering team, even when we did try to do everything to make sure we'd got it right. So that's really where it lies at the moment. For the future, though, some of the things we're planning, leveraging generative AI and other stuff, should allow us to actually start talking about the quality of the experience. If this is measured against what humans expect from the quality of a journey, then we might be able to start introducing things like ratings, for example, and maybe a new standard that talks about how accessible a journey is, both for vulnerable people and for people who are not vulnerable, and how well it responds to what people say to a voice-driven IVR. At the moment it's just not possible to test even all of the accents in the UK, and a lot of intents often get missed. So we're really giving them a tool that broadens the ability to test all the different accents and different voices, whether someone's mumbling or not mumbling. As the ability to create voice from text evolves, we'll be able to leverage that to start really pushing the different types of tests through, and to start giving more guidance on how well they've built and designed their journeys, and how well they respond to a customer who, instead of saying "I've lost my card", starts talking about how they moved the sofa the other day and now they can't buy anything. It's difficult for a machine to work that out. But I think the mapping is a great case as well for what it does, and I think Steve can speak to that.
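To make the capacity arithmetic Phil describes concrete, here is a minimal back-of-the-envelope sketch. The call duration and suite size are hypothetical numbers chosen for illustration, not figures from Nopaque:

```python
# Hypothetical illustration: how long a regression suite takes when test
# calls are capped by the number of concurrent phone lines.

AVG_CALL_SECONDS = 180   # assume a 3-minute IVR journey per test call
TOTAL_TESTS = 100        # the suite size mentioned in the conversation

def suite_duration_minutes(concurrent_lines: int) -> float:
    """Wall-clock time when tests run in batches of `concurrent_lines`."""
    batches = -(-TOTAL_TESTS // concurrent_lines)   # ceiling division
    return batches * AVG_CALL_SECONDS / 60

for lines in (5, 25, 100):
    print(f"{lines:>3} concurrent lines -> {suite_duration_minutes(lines):.0f} minutes")
# 5 lines -> 60 minutes; 25 -> 12; 100 -> 3: the case for on-demand capacity.
```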
Steve Mew: Yeah. So mapping: what is it, and why is it important? Well, one of the biggest problems is that IVR configuration documentation is often lost, out of date, never existed, and so on. It's an aspect of the IVR life cycle that we see and hear about on a regular basis. Keeping track of a changing IVR historically hasn't got as much attention as it probably should; it's just the human condition we all have, right? Because it's perceived to be either self-evident (you just call it and then you know what it does, right?), or slowly changing, or documented inherently by the IVR system itself. So it's the poor cousin in the environment that often gets bumped down the priority list. And what happens, inevitably, without an accurate map, is that increasing problems start to occur. As time passes, drift inevitably increases, right? You have this divergence.
Alex Baker: I think that's so common, isn't it? We've probably all got anecdotes of working with a client or a business unit that maybe just doesn't have anything documented, and time has passed since they put it all in. It's really difficult to get anyone to own that process of "tell us what you've got at the moment".
Steve Mew: Yeah, absolutely.
Alex Baker: I mean, it's useful.
Steve Mew: There are hundreds of stories; everybody has felt this pain in the past. You look at any large enterprise, you know, blue chip, maybe with a legacy culture, and you're drowning in documentation internally on a wiki, and none of it's up to date, because they just don't have the culture for dynamic, self-documenting code or other such mechanisms, right? So, specifically to IVR mapping: over time, because of this drift and other secondary effects, calls get rejected or misrouted, prompts start to fragment, break up, or repeat within journeys. You get fragmentation in the language structures, inconsistent delays, message formatting problems, overspill. How often have you gone into an IVR system, pressed the button, and it's still giving you the broadcast from the previous node before it actually clicks on to the next node, or it gets confused and goes somewhere else for a while, then comes back? I've seen that often. Prompts sometimes continue with lag after a selection has been made. And there are various convoluted journeys that could easily be optimized if you've got a proper map, because you can see exactly what's going on, not only with the content in the visual plane, but you can also start to see usage. There's a heavier weight on the heat map down certain paths; well, why is that? We could restructure all that, right? And often, people on the inside are too busy. Engineering is too busy doing the day-to-day. They don't experience it.
Tom Morgan: It's one of those things where, from the outside, it seems like it works, as in there are no errors, which is quite different from "it works well", or "it's optimal".
Steve Mew: Or even when there are errors, right? You're usually handed a rap sheet by management saying, look, these were the complaints coming from the outside, but there's no tangible, visceral, instinctive feeling or understanding in the engineering team of what that is really like from the customer's point of view. It's just a list they've got to get through.
Alex Baker: Yeah, I was going to ask about that. I don't think it's ever, or only rarely is it, really proactive, people looking at this. I can't think of that many companies that are actively calling into their lines periodically to see what the customer experience is. They probably rely a bit on customer feedback: something's broken, the experience isn't right, maybe an agent hears about it when they speak to the customer.
Steve Mew: Exactly. So I think we've made some really good progress here. Market reception has been very positive. At the end of the day, how do you get a view of the system you can trust, to make good design and optimization decisions, right? And migration teams love this as well; they want to trust that snapshot: okay, we trust this, and now we can redesign the information schema and so on. So we're calling the tool set Total Path. This is the suite of not only the mapper but also load testing, testing, and various other things coming down the pipe, all with an interactive UI. It gives you that comprehensive view, plus a detailed data package that can be consumed by automation as well. We can output JSON, and through a few small transforms you can put it straight into programmatic infrastructure to spin up a new Amazon Connect call center. Well, why wouldn't you, right? So: day-to-day operations, faster, safer, easier, more effective. And of course, one of the key tenets of Nopaque: ten years ago it was about being cloud native, born in the cloud. Well, we're looking ten years on since then, and I think the paradigm now is really: are you AI native?
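As a rough illustration of the JSON-to-automation idea Steve describes, here is a minimal sketch. The schema is invented for this example; the real Total Path output format isn't public:

```python
# Illustrative only: a made-up JSON shape for a discovered IVR map, flattened
# into routing rows that downstream automation could consume.
import json

ivr_map = json.loads("""
{
  "number": "+441234567890",
  "root": {
    "prompt": "Welcome. Press 1 for cards, 2 for loans.",
    "children": {
      "1": {"prompt": "Press 1 for lost cards, 2 for balances.",
            "children": {"1": {"queue": "LostCard"}, "2": {"queue": "Balance"}}},
      "2": {"queue": "Loans"}
    }
  }
}
""")

def walk(node, path=""):
    """Flatten the tree into (DTMF path -> destination queue) pairs."""
    if "queue" in node:
        yield path, node["queue"]
    for digit, child in node.get("children", {}).items():
        yield from walk(child, path + digit)

for dtmf_path, queue in walk(ivr_map["root"]):
    print(f"dial {ivr_map['number']}, press {' then '.join(dtmf_path)} -> {queue}")
```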
Tom Morgan: Yeah, I want to come back to AI, actually, because I'm really fascinated to see how you guys are using AI and how you're thinking about it. That was very useful; I didn't know anything, really, about customer journey mapping, I didn't think about it at that level. How do you test for that stuff? How do you actually do that? You touched on what's important to do, but how? Like, is it fake calls? Is it
Phil Smith: Magic?
Tom Morgan: Magic coins?
Steve Mew: From an end-user point of view, you simply put in the telephone number; it's a one-click thing, right? And I think we're the first to do it the way we're doing it. I don't know of any other offerings out there that are really quite the same; they tend to be fragmented tool sets for the most part, and you have to have a lot of interaction with them. You set up some basic parameters, you put the number in, and essentially the dialer, the mapper, will just go out and dial, and it will go through discovery, through the whole tree structure.
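Mechanically, the discovery Steve describes looks like a breadth-first crawl of the menu tree. The sketch below is an assumption about the general shape, not Nopaque's implementation, and the telephony layer is stubbed out as a hypothetical `play_path` helper:

```python
# A minimal breadth-first IVR discovery sketch. `play_path` is a hypothetical
# helper that dials the number, replays a DTMF digit path, and returns the
# transcribed prompt plus the digit options it offers.
from collections import deque

def play_path(number: str, digits: str) -> tuple[str, list[str]]:
    """Stub for the telephony layer."""
    raise NotImplementedError("wire up a real dialer and transcriber here")

def map_ivr(number: str, max_depth: int = 5) -> dict[str, str]:
    """Crawl the menu tree; return {DTMF path: prompt heard at that node}."""
    tree: dict[str, str] = {}
    queue = deque([""])                      # start at the root menu
    while queue:
        path = queue.popleft()
        prompt, options = play_path(number, path)
        tree[path or "root"] = prompt
        if len(path) < max_depth:            # bound the crawl depth
            for digit in options:
                queue.append(path + digit)   # explore each branch
    return tree
```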
Alex Baker: So do you kind of set it off at an IVR, and it will go through every single branch until it reaches an outcome, and document every single outcome, until it's got that full tree of the IVR?
Steve Mew: Yeah. There is some interesting IP that we've developed in there; we can really talk more to that when we get into the AI questions. Do we want to go into that now?
Tom Morgan: Yeah, let's do it now, actually, because this is great. How are you using AI to make that better, I guess, is the question.
Phil Smith: Yeah. We can talk about what we can do now; we're already developing the next set of features. For example, some of the things we've thought about: giving your mapping a context. So we're thinking about contexts like financial services, for example, and then a secondary context like credit cards, and you could use generative AI to probe a voice-based IVR. We've had quite a bit of experience with prompting, and we're seeing some good results with things we can create; let's call them personas. If we were to use those personas in a place like that, for example with testing, we can do the same kind of thing. So if I want to say "I've lost my card", I can use generative AI to create variants of "I've lost my card". I can sprinkle lots of things in there; we could go right to the level of "I've just moved my couch and I can't make a payment", you know? And I think where we need the evolution in the industry at the moment, because I'm not sure we'll do it ourselves, is: what about if I want to make a voice that's angry? What if the voice is fast and I want them to have a Glaswegian accent, right? Not particularly Glaswegian; it could be Welsh or whatever. But the reality is, having heard a lot of call recordings and having seen how intents can be missed even when someone is actually saying something that fits the design but doesn't fit the demographic voice, that's where we start to look at using those. So one of the evolutions of the product would be a marketplace of personas that would be taggable: the person could be suffering from debt, could be 30 to 40 in age, male, female, all of these different things. And then what a person driving a test could do would be to enter some of those tags to build themselves a cohort of personas, and as we go into the testing, they're given the subject matter, and then generative AI creates what it's going to say to those elements of the IVR.
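A hedged sketch of the persona idea Phil describes: tags become part of a prompt that asks a language model for utterance variants. The tag names and the `complete` client are invented for illustration; the episode only says a third-party LLM is in use today:

```python
# Sketch: turn persona tags into prompts that generate IVR test utterances.

def complete(prompt: str) -> list[str]:
    """Stub: send `prompt` to whatever LLM client is in use, one variant per line."""
    raise NotImplementedError("plug in your model client here")

def utterance_variants(intent: str, persona_tags: dict[str, str], n: int = 5) -> list[str]:
    persona = ", ".join(f"{k}: {v}" for k, v in persona_tags.items())
    prompt = (
        f"You are a caller to a bank's voice IVR ({persona}). "
        f"Write {n} distinct, natural things this caller might say to express "
        f"the intent '{intent}'. Include indirect phrasings, e.g. describing "
        f"the situation ('I moved the sofa and now I can't pay') rather than "
        f"naming the intent. One per line."
    )
    return complete(prompt)

# e.g. utterance_variants("I've lost my card",
#                         {"age": "30-40", "situation": "in debt", "accent": "Glaswegian"})
```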
Steve Mew: Yeah, so to expand on that: in the mapping context, I would say we have seen hallucination coming back, because this thing is crawling in real time. There are some pretty heavy computational algorithms running; it's a multidimensional array with pointers flying around in real time, and we've got real-time drift correction as well. So it's had its moments; I can tell you we have seen hallucination, but it's pretty low, I would say on the order of 5 percent for mapping. We get around this in a few different ways. We've had great success so far with large language models by, at the beginning of the interaction chain, constructing multiple prompts that are highly differentiated, not only in the language construction and querying but from a verbiage, noun usage, and argument point of view. We write them such that they're highly differentiated, but we design them so they will hopefully bring back the same results each time. It's like the space shuttle: when they designed the space shuttle, they asked five different manufacturers, gave them specs to build five different nav computers, totally different software, different algorithms, different hardware, so that in flight there was always a quorum decision; at least three machines had to agree, and they took that as true. So we've done the same thing, right? There's a quorum voting system.
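In outline, the quorum idea Steve describes could look like the sketch below. How Nopaque normalizes and compares answers isn't public, so the comparison here is a deliberately crude assumption:

```python
# Quorum voting over N differently-worded prompts that ask for the same fact:
# accept an answer only when at least `quorum` of them agree.
from collections import Counter

def quorum_answer(ask, prompts: list[str], quorum: int) -> str | None:
    """`ask` is any callable(prompt) -> str, e.g. an LLM call.
    Returns the majority answer if at least `quorum` prompts agree, else None."""
    answers = [ask(p).strip().lower() for p in prompts]   # crude normalization
    best, count = Counter(answers).most_common(1)[0]
    return best if count >= quorum else None

# e.g. three re-phrasings of "which digit options did this menu prompt offer?",
# accepted only when at least two of the three agree:
# quorum_answer(llm_call, [phrasing_1, phrasing_2, phrasing_3], quorum=2)
```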
Tom Morgan: Okay, okay. That's cool.
Steve Mew: You can break the app if it suddenly hallucinates or comes back with junk, so that's the first guard, or rather the first boundary point, right? Essentially, think about it as in a court: you bring in the witness to be cross-examined, and it's the same kind of thing; these prompts cross-examine each other. That gives you a lot of confidence right from the get-go, on the initial interaction with any LLM. And then beyond that, in post-processing, we have various guardrail mechanisms to ensure that what has come out of the voting quorum is high quality, meaningful, and accurate within the context of the query domain. Going forwards, I think we're going to lean on retrieval-augmented generation, more specifically serverless; that's going to be key for us, where we can inject high-quality reference data and knowledge-base material to further ensure that it's highly accurate. And hopefully we can take a position as thought leaders in this space as well, back to the wider tapestry of being AI native and so on. So obviously we're working very closely with Amazon right now.
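The episode doesn't detail the post-processing guardrails, so the following is one plausible shape, offered purely as an assumption: structural checks that reject junk before a parsed menu node enters the map:

```python
# Hypothetical post-quorum guardrail: sanity-check a parsed menu node before
# it is written into the IVR map. The field names are invented for this sketch.
import re

VALID_DTMF = set("0123456789*#")

def guard_menu_node(node: dict) -> bool:
    """Return True only if the node looks structurally sane."""
    prompt = node.get("prompt", "")
    options = node.get("options", [])
    if not prompt or len(prompt) > 2000:                # empty or runaway text
        return False
    if not all(o in VALID_DTMF for o in options):       # only real keypad digits
        return False
    if re.search(r"as an ai|i cannot", prompt, re.I):   # model chatter leaked in
        return False
    return True
```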
Alex Baker: I was going to say, are you guys using solely the AWS tool set for this, or are you having to go out to other providers?
Phil Smith: No, not yet. And the simple reason for that is that we're running a very tight line between our own bootstrapped finances and product-market fit, right? When we get product-market fit, there are optimizations that can be done to the platform as well. But in that sense it's: can I build it or buy it? And right now, it's really cheap to buy. As Steve said, we've got lots of guardrails and such around it, but we don't believe that's the strategy for the future, because obviously you can't control what a third party does with their APIs; one day they might turn them off. That's where we've also engaged with Amazon on two of the new initiatives, for funding and also for support, where we'd be looking at going down the route of leveraging foundational models in Amazon and building on top of those instead, rather than relying on a third party.
Steve Mew: So we are using third-party LLMs at the moment; it probably won't come as any great surprise that one of them is ChatGPT. But obviously we're in early conversations with Amazon; we're looking at the AI competency paths, and we're looking at Bedrock. Then we can bring it more in-house and get into augmented injection and things like this. And when you make it serverless and in-house, your cost optimizations are an order of magnitude, for sure.
Alex Baker: Yeah, it sounds like there's a whole episode of a podcast here just around what you guys are doing with gen AI. Absolutely. Really interesting stuff.
Tom Morgan: It is. We should stop talking, really, but I just want to get one last question in quickly, so go for a quick answer. Is there anything on your wish list for Amazon Connect that you're hassling the team to put in, something that would really help, or just stuff you know is coming?
Phil Smith: I mean, unless they've done it in the last few weeks and I'm not aware of it. Obviously, when we're building out the UIs, we've got a lot of data flying around, and APIs and DynamoDB, that kind of stuff, and we've managed to create call reports. There's obviously the ability to get reports from Contact Lens and so on, but when I did that, it was like: oh, the audio is not there. Well, that was pointless, wasn't it? Because that was the main thing I wanted. So: having it where you can just extract out the whole call. Because when you think about what we're doing, we're doing this totally the wrong way around for Amazon. We're not a contact center; we're using Amazon Connect as an outbound carrier dialer, or whatever you want to call it, that enables us to do the things that we do. And those background tools are geared very much toward contact centers, so it'd be really nice if we could just hit that API and extract the whole thing, and then we can store it elsewhere, disseminate it, and present it in the UI.
Tom Morgan: Yeah, that's interesting. Excellent. All right, we should stop talking because we're completely out of time; it's time to bring this episode to an end. Thank you, gentlemen, very much; it's been really interesting, and as Alex says, there's probably a whole other episode in there somewhere just about the AI story. Thank you very much, Alex, and thank you everyone for listening. Be sure to subscribe in your favorite podcast player so you don't miss an episode, and whilst you're there, we'd love it if you would rate and review us; as a new podcast, if you have colleagues you think would benefit from this content, please let them know. To find out more about how CloudInteract can help you on your contact center journey, visit cloudinteract.io. We're wrapping this call up now, and we'll connect with you next time.