ACP: The Amazon Connect Podcast

17: Performance monitoring and experience management with Operata

Tom Morgan

In this episode of ACP, hosts Tom and Alex delve into the world of contact center optimization with John Mitchem, co-founder and CTO of Operata.

John shares insights on his two-decade career in contact centers, leading to the establishment of Operata. The discussion covers the evolution from traditional contact center monitoring to modern observability, the role of AI in enhancing customer and agent experiences, and the unique benefits of using Amazon Connect. 

Learn about the challenges of ensuring high-quality calls, monitoring agent behavior, and how Operata leverages machine learning and real-time data to drive CX improvements.

00:00 Introduction and Catching Up

00:55 Guest Introduction: John Mitchem

01:20 John's Background in Contact Centers

02:58 The Genesis of Operata

03:39 Challenges in Audio Quality and Load Testing

05:54 Transition to WebRTC Monitoring

09:02 Importance of Core Quality in Customer Experience

12:38 From Monitoring to Observability

18:07 AI in Contact Centers

22:32 Agent Experience and Real-Time Feedback

24:49 Operata's Integration with Amazon Connect

26:40 Conclusion and Wrap-Up

Find out more about CloudInteract at cloudinteract.io.

Tom Morgan:

It's time for another ACP and I'm joined as usual by Alex. Alex, how's it going?

Alex Baker:

Yeah, it's going well, thank you, Tom. Good to talk to you. We haven't had the chance to catch up much lately, it seems.

Tom Morgan:

It has been that busy that, yeah, literally the last time I spoke to you was recording one of these. But it's good to see you again. And we're also joined by John Mitchem, who is the co-founder and CTO of Operator? Operata? John, I'll turn it over to you.

John Mitchem:

Yeah, let's go with Operata. Great to be here, Tom and Alex. Thanks for having me.

Tom Morgan:

Fantastic. And thanks for making time in your schedule and with the confusing time zones to make this happen, because I know it's probably kind of either the middle of the night or certainly the evening for you. So yeah, really appreciate you making the time.

John Mitchem:

Absolute pleasure.

Alex Baker:

Yeah. Good to have you here. Thanks for joining us, John.

John Mitchem:

Sure.

Tom Morgan:

So before we get into what you're doing at the moment, I think it'd be good for folks to take a step back: what's your history, where have you come from, and what's led you to this point?

John Mitchem:

Yeah, sure. So I've spent the last, let's say, 15 years or so, perhaps 20 (man, it's getting on now) in contact centers, for my sins. I don't know what I did in a past life, but it's led me to contact centers, and I've really been in the operational space. I started off as a voice engineer. Actually, I started off as a cabler, truth be told, and then found my way into PABX land, then into VoIP, and then into contact centers. Through the course of my career I've been fortunate enough to work all across the world in different contact center environments, for retail and banking, and I worked at some carriers as well. So I've seen a good breadth of contact center delivery and optimization, things going right, things going wrong, and I've certainly had my fair share of managing those voice and data networks. That led me to a deep understanding of the problem space that plagues getting CX right. So I left my last job at a large Australian regional bank and started Operata with an old colleague of mine from the bank, and also a new founder, and when I say new founder, I mean a gentleman who's had a couple of successful startups in the past. So we joined forces and created Operata. We've been going for about six years now, and yeah, it's been a fun time.

Alex Baker:

Was it something that coincided with the introduction of Amazon Connect? It sounds like it was around a similar time. Was that coincidental, or was it a planned thing?

John Mitchem:

The genesis of Operata was around load testing, and looking at how we could perhaps disrupt that market, which had been led by one particular player for quite a long period of time in the Genesys space. We were looking at other emerging platforms and at how load testing could perhaps be done slightly differently. But when we started doing that and having conversations with some really early customers, it became apparent that there was a need for deeper objectivity when it came to audio quality. They weren't only asking about load; they were asking about voice quality. And that led us to a conversation with the folks at AWS. An old colleague of mine from the bank had just moved over to AWS and joined the Australian Connect team. We were having a very similar conversation to this one, talking about load testing and what people are after, and he said, funny you should mention that. We've got a large telco here in Australia where there's a lot of subjectivity around audio quality, and it's actually blocking the deployment. Do you think you could come in and have a chat to those folks? And so I did. We came in, and this was very, very early on in the piece. No market fit yet, really. And we spoke around, sure, you could do load testing, but does that actually solve the problem? What is the problem? They were arguing about whether their old platform sounded better than the new platform, Amazon Connect, and there were some questions around the voice quality. And so, very quickly behind the scenes, we went like ducks swimming on the water, paddling like mad underneath, saying

Tom Morgan:

of course we can do this thing. Of course we can do this. Yeah, yeah,

John Mitchem:

Yeah, of course, easy, you know. And we built an objective quality assessment tool based on a standard called POLQA, Perceptual Objective Listening Quality Analysis. That was the modern gold standard for WebRTC, and our competitors weren't using that spec; they were using an older one called PESQ. Getting kind of audio-nerdy here, but it just wasn't cutting the mustard. So we implemented a tool and were able to unblock that project, and that got us chatting to the folks at AWS about the wider opportunity. As we started going down that rabbit hole, it became quite clear that the tooling that large modern organizations were looking for just didn't exist. It wasn't only around audio quality; it was around the larger job of monitoring. We'll get onto observability and the differences there a little later on, I'm sure. So we then moved into WebRTC monitoring. That was the next step. Okay, we've done load testing, we've done audio quality assessment, we can see these platforms are working as expected. Now, what's the next step in that journey? And it really came down to the lack of observability, or the lack of monitoring, in these environments, and whether this new sort of WebRTC... I say new, but six years ago, when Amazon Connect launched, it wasn't exactly widespread like it is today. And
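For listeners curious about the standards John mentions: PESQ (ITU-T P.862) and its successor POLQA (ITU-T P.863) are full-reference metrics that compare a degraded recording against the original and produce a MOS-style score, and their implementations are licensed rather than open formulas. The related parametric E-model (ITU-T G.107), though, maps a transmission "R-factor" to an estimated MOS with a published polynomial, which gives a feel for how these scores behave. A minimal sketch (the R values passed in are illustrative):

```python
def r_to_mos(r: float) -> float:
    """ITU-T G.107 mapping from E-model R-factor to estimated MOS."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    # Published G.107 polynomial: MOS rises with R, capped at 4.5.
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# A clean narrowband call has a default R of about 93.2:
print(round(r_to_mos(93.2), 2))  # ≈ 4.41
```

Impairments such as codec distortion, delay, and packet loss subtract from R before this mapping; that subtraction is where most of the real modeling effort lives.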

Tom Morgan:

It was a big shift, wasn't it? From lots of on-premise networks, very closed systems, a different set of protocols, a different meaning to call quality, to WebRTC being much more distributed, I guess, with many more things that can go wrong. And it's just different as well, I suppose.

John Mitchem:

You're absolutely right. In fact, that was at the very core of the issue here. You had large organizations that were quite well versed in SIP, well versed in on-prem telephony. They still had the PSTN, some were using basic rates and primary rates and all that sort of stuff. And then they were moving to this world of, right, there's this black box in the cloud, and we've got absolutely no idea what's behind the wizard's curtain there. We don't manage our telephony. There isn't anything like the PSTN anymore. We get this bunch of DIDs, and then it pops out on a softphone in a browser. What could possibly go wrong? Well, a lot. A lot can go wrong. It's great in principle, and of course it's a technology stack that's completely revolutionized telephony in the contact center, no doubt about it, as we found out in 2021, right? But that lack of tooling around this emerging tech was actually preventing a lot of pilots from spanning out into full-scale production deployments. And so we started focusing on that. Again, back to my history, I'd felt this problem even on-prem, with the kinds of stacks I was used to back then, SIP and standard old-school telephony. There was still a lack of end-to-end monitoring. It was still a hard job when things didn't go right, which was often. You did have to rely on talking to agents; they had to go off queue. You had to talk to supervisors, perhaps in the Philippines or in India, all across the world really. That took time away from their day and from what they were good at, and it was hard for me to do my job. It increased the time to resolve.
It was tough even then, and now we're moving into this fully managed cloud stack. I could understand why enterprises were hesitant and adoption was kind of waning. So we really started to lean into that problem in the product space, and we haven't stopped.
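For context on what browser-side WebRTC monitoring has to work with: `RTCPeerConnection.getStats()` exposes per-stream counters (packets received and lost, jitter, round-trip time) defined by the W3C webrtc-stats specification. A rough sketch of summarizing such a report once its entries have been serialized to plain dicts; the field names follow the spec, but the health thresholds here are invented for illustration:

```python
def summarize_inbound_rtp(entries):
    """Roll up inbound-rtp stats entries into a simple health summary.

    `entries` are dicts shaped like serialized RTCStatsReport values.
    """
    received = lost = 0
    max_jitter = 0.0
    for e in entries:
        if e.get("type") != "inbound-rtp":
            continue  # skip outbound-rtp, candidate-pair, codec, etc.
        received += e.get("packetsReceived", 0)
        lost += e.get("packetsLost", 0)
        max_jitter = max(max_jitter, e.get("jitter", 0.0))
    total = received + lost
    loss_pct = 100.0 * lost / total if total else 0.0
    return {
        "loss_pct": loss_pct,
        "max_jitter_s": max_jitter,
        # Illustrative thresholds only: ~1% loss / 30 ms jitter is roughly
        # where voice starts to suffer audibly.
        "healthy": loss_pct < 1.0 and max_jitter < 0.03,
    }
```

In a real agent desktop this would be sampled every few seconds during a call and shipped off alongside the contact ID, which is what lets a degraded stream be tied back to a specific customer interaction.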

Tom Morgan:

Do you think that hesitancy is driven by companies feeling like they're losing control of call quality, the ability to influence call quality? I wonder sometimes, do people still care about call quality as much as they did? Do you know what I mean? This stuff happens, and sometimes, even with calls at home, WhatsApp calls or FaceTime calls, the quality dips and we're like, oh well. Whereas it feels like ten years ago we were all aiming for a hundred percent. I guess, is a hundred percent achievable, and what's the perception, from what you're seeing in the market, of how people approach call quality now?

John Mitchem:

Yeah, it's a really good question. I think consumers do care a lot, because it's another representation of the overall customer experience for that brand. Some would say it is the experience, especially as callers are typically in strife when they're calling a contact center. They're not calling for a good time, let's face it. Something's gone wrong, something's complicated, something can't be achieved on another digital channel, typically, so they're using this other medium. They're already in a perhaps heightened state when they call a contact center, and I think that's reflected in consumer behavior towards contact center agents, especially over the last three or four years. So yeah, I think they do care quite a lot. There's also what I guess you'd call the soft side of customer experience, and then there's the harder side: the cost optimization that comes with improved call quality as well. But really, that's the tip of the iceberg of what customers actually care about, customers being callers as well as Operata's customers. People care a lot about the overall experience. It might not necessarily be just the audio quality. Perhaps they're routed through to an agent who is doing their very best but can't necessarily help the customer, who is maybe under some unrealistic pressures from a commercial perspective, and who will take some shortcuts, right? Onshore or offshore, I shouldn't say just offshore, any agent. Those traditional KPIs and the pressure to reduce handle time lead to some pretty substandard behaviors, both on the customer side, in the way customers respond to agents, and on the agent side.
What you're left with then are agents pulling out some old tricks. Disconnecting a call while the customer is on hold, or staying on mute for the first 30 seconds of the call hoping the customer will drop off. They might be taking themselves off queue and putting themselves back on queue frequently to try to cheat the routing algorithm. So there are a few different behaviors that can be employed by an agent who's under pressure from their business to perform unrealistically.
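The tricks John describes all leave fingerprints in event data, which is what makes them detectable at all. A toy sketch of the idea, emphatically not Operata's actual logic; the event shape and every threshold here are invented for illustration:

```python
def flag_agent_patterns(calls, queue_states):
    """Flag call-handling patterns worth a supervisor's look.

    calls: dicts with invented fields
        {"ended_on_hold": bool, "initial_mute_s": float}
    queue_states: chronological list of "on"/"off" queue states.
    """
    flags = []
    # Trick 1: repeatedly disconnecting while the customer is on hold.
    if sum(1 for c in calls if c.get("ended_on_hold")) >= 3:
        flags.append("repeated hold-disconnects")
    # Trick 2: staying muted at call start, hoping the caller hangs up.
    if any(c.get("initial_mute_s", 0.0) >= 25 for c in calls):
        flags.append("long initial mute")
    # Trick 3: rapid off/on queue toggling to game the routing algorithm.
    toggles = sum(1 for a, b in zip(queue_states, queue_states[1:]) if a != b)
    if toggles >= 6:
        flags.append("frequent queue toggling")
    return flags
```

The point of flags like these, as the conversation goes on to argue, is less to police agents than to surface where KPIs are pushing people into a corner.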

Alex Baker:

It seems like audio quality is almost the absolute baseline that you want to strive for: try to at least make that bit not an issue, and then deal with some of the interesting agent-behavior-related issues you mentioned as well. So the platform obviously can do the quality monitoring side of things, but it can also cover that bit around the overall customer experience, and things like monitoring agent behavior as well?

John Mitchem:

Yeah. And I guess the reason I bring that up is really the progression, not only of the Operata platform, but of the customer need, and where we switched from pure monitoring into observability. That's a natural progression of the tooling, but in terms of the cloud benefits, we've also now got access to APIs that we never used to have access to, and access to real-time data streams that were just never possible before. So all of a sudden, with the right tooling, the right focus on that tooling, knowing what to look for, and using machine learning especially to understand behaviors, you can start to paint a really incredible picture, not only of the customer journey, but of the overall customer and agent experience as well. You can start to correct some of those behaviors, or actually point out to the business that perhaps some of their SLAs and KPIs are a little unrealistic and are in turn forcing agents into a corner. That's what we found to be the natural progression from pure monitoring into observability, and it stemmed from what is now, to me, a pretty obvious moment: a conversation with a customer where everything was going well. We'd been monitoring for two months, maybe a little more, and the customer said, look, we love the tool, but it's not showing us anything special. Our contact center is great. Our networks are cool and working as expected. The VPNs, the proxies, the firewalls, the IDPs, everything's working great. We've had a couple of blips, which we picked up on, but aside from that, it's cool. What else can you show us?
And then in that conversation with the customer, I said, well, here are some interesting logs that we've started to pick up on, and you know, not every issue is a technical issue. As soon as I used those words, I got nods from around the table, and the old-school telephony engineers were like, I've been saying that for years. It's often the user. It's just really hard to

Tom Morgan:

Yeah, but you don't have the data to prove it, or to show it, or to highlight it. Yeah, absolutely. That's

John Mitchem:

truly understanding, one, the need for observability in the customer experience and agent experience space, and, two, the fact that that need was unmet. So we've been dedicated to being the best at CX observability, in the same way that Datadog or Splunk or New Relic are incredible at what they do and have evolved over the years. We've still got a way to go, but that's our path.

Alex Baker:

So, for people that are using Connect, maybe they know a bit about what you do, maybe they don't. Is this something you can start to get out of the box from Connect? Are there API calls for getting things like the latency metrics or other audio quality metrics, or have you developed your own secret sauce around it?

John Mitchem:

Yeah, a little bit of both. We do an integration using CloudFormation templates, and Terraform if the customer supports it, and we deploy a stack into the customer's environment to help extract some of those traces, logs and metrics. But at that point, they're just that. Those are available in CloudWatch as well, and we certainly recommend that customers who are just dipping their toe in the water look at those in CloudWatch and see what they can do out of the box. AWS has done a great job of evolving those. But at the end of the day, it doesn't really paint a picture of the experience. It's giving you a bunch of metrics, which is useful in some circumstances, but you really do need to know, one, how to interpret them, and two, how to tie them back to an individual experience. That's the hard bit, and that's what we focus on. So in some cases we generate our own secret sauce, and we do that at the edge, at the agent desktop. In other cases we collect the metrics, traces and logs from the CCaaS platform, whether that's Amazon Connect, NICE or Genesys. Then we pull all of that into Operata, and that's where we start to do our magic of piecing it together into the overall experience, painting a timeline of the agent's experience and the customer's experience along that journey. And then the job is not done; that's only part of observability. We employ a method called the OODA loop. I think it's from the mid-seventies; it's a US Air Force methodology: observe, orient, decide and act. That's the way we think about the problem.
The observing, orienting and deciding is the bit that we've got down pat now; now it's the acting that's really our next stage. And that's where our customers are starting to invest with Operata, to explore what orchestration off the back of observability looks like. There are some really fantastic use cases coming out that can repair broken experiences, or save experiences from going down the wrong path, because of the real-time nature of what we do. Some really fascinating orchestrations that we're starting to see, especially on Amazon Connect.
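On the "just metrics in CloudWatch" point: Amazon Connect publishes instance-level metrics under the `AWS/Connect` namespace, and you can pull them yourself before reaching for any vendor tooling. A sketch that builds a `get_metric_statistics` request for a packet-loss metric; the metric and dimension names match the AWS documentation at the time of writing, but verify them against your account, and the instance ID is a placeholder:

```python
from datetime import datetime, timedelta, timezone

def packet_loss_query(instance_id: str, hours: int = 1) -> dict:
    """Build kwargs for CloudWatch get_metric_statistics against an
    Amazon Connect instance (names per AWS docs; verify before use)."""
    end = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/Connect",
        "MetricName": "ToInstancePacketLossRate",
        "Dimensions": [{"Name": "Instance Id", "Value": instance_id}],
        "StartTime": end - timedelta(hours=hours),
        "EndTime": end,
        "Period": 300,  # 5-minute buckets
        "Statistics": ["Average", "Maximum"],
    }

# With boto3 (not executed here):
#   cw = boto3.client("cloudwatch")
#   resp = cw.get_metric_statistics(**packet_loss_query("YOUR-INSTANCE-ID"))
```

As John says, a datapoint like this tells you loss occurred, but not which agent, call, or customer felt it; correlating the two is exactly the observability gap being described.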

Tom Morgan:

It's really interesting, and I think it'd be good to bring in a discussion around AI at this point, because it feels like there are some quite tangible benefits here, in a way that in some other businesses and work streams it's sometimes a bit of a struggle to see how AI can be applied. From some of the things you've been saying, you can see how, taking the knowledge of everything you know so far, you can start to apply that. But there are probably other places you're thinking about AI as well in Operata, I imagine.

John Mitchem:

Yeah, it's a really interesting time. And I think, like most businesses, we're still trying to find the most applicable use cases and really make them work in our customers' favor, rather than just releasing features for the sake of AI. I think one of the most pronounced use cases for us is actually helping those who don't have telephony knowledge, and perhaps aren't as strong in contact centers, to understand what they're looking at and what the next best action is. That's been an early success point for us with AI: the interpretation of the overall customer and agent experience, and then a recommendation on the next best action. That really speaks to the shift in personas in cloud contact centers as well. You don't always have the crusty old voice engineer like me on your team anymore, right? You've got shared services or cloud services that are now deploying contact centers, which, in my opinion, are still quite specialized in their nature. Customer experience is both an art and a science, and so is the art of troubleshooting. You need a kind of specialist skill set, and if you're lacking that, from a competitive standpoint you can really lag behind competitors who do have the right staff, the right skills and the right implementation of CX tooling. So we can use AI to level that playing field a little and really help skill up those who aren't as versed in contact centers and CX delivery.

Tom Morgan:

Interesting.

Alex Baker:

So it's a similar application to what we've seen in, say, the reporting space, where people that aren't data scientists or don't have that kind of background can use AI to interact conversationally with their data: what happened in the contact center last week? Did we hit our KPIs? What can we do this week around our scheduling to try and mitigate that? And I guess it's the same here: what are the top three impacts to our call quality that we can try and do something about?

John Mitchem:

Yeah, absolutely. Or, what were the key drivers behind the worst-performing customer experiences over the last week? We can then drill down into things like, maybe it was aligned to a specific ISP in a specific region, or maybe there was a subset of IP addresses that were impacted, which clearly shows some sort of choke point in a specific region of the world. Again, helping our customers to interpret this further and then take meaningful action is, I think, a really great place for us to start. And we've certainly got some more use cases coming up that are in testing at the moment. But this will evolve, and just like the story of Operata and our evolution, what we'll continue to do is listen to our customers, continue to evolve, and apply the right tooling where it matters to do the job in front of us. We're on a mission to be the very best in CX observability, and that's what we're hyper-focused on.

Tom Morgan:

Yeah, it's really interesting. It feels quite empowering for agents as well, to not always need a network engineer to help diagnose problems. Through a mixture of what you're doing and applying AI to it, an agent can vocalize what they just experienced, not necessarily knowing the right words, terminology and language, just describing the experience, and then, through a mixture of telemetry, AI, knowledge, tooling and all the rest of it, that gets turned into something actionable. So that's really interesting.

John Mitchem:

Yeah, absolutely. And that's a good segue, I guess, into the agent experience itself. That's become a really important part of our business and a really important part of our product. We see the agent as one of the end users of the Operata tool, so we deploy a widget on the agent desktop. It generally floats on top of the softphone they're using, in whatever agent desktop they happen to have, whether that's custom or out of the box: CCP, Amazon Workspaces, whatever flavor of Salesforce or Zendesk. We deliver this via a Chrome or Edge extension, a little floating widget on their desktop, so they can position it where they want, and we remember the state that it's in, their preferences and all that. Agent desktops are pretty crowded, as you guys know very well, and finding real estate on there is pretty hard, but we've managed to do it in a way that doesn't impact the agent's overall experience. That widget is our channel to the agent and their channel back to us. We'll notify them of things impacting the customer experience that they can do something about in real time. Unlike maybe some of our competitors, who'll say, your MOS is low, which doesn't help an agent at all, we can tell them that at three o'clock every day their internet seems to dip, so perhaps have a look at whether somebody else is streaming on their network. That's something a little more meaningful to them. Or: your network's bad, but your customer can still hear you, thumbs up. And if we detect things like abuse, the agent being abused, we can pop a help article to them, and we can then enforce business rules like timeouts and that sort of thing.
So that's our kind of window to the agent, and the agent's window back to us, to report issues and to signal when things aren't going so smoothly. Then again we can use that OODA loop, the orchestration, and the overall actionability of the platform to hopefully make a difference to their day.

Tom Morgan:

that's

John Mitchem:

hard being an agent, right? It's a hard job. So

Tom Morgan:

It is a hard job. And we very rarely touch, on this podcast, on the business side of actually answering people's questions, which is also what agents have to do; there are all these other things to manage as well. So yeah, there's a lot going on. We're coming towards the end of our time, but I think it'd be good to just talk a bit about Amazon Connect and how Operata works with Connect. Because I'm not sure I know the answer to this: was Operata only ever designed for Connect, or does it also work with other contact center solutions as well? How does that work?

John Mitchem:

We found market fit with Amazon Connect. We've got a brilliant relationship with the folks at AWS, and that continues to strengthen every day. So it is our leading platform, the one that I would say a large majority of our customers are on. But we do also support customers that are using multiple platforms. Operata works across a lot of large enterprises, highly regulated industries, and government organizations, at federal and state level, all around the world. A lot of those have complex contact center environments, so we now support other contact centers where that makes sense for our customers, so they can have a central point of observability across multiple platforms.

Tom Morgan:

Got it, got it. All right. Thank you.

Alex Baker:

Anything in Connect that's upcoming, or any wishlist items? For example, I think we quite often find that it would be good to have slightly better API availability for some of the newer features. Is there anything from your side that you're looking forward to?

John Mitchem:

Yeah, I would say the same. More programmatic access to all of their features. I'd love to see some more APIs around the Contact Lens space as well; I think that would be really useful. And also their auto-summarization availability, making it easier to get the data out. I know they've done a lot of work around the data lake, which has been really beneficial as a central point of data assembly. I think they've done a really terrific job there, but I hope they continue to invest in liberating the data for all.

Tom Morgan:

Yeah, excellent.


Tom Morgan:

Wow, there's so much stuff here. I think we could carry on talking all day, but I feel like we should wrap this one up, because the time has flown past. Thank you ever so much, John, it's been really, really interesting, and best of luck with what you're doing. Thank you, Alex, as well. And thank you, everyone, for listening, and for joining us to talk about everything we've covered today with Operata. Thank you for listening to the podcast. Be sure to subscribe in your favorite podcast player so you won't miss an episode, and whilst you're there, we'd love it if you would rate and review us. And as a new podcast, if you have colleagues that you think would benefit from this content, please let them know. To find out more about how CloudInteract can help you on your contact center journey, visit cloudinteract.io. We're wrapping this call up now, and we'll connect with you next time.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

AWS Podcast

Amazon Web Services