Can we solve AI’s “deer-in-headlights” problem? (with Dan Miller, founder of Opus Research)
Summary
The conversational AI market has exploded: ~650 companies, even more tools, and way too many promises. For business leaders, navigating all this can be paralyzing. In this episode of Deep Learning with PolyAI, your host Nikola Mrkšić sits down with Dan Miller, founder of Opus Research and the analyst who coined the term “conversational commerce,” to unpack why the AI ecosystem feels more crowded and confusing than ever. Dan calls it the “Deer in Headlights” moment: a point where too many choices, too much hype, and too little clarity leave enterprises frozen on the road to AI adoption. They discuss:
- Why the AI boom has created confusion instead of clarity
- The Botenfreude phenomenon — why people still enjoy watching bots fail
- How most AI “failures” come from poor design, not bad tech
- What it will take for AI vendors to cooperate instead of compete
- Why success now depends on defining clear goals and working together
Dan has been tracking and shaping this space for decades. Watch the full episode to learn how we can move past the “deer in headlights” moment and start driving real progress.
Key Takeaways
- From DIY to “Deer in Headlights”: The conversational AI market has exploded to 650+ vendors, leaving buyers overwhelmed. Success now depends on defining clear objectives and understanding what problem you’re solving before choosing technology.
- Automation isn’t the point — orchestration is: The real challenge isn’t deploying AI; it’s integrating it. Dan and Nikola emphasize the need for solution providers to “work well and play well together,” building interoperable systems that connect people, machines, and data.
- Voice is evolving, not fading: Once seen as the future, voice is now part of a broader omnichannel reality. PolyAI and Opus both see voice as a “classical instrument” in a larger AI symphony — vital, but most powerful when orchestrated with text, chat, and visual interfaces.
- Defining success, not failover: The biggest risk in AI projects isn’t technology — it’s design. As Dan notes, 95% of failed deployments stem from unclear workflows and goals, not bad models. The focus must shift from handling errors to measuring ROI and defining success up front.
Transcript
Nikola Mrkšić: Hi everyone. Welcome to another episode of Deep Learning with PolyAI. Today with me I’ve got Dan Miller, who is an emeritus analyst and the founder of Opus Research, a storied figure in this field. Dan, it’s so good to have you on the podcast.
Dan Miller: Yeah, it’s great to be here. Good to see you.
Nikola Mrkšić: It’s great to see you as well.
I think we first met in 2019 and, you know, have stayed in touch ever since, and I’ve always really enjoyed your takes on the industry and its [00:01:00] evolution, both the visionary takes and the very realist ones. So I’m really excited to speak to you today.
Dan Miller: Yeah, back in Hudson Yards, which is coming back to life in New York. I was living there then; I’m back in San Francisco now. A lot to talk about there, because what you were bringing up back then was the transition from the heavily trained, almost hardcoded ways of having these agents behave like a person, a voice agent.
And you were well ahead of the curve. And now I was looking at a thing on LinkedIn where Nicholas de Kokoski said there are 650 firms in conversational AI, and it’s a very crowded space now. And, you know, part of [00:02:00] what we need to talk about is how companies differentiate themselves now and what brings real value to the companies that want to use these technologies.
Nikola Mrkšić: Just for the benefit of the audience, I’ll pull it up. I saw it as well and I thought, this is wonderful. You know what I found particularly interesting about this one? There’s “conversational AI platforms,” which I guess is the elite top tier. How many are there in that category alone? What is it, like five times 20 rows? A hundred platforms. And then we get to, yeah, the 650. What would these be? I guess solutions, rather than things that have self-serve. But then you reach “conversational IVR” and “voice agents” again, so virtual versus voice, which has always been a distinction that was clear to no one at all. I’m guessing these are them. Chat versus voice, I don’t know. Then my favorite: we get into “customer support automation,” and it’s like, well, [00:03:00] that’s kind of what all of these are for, right? And then the final nugget here is the BDRs versus the SDRs. My new VP of Sales was like, well, I guess those that follow up on leads versus those that don’t. And I’m like, do we really need to even make that distinction with AI? But yeah, it’s insane.
Dan Miller: When you think about what’s different now, you see all sorts of descriptions of, well, you mentioned the word solution. So what comprises a complete solution? And it gets to the question on the buy side of, hey, what are you trying to solve? Are you trying to create a better customer experience? Do you have specific objectives for, you know, helping employees complete a task, helping customers complete their tasks? Is there a way to do it where interacting with a voice agent is the accelerant? And it gets down to the slow realization that this isn’t about automation anymore. This is about the use of [00:04:00] tech, of AI. This part of the solution stack is AI. There’s a flavor of AI to assist anybody with anything, and the benefit you get from it is strongly correlated with how much you already understand about what you’re trying to do. It’s not a situation where you can just say, oh, okay, an agent’s gonna come in and take care of this.
So we’re learning, in the three years since ChatGPT made its appearance as a conversational interface for an LLM, essentially, just the full spectrum: from “hey, I’m gonna ask you to write a paper for me” to “hey, I’m really good at what I’m doing, and I put some thought into when and where I wanna lean on one of these things to make me more efficient.” And that’s a good thing, I think. [00:05:00]
Nikola Mrkšić: Yeah, no, absolutely. And you know, I think it’s kinda like asking, what can we use electricity for? Is it to build new appliances, or are we gonna rebuild the old ones? It’s kinda like we’re rebuilding the internet, or mobile. Absolutely. So no wonder there are 650; there should be more, right? And I’m not sure this is all conversational AI. It’s really all becoming an AI landscape. And you said a really interesting thing as we were dialing into this: that people kinda need to learn to get along a bit more, right? And not compete. Because right now we’re all just competing and everyone is everyone’s competitor. Although in the grand scheme of things, as the TAM increases and we get to do more of these things, there’s space for more than three of these 650 companies to succeed; they’ll just succeed at different things. Right? Yeah.
Dan Miller: Technology providers are solution providers too, you know, and this is something I’ve written about [00:06:00] for 13 years now: the emergence of APIs. Back then we had to describe what they were, because “application programming interface,” what does that mean? And back then we’d say, well, it isn’t just that you set up a standard way for different systems to talk to one another. You have to document what their purpose is, how you use them, what the best use cases are, and that sort of thing. And back then everything was fragmented. There wasn’t so much leaning on the cloud to be this place where you had mass storage and mass compute, and now you have these big LLMs and whatever potential they have to do almost anything. And what we were talking about before the [00:07:00] show started is that all these solution providers, you know, there’ll be incumbents in enterprise settings, and we try to deny this, but not everything moved to the cloud. There are still mainframes and local computing. There’s a lot of intelligence on laptops, in smartphones, whatever. So for sure these things have to talk to one another, and for the solution providers that are trying to, well, just gain more business, yes, the TAM gets bigger, but you have to define what you do well on behalf of specific customers. And then that means you have to work well with others. So that was, you know, sort of the stating-the-obvious point I was trying to make here.
Nikola Mrkšić: No, I mean, there’s so much to do. And [00:08:00] maybe just to go back to a few of the points we were touching on as well: when you look at voice AI back in 2019, when we started talking, and then all the way through, how have your expectations changed? Because I was looking at your profile briefly before we started, and I think I’ve been working on this longer than most, and compared to most of the people working on it now, I have, but I see you’re coming up on nearly 40 years at Opus. When you look at your expectations and how they’ve changed over time, are you disappointed? Are you surprised by how far we’ve come? Or rather, maybe, what surprises you in a good way, and what surprises you in a negative way, in the sense of “oh, we still haven’t cracked that”?
Dan Miller: Well, no, I’m gratified. It was like two years ago; it was almost scary, because back when I was [00:09:00] defining this, I was talking about conversational commerce and conversational technologies and just saying, hey, these are technologies that improve conversations. People to people: that was almost telecom, at the telecom layer, the network layer and that sort of thing. People to machine: that’s where we got into what you and I were always initially talking about, which was natural language understanding and the core things around automated speech recognition, all the components of that. And then the machine-to-machine conversations that have to take place; back then that was just to speed things up, to get rid of latencies. Because what we were talking about was essentially improving the user interface, and observing that voice had a very important role in a conversational user interface. And, you [00:10:00] know, I always thought voice would prevail. And that was naive, because I didn’t recognize the importance of SMS and texting and messaging and email all becoming part of the conversation as well. So we’ve embraced that, whether you call it multi-channel, omnichannel, or opti-channel, that’s happening. And I feel like I foresaw a lot of that, but had no idea when, and I spent a good deal of my early career just explaining how things ought to be. And then about five years ago it went from “what are you, nuts? What are you talking about?” to “how do I do this?” My slogan two years ago was: wow, everybody wants to do it themselves, and this makes do-it-yourself possible. I mean, [00:11:00] now, they were gonna call it prompt engineering, but now you don’t even prompt engineer. You think about what you want done, you tell your agentic AI or whatever, “this is what I want,” and it sort of does it for you. So it’s new enablement for do-it-yourselfers. But then the quality can be all over the place. And I had this slogan that said, well, they’ve gone from DIY to DIH, which is deer in headlights, because there are so many alternatives out there. Those 650 companies that Nicholas is describing aren’t all doing the same thing. And how you choose the vendor you wanna work with depends on how well you can articulate what you wanna accomplish, and on whose behalf. And, you know, I hung out in CX, and [00:12:00] I think it’s a CX story, or a UX story, a user experience story. And that’s where I think things are improving. But it’s just spotty.
Nikola Mrkšić: It is, it is.
Dan Miller: I mean, you brought up in the notes that MIT study that said 95% of these AI projects fail, or don’t achieve ROI; I don’t know exactly how they articulated it. But whether you do the deep dive or ask your preferred LLM to give you the TL;DR of it, it’s not a failure of technology. These technologies work better than ever, and, you know, things like hallucinations will never go away, but you need to design workflows, an understanding of how [00:13:00] you want to use these things, that are the most beneficial. So where we’re at is: the technologies are there to help you accomplish your tasks, but rather than thinking of it as prompt engineering, as saying the right prompt, you have to zoom out and say, here’s my ideal outcome, here’s my speculation about what resources I’m gonna need, and then monitor results and iterate, and that sort of stuff.
Nikola Mrkšić: I mean, yeah, you have to be the meta-level manager. It’s like, how do we define success? And if you then look at the whole thing, it’s, where do you actually need to automate? And maybe back to how you see the field evolving: I think the interesting bit is that it has always bifurcated into these partisan viewpoints, which kind of touches on your “let’s get along a [00:14:00] little bit.” One thing that I’m still disappointed by is that there almost seems to be a voice world, our world. Although we now do, I don’t know if you know this, but about six months ago we started doing even chat-only deployments and such, just because the world is large and people come to large omnichannel deployments through multiple routes. It’s easier to start with chat. It’s really hard to do voice well, so when people are serious it’s better to do voice; chat is easy. But what I find really interesting more broadly, ignoring PolyAI, is that it’s almost like music, right? Voice is like classical music. It evolves slowly. Sometimes there are composers, small niche vendors like us that try to take the art form to a whole new level, and maybe each century you get a few composers. And you look at voice mode in ChatGPT: it’s phenomenal, but it is very non-omnichannel, it’s not multimodal, and that really hurts it, because [00:15:00] it sits in the mix. Because, look, I think it’s not that people love text that much. The conversational form is about a lack of structure, so you can do many things without discovery, if you believe it can do them. I think it’s really the GUI that is advantageous in many situations, right? You mentioned conversational commerce, right? And the truth is, if you’re buying a clothing item, shoes, a watch, whatever, you’re looking at it and you kind of wanna see it, right? So you can’t really do it over voice. And I almost feel that classical-music-level obsession among voice vendors, whether they’re doing it on the phone for enterprises like we do, or a voice mode which could do more on the multimodality side but doesn’t really, because it just assumes it’s gonna be stuck on a screen, because it mostly is, when you’re driving and talking to it or something like that. Whereas really, and I think Meta a few years ago were really trying to push this, and they’ve calmed down a little bit but are still working on it, you need an AR/VR kind of thing to really join it all up. And until we have that, I almost feel like we’re [00:16:00] perfecting rock and roll or classical music, and we won’t get this final, universal symphony that we need out of all this.
Dan Miller: Wow. I can see that; that’s probably not a bad way of thinking about the sort of attention that’s being paid to orchestration. So the discussions that started with voice AI and did address voice agents, your classical music here, kind of morphed, as it all became very real, into more questions about, oh, what’s the ROI of doing this? Which led to a sort of operational discussion of how you orchestrate the use of, and manage, both people [00:17:00] and, yeah, artificial people. And I still think we’re in the early days of that, but there is much more common awareness of it. That’s where the solutions have to come in. Hence my point that these companies have to learn to work well and play well together. And then for vendors of solutions to enterprises, it becomes one of two discussions. It’s like, oh, what is the ecosystem I’m working with here? It can be as particular as: are we with Amazon, or Microsoft Azure, or Google Cloud? And that’s where there are some really interesting amalgams of companies that are coming together to [00:18:00] solve specific problems.
Nikola Mrkšić: Yeah. And when you think about solving the problem, and back to the customer of conversational systems: do you think we’re approaching the point where... you’ve written about the whole botenfreude thing, the glee that people feel, the “haha, you see, it didn’t work.” When will that get old? When will people stop? Because, one, these things work better now, but what’s in the way? Why is this happening?
Dan Miller: I think it’s part of human nature that people feel better about it, you know? Well, when will it get better? The things are working, but there are implementations that are worthy of ridicule, which means that maybe it never goes away. And it’s part of the vetting process for which solutions are gonna survive. But, you know, the [00:19:00] sine curve of “oh, this is just starting to work,” which builds high expectations, then doesn’t meet those expectations and takes its little nosedive; those are becoming fewer, or less severe. And there’s been a steady rise in the quality of solutions. But there are two aspects of human nature that strike me as never going away. There are always bad actors that use the technologies in ways that are detrimental, and there’s a natural human tendency to wanna break things as they come along. So the first thing you hear about when a new solution comes along is, oh, I cracked that; you know, this group on Reddit already figured that out and made it do these awful things. So I don’t think that ever goes away. So we have to look at the [00:20:00] solutions as a set. And this goes back to the printing press, where, you know, it was the Bible and it was pornography. So there’s the dark side of these things, and there’ll be a whole industry around batting that down, trying to prevent fraud; you read about the deepfakes and all that. There’s a dam for fighting off the negative side. And then you just have to pay a lot of attention to the technologies that actually improve life for us humans. And there will always be a set of those that we wanna nurture. So yeah.
Nikola Mrkšić: Yeah, that’s where we’re at. When you think of the future in that context, when do you think we get to the point where [00:21:00] people really, truly, on average, prefer speaking to AI over humans in, say, customer service scenarios?
Dan Miller: I think it’s more likely that you may not even know. I think the consensus has come around; people kind of recognize it, so we don’t have to explicitly say, “I am your agent, blah, blah, blah,” even though that’s probably still a best practice. But I think we’re there. And it’ll be a very individual decision. It’s like, you and I cannot tell a user what their preferences are. So part of better listening and understanding is providing the end user with the tools, explicitly or implicitly, so that their preferences are known. I will have my preferred way of getting things done [00:22:00] quickly. It might not always be voice, but when it is voice, you know, I’m indifferent; if an automated agent does it well, that’s what I want. If I have some qualifiers in my head, a particular solution, a particular provider that I wanna talk to, then I have a lot of ifs. You know what is happening in healthcare right now: you stub your toe, you don’t know whether it’s broken, you don’t want to go to urgent care, so you talk to ChatGPT and find out. So in a funny way, well, you saw this when OpenAI went to GPT-5 [00:23:00] and people were going, “wait, no, I worked out my deal with 4, I talk to it all the time.” That’s people pretty organically saying “I’d rather.”
Nikola Mrkšić: Maybe it’s me being the boring one, you know, getting absorbed by enterprise software, but I was in awe of them for just going, you know what, I’m gonna rid the company of that dependency and move to the next one, because it’s the right thing to do. Because you should not be choosing which model you’re gonna use; it should be good enough to know, and you should subject your team, and a bit of your users, to enough pressure until your team cracks how to do it for them. And I was like, oh wow, they’ve done the right thing again. And then he backpedaled, and I was a bit disappointed, because it doesn’t really matter if it’s better or worse; what matters is to evolve to a better one. That was clearly the better form-factor decision, one that many in this space have often shied away from and then never gotten anywhere. I was disappointed to see him backtrack, but hopefully they manage to figure it out with mid-level steps. But yeah, it’s just crazy. [00:24:00] People get used to it, they learn how, and they’re complicated tools to use, so it’s really not simple to just take them away. And the emotional reaction, the outpourings, have been unbelievable. People care deeply, deeply. The anthropomorphization: even though it takes every chance to tell them that it’s an AI agent, it doesn’t matter, you’re still one of my own, right? And you might remember, I used to be very militant about not saying that you’re AI, and I think that was the juvenile phase, where I really, in an aspirational way, wanted to make sure that we were working towards making them good enough that you don’t feel like you’re talking to a clunky robot. Honestly, we’ve done so many experiments as they improved, and it was true before, and it’s true now that they’re better: it doesn’t really matter. Whether you say it or not, whether they figure out it’s not human or vice versa, the overall performance on behalf [00:25:00] of the enterprise, conversion rates, satisfaction, et cetera, it makes much less of a difference than people assume. I feel like our society is now such that we must always be polarized about every issue. There are the believers in making it anthropomorphic and not disclosing that you’re not human, and there are people who are militant about ethics: please disclose it immediately, with full clarity, and use a robotic voice so they don’t accidentally think it’s human. We’ve done so much research about it, and the difference is plus or minus, like, one percent. It’s literally not relevant, because people either catch on or they don’t. They called you because their internet doesn’t work, or you haven’t delivered something to them. And honestly, you should probably be thinking about how to fix that problem at the root more than about how to design the failover experience, which is what a lot of customer service is.
Dan Miller: That’s a really good point. And the failovers are [00:26:00] less and less frequent in those highly repetitive interactions: where’s my package, when’s the guy coming, I mean...
Nikola Mrkšić: Wherever we’ve got a good environment, we’re now well over 90% success rates, right? And that wasn’t easy. In the days of the early solutions for this, getting to that was an absolute masterpiece by those old-timers, and it took customers that really had to be bent and forced to stay in the interaction, to learn how to use it, to get to that performance.
Maybe for the final bit: when you look at these AI deployments and everything you’ve seen over your career, what do you think is the most underestimated variable in an AI deployment? Is it the performance of the technology? Is it the data pipeline, the governance, the collaboration between vendors and enterprises, or something else I’m not thinking of? And has it changed, or the lack thereof?
Dan Miller: The fact that these LLMs can do all these things so [00:27:00] well and fail unpredictably shouldn’t dissuade people from figuring out for themselves how to best use them, and shouldn’t dissuade the designers of solutions from giving those end users the power to use them. You know, my son’s a college professor, and this comes up in education a lot. Given that you can’t put the genie back in the bottle, people are using some form of conversational LLM to help them with their schoolwork, and we have a decision to make. Do you set up mechanisms to detect when it’s being used for writing, and there are ways to do that, or do you just say, hey, I don’t care how you [00:28:00] use it, but know, as a person, that you’re not learning anything if you say “here’s my assignment, write a 500-word essay about blah, blah, blah” and it does it for you. It’s our responsibility to define how to best use this on our own behalf. And that’s a variant of what was going on when you and I first talked, when there were developers of pretty highly scripted spoken interactions, and then there was a community of solution providers that just said, oh, you can do this with a pull-down menu and click on things. And you didn’t even care what was going on underneath. Did you write a script? What was going on at the telephony layers? [00:29:00] We sort of took for granted that, hey, the phone system works, and stuff like that. But I think there’s gonna be a return to understanding all five layers of the ISO standard solution stack, because things are gonna fail in ways that you don’t understand unless you sort of know what the root cause was.
Nikola Mrkšić: You know, in every conversation with you we reach a topic that... Sean, my co-founder, has gotten me to stop going to the five- or six-layer stack in every conversation. And I’m like, but it’s really the only one there is, because you just need to map it differently to human-to-human, human-to-computer, computer-to-computer. It’s just uncertainty and different levels of failure, right?
Dan Miller: Right. Well, you talked about how preoccupied [00:30:00] we used to be with the failovers, and I don’t know what percentage of solutions were written with “oh, when this fails, you do this.” I think the pendulum will swing more towards, hey, we want to define success, and we’re getting better at that, and people are realizing they have to if they’re gonna get an ROI. And that was the message in this MIT study: a lot of it has to do with understanding all the workflows that have people and machines in them, the way you just articulated. The technologies are failing less and less; it’s how we use them, and how we design the back and forth, that matters. So the attention has to go towards [00:31:00] defining success, and its impact on ROI, and then detecting where failure happened and addressing that almost on a case-by-case basis. And I don’t think we’re there quite yet, but we’ll see.
Nikola Mrkšić: Awesome. Well, Dan, look, I always enjoy our conversations. Thank you for being on our podcast. You’ve been chronicling and shaping this industry for decades, and it’s always kind of grounding to hear your point of view. I hope we manage to live up to the expectations, slowly.
Dan Miller: You guys have been great. Yeah.
Nikola Mrkšić: How long do you think, at the current rate of technology, before it lives up to your expectations? Do you think it’s soon, or do you think it’s completely impossible to say?
Dan Miller: That’s a “never.” I used to think, oh, there’s this rolling five years: [00:32:00] perfection, you know, getting there, is like five years away, but it’s a rolling five years. I think the pace of technology refresh has gotten so short that maybe it’s not five years anymore; it’s like a rolling three years, or two years now. And the stuff that we’re visualizing and saying, oh, this is perfection, we’re all living in harmony, and these machines understand not just what I’m saying but what I mean to say, and then fulfill on their understanding of what I mean to say...
Nikola Mrkšić: Yeah, yeah, yeah. It never arrives.
Dan Miller: Yeah.
Nikola Mrkšić: We’ll be here in five years saying, oh my God, it hasn’t read my mind again. Perfect. Alright, well, Dan, thank you so much for joining us and for being a guest on the podcast. To everyone watching: please, you know, like, share, subscribe, and we will see you in the next one.
[00:33:00]