From Lagging to Leading

Navigating AI adoption challenges

Generative AI roadmap

In this episode, AWS Enterprise Strategists provide a roadmap for organizations that may feel they've missed the first-mover advantage with generative AI. They share practical advice on overcoming common barriers such as analysis paralysis, managing risk, and upskilling teams. Listeners will learn how to unlock the true potential of generative AI by starting small, lowering the cost of failure, and leveraging the scalability and flexibility of the cloud, and will come away with a realistic, action-oriented perspective for organizations determined to catch up and thrive in the generative AI era.

Transcript of the conversation

Featuring AWS Enterprise Strategists Helena Yin Koeppl, Tom Godden, and Jake Burns

Tom Godden:
Welcome to this episode of Conversations with Leaders. My name is Tom Godden. I'm an enterprise strategist with Amazon Web Services and former Chief Information Officer of Foundation Medicine. And I'm excited today to talk about how customers can get moving with this exciting new technology, generative AI. I'm joined today by two of the enterprise strategists on my team. Jake, welcome.

Jake Burns:
Thank you, Tom. It's a pleasure to be here. I'm also an enterprise strategist with Amazon Web Services, so thank you for having me.

Tom Godden:
Yeah. Helena.

Helena Yin Koeppl:
Yeah, it's a pleasure to be here. I'm Helena Yin Koeppl. I'm also a director of enterprise strategy at Amazon Web Services.

Tom Godden:
So as we talk to customers, one thing we see is that they're all wanting to get going. They're trying to adopt this technology. Jake, what do you say to customers who might feel they're late adopters? That they're late to the generative AI party and maybe missing the boat?

Jake Burns:
Yeah. Well, I'd say in terms of generative AI, if you're just starting today, you're not necessarily late. I mean, it's never too late to get started. And this is something that's going to have a long runway. This is a technology that's here to stay. That much is clear. I will, however, say if you're starting with cloud, then you might be a late adopter, but that might be something else entirely.

Tom Godden:
Well, let's double-click into that for a second. So do you need to be on the cloud in order to do generative AI? Is it a precursor to get going, or where does it fit?

Helena Yin Koeppl:
Well, it depends on what you're talking about. A lot of people are experimenting with generative AI, and for that, you might not need to be on the cloud.

Tom Godden:
Sure.

Helena Yin Koeppl:
But as soon as you're starting to think about productionizing generative AI, especially long-term in terms of getting the value, getting the impact, getting to the customer directly, you really need to think about the adoption of cloud to enable that.

Tom Godden:
Yeah, it's something that we say all the time to customers, right? Scale breaks everything.

Helena Yin Koeppl:
Yes.

Tom Godden:
So if you want to run some small proofs of concept, you want to do that on premises or something, sure. Can you do it? I suppose, in the most basic sense of the word. But when you really want to run these things at scale in a secure and reliable way, man, access to the cloud sure winds up making a difference. So customers are saying, "Okay, so maybe I'm not late. I want to get going." But what are those steps? What are some of the foundational things you need to put in place, beyond just being in the cloud, to say, "Okay, how do I get started? What do we do?"

Helena Yin Koeppl:
Well, I think number one is to really learn about the possibilities of generative AI. What can it enable that was difficult before, or not even possible before? So really learn what could be possible. But where we start is not with the technology, asking, "Hey, what can generative AI do for me?" Instead, start where we normally start: What are the business and customer problems you're trying to solve? Then work backward to see whether generative AI is the right piece of technology to enable that.

Tom Godden:
Yeah. That's a big mistake customers often make: rushing to have a generative AI strategy, as opposed to a business strategy informed by generative AI. So learning what it is is important, but don't get too far out in front of it.

Jake, what else do they need to look at in order to get going with generative AI?

Jake Burns:
Yeah, I totally agree. But I also think there are certain things you can do before you've figured all of those other things out. Like communicating to your teams that this is something designed to help you do your job better, and perhaps help you design a better job for yourself in the future, like cloud did for many infrastructure folks.

And I also think training is something you can get started on very quickly. And as we learned with cloud as well, combining hands-on experience with the training is a real way to get that training to stick. Actually, I think there are three ingredients to get training to stick. First, give your team the training. Second, give them the hands-on experience. But perhaps most importantly, have them go in with a goal: what are we trying to accomplish?

Because if you think about it, as somebody who's going into training, if you're just going to training, if we're being honest, for a lot of leaders it's a way to give PTO to an employee who has no PTO left. You don't really expect them to learn anything. But if you have a mission going into it, if you have something you really need to accomplish, then you're really going to pay attention in that training, because you're going to think, "I have to do this at the end of this training. I'm going to have to execute on this." So I think having those three ingredients makes the most of the training.

Tom Godden:
What are the blockers though? So those are some of the enablers, but what's holding them back? What is preventing them from being able to execute on this and go forward with it? People certainly might be one, right?

Jake Burns:
I think a lot of it is analysis paralysis.

Tom Godden:
Yeah. But what's causing that? Is it just analysis paralysis? Is there something else there?

Jake Burns:
No, that's a great point. There's always a root cause, and that's not the root cause. The root cause, a big one, is going to be, I think, ignorance. If we're just being really perfectly honest here, leaders don't even want to talk about something that's going to make them sound stupid, right? They may not even know the vocabulary. There's a whole new set of words that you have to learn with generative AI, and it can be intimidating for a lot of executives, especially if they don't have a technical background, if they're not close to the technology. So I think that the way to combat that is with education. So I think that there's a big opportunity right now for executive education around generative AI to help solve that and get folks unstuck.

Helena Yin Koeppl:
I fully agree. I think another blocker is really the fear of the unknown. Additionally, there has been a lot of talk about how much risk there is with generative AI and AI in general. Even people who haven't experimented with generative AI have heard the term hallucination. They've heard about how much risk is associated with adopting AI: giving wrong answers, giving biased answers, invading privacy, and so on. Those are the risk categories people are talking about.

And of course, governments are coming out with regulations to regulate AI, so people are also thinking, "Wow, the risk is even higher." It's great that we're thinking about responsible AI, and it's very important to think about it, but don't let risk become a complete blocker that makes you say, "We're not going for it."

So the important thing is to assess the risk in the right way, a granular way. Look at what you want to achieve, go back to the customer problems you're trying to solve, and then look more granularly at which risk categories you're most worried about, where you have the highest risk, and where you actually have lower risk. Start learning with the low-risk categories, and trial and repeat to learn. That can mitigate the fear of the unknown, because when you start with lower-risk experiments, you learn from your mistakes at a lower cost of making them. Then you can get rid of that fear of the unknown and move on to higher- and higher-value use cases.

Tom Godden:
Yeah, I think you break the risk down into smaller factors. What I see a lot of companies doing, because of the unknown, because they're worried about it, is saying, "Well, everything needs to be double diamond-encrusted, platinum, highest levels of security," because they're concerned. And good. Let's not go into this naive; let's go in eyes open. But you don't need the same risk profile when you're summarizing meeting notes with generative AI as when you're doing diagnostics looking for a tumor on an X-ray. We can have different approaches. And I see customers sometimes struggling with that.

The other one, going back to your point, Jake: I think sometimes, out of that fear of the unknown, customers are using risk as a way to slow this down.

Jake Burns:
Sure.

Tom Godden:
"I'm being good. I'm being responsible. I'm being deliberate. And all I'm trying to do is I'm scared at this pace of change, and so I'm going to slow it down by asking questions." So we got to get those people comfortable with that. And if you don't bring them along this journey, you're not going to get anywhere.

Jake Burns:
I think it's a great point. And I think what you're alluding to here is that they're forgetting the risk of not doing it.

Tom Godden:
Yeah. Well, and thus, maybe the late adopter, right? Maybe the reason they found themselves a little behind the curve is that they've been worried, and they need to wake up and realize there is a risk in not moving. Maybe even more so than in moving.

Jake Burns:
Absolutely.

Tom Godden:
Because I think you can manage that risk.

Helena, at Thomson Reuters, where you were previously, talk about how you approached data. We talk about this all the time, right? Data is your differentiator. Part of our strategy is to make sure that everyone has access to the world's greatest large language models. Well, that sounds great until you realize we just leveled the playing field. So how do you stand out as a customer? What does good data look like?

Helena Yin Koeppl:
It's a really interesting question. Number one: yes, it's true that generative AI can help accelerate a lot of use cases, because it's trained on a large amount of data. On the other hand, that data is normally what we call general knowledge. So when you adopt a generative AI foundation model and try to make a more powerful product for your specific sector, field, or customers, the best way is to leverage your own data to customize it, so it can give more accurate answers for the specific use cases and customer problems you're trying to solve. That is the power of your data combined with generative AI.

For example, Thomson Reuters plays in some very specialized fields: legal, financial services, and news and media. Each of these domains has expert knowledge and curated data that can be leveraged to customize general generative AI models into more specific models that solve specialized problems. We see that especially in production, and it's very powerful. For example, helping lawyers do legal research is a very specialized field, and you need the answers to be very, very accurate, so you really need to customize your general-knowledge generative AI models into more specific legal-knowledge models. We've seen that be very, very powerful.

Tom Godden:
Jake, any specific guidance you give people on building that productive data strategy to help with this?

Jake Burns:
Well, okay, being in the cloud helps. Thanks for the softball. But yes, I think being in the cloud, definitely... If you're going to do generative AI in the cloud, having your data in the cloud is going to be the easiest way to do it. There's so many other benefits of having your data in the cloud and having your systems in the cloud. So maybe this is the final reason that is actually going to stick to get organizations to really commit to adopting cloud technology, and then allowing them to realize all the other benefits that they'll get from it.

Tom Godden:
Yeah. I always tell customers, "You need to get good at managing your data." So now's the time. If you haven't been before, now you need to. Data lineage: where did it come from, and why did it come from there? You need better data dictionaries and data ontologies. And really, now's the time to automate. If a human needs to touch the data to move it from A to B, that doesn't count. Now's the time to automate it, because it needs to be repeatable and consistent.

And the other thing I say is, "You've got to start treating data as a product." What I really mean by that is you've got to version it. I need to understand that this model was trained with this set of data, because I need to understand why the model is behaving differently. Well, because we're feeding it different data. And if you don't have control over that data, and you can't repeat it and automate it, you're going to wind up having a lot of trouble. So now's the time to step up your game and get better at a lot of these things.

So let's pretend I'm a late adopter here, Jake. Give me some advice. How do I get ahead? How do I catch up? Swing for the fences, big home run idea? Lots of small incremental improvements? What's your suggestion?

Jake Burns:
It's interesting. From the outside looking in, when you look at companies that do spectacular things, it always looks like a big bang. It always looks like they had one at bat, and they swung and hit that home run. What really great companies do is fail a lot, but they make those failures really invisible. And the most important-

Tom Godden:
The power of the cloud.

Jake Burns:
The cloud helps you do that. Of course, yeah.

Tom Godden:
Come on. We had to. You were saying it before.

Jake Burns:
I thought that would be obvious. But really what they've done, really what they've mastered, and I think this is the critical skill, is lowering the cost of failure. That's really it. Because behind these big great ideas, the Amazon Primes and all of these great products and services that each of us use and know about, that look like they just came out of nowhere and were instantly successful, are dozens, maybe hundreds, maybe thousands, in some cases even more, of very small failures. So you need to be experimenting all the time.

But the problem is if the cost of failure is what it is in a data center, for example, then it's very hard to afford those failures. So you can do very few of them, maybe none. And so that's why I say that reducing the cost of failure is the key to those big successes. Because it's not about having the smartest people. Of course that helps. It's not about coming up with the best ideas. Of course that helps. But what it's really about, it's about iterating as much as possible and being able to fail as quickly and inexpensively as possible, so that when you finally hit that home run, when you finally get that great idea that just works, that you can really double down and invest in that, and then show that to the world.

Tom Godden:
Yeah, I like to say I like to have my failures have decimals, not commas. So keep them small on this.

Helena, what's your suggestion on how to help customers view that catch-up play?

Helena Yin Koeppl:
Well, it's interesting. This is such a new, breakthrough piece of technology that everybody's learning. Even those you'd view as the early movers have learned by running a lot of experiments, and therefore by making a lot of mistakes too. At the beginning of this journey, say 18 months ago, everybody thought having the newest, most powerful model was the most important thing. What people have learned is that different use cases actually need different types of models. Some might be much, much smaller, but you might need to customize them more with your own data. It depends on what you're trying to do, what you're trying to productionize, and what impact you want to have. That learning can benefit what we might call the late movers, who can start out already knowing that, hey, model choice is actually what matters.

And there are many, many more examples. What we'd like to share with our customers, and what we're developing, is really a comprehensive, full stack of three layers of solutions. You need the computing power. You need model choices that are easy and secure, with guardrails to enable you to mitigate risks. And if you just want to get going, there's an application layer that lets you experiment with your own data as of tomorrow. So there are many, many ways you can catch up very quickly.

Tom Godden:
Yeah. And don't underestimate the value of small, sustained, incremental improvements. There's also the benefit that this is a technology, and this is probably true of many, but it feels especially true of this one, where you learn by doing. So, yeah, iterate, experiment, and do it on the cloud so that you can make those low-impact mistakes, learn from them, and add value.

We've all had this experience in our careers, I'm sure: it's easy to build version one. You slap it out there, and it's all good. But now I need to sustain it. I need to operate it. I need to do version 1.1 or version 1.2. How do I build that infrastructure around it? How do I look at generative AI and find a way to build it in a scalable, flexible fashion so that I can actually live with it, and not just have that 1.0-type release?

Jake Burns:
Well, we've been saying even long before cloud, that version one you always throw away. Version one is just practice, right?

Tom Godden:
It normally deserves it. Yeah.

Jake Burns:
But I would go much farther nowadays. When you've become a high-velocity enterprise, when you start thinking in economies of speed, I think you throw away the first 100, the first 200. You're no longer counting each individual experiment because you're doing so many of them. So it's really about getting to that point, getting the cost of failure so low, experimenting at such scale, that you can't even count all the experiments.

And then, to your point as well, learn by doing. Because I think the longer you're in the analysis phase, the longer you're in the planning phase, you're not really learning much. And as we all know, you're going to throw that plan away anyway because it's going to be wrong. So there's some merit in doing some planning, but try to get through that process as quickly as possible, and get to the doing as quickly as possible. And then by doing, you'll figure out the right way. And by the way, the first way you try, it's not going to work very well. Second way, third way, fourth way. But eventually you're going to gain real expertise, and that's where real expertise comes from. Combine the theory with the practice, but don't forget the practice.

Tom Godden:
Yeah. Helena, thoughts?

Helena Yin Koeppl:
Yes. Additionally, we're talking about AI and machine learning, and it's always learning. So get the first version out there, and continuously improve it with new data, with MLOps, managing the drift that comes from new data. At the same time, techniques like reinforcement learning with human feedback can help you make the results better. So the most important thing is to get started. And like I said, machine learning is always learning.

Tom Godden:
And I like this. It comes back to a lot of the conversations we have around cloud transformation and digital transformation: you've got to run the code that you wrote. When you built that proof of concept as 1.0, and it's a little shaky, but you felt, "I built it, and now it's Jake's problem," that's one thing. But when you have accountability for its security, reliability, privacy, and responsible AI metrics, then as a leader, as an owner of that asset being built, you're going to look at it and view it differently. So it comes back to that two-pizza-team mindset to really invigorate that.

Helena Yin Koeppl:
Yes, exactly. If you build it, you run it.

Tom Godden:
Yeah. Well, it's simple. Yeah.

Jake Burns:
I was just going to say, ownership is the key, right? And I would say it's more than just the leaders who are owners. Everyone in your organization, when you get them to feel like owners, then they will have that same attitude. And this is one of the reasons why I think it's so important to have your own team do the work when you're migrating to the cloud, when you're implementing generative AI, as much as possible to have your own team do it, because look... Even have them be part of the planning process. Because they care a lot more about their plan or our plan than they do about your plan. Your plan could succeed or fail, and then we can talk about why it failed, but our plan must succeed. And so I think being inclusive in the ownership conversation really gives you so many benefits.

Tom Godden:
I love this question. I get asked this all the time. How do you measure ROI of these initiatives? What's the right way for a company that's getting going on this and trying to implement to measure ROI? What's a good ROI?

Helena Yin Koeppl:
Well, it's important to know that with generative AI, the cost part is getting a little more complicated. Not only do you need to build it, you're also running it and inferring results, so there are quite a lot of cost buckets. And again, this is one of those topics where everybody is still learning. The eventual actual cost involves not only the compute for building or customizing the models, inferring results, and continuously running it, but additionally talent: training your organization, re-training the organization, and actually changing some parts of the workflow, because now people are working together with AI, with a human in the loop. In total, that's the cost part.

And as we're talking about return on investment, there's the return part, the value. As usual, we need to think about what benefits we're providing. Is it more efficiency? Efficiency is one big topic. But beyond efficiency gains, we eventually also want to talk about what top line we can drive: more innovation, quicker innovation. And we have seen that, right? Not only a faster pace of innovation, but new innovations. Not only doing things differently, but doing new things.

Tom Godden:
Yeah. And I think it's so important. You captured so many of the cost aspects so well, but I'm just going to drive home the point: it's business value. If you can't tie this to creating business value, it's just a theoretical exercise, which happens all too often inside a technology department. It's got to be driven by business value.

Helena Yin Koeppl:
Additionally, I think when we're starting with experimentation, even when we're shaping the ideas, we shouldn't just think about experimenting to test how the technology works. We should start to see what value it really drives. To your point: business value.

Tom Godden:
Jake, I know you spent a lot of time on the topic of training and skill development. There's so much to learn here, and the good news and the bad news is tomorrow there's going to be even more to learn. How do you recommend that customers look at skill development for themselves as leaders, but also for their teams? What are some of the things you've been working on?

Jake Burns:
Yeah. Well, first of all, there's no better ROI than training your organization on something they're actually going to be able to do something with. And the great thing is that training has a compound-interest effect: the earlier you get them trained, the more quickly they'll be able to learn new things, and so on.

There's also a morale factor. Folks generally appreciate being trained on high-value things like generative AI and cloud. And the higher the morale in your organization, the more productivity you're going to get. That compounds as well.

I'm a big fan of utilizing existing employees and training them, reskilling them, as a first, second, and third choice, really. Go external only to bring in experts who can help instill best practices and transfer skills to your team, and sometimes to take the undifferentiated heavy lifting off your team's plate, because oftentimes their first excuse is going to be, "We're too busy." So take that excuse off their plate. I have a lot of thoughts on reskilling, but the organizations that really focus on it tend to be the most successful.

Tom Godden:
Yeah. Jake, Helena, thank you so much for the conversation. I really enjoyed it. I always learn when I spend time talking to you guys. And thank you to all of you for joining us today in our conversation.

Helena Yin Koeppl:
Thank you.

Jake Burns:
Thank you, Tom.

Jake Burns, AWS Enterprise Strategist:

"When you look at companies that do spectacular things, it always looks like a big bang. It always looks like they had one fat bat and they swung and they hit that home run. What really great companies do is they fail a lot, but they make those failures really invisible."

Listen to the podcast version

Listen to the interview on your favorite podcast platform.