Me, Myself, and AI Bonus Episode

Bonus Episode: Generative AI Trends for 2024 With Tom Davenport


Topics

Artificial Intelligence and Business Strategy

The Artificial Intelligence and Business Strategy initiative explores the growing use of artificial intelligence in the business landscape. The exploration looks specifically at how AI is affecting the development and execution of strategy in organizations.

In collaboration with

BCG

While the Me, Myself, and AI podcast is on a seasonal break, we hope you enjoy this bonus episode. Tom Davenport, President’s Distinguished Professor of Information Technology and Management at Babson College, joins Sam Ransbotham and Shervin Khodabandeh to talk about their predictions for AI trends in 2024.


Subscribe to Me, Myself, and AI on Apple Podcasts or Spotify.

Transcript

Allison Ryder: Hi, everyone. Allison here. As our show remains on winter break, we’re dropping bonus episodes for you in our feed. Today, Sam and Shervin speak with Tom Davenport, a professor at Babson College who weighs in on what he sees as the most interesting and compelling AI trends for organizations in the coming year. We hope you enjoy this episode. And as always, we really appreciate your ratings and reviews as well as comments on how we can continue to improve.

Tom Davenport: I’m Tom Davenport from Babson College, and you’re listening to Me, Myself, and AI.

Sam Ransbotham: Hi, everyone. We’ve got a bit of a different episode for you today that we’re excited about. Today, Shervin and I are talking with Tom Davenport, President’s Distinguished Professor of Information Technology and Management at Babson College. The fun thing is that through his work, Tom has vast insights into what’s happening with artificial intelligence. And Shervin also spends a lot of time talking with companies working on real projects. So we’re planning to have a more general episode about the state of AI and the directions we see. Tom, thanks for joining us.

Tom Davenport: My pleasure. Thanks for having me.

Shervin Khodabandeh: Hi, Tom.

Sam Ransbotham: All right. Let’s start easy. What are we all seeing that companies are excited about right now? Tom, what are people excited about?

Tom Davenport: I don’t know if you’ve heard of this thing called generative AI that people seem to be quite interested in. In a way, I think they’re too interested in it because there are obviously more traditional forms of AI — what some people call legacy AI — that are still quite valuable. I know you talked about it for years on this podcast before generative AI came along.

And for many companies, I think it’s still as relevant or maybe even more so than generative AI. So oftentimes you have to try to persuade them this is not the only form of AI that’s out there.

Sam Ransbotham: It’s funny, Shervin and I were just talking about the phrase “traditional” or, as you now said, “legacy AI.” How have we gotten to the point where we can talk about this gee-whiz, newfangled thing as traditional or legacy?

Shervin Khodabandeh: Well, you remember, Sam, at the World Bank event a couple of days ago, somebody reminded us that AI has been around since the ’50s. So in some ways, it’s not that wrong to refer to it as legacy. But, Tom, what you’re saying totally resonates. That is, generative AI has made such a big splash that some folks have forgotten about the bigger splash that was already there, which maybe they’d ignored: AI itself.

In our work at BCG, I see three paradigms. I see those companies that have been building and investing in AI for some time. And Sam and I have done a ton of research, as I know you have, too, Tom. For these companies, this is a continuation of their investments, though not necessarily a linear continuation, but they do have a fair amount of capabilities in terms of data ecosystem and technology ecosystem and ways of working.

So they are more willing and able to bring in generative AI and make it work with AI. Those, I would say, are the winners today. Then there’s a group taking small steps around AI, and they’re seeing an acceleration with generative AI. And they’re thinking, “Oh, we need to really, really think hard about our AI strategy and how it fits together.” And so this is a wake-up call. It’s no longer a theoretical thing.

And then there is a group that’s unfortunately thinking generative AI is going to come, so we don’t have to worry about AI anymore. So everything is now [about] generative AI and we don’t need data scientists or data engineers and everything. You just ask it [to do something] and it will do [it], which is unfortunate. I think that group is slowly learning. So that’s what I’m seeing. Tom, what are you seeing?

Tom Davenport: Well, it’s interesting. I hadn’t quite heard of that last category. I agree with you that the early adopters have an advantage relative to making generative AI real in their organizations. It’s funny, I just did a survey with MIT CDOIQ [Chief Data Officer and Information Quality Program] and AWS a month ago or so. Everybody was very excited about generative AI, which was the focus of the survey.

Eighty percent said they thought it was going to transform their organizations, but only 6% had a production application that they had deployed. So I think we’re really in the early experimental days for the vast majority of companies, even when you talk about the ones that are really quite advanced.

I was talking to a bank in Asia that I’d written about: DBS, based in Singapore, the biggest bank in Southeast Asia. They were quite early adopters of AI. And I’ve written in MIT Sloan Management Review about the CEO and what a great example he is of a leader in AI. I was talking to them the other day about generative AI, and they said, “Yeah. We’ve got like 35 use cases.” But none of them are in production yet, largely for regulatory reasons. So some organizations have understandable constraints on putting things into production.

Sam Ransbotham: Was that the danger that you were worried about? I mean, I think when you led this off, you said, “Hey, you’ve heard about this thing called generative AI.” So one argument is, “Oh, that’s great. It’s getting people’s attention toward artificial intelligence that then perhaps we can channel.” But I think you had a bit more of a tone of, “Hey, this is distracting from legacy or important stuff.”

Tom Davenport: Well, companies will ask me to come in and talk about generative AI, and initially, I would do that as they requested. But I started feeling guilty about it and saying, “By the way, in your business, you’d be better off exploring some of the more traditional stuff.” Now, I don’t want to be one of these old guys standing around saying, “Remember what it was like …” But some caution, or some mind expansion, is necessary.

Shervin Khodabandeh: By the way, that’s the third group in my view. There’s a real danger in thinking, “I’m going to sidestep this or forget about AI and all that. Because, yes, I’m behind. We never invested when we should have because we thought it wasn’t for us or wasn’t a priority. We maybe did some pilots that never got to production, but now we have this new thing, and we don’t need the old thing anymore.”

Tom Davenport: I think it’s quite conceivable that a lot of things we did previously will eventually be replaced or will have generative AI as the interface. Like Sam, I spent a lot of time with analytics before AI came around. And I think these systems can do amazing work in analytics and machine learning. Just a two-line prompt will get you three pages of machine learning: model creation, analysis, feature engineering, all that stuff. It’s just mind-boggling.
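[A minimal sketch of what Tom describes, assuming a scikit-learn-style workflow: the kind of model-creation and feature-engineering code a short prompt can yield. The data file, column names, and model choice below are hypothetical, not anything discussed in the episode.]

    # Illustrative only: the sort of pipeline a generative AI tool might
    # write from a two-line prompt. Dataset and column names are made up.
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    df = pd.read_csv("customers.csv")  # hypothetical input file
    X, y = df.drop(columns=["churned"]), df["churned"]

    # Basic feature engineering: scale numeric columns, encode categoricals.
    preprocess = ColumnTransformer([
        ("num", StandardScaler(), ["tenure_months", "monthly_spend"]),
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["plan", "region"]),
    ])

    model = Pipeline([("prep", preprocess), ("clf", RandomForestClassifier())])

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model.fit(X_train, y_train)
    print(f"Holdout accuracy: {model.score(X_test, y_test):.2f}")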

Sam Ransbotham: OK. But don’t tell people because I’m giving an exam on that right now.

Shervin Khodabandeh: This is an important point. Many people think of generative AI as [being able] to write poems or text or summarize things or make movies — which it does — but it could help you sequence tasks and write code and debug itself. I mean, that’s really, really powerful as you were saying.

Tom Davenport: I guess that all works. The advanced data analysis part of ChatGPT works by writing Python code to do all of that stuff. It’s really quite astounding. Although, even in the tech space, I was just talking with an old friend this morning who has a company that does mostly analytics work and now they’re doing AI. We’re talking about the opportunities in text, whether you’re talking about customer conversations or employee comments or legal documents for all the lawsuits that have been brought against you or sentiment analysis online.

There’s just a mind-boggling amount of stuff that generative AI can do to make sense of all that in a much better way than the previous approaches. One of my favorite examples is sarcasm. I used to love the fact that traditional sentiment analysis could not deal with sarcasm at all. And my favorite example was one that some people at Marriott told me.

They said, “Somebody wrote on TripAdvisor, ‘The pool was too cool.’” How do you interpret that? AI was not capable of it at the time, but generative AI can say they probably think that it’s a pretty great pool. That’s much more accurate, and it can figure out that that particular comment should go to the local hotel manager and not to corporate customer relations or whatever. So it’s just quite astounding all the things it can do.
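[A minimal sketch of the sentiment-and-routing idea Tom describes, assuming the OpenAI Python client: an LLM labels an ambiguous comment and picks a recipient. The model name, prompt, and labels are illustrative assumptions.]

    # Sketch: LLM-based sentiment analysis for ambiguous phrasing such as
    # "The pool was too cool." Assumes openai>=1.0 and an API key.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def classify_review(text: str) -> str:
        # Ask for a sentiment label plus a routing decision in one call.
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; any capable chat model works
            messages=[
                {"role": "system", "content": (
                    "Classify the hotel guest comment as POSITIVE, NEGATIVE, "
                    "or MIXED, then say whether it should go to LOCAL_HOTEL "
                    "or CORPORATE. Answer in the form SENTIMENT|ROUTE.")},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content

    # A keyword-based model sees only the word "cool"; an LLM can weigh
    # the whole phrase before deciding.
    print(classify_review("The pool was too cool."))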

Shervin Khodabandeh: I think you’re hitting the nail on the head about a very important aspect that has been a hindrance to adoption of legacy AI, as you called it. Sam, we talked about this. The research all culminates in what I would call a 10/20/70 rule of thumb that I use and many of my colleagues use: To get value at scale, 10% is about the model and the algorithm, 20% is about the technology backbone and all the connectivity and everything that’s related, but 70% is about embedding it inside the fabric of the organization and the adoption.

Sam, you and I with our colleagues framed it as human-AI interaction and different modes of interaction or process change and all of that. Right? That 70% has traditionally been really hard. It’s why, in those examples you mentioned, Tom, companies haven’t been able to get things to production.

What generative AI could do is make the 70% a lot easier. It can make AI more explainable, and it can give users the ability to interrogate it or override it or work with it in a much less clunky way than before. And I think that’s a real advantage of generative AI. As you mentioned, it could be a very nice interface that’s much more intuitive and hence allows a lot more adoption and usability of traditional optimization and prediction tools.
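[A minimal sketch of Shervin’s interface point: a toy natural-language layer over a traditional prediction model, so a user can interrogate its output. All names, numbers, and routing logic here are hypothetical stand-ins.]

    # Sketch: a conversational front end to a "legacy AI" predictor.
    from dataclasses import dataclass

    @dataclass
    class Prediction:
        score: float
        top_factors: list[str]

    def predict_churn(customer_id: str) -> Prediction:
        # Stand-in for a call to an existing traditional model.
        return Prediction(score=0.82, top_factors=["late payments", "support calls"])

    def answer(question: str, customer_id: str) -> str:
        # Surface the model's output; for "why" questions, explain the
        # drivers in plain English instead of returning a bare score.
        pred = predict_churn(customer_id)
        if "why" in question.lower():
            return (f"Churn risk is {pred.score:.0%}, driven mainly by "
                    + " and ".join(pred.top_factors) + ".")
        return f"Predicted churn risk for {customer_id}: {pred.score:.0%}"

    print(answer("Why is this customer likely to churn?", "C-1042"))

In a fuller version, the question parsing would itself be an LLM call; the simple if-statement just keeps the sketch self-contained.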

Tom Davenport: My next book is about the citizen movement and nonprofessionals in system development and automation and analytics and AI. And it’s pretty clear that it’s going to introduce a whole new realm of people to those capabilities who didn’t want to bother to learn what data was available or how to use the visual display tools or Excel or Power BI or what have you. But they can say a simple English sentence about what they want, and I think it’s really going to open things up.

By the way, maybe both of you know who Randy Bean is. He has this consulting firm that does an annual survey of data leaders, and he has a new survey out. One really depressing thing has been questions about being a data-driven organization and having a data-driven culture and so on. [The results] have bounced around in the low 20% area for years, and it’s even getting worse in some cases.

It was in the 30s a few years ago. He’s been doing this for 12 years now. In the latest survey, which I think will be published by the time this podcast is out, those figures more than doubled, with organizations’ data leaders saying, “Now we have a data-oriented culture. We have a data-driven organization.” It had to be generative AI; nothing else changed so much in the past year. It’s amazing how it’s opening up not just the possibilities for people to participate but already the interest in these issues at every level of a company.

Sam Ransbotham: That seems like it’s going to have a great trickle-down effect. But let me take the counter on that just to be argumentative. I think planes could fly a lot faster if we would take away those pesky safety protocols that they put in place. I mean, that just slows things down to have all those redundant pilots. We could go a lot faster and be a lot more efficient without that.

Let me expand that to software development. All that testing and quality assurance takes way too much time and resources. We could go a lot faster without that. How are we not setting ourselves up for that with this citizen developer world you’re talking about? People care about features first, security later.

Tom Davenport: It was interesting. I’m writing this book with Ian Barkin. He comes more from an automation background, and people don’t worry as much about little workflows and so on that are created by citizens. But both on the application development side and the data science side, people are more worried about it.

We had this idea that a lot of IT organizations were still thinking, “Oh, this creates shadow IT or rogue IT, and my people aren’t going to want to look at it and see if it actually works.” Some of that is still out there, but it’s amazing how many chief information officers we found who say, “We’re really encouraging this. We can’t do all this digital transformation on our own.”

Sam Ransbotham: They’re overwhelmed.

Tom Davenport: And the lines at IT for developing applications are getting longer, not shorter. So I think it’s really going to have a big impact. BMW training 80,000 white-collar workers in how to do citizen development — it’s just mind-boggling.

Shervin Khodabandeh: But there’s a real danger of paralysis and inaction here because, as you’re describing, AI was already complicated and complex, and generative AI is making the implementation and the adoption even more complicated because you have all these other things you have to think about.

It reminds me of the very early 2000s with the internet phenomenon, when so many established and famous organizations, public and private, basically said, “Why do we need a website? Why do we need e-commerce? This is a complex technology. Nobody can build a website. We don’t even know what it is. Why do we need it?”

Those organizations paid heftily for being behind. I still remember incredibly useless and clunky websites of companies trying to sell things. [These were sites of] very established companies where you’d have to wait for 30 seconds for a page to load, and then you’d go somewhere else that [immediately loads].

So I do also worry that there is a real danger. Again, of my three groups, the leaders are going to be fine. They’re going to continue to innovate. Then you have the middle group and the bottom group, and there’s a real danger of these companies really, really falling behind and just waiting for things to settle down. What do you think about that?

Tom Davenport: Well, I agree. I coauthored a piece with Vikram Mahidhar in MIT SMR a number of years ago about AI not being a good area in which to be a fast follower because it takes a long time to accumulate the data that you need, and it takes a long time to hire the people you really want. And the longer you wait, the fewer of those people there are going to be. I think it is going to set a number of companies back semi-permanently if they don’t get moving fairly quickly in this space.

Sam Ransbotham: That bothers me though, because I think what you’re arguing for is that this hegemony of technical giants is only going to get bigger, stronger, more powerful. And if following is so difficult, how do we get ourselves out of the Middle Ages, where we have these feudal lords that we’ve got to pledge allegiance to in order to get our models that we want if it’s all concentrated?

Shervin Khodabandeh: The question is, what makes following so difficult? I think a lack of understanding of what the opportunities are and what the technologies are; a fear or a belief that this is not for us and we can never figure it out; or a belief that things are going to get much simpler, so that I can go to one vendor or one solution that will do it for me.

So I’m arguing that some of that difficulty is a mindset and, for lack of a better word, a reflection of either fear or ignorance rather than priorities. I guess part of that difficulty, in my view, is just the lack of understanding among management and senior management at some companies of what is really required and, to Tom’s point, why it doesn’t make sense to be a late adopter or a second follower.

Sam Ransbotham: You could have all of that you wanted, but if you didn’t have data … Tom was saying data was a big part of it.

Shervin Khodabandeh: Everybody has data.

Tom Davenport: I think for startups it can be quite challenging to get the data that they need to create a minimum viable AI product or whatever. Big companies, obviously, have lots of data. But Sam, I think we can distinguish between the vendors in this space, which are mostly big giants. I mean, OpenAI was an interesting example because they had 300 and something people employed there compared to what, 80,000 at Google, or maybe it’s even bigger, but they still beat them to market generally. Of course, now they’re in bed with Microsoft in a big way.

Sam Ransbotham: I was going to follow up with that.

Tom Davenport: Exactly. You do have to have a lot of processing power, and that generally requires a big company. But it’s among the users of this technology that I was really arguing you don’t want to be a fast follower, because it just takes too much time to catch up.

Sam Ransbotham: Let’s switch back: You mentioned the 80,000 people at BMW learning citizen development. Why stop there? Do we need much more public awareness about AI and machine learning? I’ve got a 13-year-old and a 15-year-old. Should we be having fireside chats at night over the dinner table about artificial intelligence?

Shervin Khodabandeh: I don’t think it’s that group. I think it’s the generation that’s making the decisions now. Tom, what do you think?

Tom Davenport: Well, it’s interesting. I might’ve agreed with you until yesterday. In one way, [what I saw] was comforting. It was a study out of Stanford saying that there’s not much cheating happening because of generative AI in schools, but the percentage of people who seem to be even aware of it was much lower than I thought.

I forget the exact numbers. And sadly, minority kids seem to be substantially less aware than White kids. I do think that it’s going to be incumbent upon schools at every level to teach people how to be productive and effective with these technologies and not to ban them, as some did early on.

I think that’s receding, fortunately, but this is the most powerful technology we’ve seen in generations. Most people seem to agree with that, even grizzled veterans like ourselves. And so schools are going to have a heavy load to bear in letting people know how this stuff works. And parents, too, for that matter. Sam, I guess, yes, you should be talking about AI with your kids.

Sam Ransbotham: Actually, Pew Research had a study out recently that was talking about who had heard of ChatGPT and who was using it. And it was perfectly inversely correlated with age. Their survey cut off at 18, but it was pretty clear what the trajectory was. And based on the sample of the kids hanging around our house, they are all over this technology.

Tom Davenport: I’m sure you’ve seen this too, Sam. It’s got to be really hard to engage faculty in this. I mean, some of the faculty at Babson are very gung ho, but some actually went to the academic technology office and said, “Can we shut off AI on our campus?”

It’s a big organizational change to get everybody bought into the fact that this can really make for better learning, and it’s not easy even to figure that out. I’ve done it in some of my classes on AI: I make them show their work, meaning their prompts, the edits that they’ve made, and so on.

There are all sorts of problems. One, when they edit, they introduce grammatical and spelling mistakes into what the [large language model] has done, and they forget they’re supposed to show me their prompts. And then I tell them they need to look it up in Google to make sure it actually is true, and they forget to do that. And several of them said, “It was easier when we just went to Wikipedia and copied some stuff down.”

Sam Ransbotham: Like in the good old days of copying Wikipedia.

Tom Davenport: Yeah. Exactly.

Sam Ransbotham: I think that’s an interesting, different take on things. And part of it comes down to what you’re teaching. I mean, Tom, you’re teaching a class in AI. The idea of banning it clearly doesn’t make any sense there. But if I were teaching people math, I think I would ban a calculator, at least until people knew how to add and subtract.

Shervin Khodabandeh: And I would argue, instead of doing that, make the problem a tougher problem and allow the kids to use more imagination, more creativity, with a tool. I mean, the analogy would be, sure, you want kids to be able to do times tables and all that, right? But do you want folks to be able to multiply three-digit numbers? Maybe.

Some schools would glorify that and say, “It’s great that you’re doing that.” I also feel like banning AI is like saying, “Let’s just cut communication lines of the internet because we don’t want any improper content to show up on our TV,” or “We don’t want our kids to be on YouTube. Let’s just make sure we have no internet.” Whereas it is a reality of our lives, so let’s do good with it.

That’s why it’s hard. So I think that my caution and my fear — which was my earlier point — is that this inaction or this fear or this paralysis would lead people and societies and companies to treat this as a black box, probably an evil black box — let’s have nothing to do with it. Whereas I think you need to just go into the lion’s den or the circle of fire and open it up, and it’s not going anywhere.

It’s part of our future. Let’s figure out how to do good with it. Let’s figure out how not to do bad with it. Vis-à-vis what we do in schools or what we do in companies or in societies, humans plus AI could do better together than on their own. So, in your example, make the problems harder or make the assignments more creative and allow kids to go use whatever resources they want. Aren’t those the right skills anyway that we’re going to need 10 and 20 years from now? Not the addition skills anymore, but the ability to integrate all these technologies into doing something that the technologies on their own cannot do.

Tom Davenport: Well, I think we have to figure out what is the best type of technology for working with humans. Last week, I attended a presentation at the MIT Initiative on the Digital Economy where a doctoral student was studying copywriting. There was an experiment with three different conditions. One was just a word processor, no generative AI. The second was an advanced type-ahead system where you could accept or reject suggestions, and it was clearly maybe 50/50 human and machine.

And the third was generative AI creating the full output. In general, the copywriters liked the intermediate stage more, and it produced higher-quality work that got more click-throughs online. So I think we need to figure out the best way to use these tools, and it’s not just “generate my essay for me” based on one prompt.

Sam Ransbotham: Yeah. I think we’re probably pretty early in figuring that out. We’ve only had a year now. I think we can give ourselves a little bit of slack in trying to figure out how to work with the tool. If the tool would just quit changing on us, we could figure it out, right?

Tom Davenport: Well, that’s the thing. It’s a full-time job just keeping up with how the tools work and what new things have been developed and so on.

Sam Ransbotham: Tom, it’s really been quite fascinating talking with you. We’ve covered a lot of different topics here. And I think maybe we’re ending up with “learn more, do something” — that’s kind of a running theme: It’s going to affect everybody, and we need to be doing something versus passively waiting and seeing. As you say, fast following might not be the right approach here. Thanks for taking the time to talk with us.

Shervin Khodabandeh: Thank you, Tom. This was really wonderful.

Tom Davenport: My pleasure. Really enjoyed it.

Allison Ryder: Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn’t start and stop with this podcast. That’s why we’ve created a group on LinkedIn specifically for listeners like you. It’s called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share your insights, and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We’ll put that link in the show notes, and we hope to see you there.
