The ChatGPT AI hype cycle is peaking, but even tech skeptics doubt a bust

The arrival of OpenAI’s ChatGPT and generative AI only a few years after the hype cycle over the metaverse has attracted both AI bulls and bears as tech pursues its next big thing. The metaverse came with NFTs, an extension of cryptocurrencies and the blockchain, and for now, the hype cycle warning looks like a good one to heed. One thing is certain: Silicon Valley needs a next big thing. The industry is seeing a contraction unlike anything it has experienced over the past decade, with tech leading layoffs in the economy and cost-cutting now the norm for the one sector that had been accustomed to operating with a blank check from investors.

At a CNBC Technology Executive Council virtual Town Hall on Thursday, we gathered technology executives at companies across the economy — specifically, many at companies using AI but not creating it, for example, in retail, media, legal, agriculture and logistics. We gathered a roughly equal number of AI enthusiasts and skeptics, and broke them up into groups to discuss the sudden explosion of interest in ChatGPT, and to separate as best as they could the hype from the reality.

One of the main points made by several executives was that whereas the metaverse remains a nebulous concept to many, what’s happening today in AI is the acceleration of technology processes that have been in use for up to two decades already across a wide range of corporate functions, from software engineering to finance, operations, legal, logistics and creative. Even the skeptics of the latest hype cycle recounted during the Town Hall numerous examples of how AI is already embedded in more efficient business processes. The discussions are conducted under the Chatham House Rule so executives can speak freely.

The market is clearly taking the latest AI advances seriously, maybe nowhere more clearly than in this week’s battle between Microsoft and Google over competing AI for search engines. Google’s shares dropped by roughly 13% over a two-day stretch after its attempt to respond to Microsoft with its Bard AI was deemed a “rushed, botched” effort by its own employees, which may have only served to magnify the risk to its search dominance. Microsoft CEO Satya Nadella was clearly enjoying his rival’s anxiety this week, telling the FT, “From now on, the [gross margin] of search is going to drop forever. There is such margin in search, which for us is incremental. For Google it’s not, they have to defend it all.”

Microsoft’s AI advances are occurring quickly. Microsoft board member Reid Hoffman told TEC members last year, “There is literally magic happening.” The particular AI example Hoffman was discussing, Copilot, is now taking over as much as 40% of code writing from human software engineers, but “AI will transform all industries,” Hoffman said. “So everyone has to be thinking about it, not just in data science. Across everything we are doing, we will have amplifying tools, it will get there over the next three to 10 years, a baseline for everything we are doing,” he said.

The latest TEC Town Hall discussion, too, made clear that generative AI, and AI more broadly, is about a lot more than just a new era of internet search, even if those headlines right now may have the largest market cap consequences for the biggest companies.

“This is a major, major, major revolution,” said one cloud executive on the call. “People compare it to the cloud revolution, or the mobile revolution or the internet revolution. I think that it is 100 times bigger than all of those combined. I think that it is as big as the Industrial Revolution. And I think there are a lot of parallels to the Industrial Revolution. And I think that companies that do not embrace it immediately, existing companies that will not embrace it immediately, there is a chance that they will become not relevant.”

“Even within tech to be honest, most of my peers are taking three or four years of development work and just throwing it out in terms of training neural nets for what they’ve done over the last few years,” said another executive. “Because out of the box, we’re getting higher accuracy … and then the ability to train it further. And the domain you’re in is just increasing performance.”

Here are some of the benefits and risks on the minds of the tech leaders most likely to incorporate generative AI into their operations and consumer-facing services and products, many of whom have already adopted AI in some form.

From algorithmic mastery of one task to mastery of all

Many AI advances in recent decades have demonstrated that computing power can be devoted to mastery of a single complex task, whether a chess match or a Netflix or TikTok recommendation algorithm. The lesson from ChatGPT is different in a key respect: it’s redefining the limits of what a machine can learn, and most executives seemed to agree that another Industrial Revolution-like process is underway.

The deep language learning models that are being developed and launched have use cases that ultimately cut across all sectors and all functional teams that today do things manually.

To put it in the narrowest CNBC context, one executive gave the example of stock analysis. “We use it in financials. We will take 5,000 balance sheets, read them within seconds, be able to extract all the financial information, calculate a risk score, and be able to make a decision on the risk of a portfolio.”

“If you can train deep language learning models, the level of sophistication and solutions you can solve has deep implications,” one executive said. And whereas traditional AI has only solved problems in “deep analytical spaces,” this new AI brings those capabilities into the creative economy.

“How do you think about retaining the creative economy, not just the deep data scientists?” the executive said. “This has profound implications in functional jobs, as well as creative jobs.”

Costing jobs, creating jobs

While the classic argument pitted against AI is that it will be a job killer, executives across industries have contended for years that this won’t be the case, and that AI will take over repetitive or mundane tasks humans should not be doing in the first place, allowing humans to do more important jobs. The jury is still out, but that was mostly the message from this group of executives, too, though not without some examples of job losses.

One executive gave the example of manual laboratory operators “laboring over microscopes and images,” roles they have already seen replaced during the past decade.

Another executive who works with lawyers and accountants said the sentiment right now is that AI is not to replace lawyers, but “lawyers using AI are gonna replace lawyers.”

“There’s this perception right now, especially in a professional industry, that if professionals like lawyers and accountants don’t use AI, they will be replaced by those that take advantage of the tools because those professionals are going to be more effective, more efficient, they’ll be able to do more,” he said.

From within the tech sector, one chief technology officer noted the results of experiments with generative AI for service requests that began four years ago and have led to about 89% of the company’s unplanned service requests now being handled fully autonomously. As that front-end interaction has improved its responses over time, the executive said, there has been zero attrition on the team.

“We haven’t laid people off. People haven’t lost their jobs. Human resources, ops people, salespeople, facilities and legal people in our organization, now all use this tooling,” the executive said, and she backed up the point made about the legal profession. “Those people, you know, augmented with ChatGPT have replaced those who aren’t.”

But another executive worries that the vision of people being freed up to do higher-order work is Pollyannaish, and that the opposite effect inside the world of business is possible, or as they put it, “the challenges that people start to, people get dumbed down, because we just ask the computer for everything. And even business processes get built on these things, and they become these black boxes.”

Retail is building on top of existing AI store intelligence

A retail executive said current technologies, including robotic process automation and machine learning applications in forecasting, are building blocks already in motion “in pretty much every industry right now” and demonstrate that the next level of AI is not focused on replacing jobs.

“The way I see it, there is a deep desire from pretty much every company to become more efficient,” the executive said. “And this is not from a labor standpoint, this is from an operations standpoint, financial standpoint, getting customers what they want. And what generative AI does, is really help you crunch, take your machine learning to, you know, the ‘nth’ level of the finite level. It’s not even taking it to the maximum potential level. But what generative AI is doing for us is really helping us get to the answer, a preformed answer much, much, much, much quicker, where you don’t need to train all the models with your own data for every single thing. … And then you can build on top.”

That can mean AI identifying the best locations for retail stores, as well as optimized shipping of items to stores, but critically, without a data scientist having to be there at every step of the process.

If generative AI can free the data scientists and machine learning experts from training the technology at every step of the way, leaving them to look only at “the incremental,” the retail executive said, there will be “immense productivity enhancements.

“These intelligent machines, without investment, costly investment decisions, they can help you understand what the outcome potentially could be and narrow it down to a few sets of choices, versus an infinite set of choices. I think that’s where, to me, the real power comes in.”

Generative AI risks

There are serious risks to consider, and the AI skeptics laid out many: misinformation or just inaccurate information already being produced by AIs; noise in the data, aka “junk science” which may lead companies down costly dead ends; bias; the threat to human jobs; issues around consent when humans are talking to AIs and can no longer tell the difference; and copyright issues, to name just a few raised by executives across sectors.

“Technology has shown time and time again [what] can be done without societal guardrails and technology guardrails,” warned one executive. “Time and time again, if you want to talk Industrial Revolution, let’s talk about how long did it take to understand what we did to the environment as part of that?”

“We thought about privacy too late and couldn’t put it back in the box,” said another executive.

Consumer products’ copyright Wild West is possible

Consumer products are one area where copyright issues could crop up with more frequency and no precedent, due to “the grassroots nature of what we’re seeing,” said one consumer industry tech officer.

In the recent past, product development and tech have been integrated to build experiences for customers, but now, “we’re seeing our creative folks out there using products like DALL-E and Midjourney to drive inspiration in their product development,” said the consumer executive, referencing popular generative AI art programs.

The positive side is, “it’s a super great way to kickstart the creative process, to do AI-driven mood boards and things like that when they’re developing new products. But on the downside, you know, this whole IP aspect, IP ownership and how that evolves, is kind of the crazy part of it,” the consumer executive said.

“This technology is so powerful, and it’s not being brought through the technology organizations anymore, it’s, you know, folks that have zero technology experience going out there using it to enhance what they’re doing in the way they work. … they go out there, they sign up for the beta, and all of a sudden, they’re pumping out, you know, creative ideas and interesting assets.”

Consent policy isn’t clear

Much has been made of the fears in the news media that misinformation will become even more effective, and in the academic world that students will use new AI tools to cheat and the cheating will be undetectable. But within the business world, another form of cheating is also seen as a risk in technology adoption.

One consumer health industry leader said brands that start to indiscriminately use AI will potentially erode trust with consumers when the technology is leading conversations without a high level of transparency.

Men’s Journal recently published an article that used AI as the writer and provided inaccurate medical information. It was not the only correction that has had to be run so far based on AI article generation, but the health implications magnified the issues around lack of disclosure, which can bleed into issues of consent.

“Is there consent? Do people understand that what you’re reading right now was not produced by a human?” the consumer health executive asked.

Mental health apps, for example, have already begun experimenting with ChatGPT, having answers written by it, which can actually be effective, the executive said, but raises the issue of disclosure that responses were not sent by a human. To be clear, though, the general concept of mental health chat apps using bots is not new, and many have existed for years.

But getting the bots to the verge of seeming human is, in a sense, the goal of AI as laid out in the Turing Test, to reach the point where humans cannot distinguish between a machine and human in conversation. And there will be serious issues for companies to weigh in how they disclose the AI and receive user consent. “It’s tough,” the executive said.

But an executive at another company that has been using AI chatbots for a few years said so far the main finding is that people like it, “that sort of curated personalized responsive interaction, in many cases, as we’re seeing, that will transcend human interaction, especially when there might be language barriers and other challenges.”

Corporate boards may need a chief AI officer

Responsible use of AI will continue to be a major part of the conversation.

Corporate boards may need to create an AI-specific position, said one executive. “I can tell you that it will push humanity forward. And it will need to be [managed] slowly as we figure it out. Managed just like, you know, software, there is virus and there is anti-virus.”

That may also place AI in the crosshairs of ESG investors, making sure that ethics is part of the mission of companies using it. “I mean, listen, this is where companies like Nasdaq and, you know, big investors like BlackRock, people like Prudential actually will have to step in and say, ‘What’s your profile when you’re using this AI?’ Just like you have an ESG person on the board, you need to have an AI person on the board going forward,” the executive said.

Not using AI may be the worse outcome for society

The debate will continue, but one executive who works with the logistics industry said that set against all of the risks of AI adoption is the risk of not using AI to optimize processes for issues like environmental impact.

AI-driven manufacturing and automation are critical to optimization and yield.

“When talking to our customers, it just seems irresponsible today to not use AI to help counterbalance the environmental effects of the scrap, the low yields that we’re getting from manufacturing,” the executive said. “It’s so irresponsible for companies to not look to AI solutions, because they are so powerful now.”

A new competition with China

One executive based in Silicon Valley said the generative AI story has really been building for at least a year, and that OpenAI’s launch decision for its latest GPT iteration was motivated by what is taking place worldwide, especially in China: a Sputnik threat moment distinct from the high-altitude spy balloons.

China, in fact, does see this generative AI as one more point of geopolitical rivalry with the U.S., which in recent years has cut off Chinese access to U.S. advanced chips specifically to slow China’s progress on leading technologies, especially ones that may have military applications in the future.

