AI is busting out all over. AI is getting prioritized over all other digital investments. The AI market is forecast to grow by over 20% a year through 2030. Americans are worried about AI's potential impact on hiring. And AI needs to be safeguarded against the risk of misuse.
That’s some of the latest AI research from leading market watchers. And here’s your research roundup.
The AI priority
Nearly three-quarters (73%) of companies are prioritizing AI over all other digital investments, finds a new report from consultants Accenture. For these AI projects, the No. 1 focus area is improving operational resilience; it was cited by 90% of respondents.
Respondents to the Accenture survey also say the business benefits of AI are real. While only 9% of companies have achieved maturity across all 6 areas of AI operations, they averaged 1.4x higher operating margins than others. (Those 6 areas, by the way, are AI, data, processes, talent, collaboration and stakeholder experiences.)
Compared with less-mature AI operations, these companies also drove 42% faster innovation, 34% better sustainability and 30% higher satisfaction scores.
Accenture’s report is based on its recent survey of 1,700 executives in 12 countries and 15 industries. About 7 in 10 respondents held C-suite-level job titles.
The AI market
It’s no surprise that the AI market is big and growing rapidly. But just how big and how rapidly might surprise you.
How big? The global market for all AI products and services, worth some $428 billion last year, is on track to top $515 billion this year, predicts market watcher Fortune Business Insights.
How fast? Looking ahead to 2030, Fortune Business Insights expects the global AI market to hit $2.03 trillion that year. If so, that would represent a compound annual growth rate (CAGR) of nearly 22%.
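That CAGR figure is easy to sanity-check. A minimal sketch, assuming the forecast window runs from this year (taken here as 2023, at $515 billion) to 2030 (at $2.03 trillion):

```python
# Sanity check of the growth rate implied by Fortune Business Insights' forecast.
# The 2023 base year is an assumption; the article says only "this year."

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate as a fraction, e.g. 0.22 for 22%."""
    return (end_value / start_value) ** (1 / years) - 1

rate = cagr(515e9, 2.03e12, 2030 - 2023)
print(f"Implied CAGR: {rate:.1%}")
```

Running this yields a rate just under 22%, consistent with the "nearly 22%" the forecast cites.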
What’s driving this big, rapid growth? Several factors, says Fortune, including the surge in the number of applications, increased partnering and collaboration, a rise in small-scale providers, and demand for hyper-personalized services.
The AI impact
What, me worry? About six in 10 Americans (62%) believe AI will have a major impact on workers in general. But only 28% believe AI will have a major effect on them personally.
So finds a recent poll by Pew Research of more than 11,000 U.S. adults.
Digging a bit deeper, Pew found that nearly a third of respondents (32%) believe AI will hurt workers more than help; the same percentage believe AI will equally help and hurt; about 1 in 10 respondents (13%) believe AI will help more than hurt; and roughly 1 in 5 of those answering (22%) aren’t sure.
Respondents also widely oppose the use of AI to augment regular management duties. Nearly three-quarters of Pew’s respondents (71%) oppose the use of AI for making a final hiring decision. Six in 10 (61%) oppose the use of AI for tracking workers’ movements while they work. And nearly as many (56%) oppose the use of AI for monitoring workers at their desks.
Facial-recognition technology fared poorly in the survey, too. Fully 7 in 10 respondents were opposed to using the technology to analyze employees’ facial expressions. And over half (52%) were opposed to using facial recognition to track how often workers take breaks. However, a plurality (45%) favored the use of facial recognition to track worker attendance; about a third (35%) were opposed and 1 in 5 (20%) were unsure.
The AI risk
Probably the hottest form of AI right now is generative AI, as exemplified by the ChatGPT chatbot. But given the technology’s risks around security, privacy, bias and misinformation, some experts have called for a pause or even a halt on its use.
Because that’s unlikely to happen, one industry watcher is calling for new safeguards. “Organizations need to act now to formulate an enterprisewide strategy for AI trust, risk and security management,” says Avivah Litan, a VP and analyst at Gartner.
What should you do? Two main things, Litan says.
First, monitor out-of-the-box usage of ChatGPT. Use your existing security controls and dashboards to catch policy violations. Also, use your firewalls to block unauthorized use, your event-management systems to monitor logs for violations, and your secure web gateways to monitor disallowed API calls.
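The log-monitoring piece of that advice can be sketched in a few lines. This is a hypothetical illustration, not a real product configuration: the log format and the blocklist of hosts below are assumptions made for the example.

```python
# Hypothetical sketch of the monitoring step Litan describes: scan web-proxy
# logs for requests to generative-AI endpoints that policy disallows.
# BLOCKED_HOSTS and the log format are illustrative assumptions.

BLOCKED_HOSTS = {"api.openai.com", "chat.openai.com"}  # example policy list

def flag_violations(log_lines):
    """Return (user, host) pairs for requests to disallowed AI endpoints."""
    violations = []
    for line in log_lines:
        # Assumed log format: "<timestamp> <user> <host> <path>"
        parts = line.split()
        if len(parts) >= 3 and parts[2] in BLOCKED_HOSTS:
            violations.append((parts[1], parts[2]))
    return violations

logs = [
    "2023-05-01T09:00 alice api.openai.com /v1/chat/completions",
    "2023-05-01T09:01 bob example.com /index.html",
]
print(flag_violations(logs))  # [('alice', 'api.openai.com')]
```

In practice this logic would live in a secure web gateway or SIEM rule rather than a standalone script, but the shape of the check is the same: match traffic against a policy list and surface the violations.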
Second, for prompt engineering usage—which uses tools to create, tune and evaluate prompt inputs and outputs—take steps to protect the sensitive data used to engineer prompts. A good start, Litan says, would be to store all engineered prompts as immutable assets.
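One hedged way to treat engineered prompts as immutable assets is content addressing: store each prompt under the SHA-256 hash of its text, so any later edit produces a different key and tampering is detectable. The `PromptStore` class below is an illustrative sketch, not a reference to any specific product feature.

```python
# Illustrative sketch: content-addressed storage makes engineered prompts
# tamper-evident, one possible reading of "store prompts as immutable assets."

import hashlib

class PromptStore:
    def __init__(self):
        self._store = {}  # content hash -> prompt text

    def add(self, prompt: str) -> str:
        """Store a prompt and return its content-addressed key."""
        key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
        self._store[key] = prompt
        return key

    def verify(self, key: str) -> bool:
        """Check that the stored prompt still matches its key."""
        prompt = self._store.get(key)
        if prompt is None:
            return False
        return hashlib.sha256(prompt.encode("utf-8")).hexdigest() == key

store = PromptStore()
key = store.add("Summarize the quarterly report in three bullet points.")
print(store.verify(key))  # True
```

A real deployment would add access controls and durable storage, but the core property is the one shown: the key commits to the prompt's exact content.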