As Microsoft CEO Satya Nadella recently said: "There is no doubt 2023 was the year of AI."
Though artificial intelligence has been around for decades, the introduction of ChatGPT — a consumer-facing AI chatbot — in late 2022 brought the technology to the forefront throughout 2023.
ChatGPT, which gained more than 100 million users within two months of its launch, set off an AI arms race across the tech sector as startups and giants alike rushed to perfect and ship usable competitors. Google (GOOG) rolled out its Bard chatbot and started shipping AI enhancements to its suite of products.
Microsoft (MSFT) launched its Bing chatbot and also started to roll out AI-powered copilot tools across its suite of products.
As Big Tech companies raced to outdo one another, feeding a growing narrative of AI hype, regulators scrambled to get a handle on the technology before it was too late.
OpenAI and executives like Elon Musk continue to point to the potential extinction threat posed by AI. Many experts, however, remain more concerned about active, real-world harms: climate impact, increased fraud, misinformation, political instability, socioeconomic inequity and job loss.
"There is no science in X risk," Suresh Venkatasubramanian, an AI researcher who in 2021 served as a White House tech advisor, told TheStreet in September.
"There are real, actual harms to people from systems that are discriminatory, unsafe, ineffective, not transparent, unaccountable. That's real," he said. "We're not concerned about hypotheticals."
TheStreet spoke to experts, ethicists, researchers and executives to get a better handle on what AI might look like in 2024. Behold: The 2024 predictions, in their own words.
Dr. Srinivas Mukkamala, computer scientist, AI expert and Ivanti CPO
AI-Generated Data
Data has long been viewed as a trustworthy, unbiased basis for smart decisions. As we tackle the rise of AI-generated data, organizations will need to invest time and oversight in validating that data or risk hallucinated results. Another major implication of these data sets is the risk of the data being modified in cyberattacks – the results of which would be catastrophic. We rely on correct data to vote, receive government services, log in to our work devices and applications and make informed, data-driven decisions. If an organization or government's data has been modified by threat actors, or if we place too much trust in AI-generated data without validation, there will be widespread consequences.
Social Engineering
In 2024, the rising availability of AI tools will make social-engineering attacks even easier to fall for. As companies have gotten better at detecting traditional phishing emails, malicious hackers have turned to new techniques to make their lures more believable. Additionally, misinformation created with these AI tools by threat actors and others with nefarious intentions will be a challenge and a real threat for organizations, governments and people as a whole.
Impact of AI on 2024 Presidential Election
AI promises to shape both campaign methods and debates in 2024; however, it’s interesting that even candidates with tech backgrounds have avoided AI specifics so far. We’ve seen immense interest in AI and machine learning as they transform the way the world works, does business, and uses data. As a global society we need to be aware of, and carefully consider, potential shortcomings of AI, such as unintended bias, erroneous baseline data and broader ethical considerations. Even if the topic isn’t covered in debates, the challenges and opportunities of AI are something that the next administration will have to grapple with.
AI posing a threat to workers
2024 will spark more anxiety among workers about the impact of AI on their careers. For example, in our recent research, we found that nearly two out of three IT workers are concerned generative AI will take their jobs in the next five years. Business leaders need to be clear and transparent with workers on how they plan to implement AI so that they retain talented employees – because reliable AI requires human oversight.
Redefining digital experiences
In 2024, the continued convergence of 5G and IoT will redefine our digital experiences. With it will come heightened demand for more rigorous standards around security, privacy and device interaction as our society grows more interconnected. The expectation to connect everywhere, on any device, will only increase. Organizations need to make sure they have the right infrastructure in place to enable the everywhere-connectedness that employees expect.
Dr. Noah Giansiracusa, associate professor of data science and mathematics at Bentley University
On the technical front, I think the biggest move we'll see is AI researchers and companies targeting YouTube and TikTok as the next big source of training data, the way all our online text (websites, social media, scanned books) has been for chatbots. There are already steps in this direction, but we'll see a lot more. I suspect there'll be a lot of focus on generative video, largely because generative text and images have been so incredibly popular, but it wouldn't surprise me if what emerges as a more useful application is quick, powerful analysis of videos rather than generative video – AI that can help you search videos, explain what's happening in them and extract content from them. This could power a lot of web search and other consumer-facing tools, and it might take video recommendation systems (the kind platforms like YouTube and TikTok use) to a whole new level.
Dr. John Licato, assistant professor of computer science and engineering and director of the Advancing Machine and Human Reasoning (AMHR) lab at the University of South Florida
We'll see some impactful laws and regulations in the US on how AI can be used and deployed. Every company and university will need to be aware of them. Misinformation, bots impersonating people, and deepfake media (including videos) will be prevalent to a degree we have never seen before. GPU shortages will get even worse, driving prices up. Companies that decided to embrace AI early on will pull away from their competitors.
Nell Watson, leading AI ethicist, president of the European Responsible Artificial Intelligence Office and executive machine learning ethics consultant with Apple
2024 will highlight the scaffolding of agency in language models, which enables machines to form and enact plans of action, and to type and click online like human beings. These models can also delegate to sub-personalities, even organizing themselves into a corporate-style structure to create products end to end. These capabilities will create noticeable AI safety issues witnessed in the wild rather than just in the lab. Beyond this, interpersonal AI will begin to emerge, with people forming relationships with AI systems that remember interactions and follow up on previous conversations. Regulators will struggle to adapt legislation to these new challenges, especially in the realm of open-source models, which are incredibly powerful and accessible.
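Watson's "scaffolding of agency" broadly refers to the orchestration code wrapped around a language model: software that turns model output into a plan, delegates each step to a specialized sub-agent and loops until the job is done. As a rough sketch of that idea only – the names here (call_model, Agent, plan, orchestrate) are hypothetical placeholders, not any vendor's actual API – such scaffolding might look like this:

```python
from dataclasses import dataclass


def call_model(prompt: str) -> str:
    """Stand-in for a call to a language-model endpoint (hypothetical)."""
    return f"[model response to: {prompt!r}]"


@dataclass
class Agent:
    """A sub-personality with a fixed role, e.g. researcher or reviewer."""
    role: str

    def run(self, task: str) -> str:
        # Each sub-agent gets its own role-conditioned prompt.
        return call_model(f"You are the {self.role}. Complete this step: {task}")


def plan(goal: str) -> list[str]:
    # Ask the model to break the goal into discrete steps.
    raw = call_model(f"List the steps needed to achieve: {goal}")
    return [line.strip() for line in raw.splitlines() if line.strip()]


def orchestrate(goal: str) -> list[str]:
    # Form a plan, then delegate each step to a sub-agent, corporate-style.
    workers = [Agent("researcher"), Agent("copywriter"), Agent("reviewer")]
    return [workers[i % len(workers)].run(step) for i, step in enumerate(plan(goal))]


if __name__ == "__main__":
    for result in orchestrate("produce a short market report end to end"):
        print(result)
```

Real agent frameworks layer on memory, tool use and error handling, but this plan-delegate-act loop is the kind of scaffolding Watson expects to raise safety questions once it operates in the wild rather than in the lab.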
Babak Pahlavan, founder and CEO of NinjaTech AI
Year of the chatbot
The barriers to entry for creating chatbots dropped dramatically in 2023, paving the way for a chatbot proliferation in 2024. With the mass enablement of developers through OpenAI’s GPT Store, Meta's AI Studio and others, creating an ‘information assistant’ chatbot is quickly being democratized. These chatbots will permeate countless products and use cases across verticals. As copilots, they will enable – and rely on – a considerable transfer of knowledge from humans, and they will make information ever more accessible in the moment for users through an assisted experience. The cherry on top is that these chatbots will likely become multi-channel in 2024.
The emergence of agent-based AI systems
We’ll see technology products that leverage agent-based systems, which will act as “task assistants.” These agent-based assistants will go beyond information skills and focus on getting complex tasks done for users (think: scheduling meetings, making reservations, booking flights and much more). We think this will be the start of a journey for humanity toward everyone having their own personal AI that helps them be more productive in their personal and professional lives.
The real competition kicks off
ChatGPT’s launch was legendary – particularly as the fastest-adopted technology product in human history – but now comes the competition. Large cloud providers – Google, Amazon and Microsoft – will enter the marketplace with their own proprietary next-gen foundational models, and their compute horsepower will be incredible. These entrants will drive down the price of access to, and development of, AI. They’ll offer these models on the back of their cloud offerings, with additional services that will fuel a rapid expansion of use cases at lower prices. Concurrently, open-source models will converge on commercial-model quality, which will push prices down even faster.
The performance delta between AI-enabled employees and everyone else will widen
While 2023 was a year of curiosity and exploration with AI, 2024 will pivot quickly to tangible use-case expansion and real productivity results – particularly users saving time and money. Employees (and companies) that invest aggressively in learning to use AI will start to improve their outcomes and performance more quickly than those who are cautious about adopting AI into their day-to-day work lives. This delta will become noticeable within 2024 and expand rapidly in 2025 and beyond.
Raj Koneru, CEO & founder of Kore.ai
Looking ahead to 2024, we will see winners and losers. Despite assumptions of Big Tech dominance, it's crucial to recognize the dynamic nature of AI, where success hinges on adaptability. Mirroring past tech revolutions, from the internet to mobile and cloud, the upcoming year promises AI-driven revolutions in user experience. AI is already enhancing productivity and delivering value to businesses and consumers, and 2024 will mark the beginning of an era of unprecedented innovation. My hope for the coming year is to see the strategic pairing of conversational AI and generative AI. By combining the ability to understand natural language in conversation with the creative capabilities of generative models, enterprises and consumers can leverage AI as a powerful ally rather than a replacement for workers, shaping a future where human-machine collaboration is the norm.
Liran Hason, machine learning engineer and CEO & co-founder of Aporia
I believe that 2024 will be a pivotal year for AI, marked by the establishment of universal gold standards. The need for collective effort reflects an acknowledgment that navigating the ethical landscape of AI has been difficult. With regulations in place, every company, from tech giants to startups, will adopt these standards, emphasizing the shared responsibility in AI development and implementation. While instances of AI misuse and hallucination will inevitably continue to occur in the new year, this will bring a renewed commitment to compliance and robust regulatory frameworks, much like the cybersecurity awakening of 2012. My hope is that 2024 will strike a balance between AI innovation and ethical responsibility, ensuring AI's positive impact on society.