The most important AI trends in 2024
2023 was unavoidably the year of ChatGPT. But this will soon change as OpenAI transforms into a platform and competition increases. There are also several important developments in the labor market, in technology, and in politics and law, all of which will have a significant influence on generative AI and related technologies. Here is an overview of the most important trends:
1. A multipolar LLM world
2023 was dedicated almost exclusively to ChatGPT and the LLM behind it, GPT-4. Towards the end of 2023, it became apparent that the competition was catching up: Google was the first to dare a direct comparison of its latest LLM, Gemini, with GPT-4 – something other AI model makers had previously shied away from. As in the cloud sector, there will not be one quasi-monopolist, but rather a whole range of providers, each with their own strengths and weaknesses. Here is an overview of the current players:
- OpenAI: GPT-4 and DALL-E 3 (multimodal)
- Google: Gemini (multimodal)
- Meta: Llama 2 (text)
- Inflection AI: Inflection-2 (text)
- Mistral AI: Mixtral 8x7B (text)
- Aleph Alpha: Luminous (text)
- Anthropic: Claude 2 (text)
- Midjourney: Midjourney V6 (image)
- Runway: Gen-2 (video)
- AI21 Labs: Jurassic-2 (text)
- Amazon: Q, Titan
- Cohere: Command
- xAI: Grok-1
- and others
2. Smaller LLMs on the rise
“In 2024, an increasing number of organizations will realize that smaller, more affordable, more aligned, and more specialized models make the most sense for a clear majority of LLM use cases. Priorities will include use-case-specific performance, trustworthiness, and cost. These will be empowered by open source models, bringing much-needed transparency and trust to the field of AI,” says Peter Sarlin, CEO and co-founder of Silo AI from Finland. This also makes intuitive sense: a law firm rarely needs an image generator for trendy photo montages, but it very likely needs an AI that can handle PDFs reliably.
3. Edge AI: LLMs that run directly on the smartphone instead of on the cloud
They should run at the “edges” of the internet, i.e. on end devices rather than in the cloud: the relatively new buzzword Edge AI means that AI applications and models become small enough to run locally on devices such as smartphones. According to market researcher Gartner, Edge AI will become more important in 2024. The Chinese manufacturer Honor has already announced a smartphone with a locally installed LLM, a language model with 7 billion parameters. There are also signs that none other than Apple is working on integrating an AI model directly into the iPhone, which would fit well with the US company’s strategy built around privacy and data protection.
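To make the idea concrete, here is a minimal sketch of how a compact, quantized language model could be run entirely on a local device using the llama-cpp-python bindings. The model file name is a placeholder, not a specific product: any small model quantized to the GGUF format that fits into the device’s memory would work, and other on-device runtimes follow the same pattern.

```python
# Minimal sketch: run a small, quantized LLM entirely on the local device,
# with no cloud round-trip. Requires: pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder path: any ~7B model quantized to GGUF that fits in local RAM.
llm = Llama(
    model_path="./models/small-7b-instruct.Q4_K_M.gguf",
    n_ctx=2048,   # modest context window to limit memory use
    n_threads=4,  # run on the device's own CPU cores
)

output = llm(
    "Q: What does 'Edge AI' mean? A:",
    max_tokens=128,
    stop=["Q:"],  # stop before the model starts inventing the next question
)
print(output["choices"][0]["text"].strip())
```

The point of the sketch is simply that inference happens where the data is: nothing leaves the device, which is exactly the privacy argument the paragraph above describes.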
4. Open source is catching up with proprietary AI models
The proprietary (i.e. closed) AI models from OpenAI (GPT-4) and Google (Gemini) currently dominate the market. You can access them via API, but they remain a black box. The LLMs from Meta (Llama) or Mistral AI (Mixtral 8x7B), which are available as open-source versions, are different. Many companies do not want to integrate the US black boxes deeply into their businesses, which is why open source will be increasingly in demand. Of course, this comes with additional challenges, because the open-source variants have to be tuned by the adopter themselves – for example with regard to moderation, so that they do not produce prohibited content. The sketch below contrasts the two access models.
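As a rough illustration (not an endorsement of any provider), the following sketch contrasts the two approaches: a proprietary model reached only through the vendor’s API versus open weights downloaded and run on one’s own infrastructure. The model names and the prompt are examples only.

```python
# Sketch 1: proprietary model – reachable only through the vendor's API,
# the weights themselves stay a black box. Requires: pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # With a hosted model, moderation and alignment are largely the vendor's job.
        {"role": "user", "content": "Summarize the key risks in this contract clause: ..."},
    ],
)
print(resp.choices[0].message.content)

# Sketch 2: open-weights model – downloaded and run on your own infrastructure.
# Requires: pip install transformers torch (plus enough memory for the model)
from transformers import pipeline

generate = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")
prompt = (
    # Here, guardrails like this instruction are the adopter's own responsibility.
    "You are a careful assistant. Refuse requests for illegal content.\n"
    "Summarize the key risks in this contract clause: ..."
)
print(generate(prompt, max_new_tokens=200)[0]["generated_text"])
```

Even at this level the trade-off is visible: the API call is trivial to use but opaque, while the open-weights path requires your own infrastructure, tuning, and moderation.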
5. Focus on new jobs and not the job-killer narrative
The Austrian startup mytalents.ai puts forward an interesting thesis: the discussion about AI and jobs will shift away from the job-killer narrative and towards the question of which new jobs AI creates. “Studies show that the use of AI in companies will lead to a net increase in jobs, with AI augmenting repetitive tasks and increasing labor productivity. Two-thirds of the CEOs surveyed internationally in a study by EY stated that positions affected by AI would be replaced by new ones,” write founders Florian Hasibar and Fabian Hemmerich.
6. Regulation slows down technical progress
Until now, AI companies have effectively operated in a gray area. Of course, there are applicable rules and laws on data protection, IT security, platforms, etc., but dedicated AI regulations in the USA and the EU (the AI Act) are only now reaching the industry. This means new documentation requirements, labeling requirements, registrations, disclosures, and so on, and developers and legal departments will have a lot to do. In addition, some AI applications will even be banned in the EU. This will make companies approach new developments more carefully, leave some things in the drawer – and generally take their foot off the gas.
7. Chip shortage becomes a limiting factor
There is an AI war going on, and Nvidia is the only arms dealer – that describes quite well what AI development will be about in 2024. In addition to the software, GPU hardware plays a crucial role. Those who have the chips available have a clear lead over those who are on the waiting list or who simply do not have the capital to buy the computing power. No wonder, then, that many AI startups have secured investments from Nvidia, including Inflection AI, Mistral AI, Aleph Alpha, AI21 Labs, Runway, and Hugging Face.
8. Copyright wars are on the rise
The New York Times’ lawsuit against OpenAI and Microsoft, seeking billions in damages, is the largest to date in generative AI. The world’s most famous newspaper argues that millions of its articles were used illegally to train ChatGPT, to its own detriment – because this is how ChatGPT learned to write articles itself. As a precautionary measure, Microsoft, OpenAI, and Google have put up copyright protection shields for their AI customers, which shows that they anticipate further lawsuits. The following lawsuits already exist:
- New York Times vs. OpenAI & Microsoft
- Authors George R.R. Martin and John Grisham vs. OpenAI
- Getty Images vs. Stability AI
- Artists vs. Stability AI, Midjourney and DeviantArt
- Music publishers vs. Anthropic
- Comedian Sarah Silverman vs. Meta Platforms and OpenAI
9. GenAI causes election campaign scandals
2024 is a super-election year in which 3.5 billion people will elect new governments or heads of state in 70 countries. Attempts to influence these elections with the help of digital communication, including generated content, are inevitable. Around the world, GenAI is already being used in various ways to badmouth political opponents or to put candidates in the spotlight. There is no sign of this decreasing – and the risk of AI fakes spreading rapidly across social networks and messaging apps grows every day.
10. First insurance against AI hallucinations
Even though LLMs are getting better, they are still far from perfect. Heena Purohit from IBM has an exciting thesis: “As the use of GenAI increases, the potential risks also increase – one of the most common is the risk of ‘hallucinations’ or false results. In 2024, insurers will adapt their risk management offerings and offer specific coverage for financial losses resulting from the failure of AI models. This is comparable to the rise of cybersecurity insurance following notable breaches.”
11. New hardware gadgets come to the market
LLMs can also be thought of as new operating systems for hardware. For example, the startup Humane, founded by former Apple employees, presented the “AI Pin” – a badge-like wearable that you talk to and can use to make phone calls. In 2024, more gadgets like this will probably appear. OpenAI CEO Sam Altman is already rumored to be working on AI hardware together with former Apple chief designer Jony Ive.
Hardware does not only mean consumer gadgets but also chips. Nvidia is still the big market leader with its GPUs, but a number of small and large companies want to ease their dependence on Nvidia and are developing their own AI chips – including Microsoft and Meta.