Predictions: AI in 2024 by Artur Kurasiński

The post was originally published in Polish on Artur’s LinkedIn profile. Artur kindly agreed to let us repost what we think is of great value to our readers.

What could 2024 possibly hold in store in terms of AI? Below are my six suggestions for changes that may take place in the next 12 months:

  1. Proxy war with open source – big corporations will fight by tearing down defensive walls built of training data and infrastructure. Meta’s head of AI, Yann LeCun, a strong advocate for the development of open-source AI, illustrates this. It is not surprising, because the strategy is simple: a multitude of small LLMs is meant to undermine the leader’s position (i.e. OpenAI’s). Big Tech companies that missed the first wave of AI (Meta, Amazon) are now digging their trenches with training data (Meta) or infrastructure (Amazon), while the companies active from the start (Google and Microsoft) are beginning to reap their first revenues.
  2. LLM vs MML – multimodal models are coming, and there will be more and more of them. With their help, it will be possible to ‘read’ images, sounds, and words (see the sketch after this list). In 2024, a model that is not multimodal will not stand a chance in the competition.
  3. AI verticality – the era of specialized models (for example, trained on medical data) is coming. We only need a general-purpose ChatGPT some of the time. There will be more and more specialized services, fed with sophisticated information. Who wouldn’t want to benefit from the experience of the best team of vascular surgeons? More and more often we will use specialized AI trained on closed data (e.g. a corporation’s). Every entity with a properly developed IT department will have its own AI, trained on confidential data, which will power and support work across the company.
  4. National AI – Ludwig Wittgenstein said ‘the limits of my language are the limits of my world,’ and this is exactly the way to approach LLMs. If your language is poorly represented, then the answers of e.g. ChatGPT relating to your culture or history are poor (and often wrong). That is why countries will start training their own language models for use in government offices or in the education of a given ethnic group. Relying on the solutions of American companies means condemning oneself to digital servitude and giving up one’s democratic rights.
  5. Law – the AI Act will still be amended, but its ‘backbone’ is already in place. Equivalents of the European act will appear in the US and China. Every large country with global ambitions will want its own version in order to prepare for the next technological race. The law will determine a country’s strategy and approach to AI.
  6. Elections – there will be a lot of elections around the world this year. In the US, a new president will be elected in November. AI will (unfortunately) have a part to play in this. I believe that AI will become a tool used to distort election results (I cannot be sure whether this will happen in the US elections specifically), which will contribute to even greater legal restrictions on AI.

Artur Kurasiński, Engineer, Serial Founder, Investor
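
To make point 2 concrete, here is a minimal sketch of a multimodal request, in which one prompt mixes text with an image. It assumes the `openai` Python package (v1) and an OpenAI-style vision-capable chat model; the model name and image URL are illustrative only, not a recommendation.

```python
# Minimal sketch of a multimodal request: one prompt mixing text and an image.
# Assumes the `openai` Python package (v1) and a vision-capable chat model;
# the model name and image URL below are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # any vision-capable chat model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this chart in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

A text-only LLM has no slot for the image at all; a multimodal model treats it as just another part of the prompt, which is what makes ‘reading’ images, sounds, and words through one interface possible.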

The comment section had this to add:

I am puzzled by point 5. I have some doubt that China will follow the EU AI Act and create one big regulation. Let’s not forget that the PRC has already outrun other countries in this matter: their ‘Interim Measures for the Management of Generative Artificial Intelligence Services’ have been in force since August 15, 2023. This is not, of course, China’s AI Act; they have chosen the path of smaller, targeted regulation. So, in 2024, won’t we follow China instead? Will we follow their example by restricting the ‘freedom’ of AI systems to generate political and ideological content?

Another interesting point would be a seventh one: legal restrictions on pre-training data sources. The NYT vs OpenAI and Microsoft trial is currently underway, and there is no clear stance on pre-training yet. What is fair use in the context of a foundation LLM (in common law), and what is an exception to copyright (in statutory law systems)?

Let me also disagree with point 2. There are many use cases for non-multimodal LLMs: they can be much smaller and much cheaper for specific applications. Unless we have different notions of being competitive.

Marek K. Zielinski, CTO at 10 Senses

You have missed the probable legal limitations on the application of generative AI.

There is also the high energy consumption, in a world where we have to calculate our carbon footprint or at least maintain carbon neutrality.

Another aspect is the average customer’s trust in AI and its creations. There is quite a bit of ambiguity here, where one can stumble and get hurt.

Finally, AI mainly works in the digital realm. As consumers, we want real-world experiences, engaging the senses, touch… Here, AI has a limited field of action.

And once again, the merry-go-round has already started: AI is being pushed by force everywhere it can go, even where it is unwanted.

Artur Roguski, Digital Content & Innovation Manager at Focus Nation

I would also add: Aidoomers – more and more people who hate and fear AI.

Dr Daniel P. Ura, AGH University of Science and Technology

I’ll throw in something from my field:

  1. Development of Robotics – thanks to the use of MMLs, robots extend their ability to ‘understand’ images, sounds, and other information from the environment. Combined with language models that let them plan their activities, this gives them the ability to function autonomously (a minimal sketch of such a loop follows below).
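
To sketch the architecture Piotr describes: a multimodal model turns a camera frame into a scene description, a language model plans the next action toward a goal, and the robot executes it. This is a hypothetical sketch; all four helpers below are illustrative stubs, not a real robotics or model API.

```python
# Hypothetical perceive-plan-act loop: a multimodal model turns a camera frame
# into a scene description, a language model plans the next action toward a
# goal, and the robot executes it. All four helpers are illustrative stubs,
# not a real robotics or model API.

def capture_frame() -> bytes:
    return b"<camera frame>"            # stub: grab an image from the camera

def describe_scene(frame: bytes) -> str:
    return "a red cube on the table"    # stub: multimodal model, image -> text

def plan_next_action(goal: str, scene: str) -> str:
    return "done"                       # stub: language model, goal + scene -> action

def execute(action: str) -> None:
    print(f"executing: {action}")       # stub: send the command to the actuators

def run(goal: str, max_steps: int = 20) -> None:
    for _ in range(max_steps):
        scene = describe_scene(capture_frame())
        action = plan_next_action(goal, scene)
        if action == "done":
            return
        execute(action)

run("pick up the red cube and place it in the bin")
```

The interesting part is the division of labor: perception and planning sit in the models, while the control loop itself stays trivially simple.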

Piotr Ozga, Robotics Training Senior Specialist at ABB
