AI and Deepfake Threats in 2024

By Olga Voloshyna


Artificial intelligence is developing rapidly, and criminals are using it ever more often. Recent examples speak to this trend: the theft of USD 25M with the help of a deepfake video, and the spread of the GoldPickaxe malware, which uses AI to create deepfakes for breaking into the bank accounts of iOS and Android users. Deepfakes can be used to spread disinformation online as well as for outright criminal activity. In the cryptocurrency space, the technology can deceive investors and undermine trust in the market. New AI tools that generate video from a written description, such as Sora by OpenAI, announced at the beginning of this year, could make deepfakes even more widespread.

I am convinced that the development of algorithms and tools to detect and counter deepfakes lags behind the speed at which disinformation created with this technology spreads. These issues demand top-priority attention at the state and legal levels, particularly the regulation of AI tools. China has already taken steps in this direction, introducing rules that restrict AI algorithms capable of generating images, voice, and text. Protection against harmful content would also improve if video hosting platforms detected and blocked deepfakes more actively; today, this process is often slow and inefficient. That is why we should all stay alert, be critical of the information we consume, and never skip verifying it before sharing.
