ChatGPT vs. Corporate Cyber Security by Olga Voloshyna


Olga Voloshyna, the Chairperson of the Committee on IT and Cyber Security of the German-Ukrainian Chamber of Industry and Commerce and the CEO of the Ukrainian IT company Silvery LLC, kindly agreed to share this original piece with our readers.

Artificial intelligence is at the peak of its glory today. Should we be wary of it before its use is fully regulated? Certain risks do exist. In March 2023, OpenAI, the company that developed ChatGPT, reported a bug that could unintentionally expose some users' payment information. Also, according to a Bloomberg article from March 2023, Samsung restricted its employees' use of artificial intelligence after one of them accidentally leaked confidential information through ChatGPT. Samsung's confidential information thereby effectively became available to OpenAI, meaning it could be used for further training of the model or surface in responses to other users. Moreover, OpenAI has not yet provided any mechanism for deleting sensitive or copyright-protected data from its servers, or for verifying whether such data is present there, as the GDPR requires.

The latest technological advances can indeed be useful for work: corporate chatbots built on GPT-4, rumored to have as many as 170 trillion parameters and run on Microsoft Azure's supercomputers, can solve complex problems with higher accuracy than proprietary corporate AI tools. But when companies choose to adopt such openly accessible AI platforms, they must always assess the risk of confidential information being leaked and remain extremely cautious about what information they share with these tools.
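One practical safeguard is to filter prompts for known sensitive patterns before they ever leave the corporate network. The Python sketch below is purely illustrative: the patterns and the `redact` helper are assumptions for demonstration, not part of any vendor's API, and a real deployment would need a far broader policy covering source code, customer records, internal hostnames, and more.

```python
import re

# Illustrative patterns only; a real corporate policy would cover
# many more categories of confidential data.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches of known sensitive patterns with placeholders
    before the prompt is sent to an external AI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize the contract for jane.doe@example.com, account DE44500105175407324931."
    print(redact(raw))
    # -> Summarize the contract for [EMAIL REDACTED], account [IBAN REDACTED].
```

A filter like this catches only obvious patterns; it does not protect against an employee pasting in a trade secret written in plain prose, which is why organizational rules, such as Samsung's restriction mentioned above, remain necessary alongside technical controls.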
