Op-ed

When the 'keyboard Mujahideen' discovered AI


Foreign critics of Israel are using advances in AI, including ChatGPT, to mask their foreign origin behind fluent Hebrew and infiltrate Israeli internal discourse; addressing this requires international legislation that holds AI developers accountable.


Meet Mohammad R'azi. He is an expert in social networks and artificial intelligence applications, an avowed Israel hater, and part of a global volunteer network that intervenes in Israel's internal discourse.

R'azi was uncovered as part of research into a massive foreign network of thousands of Arabic-speaking volunteers from across the world. Heading the network is a former Egyptian official associated with Egypt's Muslim Brotherhood who currently resides in Turkey. The network's operational base is a Telegram channel called ISNAD - Palestine, boasting over 12,000 members.

The network has been active since October 2023, and in December it began directing its efforts toward Israeli internal affairs. It demonstrates a deep understanding of Israel's internal conflicts and infiltrates Twitter with Hebrew-language messages designed to stoke strife within Israeli society. All this is done through fake Hebrew accounts posing as Israelis, hashtags crafted to generate trends, and thousands of tweets and comments posted daily into the Israeli internal discourse on Twitter.

So far, there's not much new here. The digital dimension is an integral part of the war: it shapes the battlefield, the players active in it, and the tactics used. It's not just Israel versus Hamas in the Gaza Strip, but a multitude of players – state actors (Iran, Russia and others) and non-state actors (citizen and activist organizations, criminal groups, quasi-governmental organizations, and more), alongside state-linked bodies like Russia's GRU. The tools are diverse – from bots to fake accounts, from disinformation and influence networks to images lifted from other places or other time periods.

Amidst all this, artificial intelligence tools based on large language models have emerged, most of which entered our lives at the end of 2022. When we talk about artificial intelligence and the spread of disinformation, we immediately think of synthetic texts, images, audio files and videos, also known as "deepfakes." Undoubtedly, everyone – from politicians like Netanyahu, Zelensky and Obama to actors and singers like Taylor Swift – now realizes how authentically they can be made to appear and sound without ever having said a word. It's truly frightening.

However, deepfakes and fabricated texts are only a small part of what artificial intelligence can do in the service of manipulating public opinion. In the network we uncovered, volunteers are not called on to create AI-generated images and post them online, because such images are easy to detect and therefore ineffective.

Let's return to R'azi and the Muslim Brotherhood network. Its administrators are trying to harness artificial intelligence tools to solve their real problem: language. As Israelis, it is often easy for us to identify networks of Iranian origin, for example, by the poor Hebrew and the errors in the content they publish. The network we uncovered understands this weakness of influence operations and is trying to overcome it. Its first method is copying comments from genuine Israeli accounts and echoing them repeatedly through network accounts, making their foreign origin difficult to discern.

Later, we began to see messages instructing volunteers to feed content into ChatGPT and ask it to produce different wording variations. This matters because a good way to detect coordinated inauthentic behavior is to look for the same content repeating across many accounts; using variations greatly complicates the identification of inauthentic activity.
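To make that detection logic concrete, here is a minimal, hypothetical Python sketch (my illustration, not taken from the article or from any platform's actual systems): it flags texts repeated verbatim by many distinct accounts, which is precisely the signal that LLM-generated wording variations are designed to erase.

    from collections import defaultdict

    def flag_repeated_posts(posts, min_accounts=5):
        # posts: iterable of (account_id, text) pairs gathered from the platform (hypothetical input).
        # Returns texts shared verbatim by at least `min_accounts` distinct accounts.
        accounts_by_text = defaultdict(set)
        for account_id, text in posts:
            normalized = " ".join(text.lower().split())  # ignore case and extra whitespace
            accounts_by_text[normalized].add(account_id)
        return {t: accs for t, accs in accounts_by_text.items() if len(accs) >= min_accounts}

    # Copied-and-pasted comments all collapse to the same key and get flagged;
    # paraphrased variants produced by a language model map to different keys
    # and slip past this kind of exact-match check.

This is, of course, only the naive version of the check; the point is that wording variations push defenders toward fuzzier, more expensive similarity measures.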

A month ago, R'azi created a custom add-on for ChatGPT called "Emoji to Hebrew Translator" and shared it as a link in the Telegram group. The add-on lets users enter an English emotion word or an emoji and receive a well-phrased Hebrew sentence describing that emotion, ready to be copied and pasted as a tweet, a comment or any other content by operatives in the influence network.

On its face, it is a completely innocent application, one of millions of custom GPT applications tailored for various tasks; in this case, it was created by, and is actively used by, members of an influence network seeking to harm Israeli society. Who takes responsibility for this? The tools are evolving: not only the models themselves but also the applications built on top of them, some of which reach their target audiences through shared links while others are distributed through the application store OpenAI operates within ChatGPT.

There is an immense variety of such "tailored" applications: from phone bots capable of holding conversations in place of human volunteers, to hyper-personalized video editing software, to language translation programs. What is the purpose of each of these applications? They are what is called dual-use: they can serve good ends and, as the exposed network shows, they can also be exploited to cause harm.

Artificial intelligence companies pledge to help address these problems, for example by implementing "watermarks" that label AI-generated content or by introducing identification methods based on text analysis. But we should expect a cat-and-mouse game: if AI companies restrict certain uses of their products, malicious users will simply shift to open-source models.

Cross-border legislation, like the EU's recent law on artificial intelligence, is necessary here, prohibiting, for example, psychological manipulation carried out through artificial intelligence. Such legislation imposes various responsibilities on a wide range of artificial intelligence models, particularly prominent ones like OpenAI's ChatGPT and Google's Gemini.

Beyond the responsibility of commercial companies, security and intelligence agencies bear responsibility for protecting the digital frontier in wartime and during election periods. These agencies cannot confront cutting-edge artificial intelligence without understanding adversaries' tactics, techniques and procedures.

The waters we're swimming in are becoming increasingly polluted. The "war of disinformation" offers us tremendous learning opportunities regarding the methods and tools used to pollute public discourse.

What becomes clear is the need for intervention at every link in the chain of (ir)responsibility: the artificial intelligence models, the applications built on them, the social networks, and even those executing the campaigns – the hired "mercenaries," the volunteers, the politicians and the election candidates. Empty promises from stakeholders who care chiefly about their bottom line that they will handle the matter themselves are not enough. Nor is it enough for government authorities to believe that all artificial intelligence currently requires is "ethical guidelines." The future is already here, and it demands order, so that we do not find ourselves in the backyard of the digital world, exposed to influence attacks and to wars over our destiny.

 

This article was published in Ynet.