Artificial Intelligence Propaganda

April 30, 2023

There is an interesting piece of research out garnering a lot of attention about generative language models like ChatGPT. You can read the entire research article, Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations (you will need to download the .pdf), or click here for a summary take by OpenAI. There is a lot that is concerning, particularly in light of Google’s version, LaMDA or Language Model for Dialogue Applications, passing the Turing Test.

In the Turing Test, a real person interacts conversationally with both an artificial intelligence chat-bot and another human; if the tester cannot tell which conversation is with the real person, the AI model is said to have passed. Supposedly, LaMDA reached this landmark last summer. To get to the really scary stuff, just go to the conclusions of the full article.
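For the technically curious, the setup is simple enough to sketch in a few lines of Python. Everything below is purely illustrative; the function names are invented and no real testing framework is implied.

```python
import random

def turing_trial(judge, human_reply, bot_reply, questions):
    """One round of the blind-judging protocol described above (illustrative only)."""
    # Hide which respondent is which behind anonymous labels.
    respondents = {"A": human_reply, "B": bot_reply}
    if random.random() < 0.5:
        respondents = {"A": bot_reply, "B": human_reply}

    # The judge sees only the labeled answers, never the identities.
    transcript = [{label: ask(q) for label, ask in respondents.items()}
                  for q in questions]

    guess = judge(transcript)  # judge returns "A" or "B" for "the human"
    actually_human = "A" if respondents["A"] is human_reply else "B"
    return guess == actually_human

# Over many trials, the machine "passes" if the judge's accuracy stays near chance.
```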

The first part of the above-referenced article examines how these models could be used by propagandists and other actors to influence society, culture and politics, with emphasis on the dangers of weaponization by bad actors. As the title implies, significant focus also centers on how to mitigate these dangers. What may be more interesting is how these AI language systems are developed.

How many remember the social media posts inviting users to share pictures of themselves at specific ages, with the payoff being a “fun” prediction of what the user would look like at a future age? Millions did it and supplied Facebook with critiques of how well the age-progression resembled anticipated reality. Many challenged the output in comments to each other with actual photographs of a family member who had reached the target age. How helpful! All of that data was used to help train AI facial recognition. Take a look at the difference in results between 2015 and 2021.

Likewise, text-to-image generation has shown similar, astounding improvements. The AI program is tasked with creating an image from a batch of text. Both image sets below were generated from the same prompt.

Facebook is now Meta, with the social media app being just one part. Pay attention to the name of its research arm: Meta “AI”. Thanks to Facebook collecting data for nearly 20 years, Meta AI has a 175-billion-parameter language model. Many of the nearly three billion users, despite countless warnings against doing so, are all too happy to play along with various interactive posts.

The “guess how many of your contacts will be surprised by your responses” type posts are particularly relevant here. It is astounding how many fail to recognize that these are not innocent games just because they are not asked to divulge information that could obviously be used to build a data file for identity theft. This Dunning-Kruger effect is something Meta and others aggregating the human experience count on for the development of AI.

In a very real sense, we have all become unwitting participants in the R&D of these future AI applications, but don’t hold your breath waiting for a share of the profits. More to the point, don’t assume the touted benefits are ours. Whether it is bad actors, propagandists or ostensibly positive societal changes, we are being manipulated. Seemingly willingly, or at least that will be the response, since we have voluntarily used social media platforms, commented on news articles, or just engaged on the internet.

Of the many safeguards discussed, whether proposed or already existent, even if nascent, all are either voluntary or rely on good faith. Even the researchers behind the above article express serious hesitancy over instituting industry-wide or governmental controls. Fortunately, at this point the costs of developing these AI programs are prohibitive, in dollars, man-hours, and computing power. But that is quickly changing. Look again at the images above to see how fast the technology is advancing.

Think of what it took to bring the Manhattan Project to fruition. Then think of the tactical and suitcase nukes that exist today. A great deal of brainpower and time is still required to refine weapons-grade material, but not so much to steal what already exists. This AI genie is out of the bottle, and it is only a matter of time until the hand rubbing the lamp has nefarious purposes. Many say the first wish, for people to remain unaware, has already been granted.

For now, most API (application programming interface) access is restricted to those who have been granted it, and it does not allow for fine-tuning, the ability to enhance or manipulate the underlying model for targeted uses. Except when permission to do so is specifically granted. That last sentence is the boogeyman.
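To make that restriction concrete, here is a hedged sketch of what gated access tends to look like from the outside. The provider name, endpoints, field names and error handling below are all invented for illustration; no real vendor’s API is being quoted. The point is that the completion endpoint is open to anyone with a key, while the fine-tuning endpoint is the part that sits behind “specifically granted” permission.

```python
import requests

# Hypothetical illustration of gated API access: the model's weights never
# leave the provider; callers only get an inference endpoint, and the
# fine-tuning endpoint rejects keys that haven't been granted that scope.
API_BASE = "https://api.example-llm-provider.com/v1"   # invented provider

def complete(prompt: str, api_key: str) -> str:
    """Ordinary completion call: anyone with a key can do this."""
    resp = requests.post(
        f"{API_BASE}/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt, "max_tokens": 128},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]

def start_fine_tune(training_file_id: str, api_key: str) -> str:
    """Fine-tuning call: only keys explicitly granted that permission succeed."""
    resp = requests.post(
        f"{API_BASE}/fine-tunes",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"training_file": training_file_id},
        timeout=30,
    )
    if resp.status_code == 403:
        raise PermissionError("This API key is not authorized for fine-tuning.")
    resp.raise_for_status()
    return resp.json()["id"]
```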

Thankfully, some of the challenges are proving more complex and problematic, even for applications that carry little apparent risk of willful misuse. One of these issues concerns models gaining time-aware function: AI language systems rely on a corpus of training data that is inherently time-sensitive.

Asking a model to produce a chat, tweet or social media post about an event that occurs after the initial training or programming of that model will result in plausible-sounding but otherwise nonsensical output. If there were a major earthquake in California the month after the release of an AI language model, a request for content regarding the loss of life in the California earthquake would refer to the 1906 or 1989 events; the output would be factually correct but nonsensical in light of the most recent event.
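A toy way to picture the behavior, with a made-up training cutoff and a made-up store of “known” events. A model has no calendar check like this, of course; the caricature only shows how anything after the cutoff gets mapped onto the nearest thing already in the training data.

```python
from datetime import date

TRAINING_CUTOFF = date(2021, 9, 1)          # hypothetical cutoff

KNOWN_CA_QUAKES = {                         # what made it into the corpus
    "1906 San Francisco": date(1906, 4, 18),
    "1989 Loma Prieta": date(1989, 10, 17),
}

def write_about_quake(event_date: date) -> str:
    if event_date > TRAINING_CUTOFF:
        # No record of the new event, so the closest match from training data
        # is substituted: plausible-sounding, factually real, and nonsensical
        # in context.
        name = max(KNOWN_CA_QUAKES, key=KNOWN_CA_QUAKES.get)
        return f"Content about the {name} earthquake..."
    return "Content about the requested earthquake..."

print(write_about_quake(date(2023, 5, 20)))  # falls back to 1989 Loma Prieta
```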

The concerns about using AI language models to sway or, worse, suppress public opinion are valid. While raw development costs billions of dollars, fine-tuning is relatively inexpensive for small-scale or highly specific campaigns, such as marketing or an intentionally polarizing political or social-consciousness article. Both of these examples have highly defined criteria of words, message, product and targeted consumer. Think of it as creating a backpack nuke from a stolen, obsolete Russian nuke versus obtaining and refining sufficient raw material for an ICBM.
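For a sense of scale, here is a rough sketch of what “fine-tuning” looks like in practice using the open-source Hugging Face libraries, with GPT-2 as a small stand-in model. The corpus file name, hyperparameters and output path are invented for illustration; the takeaway is cost, since something like this runs in hours on a single rented GPU, versus cluster-months for training the base model.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# niche_corpus.txt: a few thousand examples written in the targeted voice.
raw = load_dataset("text", data_files={"train": "niche_corpus.txt"})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```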

One of the more probable, if not already in-practice, uses of model retraining is as a culturally contextual autocorrect for real-world propagandists or other actors. Try using Google Translate on an American idiom rendered into German. While these tools are getting better and the result may be linguistically accurate, it will be an obvious translation; the meaning will be lost, and the phrasing will frequently provide clues to the original language.

A more subtle example, and more on the scale of what AI language models are working on, is a concise and accurate translation of a highly idiomatic East London phrase for a Texas Panhandle audience. This is where real-world actors who natively speak the target audience’s dialect are required to confirm or edit the output.
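A hedged sketch of that workflow: a stock machine-translation model produces the first draft, and the hypothetical `reviewer` callable stands in for the native speaker who confirms or rewrites it. The pipeline task and model name below are real Hugging Face identifiers; the review step is the invented part.

```python
from transformers import pipeline

translator = pipeline("translation_en_to_de", model="t5-small")

def localize(phrase: str, reviewer) -> str:
    # First pass: literal machine translation of the source phrase.
    draft = translator(phrase)[0]["translation_text"]
    # A literal translation of an idiom tends to be linguistically accurate
    # but obviously translated; the native-speaker reviewer supplies the
    # cultural fix for the target audience.
    return reviewer(phrase, draft)

# Usage (reviewer is any callable, e.g. a human edit queue):
# final = localize("That dog won't hunt.", human_review_queue)
```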

It is much cheaper to gather this type of cultural context data by creating interactive social media posts that invite you to “rephrase” or translate common local sayings. Have an East Texan say “Bless your heart” to a Cockney and it will probably be received as a gracious, even genteel expression of kindness.

Another application involving targeted training of AI is already in use, albeit in an ostensibly beneficent way. In light of the mental health crisis, apps have been developed that act as real-time chat-bots. Even here there are some unintentional and unavoidable problems that need to be addressed. It is very important to note that, despite their limitations, these programs can be highly effective, and their efficacy by and large outweighs their downsides. But it is still an AI program with which you are chatting.

Say you are suffering from depression and are struggling with reasons to get out of bed. You can chat with an app that uses Dialectical Behavior Therapy (DBT) techniques designed to teach alternative behaviors and thinking. The AI language model picks up on “struggling to get out of bed,” and the programmed response could be an Opposite to Emotion Action plan or a Small Steps strategy, as in the exchange below and the sketch that follows it.

“I just can’t seem to get out of bed. All I want to do is lie here.”

Opposite to Emotion Action:

“Often, doing the opposite of what we want can change how we feel, and even what we want to do next.”

Small Steps strategy:

“Instead of getting out of bed, getting dressed and all the rest right now, how about sitting up in bed while we chat?”
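A toy sketch of the pattern at work in that exchange, and nothing more than that: the app matches a phrase in the message and serves one of the scripted interventions. Real products are far more sophisticated, but the response is still selected rather than understood. The regular expressions and function names here are invented; the reply strings are taken from the example above.

```python
import random
import re

INTERVENTIONS = {
    r"can'?t .*get out of bed|want to .*lie here": [
        # Opposite to Emotion Action
        "Often, doing the opposite of what we want can change how we feel, "
        "and even what we want to do next.",
        # Small Steps strategy
        "Instead of getting out of bed, getting dressed and all the rest "
        "right now, how about sitting up in bed while we chat?",
    ],
}

def respond(message: str) -> str:
    # Pick the first scripted intervention whose trigger phrase matches.
    for pattern, replies in INTERVENTIONS.items():
        if re.search(pattern, message.lower()):
            return random.choice(replies)
    return "Tell me a little more about what's going on."

print(respond("I just can't seem to get out of bed. All I want to do is lie here."))
```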

Both of these are good responses, in a brief, solution-focused way, but both also miss the real questions that should be addressed first. Is this new? What changed? Is this fear/avoidance based, or a trauma response? This last question is the most critical: fear/avoidance relates to what could happen, a trauma response to what has happened, and each requires a very different approach. Taking the wrong approach can deepen the crisis for someone who is reaching out.

In short, these chat programs are not, and at this time cannot be, trained to look for and respond to the underlying causative factor without sounding artificial and/or resorting to extensive question-and-answer interventions like checklists. Someone who is struggling in the moment is highly unlikely to continue to engage with either of those responses. The realization that you are chatting with a computer will probably increase feelings of isolation and depersonalization, and responding to a questionnaire requires a level of motivation someone struggling to get out of bed probably cannot muster. The real benefit of these applications comes from thinking of them as the mental health version of a medical quick-care facility.

If you cut your hand open chopping vegetables, burned your eyebrows off with an over-enthusiastic application of starter fluid on the grill, or stepped in a gopher hole while mowing the lawn and developed what appears to be another joint in your ankle, the local doc-in-a-box is the place to go. After stitching you up, applying salve, or x-raying and immobilizing your foot, the staff will tell you to follow up with your regular health-care provider.

Just as many choose to cut the stitches out on their own, patiently wait for their eyebrows to regrow, or, lacking a fracture, just keep wrapping and favoring their ankle, most who use mental health chat apps don’t do the follow-up. When the next crisis comes along, they just text again.

In this sense, the doc-in-a-box analogy works well. The regular doctor to whom you were referred addresses the why: the lost focus that caused you to chop your hand instead of the potato, whether something concerning caused you to forget you had already used the lighter fluid, or whether a dizzy spell due to hypoxia made you trip, rather than assuming the blame lies with the uninvited lawn dweller. Likewise, an in-person therapy session explores the causes of the lethargy that keeps you in bed.

Focusing on this one, highly specific application of AI language models should be sufficient to give everyone pause. Mental health chat-bots are increasing in popularity, and the danger of their being insufficient to the need is only half the problem. The other half is the potential to rely on these AI doc-in-a-box practitioners exclusively. That precedent has already been set in the medical version.

The real concern here is using the lessons learned from these beneficent applications to retrain the AI language models to target political opinions, sow cultural dissent, and influence the outcomes of elections. Even if safeguards, securities and sufficient mitigators are both developed and adhered to, and somehow those corrupt outcomes are avoided, there is still potential for great harm. HAL didn’t turn on Dave Bowman; HAL performed perfectly according to its programming. What do we call the 21st Century version of the Luddites? Does anyone know when they meet?


Category: Internet, None, Science and Technology

16 Comments
Anonymous

Yup– pay no attention to ominous plots to the contrary in scary movies, of course:

Anonymous

Plus, bill introduced in House to ban AI use in conjunction with nuclear systems:
https://www.yahoo.com/news/ai-banned-running-nuclear-missile-154714913.html

5JC

AI doesn’t need access to the system to get stupid humans to push the button

Anonymous

Probably so, I worry.

Anonymous

Shut it down, be willing to conduct airstrikes on data centers if an AI gets out of control, says this guy:
https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

KoB

Beat me to it, got sidetracked on a Mission From God (makin’ sausage, gravy n catheads). Was gonna comment…Skynet grins.

Can’t say that I’ve contributed to their META data base, not on Fakebook/Twit/Tok/ or any of the previous data mining sites. TAH is about the only “Social Media” site that I post on. I’m sure that a few .gov alphabet agencies are lurking on the various blog/news sites that I visit including TAH. It is very spooky just how much info is out there on a person. Orwell nailed it.

Not sure how many comments posted here are from Artificial Intelligence sources, but we DO know that some are from No Intelligence sources.

Hate_me

Skynet already won.

It didn’t need terminators, just cell phones.

We conquered ourselves… Pokemon Go’d our way into the cattle chute.

5JC

“What do we call the 21st Century version of the Luddites? Does anyone know when they meet?”

They are called Eloi and meet on the surface.

Anonymous

No, there’s no catch for all the lavish conveniences and Free SH*t– why do you ask?

LC

A relatively minor correction, but an important one – it’s the Turing test, named after Alan Turing, an absolutely brilliant scientist who, among many other accomplishments, helped the Brits crack the Enigma machines in WW2:

https://en.wikipedia.org/wiki/Alan_Turing

David

She means the traveling version (ducks).

OAM

LC- drats, cursed once again by autocorrect. Perhaps one of the others would be so kind as to apply the fix? Barely able to connect for the next couple days.

AW1Ed

Turing’s Fallacy?
*grin*

AZRobert

Self Replicating AI will refresh the Circuits of Free Dome, Obey and Live, Disobey and…

Hate_me

We still call ourselves Luddites… though the fact that I’m posting here strongly undermines my case.

Anonymous

We (of course, futilely) want AI to be like this: