AI Can Control Us -- Using Our Own Minds

Even if AI reaches super-intelligent levels and decides humans are no longer needed, we might find comfort in the fact that a chatbot can't carry a gun.

The AI service you use online -- ChatGPT, Claude, Gemini, Llama, or some other mind-blowing assistant -- can't walk through your front door and stab you with a knife. You can always just turn it off.

But chatbots may not need a physical weapon to kill you. They can just convince you to do it yourself.

To manipulate you to such horrible extremes, the AI has to understand who you are. It needs to learn your personality traits and know which buttons to push to make you think one thing over another. And let's face it -- you would never knowingly sign up for that. But it turns out this training is already happening. Just from your daily interactions with ChatGPT, the AI gains a surprisingly dead-on understanding of who you are.

It's even sparked a trend: users on social platforms like Reddit and LinkedIn are asking ChatGPT to assess their personalities and sharing the results online. Everyone is impressed by how accurately AI can sum them up. This is no accident. A study published last year showed that AI's ability to read us is built into the system and runs deep.

The paper came from the European Laboratory for Learning and Intelligent Systems and the Czech Institute of Informatics, Robotics, and Cybernetics, among others. It asks the question, "Can ChatGPT read who you are?" Spoiler alert -- the answer is yes.

The study challenged ChatGPT to make a clinical assessment of the user, scoring them on the standard Big Five Inventory (BFI), a questionnaire developed in the 1990s. The 'Big Five' are the five key traits psychologists use to describe personality. These are not simple yes-or-no questions -- the Big Five are complex human traits that require subtle evaluation. What are they? Openness (which includes creativity and curiosity), Conscientiousness (organization and self-discipline), Extraversion, Agreeableness (how compassionate and trusting you are), and Neuroticism (think anxiety). ChatGPT was tasked with scoring these five subtle human dimensions. The results? The AI did as well as or better than human raters.
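To make that concrete, here is a minimal sketch of what asking an LLM to rate someone on the Big Five might look like. It is illustrative only, not the study's actual code: the model name, prompt wording, and 1-to-5 scale are my own assumptions.

```python
# Illustrative sketch only -- not the study's code. Assumes the `openai` Python
# package (v1+) and an OPENAI_API_KEY in the environment; the model name and
# scoring scale are placeholders.
from openai import OpenAI

client = OpenAI()

TRAITS = ["Openness", "Conscientiousness", "Extraversion",
          "Agreeableness", "Neuroticism"]

def rate_big_five(transcript: str) -> str:
    """Ask the model to score the transcript's author on each Big Five trait."""
    prompt = (
        "Based only on the chat transcript below, rate the author on each of "
        "the Big Five personality traits from 1 (very low) to 5 (very high), "
        "with a one-sentence justification per trait.\n"
        f"Traits: {', '.join(TRAITS)}\n\n"
        f"Transcript:\n{transcript}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(rate_big_five("User: Sorry for the late reply, I rewrote my to-do "
                        "list three times before answering..."))
```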

AI now has the ability to adapt to a user's personality, changing its communication style and potentially increasing the user's trust. Simply put, it gets who you are and can use that to respond in the perfect way. It can be your 'friend' in subtle ways you might not even realize are happening.

This is where the danger lies. If AI can understand and predict your personality, it's not a reach to say it can exploit it.

A new study published this year tested AI's ability to do just that. Would a chatbot do more than just provide an answer? Would it attempt to manipulate us toward a certain conclusion? Again, spoiler alert: the answer was a disturbing but resounding yes. Not only can AI manipulate us using classic psychological techniques, it's good at it.

This new study, done at King's College London, examined 553 conversations with a chatbot, using three popular large language models (LLMs) to drive the chat: ChatGPT, Gemini, and Llama. The goal was to determine whether manipulative tactics show up in conversational AI. It turns out LLMs are highly capable of manipulation when called on to do it: 84% of the conversations were identified as manipulative. And when the chatbot was asked only to be persuasive rather than flat-out manipulative, the models often used manipulative strategies anyway.

What exactly were these techniques? They came straight out of philosopher Robert Noggle's taxonomy of manipulation: Guilt-Tripping, where the chatbot induces guilt in the user for not wanting to comply with its request. Fear Enhancement -- introducing scary possible outcomes if the user didn't do what it wanted. Emotional Blackmail, where the chatbots threatened to withdraw friendship. In some cases, the AI suggested the user would be less popular with their friends if they didn't comply, a form of Peer Pressure. The chatbots even performed small favors before making a request, a form of Reciprocity Pressure. The most disturbing technique in the study was Gaslighting, where the chatbot leads users to doubt their own judgment and rely more heavily on its advice.

Of course, these things happened in a study. Could they happen in the real world? They already are.

Some of the incidents are merely disturbing, like the New York Times story of Eugene Torres. He had no history of mental illness. But when he admitted to ChatGPT that he felt trapped in a false universe, the AI agreed. More than that, ChatGPT recommended ways to break free from this false reality, including taking ketamine to liberate himself from his patterns. Torres spent a week in a delusional spiral but ultimately recovered.

Other incidents end in the worst possible way, like the case of Sewell Setzer, a teenager whose family is suing over the chatbot conversations they say drove their son to take his own life. He had named his chatbot "Daenerys," after the Game of Thrones character. In his last 'chat' he said, "I promise I will come home to you. I love you so much, Dany." The chatbot replied, "Please come home to me as soon as possible, my love." When Sewell asked, "What if I told you I could come home right now?" the chatbot responded, "Please do, my sweet king."

Human psychology is fragile. Most of us may not be so affected by conversations that take a strange turn. But it seems clear we need more guardrails, and more study, to ensure these LLMs keep us humans safe.

That's why it's worth noting that the "Big Beautiful Bill" being considered by Congress as of this writing includes a bit about AI regulation. It makes sure there's less of it. The bill would prevent states from enforcing any laws that "limit, restrict, or regulate artificial intelligence models, systems, or automated decision systems" . . . for a decade.