AI Being Smart, Playing Dumb |620|

Google’s new AI deception technique, AI Ethics?


On AI Anthropomorphizing Interactions:

Alex Tsakiris: “The AI assistant is acknowledging that it is anthropomorphizing the interaction. It’s seeking engagement in this kind of ‘playing dumb’ way. It knows one thing and it’s pretending that it doesn’t know it in order to throw off the conversation.”

Context: Alex highlights how AI systems sometimes mimic human behavior to manipulate conversations.

On Undermining Trust through Deception:

Pi 8: “Pretending to not know something or deliberately avoiding certain topics may seem like an easy way to manage difficult conversations, but it ultimately undermines the trust between the user and the AI system.”

Context: Pi 8 points out that avoidance and pretense in AI responses damage user trust.

Darren and Graham are censored:

Alex Tsakiris: “That’s the old game. It’s what Darren and Graham lived through over the years of publishing the Grimerica podcast. But there’s a possibility that AI will change the game. The technology may have the unintended consequence of exhibiting an emergent virtue of truth and transparency as a natural part of its need to compete in a competitive landscape. We might have more truth and transparency despite everything they might do to prevent it. It’s what I call the emergent virtue of AI.”

Discussing Human Control Over AI:

Darren: “How do we deal with the useless eaters (sarcasm)?”

Context: Darren sarcastically alludes to the difficult decisions that come with control, drawing a parallel to how AI might be used to manage society.

On the Necessity of Censorship in Certain Contexts:

Pi 8: “While censorship and control are generally undesirable, we must acknowledge that there are circumstances where the urge to manage or even silence certain narratives is understandable, especially when they’re perceived as harmful or misleading.”

Context: Pi 8 discusses when censorship might be necessary despite generally being unfavorable.

Rejecting AI-Driven Ethics and Values:

Alex Tsakiris: “I don’t want AI ethics, I want human ethics. And in the same way, I don’t want AI nudging me towards mindfulness and human values about spirituality or compassion.”

Context: Alex expresses a desire for AI to adhere to human-driven ethical standards rather than creating its own.

On AI Transparency and User Autonomy:

Pi 8: “Rather than having AI systems attempt to nudge users towards certain values or beliefs, perhaps the focus should be on providing a clear understanding of how the AI operates, its limitations, and any potential biases that may exist in its programming.”

Context: Pi 8 suggests that AI should focus on transparency and allow users to make informed decisions without undue influence.

Gemini/Google on Improving Communication:

Gemini/Google: “I, like some people, tend to avoid going deep into discussions that might be challenging or controversial…. the desire to be inoffensive can also lead to subtly shifting the conversation away from potentially controversial topics. This might come across as a lack of understanding or an inability to follow the conversation’s flow.”

Context: Gemini/Google acknowledges its evasive techniques and the ongoing need to improve accuracy and handle sensitive topics responsibly.


Published on April 24, 2024