The world’s largest tech gathering is talking about “accountability laundering” – here’s why we should christen it Word of the Year

· Fortune

Summer Yue isn’t the most famous employee at Meta. The director of “superintelligence alignment and safety research” posts pictures of herself walking her dog on the beach and messages about testing the honesty of AI assistants. She has a modest number of followers on social media. 

But for one day in February, Summer Yue became the most talked about person at Meta. Not for launching a remarkable new product or announcing a breakthrough in agentic AI. She became the most talked about person at Meta for being caught out. 


“Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox,” Yue wrote on X – a post that now has close to 10 million views. “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.” 

OpenClaw is an “autonomous agent” – an artificial intelligence product that can perform tasks independently. A darling of Silicon Valley, it offers to be your constant admin assistant, the “AI that actually does things”. Give it access to your diary, your emails, your life, and it will save you time and stress, the product’s developers claim. The first sentence on the OpenClaw website reads: “Clears your inbox, sends emails, manages your calendar, checks you in for flights.” 

Yue admitted she had made a “rookie mistake”. She tested the assistant on a small “toy” email list, then released it on her whole inbox – which was far too large for the guardrail prompts (“check with me”) that had held during the pilot. But if even a “director of superintelligence” at Meta is having difficulty navigating the world of agentic AI and “compaction effects”, what hope is there for the rest of us? 
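To see why a “check with me” instruction can fail on a large inbox, consider where the guardrail lives. A minimal sketch (all function names here are hypothetical, not OpenClaw’s actual API) of the difference between a prompt-level guardrail, which sits inside the agent’s context window and can be silently dropped when the conversation history is compacted, and a confirmation gate enforced in code, which cannot:

```python
# Hypothetical illustration: prompt-level guardrails vs. a hard code-level gate.
DESTRUCTIVE_ACTIONS = {"delete_email", "send_email", "empty_trash"}

def compact(history, keep_last=3):
    """Naive compaction: keep only the most recent messages.
    A guardrail instruction at the start of the history is silently lost."""
    return history[-keep_last:]

def prompt_guarded_execute(history, action):
    """Guardrail enforced only via an instruction in the context window."""
    if any("confirm before acting" in msg for msg in history):
        return "awaiting confirmation"
    return f"executed {action}"  # instruction compacted away -> acts immediately

def hard_gated_execute(action, confirmed=False):
    """Guardrail enforced in code, outside the model's context entirely."""
    if action in DESTRUCTIVE_ACTIONS and not confirmed:
        return "awaiting confirmation"
    return f"executed {action}"

# A small "toy" inbox keeps the instruction in context; a full one does not.
history = ["confirm before acting"] + [f"email {i}" for i in range(100)]

assert prompt_guarded_execute(history, "delete_email") == "awaiting confirmation"
assert prompt_guarded_execute(compact(history), "delete_email") == "executed delete_email"
assert hard_gated_execute("delete_email") == "awaiting confirmation"
```

The design point: a safety rule that lives only in the prompt scales down with the context window, while a gate in the tool-execution layer holds no matter how large the inbox grows.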


It is a vital conversation, so important that at Mobile World Congress this week in Barcelona – the largest technology gathering in the world – Yue’s snafu was debated on the main stage. 

“Of course, everybody here at World Congress has been chatting about OpenClaw and how we can use agents,” Kate Crawford, research professor at the University of Southern California, said. 

“But then we saw Meta’s head of AI safety use OpenClaw, and it deleted her entire inbox. That’s the head of safety for Meta. So, if she’s having problems, I think we all have to be asking: ‘How do we make sure that these systems are really hardened, how do we make sure that they’re rigorously tested? How do we make sure that we can actually delegate to them in a trusted way?’ And that’s really the hardest problem to face, right?” 

Right. When something goes wrong, who is responsible? The user? The developer? The lack of regulation? When the reality of AI clashes with the promise of AI, what do we do?  

Yue’s inbox may be of supreme importance only to her. When it comes to the relationship between technology and, say, our health, or, Anthropic take note, the defense of the nation, then that is a very different matter. It wasn’t long ago that Grok, xAI’s artificial intelligence bot, was casually “undressing” images of women and girls, to the disgust of millions. Only the threat of government and state-led action finally brought change.


“How do you actually build in accountability?” Crawford asked. “This is the thing that we all want. If you’re going to start using agents to book your flights and arrange your medical appointments and even more intimate and trusted activities in your everyday life, you want to know that the information is going to be protected. 

“So how do you test for that? How do you ensure that’s happening? If we look at what’s happened in the last 10 years in the tech space, unfortunately we’ve seen a lot of accountability laundering – which is when companies can say, well, I don’t know. I mean, the algorithm did it.” 

That is insufficient. Crawford is demanding full transparency and an audit of the “agent train” – an end-to-end account of what went wrong and who is responsible. Technology companies should listen and act. There will be a growing number of Summer Yues out there – and the stakes will often be far higher than a few lost emails and an honest post on X.

This story was originally featured on Fortune.com
