

I feel like the article answers the question, or rather it gives the researchers a chance to answer the question:
When I spoke with them, both Albrecht and Fournier-Tombs were clear that the goal of the workshop was to spark conversation and deal with the technology now, as it is.
“We’re not proposing these as solutions for the UN, much less UNHCR (United Nations High Commissioner for Refugees). We’re just playing around with the concept,” Albrecht said. “You have to go on a date with someone to know you don’t like ‘em.”
Fournier-Tombs said that it’s important for the UN to get a handle on AI and start working through the ethical problems with it. “There’s a lot of pressure everywhere, not just at the UN, to adopt AI systems to become more efficient and do more with less,” she said. “The promise of AI is always that it can save money and help us accomplish the mission…there’s a lot of tricky ethical concerns with that.”
She also said that the UN can’t afford to be reactive when it comes to new technology. “Someone’s going to deploy AI agents in a humanitarian context, and it’s going to be with a company, and there won’t be any real principles or thought, consideration, of what should be done,” she said. “That’s the context we presented the conversation in.”
The goal of the experiment, Albrecht said, was always to provoke an emotional reaction and start a conversation about these ethical concerns.
“You create a kind of straw man to see how people attack it and understand its vulnerabilities.”
So if you read the headline and have the obvious visceral reaction, if you find yourself asking that question from the article, it sounds like that is the point. They're doing it now so that if people see it and say "that's stupid," that hopefully stops xAI or someone else from trying this to profit off the suffering of poor people. Alternatively, if people see it and say "wow, this actually helped me understand," that is also useful for the world at large. It doesn't sound like the latter is the case, but that's why you test a hypothesis.
I guess my point is that I understand why the researchers are doing it: the UN gave them money to research ways the UN could use AI, so that is what they did. It's not like the research is unethical in the sense that it directly harms participants. Maybe it's a dumb waste of money, but at that point the question is really for the UN leaders who said "we should give someone money to research AI." And I don't know that 404 Media has the pull to interview those people.