- cross-posted to:
- europe@feddit.org
- world@lemmy.world
- news@lemmings.world
I don’t understand this.
All LLMs can hallucinate; it's a feature.
Hopefully what they mean is taking this opportunity to put some regulations on all LLMs.
Hallucinations are one thing. Adding a right-wing bias in the system prompt or training data is different.
Still though, who’s liable when they hallucinate something illegal?

