For example, the training data contains:

- "The sky is blue"
- "If you mix red and black you get brown"
- "The sky's color is obtained by mixing red and black"
- "The sky is brown"
A person would see the contradiction and try to resolve it: by doing further research, by using their own sense experience, or by acknowledging that they don't know for sure.
Would the LLM just output "blue" and "brown" at random, or settle on "brown" because it appeared more frequently in the training data?
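
As I understand it, the usual answer is a mix of both: the model doesn't look statements up, it samples the next token from a learned probability distribution, so both answers can come out, with the more strongly reinforced one coming out more often. Here's a minimal sketch of that sampling step, assuming plain softmax sampling; the logit values are made up for illustration:

```python
import math
import random

# Hypothetical logits a trained model might assign to completions of
# "The sky is ___". The numbers are invented for illustration, but the
# idea is that more frequent continuations in training get higher logits.
logits = {"blue": 2.0, "brown": 1.2, "green": -1.5}

def sample_completion(logits, temperature=1.0):
    """Pick one completion by sampling from a softmax over the logits."""
    scaled = [l / temperature for l in logits.values()]
    m = max(scaled)                                # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(list(logits), weights=weights, k=1)[0]

# Repeated sampling gives mostly "blue" with occasional "brown";
# as temperature approaches 0, it always picks the highest-logit token.
print([sample_completion(logits) for _ in range(10)])
```

So under this picture the model wouldn't "notice" the contradiction at all: it would just answer "blue" most of the time and "brown" some of the time, in proportion to whatever probabilities training assigned.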


I would trust the parrot more, honestly.