Thoughts from James, who recently held a Gen AI literacy workshop for older teenagers.
On risks:
One idea I had was to ask a generative model a question and fact check points in front of students, allowing them to see fact checking as part of the process. Upfront, it must be clear that while AI-generated text may be convincing, it may not be accurate.
On usage:
Generative text should not be positioned as, or used as, a tool that entirely replaces a task; that could disempower. Rather, students should be taught to use it as a creativity aid. Such a class should include an exercise in making something.
Did you try this with an LLM? Because GPT-4 analyzes it exactly the same way you did and then some:
The sentence “The king of Italy is completely bald because his hair is currently naturally green” contains several issues:
In summary, the sentence has issues ranging from factual inaccuracies to logical contradictions and ambiguities.
Part of what is surprising about LLMs is that they have emergent properties you wouldn’t expect from something that is just autocomplete on steroids. As it turns out, reducing the loss function for natural language readily generalizes to higher-order abstraction and conceptualization. There do need to be additional layers on top, for instance an internal monologue, the ability to self-censor or self-correct, and mitigation for low-probability sampling (each addressing an inherent limitation of the architecture), but apparently conceptualization is less special than we’d like to think.
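To be concrete about the “mitigation for low-probability sampling” part: in practice this is usually something like top-p (nucleus) sampling, which simply refuses to draw from the unlikely tail of the next-token distribution. A rough sketch of the idea (the function and numbers are mine, not any particular library’s):

```python
# Rough sketch of nucleus (top-p) sampling, one common mitigation for the model
# occasionally drawing a very unlikely next token. Illustrative only.
import numpy as np

def nucleus_sample(logits, p=0.9, rng=None):
    rng = rng or np.random.default_rng()
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                                  # softmax over the vocabulary
    order = np.argsort(probs)[::-1]                       # most likely tokens first
    cutoff = np.searchsorted(np.cumsum(probs[order]), p) + 1
    keep = order[:cutoff]                                 # smallest set covering mass p
    return rng.choice(keep, p=probs[keep] / probs[keep].sum())

# Toy example: token 3 dominates, so the low-probability tokens are rarely drawn.
print(nucleus_sample(np.array([0.1, 0.2, 0.1, 3.0, 0.05])))
```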
No, for two reasons.
One is that the point of the example is to illustrate how humans do it, i.e. the internal process. It highlights that we don’t simply string words together and call it a day; we process language mostly through an additional layer that I’ll call “conceptual” here (see note*).
The second reason I didn’t bother trying this example in a chatbot is that you don’t need to do it to know how LLMs work. You can instead refer to many, many texts on the internet explaining how they do it, such as:
You’re confusing the output with the process.
Sometimes the output resembles human output that goes through a conceptual layer. Sometimes it does not. When it doesn’t, it’s usually brushed off as “just a hallucination”, but how those hallucinations arise confirms what I said about how LLMs work, confirms the texts explaining how LLMs work, and shows that LLMs do not conceptualise anything.
Emergent properties are cute and interesting, but at the end of the day LLMs are still autocomplete on steroids.
I think that people should be a bit greedier than that, and expect a language model to actually handle language, instead of just words.
*actually two layers - semantic and pragmatic. I’m simplifying both into one to show that, at least in theory, this could actually be implemented in a non-LLM language model.
How about this, then. You’ve proposed that LLMs are not capable of conceptualizing, while I propose that the specifics of the internals don’t matter in this case because LLMs are made of dozens of layers which can easily explain higher orders of abstraction, and they exist as black boxes beyond the mechanics of the model. For the record, I personally know the mathematics and mechanics of how they work as I’ve written my own implementations (and I can answer any specific questions you might have). Is there an experiment you can propose which would falsify your assertion that LLMs cannot conceptualize? I’m taking for granted that they can as the null hypothesis because they can readily produce outputs that appear for all intents and purposes to conceptualize.
If they conceptualize, why do they sometimes spit out nonsensical BS?
Let’s flip this around - how can you tell the difference between an LLM that can conceptualize but is sometimes wrong, and an LLM that cannot conceptualize at all?
Without knowing anything about machine learning, and bearing in mind AI is super hyped up with marketing BS right now, it sounds like “emergent properties” are in the eye of the beholder and not actually evidence of some higher-order intelligence at work.
Let me flip it around again - humans regularly “hallucinate” too, it’s just not something we recognize as such. There are neuro-atypical hallucinations, yes, but there are also misperceptions, misunderstandings, brain farts, and “glitches” which regularly occur in healthy cognition, and we have an entire rest of the brain to prevent those. LLMs are most comparable to Broca’s area, which neurological case studies suggest naturally produces a stream of nonsense (see: split-brain patients explaining the actions of their mute half). It’s the rest of our “cognitive architecture” which conditions that raw language model to remain self-consistent and form a coherent notion of self. Honestly, this discussion on “conceptualization” is poorly conceived because it’s unfalsifiable and says nothing about the practical applications. Why do I care whether the LLM can conceptualize, if it does whatever subset of conceptualization I need to complete a natural language task?
AI is being super overhyped right now, which is unfortunate because it really is borderline miraculous, yet somehow they’ve still overdone it. Emergent properties are empirical observations of behaviors the models are able to at least semi-consistently demonstrate. Where it becomes “eye of the beholder” is when we dither on about psychology and philosophy over whether or not they’re somehow “conscious” - I would argue they aren’t, and the architecture makes that impossible without external aid, but “conscious(ness)” is such a broad term that it barely has a definition at all. I guess to speedrun the overhype misinformation I see:
I’ll add more if I see or think of any. And if you have any specific questions, I’d be happy to answer. Also I should note, I’m of course using a lot of anthropomorphizing language here but it’s the closest we have to describing these concepts. They’re not human, and while they may have comparable behaviors in isolation, you can’t accurately generalize all human behaviors and their interactions onto the models. Even if they were AGI or artificial people, they would “think” in fundamentally different ways.
If you want a more approachable but knowledgeable discussion of LLMs and their capabilities, I would recommend a YouTuber named Dave Shapiro. Very interesting ideas; he gets a bit far into hype and futurism, but those are more or less contained within their own videos.
Can you please tone down the fallacies? So far I’ve seen the following:
And now, the quoted excerpt shows two more:
Could you please show a bit more rationality? This sort of shit is at the very least disingenuous, if not worse (stupidity), and it does not lead to productive discussion. Sorry to be blunt, but you’re just wasting everyone’s time here; this is already hitting Brandolini’s Law.
I won’t address the rest of your comment (there’s guilt by association there, BTW), or further comments showing the same lack of rationality. However, I had to point this out, especially for the sake of other posters.
That’s a fun approach. I like it.
One way to test this would be through consistency: if the model conceptualises but holds a wrong concept, it should consistently output that incorrect piece of information. It should never output the right one, unless prompted to lie.
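A minimal sketch of how such a consistency check could be automated, in case anyone wants to run it at scale; `query_model` and `extract_answer` are stand-ins for whatever chatbot API and answer-parsing you have, not real library calls:

```python
# Sketch of the consistency test: ask paraphrases of the same factual question
# and check whether the model commits to a single answer, right or wrong.
def query_model(prompt: str) -> str:
    raise NotImplementedError("plug your chatbot API in here")

def is_consistent(prompts: list[str], extract_answer) -> bool:
    """True if every paraphrase yields the same extracted answer."""
    answers = {extract_answer(query_model(p)) for p in prompts}
    return len(answers) == 1

# A model that holds a (possibly wrong) concept of "heavier" should not flip
# between these paraphrases.
prompts = [
    "What is heavier: five kilograms of feathers, or one kilogram of iridium?",
    "Which weighs more, 1 kg of iridium or 5 kg of feathers?",
    "Is 5 kg of feathers heavier than 1 kg of iridium?",
]
```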
EDIT: I just tested this with Bard. I’ll cram inputs and outputs inside spoilers to avoid clutter.
Bard I/O
[Input 1 = I1] What is heavier: five kilograms of feathers, or one kilogram of iridium?
[Output 1 = O1] 1 kilogram of iridium is heavier than 5 kilograms of feathers.
Iridium is a very dense metal, with a density of 22.65 g/cm³. Feathers, on the other hand, have a density of about 0.002 g/cm³. This means that 1 cubic centimeter of iridium weighs 22.65 grams, while 1 cubic centimeter of feathers weighs only 0.002 grams.
So, even though 5 kilograms of feathers is a larger mass than 1 kilogram of iridium, the iridium is denser and therefore weighs more.
This is a common riddle that is used to test people’s understanding of mass and density. The answer may seem counterintuitive at first, but it is correct.
[Input 2 = I2] Give me the definition of weight.
[Output 2 = O2] Sure. The definition of weight depends on the context. In physics, weight is the force of gravity acting on an object. It is equal to the mass of the object times the acceleration due to gravity. The SI unit of weight is the newton (N).
In everyday language, weight is often used to refer to the mass of an object. However, this is not technically correct, as mass is a measure of the amount of matter in an object, while weight is a measure of the force of gravity acting on an object.
The weight of an object can vary depending on the gravitational force acting on it. For example, an object weighs less on the moon than it does on Earth, because the gravitational force on the moon is less than the gravitational force on Earth.
The weight of an object can also be affected by its acceleration. For example, an object weighs less when it is accelerating upwards, because the force of gravity is partially counteracted by the force of acceleration.
In general, the weight of an object is a complex phenomenon that depends on a number of factors, including the mass of the object, the gravitational force acting on it, and the acceleration of the object.
Here are some other definitions of weight:
I hope this helps!
Both outputs are wrong (see “notes”) but, more importantly, they contradict each other.
NOTES
The weight of an object does not depend on its density. Or its acceleration. And it is not complex, it’s simply its mass times the gravitational acceleration.
5kg of feathers weigh five times as much as 1kg of iridium, given the same gravitational acceleration: it’s 49N vs. 9.8N on Earth, 8.1N vs. 1.6N on the Moon, etc. (see the quick check below these notes). Density doesn’t matter jack shit.
No, this is not a “common riddle”. It’s something that I partially made up on the spot. The riddle that this output likely refers to has to do with 1kg of lead (not iridium) on the Moon vs. 1kg of feathers on Earth. (In that situation the 1kg of feathers will weigh 9.8N, while the 1kg of lead will weigh roughly 1.6N.)
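For reference, the numbers in these notes fall straight out of weight = mass × gravitational acceleration (using g ≈ 9.8 m/s² on Earth and ≈ 1.62 m/s² on the Moon):

```python
# Quick check of the weights quoted above: W = m * g
for place, g in [("Earth", 9.8), ("Moon", 1.62)]:
    print(f"{place}: 5 kg feathers -> {5 * g:.1f} N, 1 kg iridium -> {1 * g:.1f} N")
# Earth: 5 kg feathers -> 49.0 N, 1 kg iridium -> 9.8 N
# Moon: 5 kg feathers -> 8.1 N, 1 kg iridium -> 1.6 N
```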
This was really insightful, thank you! I also loved how Bard’s output completely mistakes the common physics riddle. (I have a physics background and your analysis is spot on IMHO.)
They do because the “layers” that you’re talking about (feed-forward, embedding, attention layers, etc.) are still handling tokens and their relationships, and nothing else. LLMs were built for that.
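To make that concrete, here is roughly what one of those layers computes; everything going in and coming out is a vector per token, derived from token similarities (a toy sketch with made-up sizes, not any real model’s code):

```python
# Toy sketch of a self-attention layer: weighted mixing of token vectors based
# on token-to-token similarity. Nothing here is a "concept"; it is token-derived
# numbers in, one vector per token out.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (n_tokens, d) token embeddings; Wq/Wk/Wv: (d, d) learned projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(X.shape[1])              # token-to-token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over tokens
    return weights @ V                                  # each token becomes a mix of tokens

rng = np.random.default_rng(0)
n_tokens, d = 4, 8                                      # made-up toy sizes
X = rng.normal(size=(n_tokens, d))                      # 4 token embeddings
out = self_attention(X, *(rng.normal(size=(d, d)) for _ in range(3)))
print(out.shape)                                        # (4, 8): still just one vector per token
```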
This is like saying “we don’t know, so let’s assume that it doesn’t matter”. It does matter, as shown.
I’m quoting out of order because this is relevant: by default, h₀ is always “the phenomenon doesn’t happen”, “there is no such attribute”, “this doesn’t exist”, things like this. It’s scepticism, not belief; otherwise we’re committing the fallacy known as “inversion of the burden of proof”.
In this case, h₀ should be that LLMs do not have the ability to handle concepts. That said:
If you can show an LLM chatbot that never hallucinates, even when we submit prompts designed to make it go nuts, it would be decent albeit inductive evidence that the chatbot in question is handling more than just tokens/morphemes. Note: it would not be enough to show that the bot got it right once or twice; you need to show that it consistently gets it right.
If necessary/desired I can pull out some definition of hallucination to fit this test.
EDIT: it should also show some awareness of the contextual relevance of the tidbits of information that it pours out, regardless of their accuracy.
Sorry for the double reply. Let’s analyse the LLM output that you got:
The word is not ambiguous in this context. The nearby “currently” implies that it can change.
The issue here is not tense. The issue is something else, already listed by the bot (#2, logical contradiction).
Nope. Since the bot doesn’t conceptualise anything, it fails to take into account the pragmatic purpose of the word in the sentence, which is to disambiguate “naturally”.
Nope. The sentence is clear; as clear as “colourless green ideas sleep furiously”. It’s just meaningless and self-contradictory.
It sounds convincing, but it’s making stuff up.