Explaining LLMs to My Grandma

If she doesn’t get it, maybe I don’t either.

This episode will cover:

  • Why metaphors matter when explaining AI

  • How older generations will adapt to AI

  • Three ways AI can help and harm the vulnerable

  • How kids and digital natives learn differently

  • Who technology is really being built for


Explaining new technology is a test of clarity. If I cannot explain large language models to my grandma, then maybe I do not understand them well enough myself.


With LLMs, the jargon comes first: tokens, embeddings, parameters. But those words do not open doors. They close them. A better way is metaphor. Autocomplete on steroids. A giant library that guesses the next page before you read it. The metaphor is not perfect, but it is a bridge.
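
The metaphor can even be made concrete. Below is a toy "autocomplete" in Python, a minimal sketch over a made-up three-sentence corpus: count which word tends to follow which, then always guess the most frequent follower. A real LLM replaces this little frequency table with billions of learned parameters, but the core move, guessing the next word from what came before, is the same.

# A toy autocomplete: count word-to-word follow frequencies, then
# repeatedly guess the most common next word. The corpus is made up
# purely for illustration; real LLMs learn far richer patterns.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (bigram counts).
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    # Guess the word that most often followed `word` in the corpus.
    if word not in followers:
        return "?"
    return followers[word].most_common(1)[0][0]

# "Autocomplete on steroids", minus the steroids: keep guessing the
# next word, one step at a time.
word = "the"
sentence = [word]
for _ in range(5):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))  # prints: the cat sat on the cat

Run it and it babbles plausibly for a few words before looping back on itself, which is also roughly how the earliest language models behaved.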

Kids and young adults adapt differently. We grew up with technology, so our instinct is to dig deeper and experiment. For my grandma, the test is simple: does the tool make her life lighter, or does it make things more confusing?

Then comes the harder question: how will AI be adapted for people like her? Older generations are not digital natives. They did not grow up scrolling feeds or training themselves to learn new tools overnight. For them, AI will need to work in three ways:

  1. Automation: taking away friction, from bills paid to forms filled.

  2. Accessibility: making tools simple enough to use without instructions.

  3. Protection: guarding against scams and misinformation where they are most vulnerable.


On paper, this is what AI should bring to older generations. From a regulatory perspective, these are not conveniences but public priorities. If systems are designed well, they can reduce strain on healthcare, lower financial vulnerability, and keep citizens active in daily life.

The concern is that these same points are also where profit tends to creep in. Insurance companies once promised to manage risk but now present themselves as health companies, even though their incentives still depend on claims and costs. AI could follow the same pattern: a tool framed as protection might actually monetize confusion through hidden fees, advertising, or the exploitation of personal information. In this new era, personal data will be harder than ever to protect, which makes the conversation urgent now, given the foreseeable risks to vulnerable communities.

For policymakers, the challenge is direct. Will AI be deployed to reduce vulnerability, or will it create new dependencies that exploit it? The answer decides whether older generations experience AI as relief or as another layer of risk.
