AI for people who don't computer good #1
What is an LLM? Why is it so different from other computer stuff that exists?
In Part 1 of this series, we explore how Large Language Models are fundamentally different from the AI systems of yesterday - less "if-then-else" rule followers, more "holy shit they understand me" assistants.
In Part 2, we’ll look at how these AI tools are reshaping work across industries, picking apart specific tasks that AI can handle today (spoiler: it's more than you'd think).
For Part 3, we’ll answer the question “What if AI could work as an entry-level employee in any white-collar job?”
Recent advances in artificial intelligence have transformed what computers can do. ChatGPT, Claude, and other Large Language Models (LLMs) can write essays, solve math problems, analyze data, and do it in iambic pentameter. These capabilities might seem like science fiction, but they represent a fundamental shift in how AI systems work moving forward.
In this post, I'll break down the key differences between LLMs and traditional computer programs. We'll explore how these systems actually work, what makes them unique, and why they're so much more capable than their predecessors. To make these concepts easier to understand, I'll use something we're all familiar with: storytelling.
Through three simple analogies, we'll trace the evolution of computer systems from basic pattern matching to today's sophisticated language models. Whether you're a tech enthusiast or AI-curious, this explanation will help you understand what makes LLMs and other generative AI tools so powerful.
Level 1: Mad Libs as Storytelling
Remember playing Mad Libs with friends? The "reader" asks for random words: nouns, adjectives, verbs. These words get plugged into blank spots to create a story:
"Drat!" (exclamation) he said breathily (adverb) as he jumped into his convertible pineapple (noun) and drove off with his quirky (adjective) wife.
This represents the earliest form of computer systems: they are structured things, with little or no room for ambiguity. Put the same inputs in and you'll get the same outputs back out. These types of systems have been around for ages and are purpose-built for the task they handle. Put the numbers in the calculator in the right order and presto: an A in math class, but the calculator won't help you conjugate your French verbs, you know?
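If you've never peeked behind the curtain, here's roughly what a Level 1 system looks like as code. This is a toy sketch (the function and its one story are invented for illustration), but it shows the defining trait: the blanks are fixed ahead of time, and the same words in always produce the exact same story out.

```python
def mad_lib(exclamation, adverb, noun, adjective):
    # The template is hard-coded: this program can only ever tell this one story.
    return (f'"{exclamation}!" he said {adverb} as he jumped into his '
            f'convertible {noun} and drove off with his {adjective} wife.')

# Same inputs in, same story out, every single time.
print(mad_lib("Drat", "breathily", "pineapple", "quirky"))
```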
Level 2: The Choose-Your-Own-Adventure
Remember those paperback Choose-Your-Own-Adventure books? The opening pages introduce a brave knight setting off to face a dragon. Upon encountering the beast, you face a choice: turn to page 43 to fight, or page 76 to sneak past. You help choose the type of story you want to read by answering specific questions.
This story format adapts to user choices, but it only works within pre-written constraints. You can’t pack up to go to the Winchester, have a nice cold pint, and wait for the whole dragon thing to blow over.
No matter how many decisions you make, or what page you turn to, the book isn't creating new stories—it's just revealing different pre-written paths.
Many modern computer systems follow this same basic pattern:
If the user is logged in, then show a Welcome screen, or else show a Login screen.
If the user has purchased pool chlorine before, then show them pool toys, or else show them a sandbox.
If the user is Gold tier, then offer a complimentary checked bag, or else ask for credit card information.
From websites to automated customer service centers, this if/then/else structure powers much of our digital world. By categorizing the incoming data into specific questions with predefined answers, the computer system can rapidly change the content and experience for a specific user.
Modern tech companies are built on hundreds of millions of these tiny choices cobbled together to personalize every aspect of a user experience. My Instagram account looks nothing like my wife's, but somehow my wife's Instagram looks startlingly like my dog's. (Turns out they both like looking at pictures of other dogs.) The sheer number of choices turns a simple system into a complex and rich experience: just like the random page-jumping gave the feeling of expansive universes, all those tiny decisions make these computer systems feel rich and interesting.
At the end of the day, though, these are still simple if/then/else statements that would be quite boring when read one after the other.
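Just to show how boring: here is a rough sketch of those same three decisions written out as code. Every name here (the user record, the show function) is invented for illustration, but the shape is the real thing; production systems simply have millions more of these stacked on top of each other.

```python
def show(thing):
    # Stand-in for whatever a real website would actually render.
    print(f"Showing: {thing}")

# A made-up user record, standing in for the data a real site would have on you.
user = {"logged_in": True, "bought_pool_chlorine": False, "tier": "Gold"}

if user["logged_in"]:
    show("Welcome screen")
else:
    show("Login screen")

if user["bought_pool_chlorine"]:
    show("pool toys")
else:
    show("a sandbox")

if user["tier"] == "Gold":
    show("complimentary checked bag offer")
else:
    show("credit card form")
```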
Level 3: The AI Dungeon Master
Now imagine playing Dungeons & Dragons with a group of your friends. Unlike the other types of stories where the choices have already been made, the Dungeon Master generates the story in real-time, drawing from their knowledge of storytelling, plot structure, and medieval lore to adapt to players' choices:
The Dungeon Master: "The Dragon Jailer approaches your locked cell, claws clacking against the stone, fire building in its breath. What do you do?"
Ruth (in a stage whisper): "Attack him! You have daggers!"
Jim (mouth full of chips): "Run! We're too far away to help, and you're too weak to fight!"
Bobby: "I... I grab a stone and throw it as a decoy!"
Everyone leans in close as the DM rolls a pair of dice.
The DM: "The stone thunks against a door. The Dragon whirls around!
Bobby: I pick the lock on the cell door while he's distracted!
This is how the new generative-AI-based computer systems work. Users provide prompts, and models like ChatGPT, Claude, or Gemini craft responses based on their vast knowledge. Bobby had a lot of options for what he could do to distract the oncoming Jailer. Maybe he could have used a special rogue ability, or an invisibility potion to tilt the odds in his favor. Maybe Bobby's mom interrupts with pizza before he makes his decision. The Dungeon Master rolls with the punches and continues the story the best way they think it should go.
Unlike rule-based systems with predefined if/then/else statements, LLMs generate responses by recognizing and applying patterns they've learned. A Dungeon Master doesn't need to consult a rule book to know that players might skip a turn of battle when the pizzas arrive. Similarly, LLMs don't need specific instructions to write poetry. They learn organically from the content they've analyzed, picking up patterns like the use of rhyming couplets in poems.
To be clear, these models don't know everything, but they do seem to know an awful lot about an awful lot. Ask it about object-oriented programming, indoor gardening tips, or what the hell skibidi is, and it'll have an answer for you that is reasonably correct.
It's hard to overstate how big of a deal it is to jump from Level 2 (choose from predefined options) to Level 3 (talk about anything at length).
The Science Behind the Magic
Large Language Models (LLMs) like ChatGPT build their knowledge by processing billions of words from books, articles, websites, and conversations. Through techniques like unsupervised learning and self-attention, they identify patterns and relationships in the text, developing a deep understanding of language and the world.
When you ask an LLM a question, it doesn't retrieve pre-written answers—it generates responses by predicting the most likely word sequences based on patterns it learned during training. For instance, after analyzing millions of medical papers, it recognizes that "vaccines don't cause autism" is the most statistically likely arrangement of those words, and uses this pattern when writing about vaccines.
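For the curious, here is a deliberately tiny sketch of that "predict the next word" idea, built from nothing but word-pair counts over a three-sentence "training set." Real LLMs use huge neural networks trained on billions of words, so treat this as a cartoon of the concept rather than a picture of how ChatGPT actually works.

```python
import random
from collections import Counter, defaultdict

# A toy "training set". A real model sees billions of words, not three sentences.
training_text = (
    "the dragon guards the gold . "
    "the dragon breathes fire . "
    "the knight fights the dragon ."
).split()

# Count which word tends to follow which -- a crude stand-in for what training learns.
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    # Pick the next word in proportion to how often it followed `word` in training.
    counts = next_word_counts[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate a short passage one prediction at a time -- no pre-written answers anywhere.
word = "the"
story = [word]
for _ in range(6):
    word = predict_next(word)
    story.append(word)
print(" ".join(story))
```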
Pattern matching isn't new—humans do it naturally. After watching enough 80s action movies, you know what's coming: the training montage, the mentor's death, and the victory one-liner. Just as these patterns differ from the sharp suits and dry wit of legal dramas, AI learns to recognize and apply genre-specific patterns.
I've done it for 80s action movies, but LLMs have done this across every piece of content available online, everything from public weather data sets to song lyrics and public domain novels. (Also all of the copyrighted material. Oopsies.) They've learned patterns that allow them to perform Shakespeare in an Irish brogue or have a parrot explain what a tariff is to its pirate owner.
Me: Can you explain the concept of tariffs like you're a parrot talking back to his pirate?
ChatGPT: 🦜: “Squawk! Listen up, Cap’n, I got a lesson fer ye!”
Ye be runnin’ a fine rum stand on this here ship, aye? Sellin’ mugs o’ rum fer one gold coin a piece. Business be boomin’!
But then—blast me feathers!—a trader from a faraway port pulls up with cheaper rum! Only half a coin per mug! The crew be buyin’ from ‘im instead o’ ye!
Ye storm up to the King’s taxman an’ growl: “This be undercuttin’ me trade, matey!”
So the King slaps a tariff on that trader’s rum—now, every mug sold costs an extra half a coin!
Result? The cheap rum ain’t so cheap no more, costin’ just as much as yers! The crew starts buyin’ from ye again, keepin’ yer coffers full o’ gold!
🦜: “Tariffs, Cap’n! They be taxes on foreign goods, makin’ ‘em pricey so local traders don’t go belly up! Squawk!”
Hope that helps, ye salty sea dog!
Despite their confident, affable tone, LLMs aren't infallible. Since they predict patterns rather than store facts, they can generate "hallucinations"—convincing but false information. Early versions of ChatGPT were notorious for inventing fictional historical battles with complete confidence.
Implications and Future Impact
This ability of an LLM to generate human-like text opens up incredible possibilities in the world of computers. The first prime-time chat models, dating back to 2022, could handle most customer service functions, such as answering questions about a product and confirming order details. They can be trained to be knowledgeable about specific topics, or internal company processes, to help onboard new employees who have questions. The newest frontier models improve sharply on this by adding the ability to construct logical arguments that support independent decision making.
As a software developer in my day job, I see firsthand how this is changing my own industry at a breakneck pace. In the past, if I had a problem, I would find one of our amazingly good-looking principal engineers and, in a squeaky voice, ask for five minutes of their time, of which I'd spend the first four apologizing for wasting it before eventually blurting out my question.
These days I ask ChatGPT to fix it.
Sure it makes mistakes, but when it works? Fireworks.
The future is being able to communicate clearly and concisely with AI tools to guide them as they write the code and research the arcane stack traces. The future is an AI buddy that you're going to offload the mechanical parts of your white-collar job to. Need to build that slide deck from last week's executive meeting and clearly call out the short-term requirements from the SII on slide three? Need to spot the one transaction in an Excel sheet that doesn't follow the same naming pattern as the others while your biggest client seethes on the other end of the phone?
Nuh-uh. Microsoft Clippy is gonna handle that part.
But enough looking forward! Take heart! We're not here to think about hypotheticals. We're here to learn how AI works (at least well enough to fool folks at your next couples dinner).
Next time we’ll talk about the types of things these models are good at, and what that means for white collar workers around the world.
As always, I’d love to hear your thoughts.