Field Notes / Still Human
Issue 01 / Guide

The Smartest Intern in the World

How to explain LLMs, tools, MCP, skills, and agents without pretending any of this is magic.

Think of modern AI as a very smart intern: fast, capable, and surprisingly useful, but only if you give it the right tools, clear instructions, and a system to work inside. This guide walks through that story one layer at a time, from raw LLMs to full agent workflows.

LLMs · Tools · MCP · Skills · Agents
11 short chapters · A clean mental map of how the modern AI stack actually fits together.
Hands-on prompts · Each section gives you something small and practical to test immediately.
Useful in real teams · Made for developers and leads who want to explain AI without the hand-wavy nonsense.
THE INTERN
This guide starts with a simple premise: modern AI is powerful, but raw intelligence is only the beginning. The interesting part is what happens next, when you give that intelligence structure, tools, memory, interfaces, and a way to actually do useful work.
Chapter 1

The Room

LLM Basics

Start with the core picture: you hired the smartest person in the world, then shut them in a room with a desk and a mail slot. They can only work with what fits on the desk, and they can only respond with what they send back through the slot. That is the basic LLM model.

Context Window = desk size (Claude: 200K, GPT: 1M tokens) · Tokens = subword chunks via BPE (~1 token ≈ 4 chars) · Temperature = creativity dial (0 = precise, 1 = creative) · Self-Attention = intern cross-references ALL papers on the desk (O(n²) cost)
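The desk-size idea above can be sketched in a few lines. This is a rough heuristic only (the real tokenizer is BPE, so counts vary by language and content); the function names and the ~4-chars-per-token rule of thumb are the chapter's approximation, not an exact API.

```python
def rough_token_count(text: str) -> int:
    """Rough estimate: ~1 token per 4 characters of English text."""
    return max(1, len(text) // 4)

def fits_on_desk(text: str, context_window: int = 200_000) -> bool:
    """Check whether a document fits the intern's desk (context window).
    200K is the Claude figure quoted above; pass 1_000_000 for a 1M model."""
    return rough_token_count(text) <= context_window
```

For real counts, use the provider's tokenizer; this sketch is only good enough for budgeting before you paste a document into the slot.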
Chapter 2

The Phone

Tools & Function Calling

One day, a phone appears on the wall. The intern can't leave the room, but now they can call out for data, calculations, or actions.

4-Step Protocol: User asks → LLM generates JSON tool_use → Host executes tool → Result slides back · Key: LLM NEVER executes. It only writes JSON. The host runs the tool. · Strict Mode forces valid JSON matching the tool's schema
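The key line above, "the LLM never executes, it only writes JSON," can be sketched as a tiny host loop. The tool registry and the `get_weather` tool are hypothetical stand-ins, not any vendor's real API.

```python
import json

# Hypothetical host-side tool registry; the LLM never runs these itself.
TOOLS = {
    "get_weather": lambda city: f"18°C and cloudy in {city}",
}

def handle_model_output(model_message: str) -> str:
    """Steps 3-4 of the protocol: the host parses the model's tool_use
    JSON, executes the named tool, and sends the result back through
    the slot. The model only *wrote* the JSON; the host does the running."""
    call = json.loads(model_message)
    result = TOOLS[call["name"]](**call["arguments"])
    return json.dumps({"tool_result": result})

# Step 2: instead of prose, the model answers with structured JSON.
model_message = '{"name": "get_weather", "arguments": {"city": "Berlin"}}'
print(handle_model_output(model_message))
```

Strict mode lives on the model side: it constrains step 2 so the JSON always matches the declared schema, which is what makes a naive `json.loads` on the host side safe.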
Chapter 3

The Intercom

MCP — The Universal Adapter

Instead of individual phone numbers, the building installs a standardized intercom. The intern writes requests and slides them under the door. The building routes them.

Host = the building (Claude Desktop, VS Code) · Client = intercom panel outside the room · Server = each service (GitHub, Slack, DB) · Protocol: JSON-RPC 2.0 · Transport: stdio (local) or HTTP+SSE (remote) · 5 AI apps + 10 services = 15 integrations (instead of 5 × 10 = 50 pairwise ones)
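A request on the intercom is plain JSON-RPC 2.0. The sketch below shows the general shape of an MCP `tools/call` request; the tool name and arguments are illustrative, not a real server's schema.

```python
import json

# Shape of a JSON-RPC 2.0 request an MCP client might write on the
# intercom. "jsonrpc", "id", "method", and "params" are the standard
# envelope; everything inside "params" belongs to the target server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_issues",                      # illustrative tool name
        "arguments": {"query": "is:open label:bug"},  # illustrative arguments
    },
}
print(json.dumps(request))
```

Because every server speaks this same envelope, the host only needs one client implementation, which is where the 15-instead-of-50 arithmetic comes from.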
Chapter 4

The Instruction Manuals

Skills

Skills are instruction manuals loaded on demand. Hand the intern the right manual when they need it. Keeps the desk clean, the intern focused.

SKILL.md files: rules + examples + checklists in markdown · Progressive disclosure: only loaded when triggered by keyword · Team skills: one person writes it, everyone benefits · Prevents context window pollution
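Progressive disclosure can be sketched as a keyword-triggered lookup. The registry below, including the skill names, triggers, and paths, is a hypothetical example of the mechanism, not a real product's loader.

```python
# Hypothetical skill registry: each SKILL.md is loaded only when one of
# its trigger keywords appears in the request (progressive disclosure).
SKILLS = {
    "code-review": {"triggers": {"review", "pr"},
                    "path": "skills/code-review/SKILL.md"},
    "release-notes": {"triggers": {"release", "changelog"},
                      "path": "skills/release-notes/SKILL.md"},
}

def skills_for(request: str) -> list[str]:
    """Return only the manuals this request needs; everything else
    stays off the desk, so the context window stays clean."""
    words = set(request.lower().split())
    return [s["path"] for s in SKILLS.values() if s["triggers"] & words]
```

Only the matched manual's text is then pasted into context, which is exactly the "prevents context window pollution" point above.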
Chapter 5

The Complete Stack

All Five Layers

Every layer builds on the one below: LLM → Tools → MCP → Skills → Agent. Together, an AI that gets work done.

LLM reads & writes · Tools let it call out · MCP standardizes connections · Skills provide expertise · Agent = the loop: while(!done) { think → act → observe }
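The agent loop above can be written out directly. Here `llm` and `tools` are stand-ins for a real model client and tool registry; the action format (`type`, `tool`, `args`, `answer`) is an assumption for the sketch.

```python
def run_agent(goal: str, llm, tools, max_steps: int = 10):
    """The loop from the chapter: while(!done) { think → act → observe }.
    The step budget keeps a confused intern from looping forever."""
    observations = []
    for _ in range(max_steps):
        action = llm(goal, observations)                  # think
        if action["type"] == "finish":
            return action["answer"]
        result = tools[action["tool"]](**action["args"])  # act
        observations.append(result)                       # observe
    return "stopped: step budget exhausted"
```

Every real agent framework is some elaboration of this loop: better memory for `observations`, richer tool schemas, and guardrails around `act`.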
Chapter 6

Vending Machine, Intern, or Both?

Deterministic vs Hybrid

A vending machine always gives the same thing. The intern might surprise you. Combine both: the sandwich pattern.

Deterministic: same input → same output (if/else, SQL) · Non-deterministic: LLM varies even with same prompt · Sandwich Pattern: deterministic input → LLM reasoning → deterministic output validation · Example: CI reads tests (code) → LLM analyzes failures (creative) → JSON report (validated)
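The sandwich pattern is easiest to see as three labeled stages. In this sketch `llm` stands in for a real model call that returns JSON text; the two-key report schema is an assumption matching the CI example above.

```python
import json

def sandwich(raw_failures: list[str], llm) -> dict:
    """Sandwich pattern: deterministic in, LLM in the middle, deterministic out."""
    # Bottom slice: deterministic input preparation (plain code, sorted
    # so the same failures always produce the same prompt).
    prompt = ("Summarize these test failures as JSON with keys "
              "'cause' and 'fix':\n" + "\n".join(sorted(raw_failures)))
    # Filling: non-deterministic reasoning.
    reply = llm(prompt)
    # Top slice: deterministic output validation; reject off-schema output.
    report = json.loads(reply)
    if not {"cause", "fix"} <= report.keys():
        raise ValueError("LLM output missing required keys")
    return report
```

The bread never surprises you; only the filling does, and the top slice catches it when it surprises you in the wrong way.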
Chapter 7

Two Management Styles

Codex vs Claude Code

Codex is the delegating boss: hand off a task, come back for the result. Claude Code is pair programming: talk it through together.

Codex: cloud sandbox, async, git diff output, reasoning summaries · Claude Code: local machine, real-time streaming, conversation summaries · Compaction: both summarize when context fills up · Multi-agent: both support teams of interns working in parallel
Chapter 8

The Messenger App

OpenClaw

What if you could text your intern on WhatsApp? OpenClaw puts AI on messaging platforms. Quick and accessible, but think about security.

OpenClaw: open-source bridge from LLMs to WhatsApp/Telegram/Slack · Caution: API keys in transit, message logging, no enterprise auth by default · Fine for personal use, careful with company data
Chapter 9

Writing Better Notes

Prompt Engineering

Vague notes get vague results. Specific, structured prompts with examples and constraints get precise, useful output.

6 Rules: 1) Be specific (context + task + format) · 2) Give examples (few-shot) · 3) Set constraints · 4) Step-by-step instructions · 5) Specify output format · 6) Iterate and refine
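Rules 1-5 can all land in a single note; rule 6, iterate, happens across drafts rather than inside one prompt. The code-review scenario below is an invented example of the pattern, not a canonical prompt.

```python
# One specific, structured note for the intern, one line per rule.
prompt = (
    "You are reviewing a Python pull request for the payments service.\n"         # 1: context + task
    'Example comment: "Line 12: requests.get has no timeout; add timeout=10."\n'  # 2: few-shot example
    "Flag only correctness and security issues, never style.\n"                   # 3: constraints
    "Maximum 5 comments.\n"                                                       # 3: constraints
    "Work through the diff file by file, then summarize.\n"                       # 4: step-by-step
    'Respond as a JSON list of {"line": int, "comment": str} objects.\n'          # 5: output format
)
print(prompt)
```

Compare that with "please review my PR" and the vague-notes-get-vague-results point above makes itself.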
Chapter 10

Best Practices

For Daily Work

Your intern works best with a clean desk. Start fresh conversations. Keep context focused. More papers = more diluted attention.

Sweet spots: <10K tokens = peak · 10-50K = great · 50-100K = degradation · 100K+ = careful · "Lost in the Middle" effect: info in the middle of context gets less attention · Tips: one task per conversation, fresh start for new topics, include only relevant files
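The budgeting tips above can be combined into one small helper. This is a sketch under the chapter's own heuristics (the ~4-chars-per-token estimate and the "lost in the middle" effect); the function name and the repeat-the-task trick are illustrative choices, not a library API.

```python
def pack_context(task: str, files: list[str], limit_tokens: int = 10_000) -> str:
    """Stay in the sweet spot: stop adding files once the rough budget is
    spent, and repeat the task at the end, since content in the middle of
    a long context gets less attention ("lost in the middle")."""
    budget_chars = limit_tokens * 4  # ~4 chars per token heuristic
    body = ""
    for f in files:
        if len(body) + len(f) > budget_chars:
            break  # leave the rest for a fresh conversation
        body += f + "\n"
    return f"{task}\n\n{body}\nReminder: {task}"
```

Putting the task at both ends keeps the most important instruction out of the low-attention middle, and the hard cutoff enforces "include only relevant files" mechanically.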
Finale

Five Things to Remember

Your Monday Starts Now

The Five Things Everyone Should Remember

  1. An LLM without tools is a brain without hands. The intern is trapped in a room. Tools give them a phone. MCP standardizes the intercom. Skills are the instruction manuals. Agents give them the keys.
  2. MCP is the REST of AI. Just as REST standardized web APIs, MCP standardizes AI-tool connections. Learn it once, use it everywhere. Both Anthropic and OpenAI support it. We connect GitHub + Azure DevOps + filesystem and the intern sees everything.
  3. Skills are onboarding docs for AI. Instead of writing the same prompt 100 times, package your expertise into a Skill. Your code review checklist becomes the intern's code review checklist. Shared once, used by everyone, improved over time.
  4. Codex delegates, Claude Code pairs. Both produce excellent work. Use Codex Cloud when you want to hand off a feature and come back to a PR. Use Claude Desktop when you want a pair programmer who thinks out loud. Both read the same intercom lines and instruction manuals.
  5. Start Monday. Install the app. Connect GitHub and Azure DevOps. Try one real task from today's sprint. The intern is ready. You don't need to be an AI expert. Write clear notes through the slot.
Start with one tool. Try one workflow. Share one skill with your team.
Monday morning, you'll have the smartest intern in the world.