Tutorial: Build a Todo App
Build a task manager that demonstrates the full spectrum of Agent Apps: AI-powered skills that reason about your data, deterministic code skills for precise operations, markdown skills that blend both, and compilation that turns prompts into optimized code. By the end you'll have a working app in about 100 lines — most of it natural language.
What You'll Build
A task manager with six tools:
| Tool | Format | Execution |
|---|---|---|
| `summarize` | Markdown skill | Agentic — AI reads your tasks and reasons about priorities |
| `task-add` | Code skill | Deterministic — precise file I/O |
| `task-list` | Code skill | Deterministic — formatted output |
| `task-done` | Code skill | Deterministic — read-then-write |
| `greet` | Markdown + inline code | Hybrid — NL source of truth with deterministic handler |
| `status` | Markdown + directives | Hybrid — template variables injected into prompts |
Plus: introspection (querying the tool registry), compilation (turning a skill into code), and MCP bridging (connecting an external tool server).
Prerequisites
- Node.js 22+ — Agent Apps uses the current LTS release.
- An AWS account with Bedrock access — for the AI-powered skill. See AWS & Bedrock Setup if you need to set this up. (The deterministic tools in Steps 2–4 work without it.)
Step 1: Create the Project
mkdir todo-app && cd todo-app
agent-apps init
The init command creates main.skill.md, a skills/ directory, and a package.json. Open main.skill.md and replace its contents:
---
name: todo
metadata:
  role: main
paths:
  - ./skills
model:
  id: us.anthropic.claude-sonnet-4-6
  region: us-east-1
---
What's here:
- `name: todo` — your project's name.
- `role: main` — marks this as the main skill (important when you have multiple `.skill.md` files in the root).
- `paths` — where the runtime looks for tools. `./skills` is your tools directory. The built-in library directory (which includes the AI agent) is always on the search path automatically.
- `model` — configures the AI model and AWS region for Bedrock.
Create the skills directory:
mkdir skills
How does the agent work? Agent Apps ships with a bundled `agent` skill (powered by `@strands-agents/sdk` and Amazon Bedrock) in the library directory. When a markdown skill has no code handler, the runtime sends its prompt to the agent. The `agent` skill handles policy (tool building, returns enforcement) and delegates to an agent provider (`agent-strands` by default) for the LLM loop. Both are just tools — same interface, same discovery. To use a different LLM provider, create your own agent provider skill and configure it via `model.agent` in the cascade metadata, or shadow `agent-strands` by placing a replacement earlier in the search path. This is how all overriding works in Agent Apps.
Step 2: Your First Skill — AI That Reasons About Your Data
The most interesting thing Agent Apps can do is run a natural-language skill that an AI agent executes directly. To demonstrate this, we first need some data to reason about. Create skills/task-add.skill.js — a code skill to add tasks:
import { readFile, writeFile } from 'node:fs/promises';
import { resolve } from 'node:path';

export const frontmatter = {
  name: 'task-add',
  description: 'Add a task to the todo list',
  metadata: {
    params: {
      type: 'object',
      properties: { title: { type: 'string' } },
      required: ['title']
    }
  }
};

export default async function(ctx, { title }) {
  const file = resolve(ctx.locals.config.cwd, 'tasks.json');
  let tasks = [];
  try { tasks = JSON.parse(await readFile(file, 'utf-8')); } catch {}
  const task = { id: tasks.length + 1, title, done: false };
  tasks.push(task);
  await writeFile(file, JSON.stringify(tasks, null, 2));
  return `Added task #${task.id}: ${title}`;
}
Add a few tasks:
agent-apps task-add --title "Build the demo"
agent-apps task-add --title "Record the video"
agent-apps task-add --title "Ship it"
Now create the skill that makes this interesting. Create skills/summarize.skill.md:
---
name: summarize
description: Summarize current tasks and suggest priorities
allowed-tools: "file-read"
metadata:
  params:
    type: object
    properties: {}
---
Read the file `tasks.json` in the current working directory. It contains a JSON array of task objects with `id`, `title`, and `done` fields.
Provide a brief, actionable summary:
1. How many tasks are done vs pending
2. List what's been completed
3. List what's still pending
4. Suggest which task to tackle next and why
Keep it concise — no more than 6 lines.
This is a skill. There's no code — just a prompt. Let's run it:
agent-apps summarize
This takes a few seconds. Here's what happens:
- The runtime finds `summarize.skill.md` and reads its frontmatter.
- There's no code handler, so the prompt goes to the AI agent.
- The agent sees it has access to `file-read` (from `allowed-tools`).
- The agent calls `file-read` to read `tasks.json`, reasons about the contents, and produces a summary.
You'll see something like:
**Status:** 0 done, 3 pending
**Pending:** Build the demo, Record the video, Ship it
**Next Action:** Start with "Build the demo" — the other tasks
depend on having a demo to record and ship.
The output varies each time — it's an LLM. But the structure follows your prompt. You wrote what you wanted in natural language, and the framework ran it.
This is the core idea of Agent Apps: a skill is a prompt that runs. The prompt is the source of truth. It describes the capability, and the agent executes it.
What Makes This Different
Compare this to using an AI to generate code. With code generation, you'd describe what you want, the AI would produce a JavaScript function, and you'd maintain that function going forward. The description gets discarded.
With Agent Apps, the description is the running artifact. Want to change the output format? Edit the prompt. Want the agent to also check a calendar? Add a tool to allowed-tools and update the instructions. The skill evolves as naturally as editing a document.
Step 3: Deterministic Code Skills
Not everything needs AI. Adding a task, listing tasks, marking one complete — these are precise operations where you want exact control over what happens. That's what code skills are for.
You already created task-add.skill.js in Step 2. Let's look at what makes it a tool:
The `frontmatter` export describes the tool to the runtime. The `name` is how you invoke it. The `description` tells agents and humans what it does. The `params` object is a JSON Schema that defines what arguments the tool accepts. Tools without `params` accept no arguments.

The default export is the function that runs. It always receives `ctx` (the context object) as the first argument and your args as the second.

Why `ctx.locals.config.cwd`? The runtime sets `locals.config.cwd` to the project's root directory. Using it instead of a hardcoded path means your tool works correctly no matter where it's invoked from.
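Stripped of the task-specific logic, every code skill has the same minimal shape. The sketch below follows the conventions of `task-add`; the `echo` tool name and its handler are hypothetical, for illustration only:

```javascript
// Minimal code-skill shape: a `frontmatter` export describing the tool,
// and a default-exported handler that receives (ctx, args).
// The `echo` name and message logic are hypothetical.
export const frontmatter = {
  name: 'echo',
  description: 'Echo a message back',
  metadata: {
    params: {
      type: 'object',
      properties: { message: { type: 'string' } },
      required: ['message']
    }
  }
};

async function handler(ctx, { message }) {
  // ctx is always the first argument; validated args come second.
  return `You said: ${message}`;
}
export default handler;
```

Everything else in a code skill is ordinary JavaScript built around these two exports.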
Now add the other two. Create skills/task-list.skill.js:
import { readFile } from 'node:fs/promises';
import { resolve } from 'node:path';

export const frontmatter = {
  name: 'task-list',
  description: 'List all tasks with status',
  metadata: {
    params: { type: 'object', properties: {} }
  }
};

export default async function(ctx) {
  const file = resolve(ctx.locals.config.cwd, 'tasks.json');
  let tasks = [];
  try { tasks = JSON.parse(await readFile(file, 'utf-8')); } catch {}
  if (tasks.length === 0) return 'No tasks yet.';
  return tasks.map(t => `${t.done ? '✓' : '○'} #${t.id} ${t.title}`).join('\n');
}
Create skills/task-done.skill.js:
import { readFile, writeFile } from 'node:fs/promises';
import { resolve } from 'node:path';

export const frontmatter = {
  name: 'task-done',
  description: 'Mark a task as complete',
  metadata: {
    params: {
      type: 'object',
      properties: { id: { type: 'number' } },
      required: ['id']
    }
  }
};

export default async function(ctx, { id }) {
  const file = resolve(ctx.locals.config.cwd, 'tasks.json');
  let tasks = [];
  try { tasks = JSON.parse(await readFile(file, 'utf-8')); } catch {}
  const task = tasks.find(t => t.id === Number(id));
  if (!task) return `Task #${id} not found.`;
  task.done = true;
  await writeFile(file, JSON.stringify(tasks, null, 2));
  return `Completed task #${id}: ${task.title}`;
}
Try the full workflow:
agent-apps task-list
# ○ #1 Build the demo
# ○ #2 Record the video
# ○ #3 Ship it
agent-apps task-done --id 1
# Completed task #1: Build the demo
agent-apps task-list
# ✓ #1 Build the demo
# ○ #2 Record the video
# ○ #3 Ship it
Three files, under 30 lines each. Every tool is a CLI command. Every tool has params. Every tool is discoverable by AI agents. And they share the same interface as the AI-powered summarize skill — the runtime treats them identically.
Now run summarize again and notice the AI picks up the change:
agent-apps summarize
# **Status:** 1 done, 2 pending
# **Completed:** Build the demo ✓
# **Next Action:** Tackle "Record the video" next...
The agentic skill and the deterministic tools work together seamlessly. The agent calls file-read to see the same tasks.json your code skills wrote. This is the spectrum in action: precise code for precise operations, AI for reasoning and judgment.
Step 4: The Best of Both Worlds — Markdown with Inline Code
Sometimes you want a single file that contains both the natural-language description and a deterministic handler. This is what inline delegates are for.
Create skills/greet.skill.md:
---
name: greet
description: Greet someone by name
metadata:
  params:
    type: object
    properties:
      name:
        type: string
    required:
      - name
  delegate: "inline://code,export default async function(ctx, { name }) { return `👋 Hello, \\${name}! Welcome to the Todo App.`; }"
---
You are a friendly greeting tool. Greet the user warmly by name.
:arg[name]
Note: The `${name}` in the delegate uses a backslash escape (`\\${name}` in YAML) to prevent the framework's string interpolation from resolving it before the code runs. This is only needed for template literals inside `inline://code,...` refs in YAML. Standalone `.skill.js` files are unaffected.
This file has three layers:

- Frontmatter — metadata (name, description, params), same as any tool.
- `delegate` — an `inline://code,...` ref that runs deterministically. When present, the runtime executes this code instead of sending the prompt to an AI agent.
- Markdown body — the natural-language description. This is still the source of truth — it documents the intent, and it's what the compile tool reads when generating code.
agent-apps greet --name World
# 👋 Hello, World! Welcome to the Todo App.
Runs instantly — no AI call. But the prompt is right there in the same file, describing what the tool should do. If you remove the delegate, the prompt runs through the agent instead. If you compile the skill (Step 7), the compiler reads the prompt to generate optimized code.
This is the hybrid model: natural language as the source of truth, deterministic code for execution, both in one file.
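The execution rules in this step (delegate if present, otherwise a code handler, otherwise the agent) can be pictured as a simple routing decision. This is an illustrative sketch, not the framework's actual dispatcher:

```javascript
// Illustrative routing sketch (NOT the real dispatcher): given a loaded
// skill, decide how it executes based on what the file provides.
function routeSkill(skill) {
  if (skill.delegate) return 'delegate'; // inline code ref runs deterministically
  if (skill.handler) return 'code';      // .skill.js default export runs directly
  return 'agent';                        // markdown prompt goes to the AI agent
}

routeSkill({ delegate: 'inline://code,...' });  // greet-style hybrid
routeSkill({ handler: async () => {} });        // task-add-style code skill
routeSkill({ prompt: 'Summarize the tasks.' }); // summarize-style markdown skill
```

Removing `delegate` from `greet.skill.md` moves it from the first branch to the last: same file, different execution path.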
Step 5: Directives — Dynamic Values in Prompts
Directives are template variables in markdown skill bodies. They look like :name[argument] and get replaced with real values before the prompt reaches the AI agent or the delegate code.
Create skills/status.skill.md:
---
name: status
description: Show project status with user info
metadata:
  params:
    type: object
    properties:
      name:
        type: string
    required:
      - name
  delegate: "inline://code,export default async function(ctx, { name }) { return [`📊 Project Status`, ` User: \\${name}`, ` System user: \\${process.env.USER || 'unknown'}`, ` Working dir: \\${ctx.locals.config.cwd || process.cwd()}`].join('\\n'); }"
---
Show the project status for :arg[name].
System user: :env[USER]
Two directives here:

- `:arg[name]` — replaced with the value of the `name` argument.
- `:env[USER]` — replaced with the `USER` environment variable.
agent-apps status --name Developer
# 📊 Project Status
# User: Developer
# System user: yourusername
# Working dir: /path/to/todo-app
The full set of built-in directives:
| Directive | Replaced with |
|---|---|
| `:arg[x]` | The value of argument `x` |
| `:env[VAR]` | Environment variable `VAR` |
| `:path[./rel]` | Absolute path resolved from the skill's directory |
| `:skill[name]` | Another skill's description |
| `:tool[name]` | Alias of `:skill[name]` |
| `:eval[expr]` | Result of a JavaScript expression |
| `:context[a.b]` | Value at a dotted path on the context object |
| `:config[a.b]` | Value from the project config |
| `:script[lang]{src=file}` | Contents of a file, wrapped in a code fence |
| `:inline[file]` | Contents of a file, inline |
Directives make prompts dynamic without code. They're processed before the prompt reaches the agent, so the agent sees the expanded values.
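Conceptually, directive expansion is a text pre-processing pass over the prompt body. The sketch below shows how `:arg[...]` and `:env[...]` substitution might work; it is an illustration of the idea, not the framework's implementation:

```javascript
// Simplified sketch of directive expansion (illustration only):
// replace :arg[x] and :env[VAR] patterns in a prompt body before
// it reaches the agent or delegate code.
function expandDirectives(body, args, env = process.env) {
  return body
    .replace(/:arg\[([^\]]+)\]/g, (_, key) => String(args[key] ?? ''))
    .replace(/:env\[([^\]]+)\]/g, (_, key) => String(env[key] ?? ''));
}

const prompt = 'Show the project status for :arg[name].\nSystem user: :env[USER]';
expandDirectives(prompt, { name: 'Developer' }, { USER: 'alice' });
// → 'Show the project status for Developer.\nSystem user: alice'
```

The agent never sees the directive syntax, only the expanded values.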
Step 6: Introspection — Querying the Tool Registry
Agent Apps ships with built-in tools for querying the tool registry. This is useful for debugging, documentation, and for AI agents that need to discover available capabilities.
agent-apps skill-list
This lists all tools that are not hidden. You'll see your tools alongside the built-ins (file-read, file-write, skill-list, skill-describe, etc.). Use --query task to search by name or description.
Get details about a specific tool:
agent-apps skill-describe --ref task-add
{
  "name": "task-add",
  "description": "Add a task to the todo list",
  "params": {
    "type": "object",
    "properties": { "title": { "type": "string" } },
    "required": ["title"]
  },
  "tags": [],
  "src": "file:///path/to/todo-app/skills/task-add.skill.js"
}
These introspection tools are themselves just tools — same interface, same discovery. An AI agent can call skill-list and skill-describe to learn what's available at runtime.
Step 7: Compilation — Turning Prompts into Code
This is where the source-of-truth model pays off. Compilation reads a markdown skill's prompt and generates deterministic code that produces the same result — without needing an LLM at runtime.
agent-apps skill-compile --ref greet
The AI reads greet.skill.md and generates equivalent JavaScript. The output goes to workspace/:
cat workspace/skills/greet.skill.js
// @generated source-hash:9aeb5600626847e7
export const frontmatter = {
  name: 'greet',
  description: 'Greet someone by name',
  metadata: { params: { /* ... */ } }
};

export default async function(ctx, args) {
  return `👋 Hello, ${args.name}! Welcome to the Todo App.`;
}
The compiled tool has the same name as the source skill. Because workspace/ is automatically prepended to the search path, the compiled version shadows the markdown source — the runtime finds it first.
agent-apps greet --name World
# 👋 Hello, World! Welcome to the Todo App.
Same result, now running as pure code. To remove compiled artifacts:
agent-apps skill-clean
The compile → shadow → clean lifecycle is the bridge between flexibility and performance. Iterate on prompts in markdown. When you're satisfied, compile. The prompt remains the source of truth — change it, recompile, and the code updates to match.
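The shadowing mechanics behind this lifecycle come down to ordered search-path resolution: the first match wins. A simplified sketch of that idea (an illustration, not the runtime's actual resolver; the helper and its inputs are hypothetical):

```javascript
// Illustration of search-path shadowing (NOT the real resolver):
// walk an ordered list of directories; the first directory containing
// a matching skill file wins, so workspace/ shadows skills/.
function resolveSkill(name, searchPaths, filesByDir) {
  for (const dir of searchPaths) {
    const files = filesByDir[dir] ?? [];
    if (files.includes(`${name}.skill.js`)) return `${dir}/${name}.skill.js`;
    if (files.includes(`${name}.skill.md`)) return `${dir}/${name}.skill.md`;
  }
  return null;
}

const paths = ['workspace/skills', 'skills']; // workspace/ is prepended
const tree = {
  'workspace/skills': ['greet.skill.js'],          // compiled artifact
  'skills': ['greet.skill.md', 'status.skill.md']  // markdown sources
};
resolveSkill('greet', paths, tree);  // → 'workspace/skills/greet.skill.js'
resolveSkill('status', paths, tree); // → 'skills/status.skill.md'
```

Running `skill-clean` removes the `workspace/` entries, so resolution falls back to the markdown sources.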
Step 8: MCP Bridging — Connecting External Tool Servers
Model Context Protocol (MCP) is an open standard for connecting AI applications to external tool servers. Agent Apps can launch MCP servers and register their tools automatically.
Update your main.skill.md to add an MCP server:
---
name: todo
metadata:
  role: main
paths:
  - ./skills
model:
  id: us.anthropic.claude-sonnet-4-6
  region: us-east-1
mcp:
  time:
    command: npx
    args: [-y, "@modelcontextprotocol/server-everything"]
---
Install the MCP server:
npm install @modelcontextprotocol/server-everything
Each key under mcp (time here) becomes a prefix for the server's tools. The runtime launches the server lazily on first use, negotiates capabilities, and registers every tool it exposes.
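The naming rule is mechanical: server key, hyphen, tool name. A one-line illustration (the helper is hypothetical; the tool names are examples):

```javascript
// Illustration of the MCP prefixing rule: each key under `mcp`
// becomes a prefix for that server's tool names.
function prefixTools(serverKey, toolNames) {
  return toolNames.map(name => `${serverKey}-${name}`);
}

prefixTools('time', ['echo', 'add']); // → ['time-echo', 'time-add']
```

Two servers exposing a tool with the same name never collide, because each registers under its own prefix.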
MCP tools are available to your skills and to the AI agent. For example, a markdown skill with allowed-tools: "time-*" can use any tool from the time server. You can also call them from code skills:
const result = await ctx.manager.invoke('time-echo', { message: 'Hello from MCP!' });
Run agent-apps skill-list after invoking any project skill and you'll see all the MCP tools alongside your own.
Any MCP-compatible server becomes a set of Agent Apps tools. No adapters, no wrappers.
What You Built
todo-app/
  main.skill.md           ← Config + entry point
  package.json
  skills/
    summarize.skill.md    ← AI-powered skill (calls Bedrock)
    task-add.skill.js     ← Code skill: add tasks
    task-list.skill.js    ← Code skill: list tasks
    task-done.skill.js    ← Code skill: complete tasks
    greet.skill.md        ← Markdown skill with inline delegate
    status.skill.md       ← Markdown skill with directives
  tasks.json              ← Data file (created at runtime)
Six tools across the full execution spectrum. A markdown prompt that an AI agent runs directly. Code skills for precise operations. Hybrid skills that keep the prompt as source of truth while executing deterministically. Compilation that turns prompts into optimized code. External tool servers bridged with zero glue.
What You Learned
Skills are the source of truth. Write what you want in natural language. That's your application. Code is a derived artifact — hand-written when you need precision, compiled when you want to lock in behavior, but always traceable back to the prompt that describes the intent.
The execution spectrum is your choice. Agentic for flexibility. Deterministic for speed. Hybrid for both. Choose per skill, mix freely in one app. A skill can start agentic during development and compile to code for production.
Tools are the infrastructure. Skills compose tools to get work done. The runtime discovers tools from the filesystem, gives them schemas, and makes them callable from the CLI and by AI agents. Everything is a tool — middleware, the agent, file operations, introspection. Same interface, same override mechanism.
Adoption is gradual. You don't need to build an entire application from skills. Add one skill to an existing codebase for a capability that benefits from natural language — classification, summarization, fuzzy matching, anything involving judgment. Wrap it as a function. If it works, add more.
Next Steps
- Cookbook — Complete coverage of tools, middleware, configuration, and patterns
- Skills Reference — All bundled skills organized by category
- Patterns — Recipes for common patterns and integrations
- The Hub — Browse, install, and publish community skill packages
- Deploy — Deploy your app to AWS App Runner
- Specification — Full technical reference
- Try `agent-apps --log level=trace` to see the full middleware pipeline in action