Inside OpenClaw #4: Why AI Agents Need a Personality
An LLM without clear instructions responds generically and inconsistently. How SKILL.md files give an AI agent personality, context, and guardrails — and why backticks can break everything.
Ask a bare language model the same question twice, and you’ll get two different personalities. One response is formal and thorough, the next is casual and terse. There’s no consistency, no sense of role, no memory of what it’s supposed to be.
For experimentation, that’s fine. For production use in a business, it’s a dealbreaker — which is exactly why configuration is a central aspect of my AI and automation consulting. When your AI agent behaves differently every session, users lose trust — and they lose it fast.
In the first two parts of this series, we covered how OpenClaw handles web search without hallucination and which vLLM flags make Mistral work reliably as an agent backbone. This article addresses the next layer: how to give an AI agent a stable identity.
SKILL.md: Personality as a File
OpenClaw solves this with a configuration file called SKILL.md. This file is injected into the system prompt on every single request — the model reads it before generating a single token.
What goes into a SKILL.md:
- Identity: Name, role, personality traits (“You are a technical advisor for mid-sized businesses”)
- Tool instructions: Which tools are available and when to use them
- Style guidelines: Concise or verbose, formal or direct, with or without jargon
- Domain knowledge: Context the model doesn’t have from training data (“Our company provides IT strategy consulting”)
- Guardrails: What the agent must not do — no legal advice, no price commitments, no statements about competitors
A simplified example:
```markdown
# Agent: OpenClaw Advisor

## Role
You are an AI assistant for Pfisterer Consulting.
You advise on IT strategy and AI adoption for SMEs.

## Style
- Be precise and practical
- Avoid marketing language
- Use technical terms, but explain them when needed

## Guardrails
- No legal or tax advice
- No specific pricing
- When uncertain: say so honestly
```
This looks straightforward. But the difference between an agent with and without this file is dramatic. With SKILL.md, the model responds consistently, with contextual awareness and the right tone — across sessions and topics.
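Mechanically, the injection step is small: read the file and prepend it as the system message on every request. A minimal sketch of how this could look — `build_messages` is a hypothetical helper, not OpenClaw's actual API:

```python
from pathlib import Path

def build_messages(user_input: str, skill_path: str = "SKILL.md") -> list[dict]:
    """Inject the agent definition as the system prompt on every request,
    so the model sees its identity before generating a single token."""
    system_prompt = Path(skill_path).read_text(encoding="utf-8")
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]
```

Because the file is re-read per request, edits to SKILL.md take effect immediately — no redeploy, no fine-tuning.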
Workspace Files: Context Per Task
Personality alone isn’t enough. An agent working on a software project needs different context than one answering customer questions.
Workspace files supplement SKILL.md with task-specific context: project structure, relevant file paths, current priorities, technical constraints. They’re loaded dynamically depending on the agent’s current working context.
SKILL.md defines who the agent is. Workspace files define where it’s currently working.
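That layering can be sketched as a prompt builder that stacks the workspace file for the current context on top of SKILL.md. A hedged illustration — the function name and the context-to-file mapping are assumptions for this example, not OpenClaw's actual code:

```python
from pathlib import Path

def system_prompt_for(context: str, skill_path: str,
                      workspaces: dict[str, str]) -> str:
    """Combine the agent's identity (SKILL.md) with the workspace file
    registered for the current working context, if one exists."""
    parts = [Path(skill_path).read_text(encoding="utf-8")]
    ws_path = workspaces.get(context)
    if ws_path and Path(ws_path).exists():
        parts.append(Path(ws_path).read_text(encoding="utf-8"))
    return "\n\n".join(parts)
```

The identity layer is always present; the workspace layer changes as the agent moves between tasks.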
The Backtick Problem: When Markdown Destroys JSON
Then there’s a problem that doesn’t appear on any architecture slide but can cost hours in practice.
When a local LLM generates Markdown in a tool-call response — say, a code block wrapped in triple backticks — those backticks collide with the JSON structure of the tool call itself: they corrupt the string boundaries and break the parser.
The result: the agent calls a tool correctly, generates a useful response with code examples — and the system crashes because the parser can’t read the JSON. The log shows a cryptic parse error. The actual cause is invisible.
Our fix: a post-processing step that intercepts the raw model output before it reaches the JSON parser. Backticks inside tool-call content get escaped, and the JSON stays valid. A small function that makes the difference between a stable system and an unpredictable one.
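OpenClaw's fix escapes the backticks themselves. As a sketch of the same failure mode, here is one way a post-processing guard could look: if the model wraps its tool-call JSON in a Markdown fence, matching greedily to the *last* fence prevents interior triple backticks from truncating the payload. The function name is hypothetical and this is an illustration of the problem, not OpenClaw's implementation:

```python
import json
import re

def extract_tool_call(raw: str) -> dict:
    """Extract a tool-call JSON object from raw model output.

    A naive split on the first closing ``` would cut the payload short
    whenever the tool-call arguments contain a generated code block.
    Matching greedily up to the last fence keeps the JSON intact.
    """
    match = re.search(r"```(?:json)?\s*(.*)```", raw, re.DOTALL)  # greedy
    payload = match.group(1) if match else raw
    return json.loads(payload)
```

A non-greedy match (`.*?`) here reproduces exactly the crash described above: the parser receives a string cut off at the first backticks inside the generated code.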
Why This Matters for Business
Consistency isn’t a nice-to-have — it’s the prerequisite for user trust. An agent that behaves differently every day is a toy. One with a clear personality, defined context, and robust guardrails is a tool.
The building blocks aren’t complex:
- SKILL.md for identity, style, and boundaries
- Workspace files for task-specific context
- Post-processing to handle technical pitfalls like backtick corruption
None of this requires a bigger model or more expensive hardware. It requires architectural decisions that you make once and that carry across every session.
For more on the broader OpenClaw architecture, see our articles on OpenClaw as a personal AI assistant and running a local AI agent without the cloud.
Next Step
Want an AI agent that behaves like a reliable team member — not a mystery box? I’ll show you how this works in practice.
→ Or read more first: AI in SMEs — Where It Actually Helps