Claude prompt library: the practical guide
Claude rewards different prompt structures than ChatGPT. A library tuned for Claude is meaningfully different from a generic one.
Claude is not ChatGPT with a different sticker. It responds better to different prompt shapes, different framings, different instructions about tone. If you are running a generic "AI prompt library" against Claude, you are leaving quality on the table. Here is what actually changes when you build a Claude-specific prompt library.
Why Claude prompts look different
Claude was trained to pay close attention to XML-tagged structure, explicit role framing, and long-context inputs. ChatGPT is more forgiving of sloppy structure; Claude rewards precision.
The practical consequence: the same prompt often produces a cleaner result in Claude when you add structure, and a messier one when you dump a wall of text. Your library should reflect that.
XML tags: the Claude cheat code
Claude responds dramatically better when you wrap parts of your prompt in XML-like tags: <instructions>, <context>, <examples>, <output_format>. The tags keep your instructions from bleeding into the content you want processed, help Claude navigate long inputs, and make iteration easier because you can swap one section without touching the others.
Template: "Review the following content critically.\n\n<instructions>\nIdentify weak claims, unsupported assertions, places where reasoning breaks down.\n</instructions>\n\n<content>\n{{content}}\n</content>"
Long-context templates
Claude's context window is large enough to hold several full documents at once. Build templates that actually take advantage of that: "Compare these three documents. Identify every claim that contradicts another claim." Then paste all three docs. Claude handles it cleanly; many other models will truncate the input or lose track halfway through.
Mark the big content slots as variables: {{doc1}}, {{doc2}}, {{doc3}}. Your library template stays stable; only the content changes.
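A sketch of the full compare template, with the instruction wording open to adjustment:

Compare the following three documents. Identify every claim in one document that contradicts a claim in another, and quote both passages.

<doc1>
{{doc1}}
</doc1>

<doc2>
{{doc2}}
</doc2>

<doc3>
{{doc3}}
</doc3>

Add a fourth document and nothing else in the template has to change.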
Role framing that sticks
Claude tends to hold a persona longer and more consistently than ChatGPT across a conversation. So invest in role framing up front: "You are a principal engineer reviewing code for a production system with 2M users. You are direct, skeptical of clever abstractions, and you flag any deprecation risks." Claude will hold that voice across many turns.
Bake these personas into template variables — {{role}}, {{seniority}}, {{context_scale}} — so you can reuse the same structure across many different reviewer types.
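A sketch of how that persona collapses into variables, with these names only a suggestion:

You are a {{seniority}} {{role}} working on {{context_scale}}. You are direct, skeptical of clever abstractions, and you flag anything that looks risky in production. Hold this role for the entire conversation.

Fill in "principal", "engineer", and "a production system with 2M users" and you get the reviewer above; swap the values and the same template becomes a security auditor or a skeptical editor.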
Avoid sycophancy patterns
Claude, like all modern assistants, can default to being agreeable. Explicit instructions counter it: "Be direct. Disagree if you see a flaw. Do not soften the critique." Bake this into library templates once, reuse forever.
Claude-specific variables worth templating
Common variables that pay off in Claude templates: {{role}}, {{task}}, {{constraints}}, {{output_format}}, {{examples}}, {{context_docs}}. Mark them in every template. Consistent variable names across templates make them noticeably easier to fill in from memory.
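Put together, a generic skeleton using those variables might look like this; every section name is a convention, not a requirement:

You are {{role}}.

<instructions>
{{task}}
</instructions>

<constraints>
{{constraints}}
</constraints>

<examples>
{{examples}}
</examples>

<context>
{{context_docs}}
</context>

<output_format>
{{output_format}}
</output_format>

Not every template needs every slot; drop the sections you don't use and the rest still reads the same way.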
Keep it tool-agnostic at the library level
The catch: even though Claude prompts have their own shape, you probably also use ChatGPT, Gemini, and Midjourney. A Claude-only library means rebuilding the same templates somewhere else for every other tool. Better: one library, with some templates tagged "Claude-optimized" using XML tags, and some tagged "Universal" using plain structure.
TextDeck handles this with categories. One hotkey, one library, right template for the right model.
Three Claude starter templates
Critical review: "You are an expert {{role}}. Review the following {{artifact_type}} critically. <content>{{content}}</content> Identify weaknesses without softening."
Structured extraction: "Extract fields from the text as JSON: {{fields}}. If absent, use null. Do not invent. <text>{{text}}</text>"
Long-context compare: "Compare the following documents across these dimensions: {{dimensions}}. Flag contradictions. <doc1>{{doc1}}</doc1> <doc2>{{doc2}}</doc2>"