Why XML Tags Are So Fundamental to Claude

(glthr.com)

33 points | by glth 2 hours ago

5 comments

  • TheJoeMan 1 hour ago
    That first image, “Structure Prompts with XML”, just screams AI-written. The bullet lists don’t line up, the numbering starts at (2), random bolding. Why would anyone trust hallucinated documentation for prompting? At least with AI-generated software documentation, the context is the code itself, being regurgitated into bulleted English. But for instructions on using the LLM itself, it seems pretty lazy to not hand-type the preferred usage and human-learned tips.
    • rafram 1 hour ago
      No, it’s two screenshots from Anthropic documentation, stitched together: https://platform.claude.com/docs/en/build-with-claude/prompt...

      The post even links to that page, although there’s a typo in the link.

      • glth 21 minutes ago
        Author here: I have just fixed the typo. Thank you.

        And yes, these are screenshots from Anthropic’s documentation.

      • dmd 25 minutes ago
        They're not even stitched together; there's just no padding between the two images.
    • Calavar 1 hour ago
      It looks like a screenshot from the Claude desktop app, so I don't think the author is trying to disguise the AI origin of the material.
  • imglorp 44 minutes ago
    A very minor porcelain layer on the agent input UX could present this structure for you. Instead of a single chat window, have four: task, context, constraints, output format.

    And while we're at it, instead of wall-of-text, I also feel like outputs could be structured at least into thinking and content, maybe other sections.
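
    As a sketch, those four panes could assemble into a single tagged prompt like this. The tag names just mirror the proposed panes; they're illustrative, not an official Anthropic schema:

    ```python
    def build_prompt(task, context, constraints, output_format):
        """Assemble four input panes into one XML-tagged prompt string.

        Tag names mirror the four proposed panes; they are illustrative,
        not an official schema.
        """
        sections = {
            "task": task,
            "context": context,
            "constraints": constraints,
            "output_format": output_format,
        }
        return "\n".join(
            f"<{tag}>\n{body}\n</{tag}>" for tag, body in sections.items()
        )

    prompt = build_prompt(
        task="Summarize the report.",
        context="Q3 sales figures, pasted below.",
        constraints="Keep it under 100 words.",
        output_format="A single paragraph.",
    )
    print(prompt)
    ```

    The resulting string is what would actually get sent as the user message, so the model sees clearly delimited sections instead of a wall of text.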

  • Zebfross 42 minutes ago
    I thought the goal was minimal instruction to let Claude determine the best way to solve the problem. Not adding this to my workflow anytime soon.
  • wolttam 1 hour ago
    Anthropic’s tool calling was exposed as XML tags at the beginning, before they introduced the JSON API. I expect they’re still templating those tool calls into XML before passing them to the model’s context.
    • pocketarc 51 minutes ago
      Yeah, I remember that prior to reasoning models, their guidance was to use <think> tags to give models space for reasoning prior to an answer (incidentally, also the reason I didn't quite understand the fuss with reasoning models at first). It's always been XML with Anthropic.
      • wolttam 20 minutes ago
        Exactly the same story here. I still use a tool that just asks them to use <think> instead of enabling native reasoning support, which has worked well back to Sonnet 3.0 (their first model with 'native' reasoning support was Sonnet 3.7)
  • esafak 37 minutes ago
    This sounds like something for harnesses, not end users. Are they really expecting us to format prompts as XML??