A Brief History of Ralph

(humanlayer.dev)

27 points | by dhorthy 2 hours ago

6 comments

  • jes5199 14 minutes ago
    I forked the Anthropic Ralph Wiggum plugin: https://github.com/jes5199/chief-wiggum

    there’s some debate about whether this is in the spirit of the _original_ Ralph, because it keeps too much context history around. But in practice Claude Code compactions are so low-quality that it’s basically the same as clearing the history every few turns

    I’ve had good luck giving it goals like “keep working until the integration test passes on GitHub CI” - that was my longest run, actually, it ran unattended for 24 hours before solving the bug

  • Juvination 1 hour ago
    I've been working the Ralphosophy into my workflow for iterative behavior, and it seems pretty promising for cutting out a few manual steps.

    I still have one manual step, which is breaking the design document down into multiple small GitHub issues after a review, but I think that's fine for now.

    Using codex exec, we start working on a GitHub issue with a supplied design document, creating a PR on completion. Then we perform a review using a review skill I made up, which is effectively just a "cite your sources" skill applied to the review, along with a list of Open Questions.

    Then we iterate through the open questions, doing a minimum of 3 reviews (somewhat arbitrary, but multiple reviews sometimes catch things). Finally, I have a step for checking SonarCloud findings, fixing them, and pushing the changes. Realistically this step should be broken out into multiple iterations to avoid context rot.
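    A rough sketch of that pipeline as a script. The agent command (e.g. `codex exec`) is passed in as arguments, and every prompt string is my own paraphrase of the steps above, not the commenter's actual prompts:

```shell
#!/bin/sh
# Hypothetical sketch of the issue -> PR -> review -> SonarCloud loop.
# The agent command and all prompt strings are illustrative placeholders.
run_issue() {
  issue=$1; shift   # remaining args: the agent command, e.g. `codex exec`

  # 1. Implement the issue against the design document, opening a PR.
  "$@" "Work on GitHub issue #$issue using the supplied design document; create a PR on completion."

  # 2. Minimum of three review passes with a cite-your-sources review skill.
  for pass in 1 2 3; do
    "$@" "Review the PR for issue #$issue; cite your sources and resolve any Open Questions."
  done

  # 3. Check static-analysis findings, fix them, and push.
  "$@" "Check SonarCloud findings on the PR, fix them, and push the changes."
}

# usage: run_issue 123 codex exec
```

    Making the agent command a parameter keeps the SonarCloud step easy to split into its own iterations later, as the commenter suggests.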

    What I miss the most is output: seeing what's going on in either Codex or Claude in real time. I can print the last response, but it just gets messy until I build something a bit more formal.

  • skybrian 1 hour ago
    There's a lot of irrelevant detail, but the article never actually explains what "Ralph" does or how it works.
    • wild_egg 1 hour ago
      It's explained under the July 2025 heading with link to the blog post where it was first shared.

      The key bit is right under that though. Ralph is literally just this:

          while :; do cat PROMPT.md | npx --yes @sourcegraph/amp ; done
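      A hypothetical elaboration of that one-liner: the same fresh-context loop, but with an iteration cap and a stop-sentinel file (both my additions, not part of the original):

```shell
#!/bin/sh
# Sketch only: `ralph PROMPT.md 50 npx --yes @sourcegraph/amp` would
# reproduce the loop above, capped at 50 passes. The DONE sentinel is a
# hypothetical convention the prompt itself would have to mention
# ("touch DONE when the task is complete").
ralph() {
  prompt_file=$1; max_iters=$2; shift 2   # remaining args: agent command
  i=0
  while [ "$i" -lt "$max_iters" ] && [ ! -f DONE ]; do
    i=$((i + 1))
    cat "$prompt_file" | "$@"   # fresh context on every pass
  done
  echo "ralph: ran $i iterations"
}
```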
      • msla 35 minutes ago
        Surely that would be better written as

            cat PROMPT.md | cat | npx --yes @sourcegraph/amp
      • skybrian 1 hour ago
        Thanks!
    • dhorthy 1 hour ago
      there are hundreds of useful resources, including many linked in the article itself
  • ossa-ma 1 hour ago
    So it took the author 6 months and several 1-to-1s with the creator to get value from this. As in he literally spent more time promoting it than he did using it.

    And it all ends with the grift of all grifts: promoting a crypto token in a nonchalant 'hey whats this??!!??' way...

    • dhorthy 1 hour ago
      the note about the crypto token was intended to say "okay, this is now hype slop and it's time to move on"
  • f311a 1 hour ago
    Just look at the code quality produced by these loops. That's all you need to know about it.

    It's complete garbage, and since it runs in a loop, the amount of garbage multiplies over time.

    • dhorthy 1 hour ago
      I don’t think anyone serious would recommend it for production systems. I respect the Ralph technique as a fascinating learning exercise in understanding LLM context windows and how to squeeze more performance (read: quality) out of today’s models

      Even if, in absolute terms, the ceiling remains low, it’s interesting to what degree good context engineering raises it

      • ossa-ma 41 minutes ago
        How is it a “fascinating learning exercise” when the intention is to run the model in a closed loop with zero transparency? Running a black box inside a black box to learn? What signals are you even listening to to determine whether your context engineering is good or whether the quality has improved, aside from a brief glimpse at the final product? So essentially every time I want to test a prompt, I waste $100 on Claude and have it build an entire project for me?

        I’m all for AI, and it’s evident that the future of AI is more transparency (MLOps, tracing, mech interp, AI safety), not less.

        • alansaber 20 minutes ago
          Current transparency is rubbish but people will continue to put up with it if they're getting decent output quality
    • Veen 45 minutes ago
      You probably wouldn't use it for anything serious, but I've Ralphed a couple of personal tools: Mac menu bar apps, mostly. It works reasonably well so long as you do the prep upfront and prepare a decent spec and plan. No idea of the code quality, because I wouldn't know good Swift code from a hole in the head, but the apps work and scratch the itch that motivated them.
    • skerit 1 hour ago
      I do not understand where this Ralph hype is coming from. Back when Claude 4.0 came out and it began to become actually useful, I already tried something like this. Every time it was a complete and utter failure.

      And this dream of "having Claude implement an entire project from start to finish without intervention" came crashing down with this realization: Coding assistants 100% need human guidance.

  • articulatepang 1 hour ago
    This is so poorly written. What is "Ralph"? What is its purpose? How does it work? A single sentence at the top would help. The writer imagines that the reader cares enough to have followed their entire journey, or to decode this enormously distended pile of words.

    More generally, I've noticed that people who spend a lot of time interacting with LLMs sometimes develop a distinct brain-fried tone when they write or talk.

    • dang 20 minutes ago
      Please don't post shallow dismissals of other people's work (this is in the site guidelines: https://news.ycombinator.com/newsguidelines.html) and especially please don't cross into personal attack.
    • alansaber 22 minutes ago
      "develop a distinct brain-fried tone when they write or talk" - I find that using an LLM as a writing copilot seriously degrades the flow of short form content