The HN title is quite a strong claim, but it's nowhere to be seen in the repo.
It seems to be fully prompt-based, so the AI can still say anything it pleases.
How well do these complicated prompt systems usually work? My strategy is to stick mostly to simple prompts, potentially with some deterministic tools and vendor harnesses, on the rationale that these are what the models are trained and evaluated with, and that LLMs still often get tripped up when their context is spammed with too much stuff.
The crazy thing is, you could do this. And it can be done 100% in code with zero prompting - just by limiting the output token set to a structured format and then further constraining parts of that to sources that were retrieved beforehand. I know because I wrote such a system already. It could still match sources and answers incorrectly (just like this approach), but there is no need to rely on elaborate prompts and agents to prevent hallucinations or missing outputs (which btw still lack any hard guarantees in the end). Prompting is a good strategy as models become smarter, but when you need reliability, you need to make use of the fact that they are still simple autoregressive completion engines. I don't get why everyone ignores this aspect, since I find it extremely useful all the time.
> I don't get why everyone ignores this aspect, since I find it extremely useful all the time.
My hunch is because structured/constrained decoding and deterministic subsystems are technically somewhat more involved, requiring e.g. raw API interactions and sometimes manual decoding strategies. Prompt systems can be written in plain text and mostly with "common sense". Not to say writing a good prompt(system) is a trivial task, but it's a different skillset.
I'm positive there are use-cases for this tool but after several years of working with LLMs, hallucinations have become a non-issue. You start to get a sense of the likely gaps in their knowledge just like you would a person.
Questions about application settings, for example, where to find a particular setting in a particular app. The LLM has a sense of how application settings are generally structured but the answer is almost never spot on. I just prefix these questions with "do a web search" or provide a link to documentation and that is usually enough to get a decent response along with citations.
Why are you building your own DAG system instead of just using LangGraph? You could cut complexity and focus on what actually matters: the claims, evidence tiers, conflict detection. Also, embedding claims in the Chain of Thought instead of post-processing them might force rigor earlier in the pipeline. (Assuming the zero-deps constraint isn't a blocker?)
Well, I would have tried it but the website kills Firefox.
Hard to see how you could really make this work though. You might as well just add "fetch and re-read all sources explicitly to make sure they are correct" to a normal prompt.
I love how at the beginning of this boom people were talking about how heuristics applied to AI outputs were short-term gains disguised as real progress. Now it seems like almost every new tool is a series of heuristics applied to AI outputs.
"factual (ai) Weather, traffic, and personal urgency are the only significant variables that could tilt the decision toward driving."
My gut feeling is that if this could be done, it would be a core part of one of the model provider's output.
Q: Who directed Scarface?
A:
- 1983 film (most commonly referred to): Directed by Brian De Palma.
- 1932 original version: Directed by Michael Curtiz.
This is wrong. The 1932 movie is by Howard Hawks.
Managed to ask if Ali Khamenei is still alive. It answered "Yes, ..."
I thought it could search for online cites.