> These are deep neural network architectures that are task-specific for things like OCR, translation, or GUI detection. The way they consume and see data is trained to be task specific, which makes them up to 100x more accurate at their specific task. They also produce useful metadata like bounding boxes and confidence scores, letting developers build predictable workflows they can rely on.
Does code extraction and manipulation fit in that? Would interfaze be the agent that a coding agent uses?
Deciding what to change is perhaps an LLM task, but actually doing the find-and-replace and that kind of tooling is something LLMs genuinely struggle with, and coding agents have all kinds of crutches and try/retry loops to paper over it.
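A minimal sketch of what delegating the edit step to deterministic tooling could look like: the LLM proposes the change, and a tool applies it with exact matching, failing loudly instead of retrying blindly. All names here are illustrative, not part of any real interfaze API.

```python
# Hypothetical deterministic edit-apply step for a coding agent.
# The agent supplies `find`/`replace`; the tool guarantees the result.

def apply_edit(source: str, find: str, replace: str) -> str:
    """Apply an exact-match edit, failing loudly instead of guessing."""
    count = source.count(find)
    if count == 0:
        raise ValueError("edit target not found in source")
    if count > 1:
        raise ValueError(f"edit target is ambiguous ({count} matches)")
    return source.replace(find, replace, 1)

code = "def greet():\n    print('hello')\n"
patched = apply_edit(code, "print('hello')", "print('hi')")
print(patched)
```

The point is that the failure modes (target missing, target ambiguous) surface as clear errors the agent can act on, rather than silently corrupted files.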
Smaller models really aren't great at structured output. If this works, it would be great for a local model: it might not be as capable, but as long as it respects structured output it will be vastly more useful.
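One way to make a weaker local model "vastly more useful" is to validate its output against a schema and reject anything malformed. A stdlib-only sketch, with the model call stubbed out (the `run_model` function and the field names are assumptions for illustration):

```python
import json

def validate(obj: dict) -> bool:
    """Check the output has the fields a downstream workflow relies on."""
    return (isinstance(obj.get("label"), str)
            and isinstance(obj.get("confidence"), (int, float))
            and 0.0 <= obj["confidence"] <= 1.0)

def run_model(prompt: str) -> str:
    # Stub standing in for a local model that returns a JSON string.
    return '{"label": "button", "confidence": 0.93}'

raw = run_model("describe the UI element")
parsed = json.loads(raw)
if not validate(parsed):
    raise ValueError("model output failed schema check; retry or escalate")
print(parsed["label"])
```

If the model reliably respects the schema, the validation almost never fires and the workflow stays predictable; if it doesn't, you catch it at the boundary rather than deep in the pipeline.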
It isn't on our roadmap right now, since in most cases it should work out of the box, and if it doesn't we'll generally work with you to train that into the model.
However, if we see enough people who have something super niche that our model can't handle, we might start considering a fine-tuning service.
The graph doesn't exactly make it clear, but it describes a pipeline that goes beyond the LLM, so the CNN could be a separate model there.
> Does code extraction and manipulation fit in that? Would interfaze be the agent that a coding agent uses?
Code manipulation probably not, since it's a much smaller model compared to Claude Opus, which is SOTA for code generation/manipulation.
Generally, code generation is a non-deterministic task by nature, and general LLMs tend to be better at it.
That is a straight-up lie. Consider gpt-5.4-nano, which supports structured output just fine.
https://developers.openai.com/api/docs/models/gpt-5.4-nano
It seems like a concern that's orthogonal to the model size.