I would be interested to see how exactly the agent helped: how it was used, where it led to the given improvement, and how long it would have taken a human to arrive at the same solution.
This is the thing to look for in 2027, imho. All the big AI labs have big projects working on research agents, specifically including agents for improving AI (duh), and I expect a lot of that to get out of the experimental phase this year.
Next year they actually get to do a lot of work, and I think we will see the first big, effective architectural change co-invented by AI.
> Do we have other examples of AI being used to improve the LLMs
Yes. Last year, when they revealed AlphaEvolve, they said a previous Gemini model had been used to improve kernels used in training this generation of models, netting them a ~1% faster training run. Not much, but still.
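For intuition, the AlphaEvolve-style inner loop (propose mutated candidates, score them, keep the best) can be sketched in a few lines. Everything below is a made-up stand-in — a toy cost function instead of real kernel timings — not Google's actual system:

```python
import random

def cost(params):
    # Toy stand-in for "kernel runtime": minimized at x=3, y=-1.
    x, y = params
    return (x - 3) ** 2 + (y + 1) ** 2

def evolve(generations=200, pop_size=8, seed=0):
    """Hypothetical sketch of an AlphaEvolve-style loop: mutate the current
    best candidate, score every variant, and keep the winner (elitism)."""
    rng = random.Random(seed)
    best = (0.0, 0.0)
    for _ in range(generations):
        candidates = [best] + [
            (best[0] + rng.gauss(0, 0.5), best[1] + rng.gauss(0, 0.5))
            for _ in range(pop_size - 1)
        ]
        best = min(candidates, key=cost)  # keeping `best` makes progress monotone
    return best, cost(best)
```

With real kernels, cost() would be a benchmark run and the mutation step an LLM proposing code edits; the selection logic stays the same.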
Note that coding is not the only use of Gemini or of any of these models, and it's not what this article is talking about. Gemini can be less than the best coding agent and still be very good at other things.
> He says the problem is that they can't use Claude Code because it's the enemy, and Gemini has never been good enough to capture people's workflows like Claude has, so basically agentic coding just never really took off inside Google. They're all just plodding along, completely oblivious to what's happening out there right now.
This is a bunch of gabagoo. It's wrong on so many layers that it's not even worth reading further.
a) Google has agentic coding in both Antigravity and CLI forms. While it's not at the level of Claude Code + Opus, it's still decent.
b) Google has its own versions of models trained on internal code.
c) Google has Claude in Vertex, and can most definitely set it up in secure zones (like it does for its clients), so they'd be able to use Claude (at cost) within their own projects.
What I'm most curious about is how this translates to messy, real-world codebases without well-defined metrics. Most production software isn't chip design or kernel optimization - it's business logic with unclear success criteria. The infrastructure story is impressive, but I'd love to see how they handle domains where the evaluation function itself is ambiguous.
Well, if the evaluation infrastructure is something humans could have had access to before, and the agent's key "skill" is just that it's a more patient and scalable worker, I would still argue that this "comes from the agent".
Humans get bored, impatient, or run out of time, and so often give up in what they perceive to be a decent "local minimum". Early verification harnesses that used GPT-4 to optimize robot reward functions succeeded largely because the LLM just kept going (link below). As long as it is too boring for a human to use the same evaluation infrastructure, this is still an agent skill.
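That patience argument is mechanical enough to sketch. Below, eval_candidate is a made-up bumpy score landscape standing in for "run the robot with these reward weights and score it"; the only point is that a harness which keeps sampling beats one that stops at the first decent result:

```python
import math
import random

def eval_candidate(w):
    # Toy score landscape: a mediocre local peak near w=5.1 and the
    # true optimum near w=6.65 (stand-in for a reward-weight sweep).
    return -((w - 6) ** 2) + 3 * math.sin(4 * w)

def patient_search(trials, seed=0):
    """Hypothetical sketch: keep proposing candidates and keep the best
    ever seen, instead of settling for the first 'decent' one."""
    rng = random.Random(seed)
    best_w, best_score = None, float("-inf")
    for _ in range(trials):
        w = rng.uniform(0, 10)
        s = eval_candidate(w)
        if s > best_score:
            best_w, best_score = w, s
    return best_w, best_score
```

A human might stop after 10 trials at a mediocre peak; the same harness run for 1000 trials keeps grinding toward the better optimum.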
Do we have other examples of AI being used to improve the LLMs, apart from the creation of synthetic data and the testing of the models?
It's a simple harness around Opus, but with tight integration into Hugging Face infra, so the agent can read papers, test code, and launch experiments.
It is a recursive, self-reflective agent. It will copy itself into /tmp/, run, analyze the results/eval, and update itself... then copy itself into /tmp/, run, analyze the results/eval, and update itself... ad infinitum.
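The loop described above is roughly this shape. Here's a deliberately neutered sketch where "run" and "eval" are stubs (file size as the score) rather than anything that executes the copy:

```python
import os
import shutil
import tempfile

def evaluate(path):
    # Stub "eval": in the real loop this would run the copy and score it.
    return os.path.getsize(path)

def self_update_loop(agent_src, generations=3):
    """Neutered sketch of a copy-run-eval-update loop: copy the agent's
    source into a temp dir, score it, append a 'mutation', and repeat
    from the updated copy."""
    scores = []
    src = agent_src
    for gen in range(generations):
        dst = os.path.join(tempfile.mkdtemp(prefix="agent_"), f"gen{gen}.py")
        shutil.copy(src, dst)         # copy itself into /tmp/
        scores.append(evaluate(dst))  # analyze the results / eval
        with open(dst, "a") as f:     # update itself
            f.write(f"# revision {gen}\n")
        src = dst                     # next generation starts from the update
    return scores
```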
Left alone, it can bypass bot-detection security, including Turnstile and hCaptcha, which means it can get anonymous access to gpt-5.3 with internet search, Perplexity with internet search, and all the models on nvidia, like Deepseek v4. Although flaky, the Python instructor library shines here for creating validation and structured data.
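The instructor pattern boils down to "a schema is the contract; retry until the model's output validates". A network-free sketch of that core move, with a made-up two-field schema and the actual LLM call replaced by a list of canned responses:

```python
import json

# Hypothetical schema an agent might demand back from an LLM call.
REQUIRED = {"name": str, "score": float}

def validate(payload):
    """Minimal stand-in for the pydantic validation instructor wraps:
    parse JSON and check that required fields exist with the right types."""
    data = json.loads(payload)
    for field, typ in REQUIRED.items():
        if not isinstance(data.get(field), typ):
            raise ValueError(f"bad field: {field}")
    return data

def parse_with_retry(raw_attempts):
    """Sketch of instructor's core move: keep retrying (here, over a list
    of canned raw responses) until one passes schema validation."""
    for raw in raw_attempts:
        try:
            return validate(raw)
        except (ValueError, json.JSONDecodeError):
            continue  # instructor would re-ask the model at this point
    return None
```

The real library does this with pydantic models and automatic re-asks of the LLM; the retry-until-valid shape is the same.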
For shits and giggles, I wondered if it could go viral. So I had a coding agent create 100 isolated containers, each with a security vulnerability like SQL injection. To my surprise, because it was an isolated playground, the coding agent went ahead and built them.
I stopped there. I know that it can copy itself. I know that it can evolve very quickly. I know that it can reverse engineer any website. The agents can create a 10-minute mail account and pass the reference along so they can communicate with each other. I didn't check whether it could do a breadth-first search over known vulnerabilities to access the isolated servers.
The situation is like Leonardo DiCaprio in Don't Look Up, screaming on TV. If anyone at any of these companies wants to discuss this with me, please reach out.
[0] https://github.com/adam-s/agent-tuning
[1] https://build.nvidia.com/models
https://arxiv.org/abs/2310.12931