Most agent loops work like this: the model picks a tool, calls it, gets the result, picks the next tool. Rinse, repeat.
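The whole pattern fits in a few lines. Here's a minimal sketch, assuming a hypothetical `llm` client with a `next_action` method and a plain dict of tool functions (none of these names come from a real SDK):

```python
def agent_loop(task: str, llm, tools: dict, max_steps: int = 10):
    """Pick a tool, call it, feed the result back. Rinse, repeat."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = llm.next_action(history)            # model picks a tool, or decides to stop
        if action.tool is None:
            return action.answer                     # no tool picked: the loop is done
        result = tools[action.tool](**action.args)   # call the chosen tool
        history.append({"role": "tool", "name": action.tool, "content": str(result)})
    raise RuntimeError("step budget exhausted")
```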
A 3.4 GB model just posted a 97.
Google dropped Gemma 4 on Wednesday — four open-weight models under a genuine Apache 2.0 license, built from the same research behind Gemini 3.
Over the past three months, OpenAI retired Swarm and shipped the Agents SDK with first-class handoffs.
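A handoff lets one agent delegate the rest of a conversation to another. A minimal sketch following the SDK's quickstart pattern (`Agent`, `Runner`, and the `handoffs` parameter); the agents themselves are made-up examples:

```python
from agents import Agent, Runner  # pip install openai-agents

billing = Agent(
    name="Billing agent",
    instructions="Resolve billing questions.",
)
triage = Agent(
    name="Triage agent",
    instructions="Route the user to the right specialist.",
    handoffs=[billing],  # first-class handoffs: just list the delegate agents
)

result = Runner.run_sync(triage, "I was charged twice this month.")
print(result.final_output)
```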
NVIDIA dropped Nemotron 3 Super a few weeks ago and it flew under the radar — buried by the Mythos leak drama and GPT-5.4's benchmark parade.
While everyone was busy arguing about GPT-5...
Everybody wants a multi-agent system.
A year ago, people were debating whether MCP (the Model Context Protocol) would become the standard for connecting LLMs to tools. That debate is over: MCP won.
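Part of why: a working tool server takes almost no code. A minimal sketch using the official Python SDK's FastMCP helper; the `word_count` tool is a made-up example:

```python
from mcp.server.fastmcp import FastMCP  # pip install mcp

mcp = FastMCP("demo")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```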
If you've been training or fine-tuning large models, you've probably hit that moment: the loss curve looks beautiful for hours, then suddenly spikes into...