TIL just how obtuse more basic LLMs can be without complex system prompting

I’ve been building a mini Claude Code, something I’ve seen recommended as a way to get into the guts of the tooling. I’m using a Mistral model, and it’s fascinating: the model is remarkably reluctant to take action, even when it’s been handed tools for doing exactly those things.
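For context, here’s a minimal sketch of the kind of tool-calling setup I mean, written against the OpenAI-compatible chat-completions shape (which Mistral’s API broadly follows, though that’s an assumption worth verifying for your endpoint). The `read_file` tool, the model name, and the system prompt are all illustrative, not my actual agent:

```python
import json
from openai import OpenAI  # any OpenAI-compatible endpoint works here

client = OpenAI(base_url="https://api.mistral.ai/v1", api_key="...")

# One hypothetical tool: read a file from the working directory.
tools = [{
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read a file and return its contents as text.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

# Without a pushy system prompt, smaller models tend to describe what
# they *would* do instead of actually emitting a tool call.
messages = [
    {"role": "system", "content": (
        "You are a coding agent. When a task requires inspecting or "
        "modifying files, call the provided tools. Do not ask for "
        "permission or explain what you would do; just call the tool."
    )},
    {"role": "user", "content": "What does setup.py in this repo import?"},
]

resp = client.chat.completions.create(
    model="mistral-small-latest",  # placeholder model name
    messages=messages,
    tools=tools,
    tool_choice="auto",
)

msg = resp.choices[0].message
if msg.tool_calls:
    for call in msg.tool_calls:
        print(call.function.name, json.loads(call.function.arguments))
else:
    # The failure mode this post is about: prose about the file
    # instead of a tool call.
    print(msg.content)
```

The whole trick is in that system prompt: with a neutral one, the model often lands in the `else` branch and narrates instead of acting, which is exactly the reluctance I keep running into.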