Agent Mode - model recommendations #7158
Replies: 5 comments 10 replies
-
That's the same model laptop I have. I've had good results with command-r7b and the cogito models up to about 14b (8b has been a pretty good balance of speed and accuracy for quicker little stuff). What I like about command-r7b is that it's designed for RAG, so its whole shtick is analyzing what you've got. I've also gotten pretty good results with phi-4 (contrary to other people, apparently, but... 🤷♀️), though I haven't tried that one in agent mode. The smaller qwen3.5 models and codestral/devstral are supposed to work well too, but when I've tried them personally, they didn't output anything. That was a couple of Continue releases back, though, so it might have been fixed since; otherwise, YMMV.
-
I have the same question. No matter which model I select, they all show the warning. So I'd like to know: which model actually does work well in agent mode?
-
hi @planetf1 - did you find any good ones?
None of them make for a good agentic coding experience. I'm also looking for something in the 10B range at most. Either I got the
-
A few practical things have mattered more for me than the exact model name, so I would try this order:

Minimal config shape:

```yaml
models:
  - name: Ollama Agent
    provider: ollama
    model: qwen2.5-coder:7b
    roles: [chat, edit, apply]
    capabilities: [tool_use]
```

If you still get
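Separately from the Continue config, it can help to check whether a given Ollama model supports native tool calling at all, by probing Ollama's `/api/chat` endpoint directly with a `tools` definition. A minimal sketch, assuming Ollama is running on its default port; the `list_helm_charts` tool here is purely hypothetical, just something for the model to call:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default endpoint


def build_tool_probe(model: str) -> dict:
    """Build a chat request carrying one (hypothetical) tool definition.

    If the model supports tool calling, the response message should
    contain a `tool_calls` list instead of only plain text.
    """
    return {
        "model": model,
        "stream": False,
        "messages": [
            {"role": "user", "content": "List the helm charts I have."}
        ],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "list_helm_charts",  # hypothetical tool name
                    "description": "List installed Helm charts",
                    "parameters": {"type": "object", "properties": {}},
                },
            }
        ],
    }


def probe(model: str) -> bool:
    """POST the probe and report whether the model emitted a tool call."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_tool_probe(model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return bool(body.get("message", {}).get("tool_calls"))
```

If `probe("qwen2.5-coder:7b")` comes back `False` (or the request errors), the problem is the model or the Ollama version rather than anything in Continue's config.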
-
Hi, just to comment as I was the OP - I'm not currently using Continue, sorry. Generally, my experience with local models on my M1 32GB hasn't been positive enough to get good-quality results with a low footprint and decent response times. A machine upgrade soon may mean I return to this in future. Thanks for the comments above, though.


-
I'm struggling to get any model (suitable for a 32GB MacBook M1, served via Ollama) working much at all with agent mode.
Can anyone suggest decent models (for Python/k8s/Go coding) that work well in agent mode / for tool calling (including MCP), specifically with the Continue plugin in VS Code?
It can be as simple an example as 'can you list what charts I have' in the context of Helm charts.
A typical response comes back from the model, but not in the format Continue's agent mode appears to like.
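For what it's worth, the mismatch described here is usually between a free-form prose answer and a structured tool call. A rough sketch of the distinction, using two made-up assistant messages in the OpenAI-style shape; the `run_terminal_command` tool name is hypothetical:

```python
def has_tool_call(message: dict) -> bool:
    """Return True if the assistant message carries a structured tool call
    rather than (only) free-form text."""
    return bool(message.get("tool_calls"))


# A plain-prose reply: agent mode has nothing structured to act on.
prose_reply = {
    "role": "assistant",
    "content": "You can run `helm list` to see your charts.",
}

# A structured tool call: the shape an agent can actually execute.
tool_reply = {
    "role": "assistant",
    "content": "",
    "tool_calls": [
        {
            "function": {
                "name": "run_terminal_command",  # hypothetical tool name
                "arguments": {"command": "helm list"},
            }
        }
    ],
}

print(has_tool_call(prose_reply))  # False
print(has_tool_call(tool_reply))   # True
```

Small models often answer the first way even when tools are offered, which matches the behaviour described above.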