I think the Mac Mini is actually running the LLM locally in that hypothetical, not just querying an API. Apple Silicon Macs are better at local inference than most comparably priced hardware, thanks to their unified memory. A RasPi would be far too slow.
Huh, I guess in the case of Kimi and other open-weight models that may be true, though my impression was that most OpenClaw instances call Claude.