It looks like direct xAI/Grok support was only added to OpenClaw 8 hours ago in this commit and still unreleased. You could have used Grok with it via OpenRouter, but I doubt this made up a significant fraction of Clawdbot/Moltbot/OpenClaw agents.
Perplexity estimates the model breakdown as:
- Anthropic Claude: ~85%
- Local (Ollama/vLLM): ~10%
- OpenAI / Google: ~5%
I wouldn’t trust Perplexity Pro’s percentage numbers one bit. It likes to insert random percentages into my answers, and they have hardly any bearing on reality. When I challenged it on this point, it claimed the numbers reflected percentages of search results (e.g. in this scenario, 17 of 20 search results featuring Claude would yield an answer of 85%), but even that wasn’t remotely correct. For now I assume these numbers are entirely hallucinated unless strongly proven otherwise. From what I can tell, it’s certainly not doing any plausible math on any plausible data.
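For what it’s worth, the calculation Perplexity claimed to be doing is trivial to state; here is a minimal sketch of that purported search-hit math. The counts are purely illustrative (Perplexity’s actual sources and tallies are unknown), which is exactly why its answers should, but don’t, line up with anything like this:

```python
def claimed_share(mentions: int, total_results: int) -> float:
    """The math Perplexity claimed it was doing: the percentage of
    search results that mention a given model. Counts are hypothetical."""
    return 100 * mentions / total_results

# 17 of 20 results mentioning Claude would indeed give the ~85% it quoted...
print(claimed_share(17, 20))  # 85.0
# ...but in my testing the numbers it produces don't match any such tally.
```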
This is part of a more general pattern: in my experience, Perplexity tends to be extremely confident and intent on being useful even in situations where its capabilities give it no way to actually be useful, so it just makes things up.