I have yet to interact with a state-of-the-art model (that I know of), but I do know from browsing Hacker News that many are running LLaMA and other open-source models on their own hardware (typically Apple Silicon or desktops with powerful GPUs).