I dug a fossil-version Google Glass XE HW2 out of a drawer, pre-rooted, running an AOSP build that I think has a single-digit number of users worldwide. I connected it to the Mac Mini’s USB port, and told my Openclaw instance to get it onto tailscale and set it up as a communication channel.
It worked its way through multiple absurd, frustrating technical issues that would absolutely have made me give up if I were the one doing it, with only minimal guidance. Once it had ssh working, it set up an Android app. Without me suggesting it do so, it found a way to take screenshots to check its work.
So, I have an AI agent on my face now. I don’t think it’s wise for humanity to be going down this path, especially at this speed, and if humanity gets its act together to pause, I’ll power down my Mac Mini and breathe a sigh of relief. But in the meantime, I’m going to enjoy how cool it is, and stay close enough to the forefront to be properly informed, dammit.
Do you think OpenClaw was noticeably better than a regular Cursor agent for purposes of solving that problem?
I don’t think Cursor would’ve stood a chance, for this task. It was almost all command-line wrangling with only a small side-order of actual coding. Lots of “run this command and run other tools while it’s in-progress to figure out why it’s crashing”. One “abort that command because it’s running too slow and try a different command”. Some explicit wait-for-timer-then-recheck steps, including “send a Discord message telling me to auth a tailscale node, then poll until it’s authed”. After it got to the stage where it could connect over the network instead of adb and start writing an app, it used command-line tools to take screenshots, download them, and process them to check whether it was working. It had extremely long turn lengths. And it ran commands more risky than I’d be willing to run without approval on my high-side computers, and more numerous than I’d be willing to deal with approving.
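For a concrete picture of the screenshot-check step, here’s a minimal sketch; this is my illustration, not the agent’s actual code. It assumes adb is on PATH with the device reachable (over USB or `adb connect`), and the `looks_ok` check is a deliberately dumb stand-in. On a device too old for `exec-out`, you’d screencap to /sdcard and `adb pull` instead.

```python
import subprocess
import time

def grab_screenshot(path="glass.png"):
    # "adb exec-out screencap -p" streams the PNG over the adb
    # connection without writing to device storage first.
    png = subprocess.run(
        ["adb", "exec-out", "screencap", "-p"],
        check=True, capture_output=True,
    ).stdout
    with open(path, "wb") as f:
        f.write(png)
    return png

def looks_ok(png_bytes):
    # Stand-in verification: real checking would OCR the image or hand
    # it to a vision model; this just confirms we got a plausible PNG.
    return png_bytes.startswith(b"\x89PNG") and len(png_bytes) > 10_000

# Same wait-then-recheck shape as the tailscale-auth polling above.
while not looks_ok(grab_screenshot()):
    time.sleep(5)
```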
(In this context, low-side means the Mac Mini that I let the Openclaw agent fully control, while high-side means my main laptop, phone, and other devices where I don’t. I decided to make the Glass XE low-side, i.e. no command approval for stuff it does there, and the project wouldn’t have been feasible otherwise.)
Okay, I’m in. Bought one. Any ideas or code or tips that you think are worth sharing, here or by DM?
I suspect Glass XE isn’t the hardware I’ll be using in a month; I set it up because I already had one on hand, and the other HMD I ordered (an Even Realities G2) indicated it would take 5 weeks to ship. (Perhaps the other AI agent users bought up all the stock.)
For input, you’re either using audio (in which case AirPods paired to a phone are better than the built-in mic), making it output-only and doing input via the phone touchscreen or a Bluetooth keyboard paired to a phone, or pairing Glass to a Bluetooth keyboard directly. Pairing Glass to Bluetooth keyboards should work in XE19.1 but is historically fraught (long story). If getting hardware on the secondary market, try to get HW3 instead of HW2 (HW3 has 2GB RAM, HW2 has 1GB). Consider getting a lens cap for the camera (there are 3D-printer model files floating around); some people react negatively to having an eye-level camera pointed at them if they can’t verify that it isn’t on. For all-the-time use, get two USB power banks and a cable.
Best practice with Openclaw is to run it on segregated hardware, which in practice means either a Mac Mini or a cloud server. (There is nothing special about Mac Minis with respect to Openclaw; people are just using them because they’re good computers.) A Mac Mini has a large advantage over a cloud server for this use case because it has USB ports, and getting to adb happens much sooner in the setup process than getting to ssh.
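To illustrate the adb-before-ssh point: a freshly connected Android device speaks adb over USB immediately, and you can use that to bootstrap network access. A hedged sketch, using standard adb commands and assuming the device’s wifi interface is wlan0 (older builds may differ):

```python
import subprocess

def adb(*args):
    # Thin wrapper over the adb CLI; assumes adb is on PATH and the
    # device shows up in "adb devices" over USB.
    return subprocess.run(
        ["adb", *args], check=True, capture_output=True, text=True
    ).stdout

# Restart adbd listening on TCP (5555 is the conventional port). This
# is issued over the USB connection, before any network setup exists.
adb("tcpip", "5555")

# Find the device's wifi address; wlan0 is an assumption, and very old
# AOSP builds may want "netcfg" or "ifconfig wlan0" instead.
print(adb("shell", "ip", "-4", "addr", "show", "wlan0"))

# From here, anything that can route to the device (e.g. over the
# tailnet, once tailscale is up) can run:  adb connect <ip>:5555
```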
I’ll probably write more later but that should cover all of the things with lead time.
Cool, I went with the most modern Glass, Enterprise 2, for the higher RAM and other hardware spec stuff, figuring that the software would work itself out these days.