FYI, normally when I’m thinking about this, it’s through the lens of “how do we help the researchers working on illegible problems?”, more so than “how do we communicate illegibility?”.
This post happened to ask “can AI advisers help with the latter?”, so I was replying about that. But, for completeness: normally when I think about this problem, I resolve it as “what narrow capabilities can we build that are helpful ‘to the workflow’ of people solving illegible problems, but aren’t particularly bad from a capabilities standpoint?”.
Do you have any writings about this, e.g., examples of what this line of thought led to?
Mostly this has only been a sidequest I periodically mull over in the background. (I expect to someday focus more explicitly on it, although it might be more in the form of making sure someone else is tackling the problem intelligently).
But I did previously pose this as a kind of open question in “What are important UI-shaped problems that Lightcone could tackle?” (https://www.lesswrong.com/posts/t46PYSvHHtJLxmrxn/what-are-important-ui-shaped-problems-that-lightcone-could) and in the JargonBot Beta Test (which notably didn’t really work; I have hopes of trying again with a different tack). Thane Ruthenis replied there with some ideas in this space, about making it easier to move between representations of a problem.
I think of many Wentworth posts as relevant background:
- Why Not Just… Build Weak AI Tools For AI Alignment Research?
- Why Not Just Outsource Alignment Research To An AI?
- Interfaces as a Scarce Resource
My personal work so far has been building a mix of exobrain tools, aimed more at rapid prototyping of complex prompts in general. (This has mostly been a side project I’m not primarily focused on at the moment.)