Thanks John for writing this up. This post and the comments really helped me find my own place and direction in terms of doing what I want to do.
I'm currently in academia and VERY unhappy about the bullshit I have to ingest and create. But I'm still waiting on my social, political, and financial safety nets before I can do anything remotely brave, kind of like tryactions mentioned in his comment.
So the most I’ll do is probably just read and write and talk to people on the side.
Speaking of talking to people...
My current research involves (manually) using CPU architectural artifacts to break sandboxing and steal data. I've been wondering whether I could do something along the lines of "make a simple AI that tries to break out of sandboxes, then make an unbreakable sandbox to contain it".
Do shoot me a message if you have any thoughts, or are just curious. I would love to chat.
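To give a flavor of why "unbreakable sandbox" is a hard target even before architectural side channels enter the picture, here is a toy sketch (my own hypothetical illustration, not from the research above): a naive Python "sandbox" that strips builtins, and the classic object-introspection escape that still reaches interpreter internals.

```python
# A naive sandboxing attempt: run untrusted code with builtins removed.
# This is a well-known BROKEN design, shown here only as illustration.

def naive_sandbox(code: str) -> dict:
    """Execute code with an empty __builtins__ and return its namespace."""
    env = {"__builtins__": {}}
    exec(code, env)
    return env

# The "attacker" code needs no builtins at all: plain attribute access
# walks from an empty tuple up to `object` and enumerates every loaded
# class, recovering machinery the sandbox tried to hide.
escape = (
    "found = [c for c in ().__class__.__base__.__subclasses__()"
    " if c.__name__ == 'BuiltinImporter']"
)

env = naive_sandbox(escape)
print(env["found"])  # the escape located interpreter internals anyway
```

The point of the toy: denying names is not the same as denying capabilities, which is roughly why containment proposals lean on process, hardware, or formal isolation boundaries rather than language-level restrictions.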
I'm guessing you're aware, but Jim Babcock and others have thought a bit about AI containment and wrote about it in Guidelines for AI Containment.
I went ahead and took a look! I am actually very new to the community and was not at all aware of this.
I have some thoughts on this and I would love it if you would hop on a zoom call with me and help brainstorm a bit. You can find me at cedar.ren@gmail.com
Others are welcome too! I'm just a little lonely and a little lost, and would love to chat with people from LessWrong about these ideas.