Hearing a secret can create moral, legal, or strategic costs. Once you know it, you may be forced to act or to conceal, and both can carry risk. You could tell me something that makes it awkward for me to interact with certain people, or that forces me to lie. I don't necessarily want such secrets. So why should people accept retroactive secrecy? I don't know the truth here, but a charitable reading is that he had already told someone else the information before you asked for secrecy, or before he read that part.
As someone who donated to Lightcone in the past, I think LessWrong and Lighthaven are great places that provide enormous value. They seem worth a few million: they have permanent engineers on staff, and you can get feedback on your posts from real people for free.
When I posted Current Safety Training Techniques Do Not Fully Transfer to Frontier Models, I later happened to see a Meta AI researcher using a screenshot from it in a conference presentation. I had no contact with them, so that reach was entirely organic. It showed me how LessWrong helps safety research circulate beyond its own circle. I also found Lighthaven unusually productive during my MATS work this summer; it was easy to focus there. Like you, I am doing Inkhaven right now and will see how useful it turns out to be. The physical environment genuinely seems optimized for deep work, and being there feels mentally better to me than other co-working spaces.
There is a very small number of organizations genuinely committed to valid reasoning that are trying to help save the world. Only a tiny number of people actually support sane AI safety in the sense of stopping the current race and not building superintelligence with anything close to current techniques. I think the existence of this place is probably worth a lot more to humanity.
When I saw that sama had visited and given a talk at Lighthaven, I felt it was a good thing. Religiously cutting all connection to OpenAI does not seem helpful; for what it's worth, sama might be an AI CEO the safety community can hope to influence a little, despite all his flaws. Maintaining some ties here could be useful, though I don't particularly expect anything to come of it.
Regarding DMs, I never had the impression that messages here were encrypted or specifically protected from admins. It would seem strange to me to share a secret through LessWrong's chat function; it is a minimalist feature for giving feedback on posts or exchanging other information. I think it's probably good that it exists, and I certainly see no reason to think they are acting in bad faith here.
I never really interacted much with habryka myself, but from what I know of the other Lightcone staff, they seem like great people.
I'd still like to talk about your views on AI at some point during Inkhaven.