Can’t you just keep a dream journal? I find if I do that consistently right upon waking up, I’m able to remember dreams quite well.
hmys
I’ve used SSRIs for maybe 5 years, and I think they’ve been really useful, with no negative effects, and more or less unwavering efficacy. The only exception is that they’ve non-negligibly lowered my libido. But to be honest, I don’t mind it that much.
Also, the few times I've had to go without them for a while (I was travelling and was very stupid not to bring enough), the withdrawal effects were quite strange and somewhat scary.
I also feel they had some very strange positive effects. Like I think they made my reaction time improve by quite a bit. Although it could be something random coinciding with starting SSRIs. Or just me being confused. I haven’t tested it. On humanbenchmark I score around the same now as I did in high school. But I feel like I can catch falling things with much better regularity, and this was an almost immediate effect after starting.
[Question] Practical advice for secure virtual communication post easy AI voice-cloning?
I feel like the biggest issue with aligning powerful AI systems is that nearly all the features we'd like these systems to have, like being corrigible, not being deceptive, having values aligned with ours, etc., are properties we are currently unable to state formally. They are clearly real properties: humans can agree on examples of non-corrigibility, misalignment, and dishonesty when shown examples of actions AIs could take. But we can't put them in code or a program specification, and consequently we can't reason about them very precisely, test whether systems have them or not, etc.
One reason I'm very bullish on mechinterp is that it seems like the only natural pathway toward making progress on this. Transformers trained with RLHF do have "tendencies" and proto-values in a sense. Figuring out how those proto-desires are represented internally, really understanding it, will, I believe, shed a lot of light on how values form in transformers, will necessarily entail getting a solid formal framework for reasoning about these processes, and will put the notions of alignment on much firmer ground. The same goes for the other features. Models already show deceptive tendencies, and in the process of developing a deep mechinterp understanding of that, I believe we'd gain a better understanding of how deception in a neural net can be modeled formally, which would allow us to reason about it far more precisely.
(I mean, someone with a 300 IQ might come along and just galaxy-brain all this from first principles, but quite galaxy-brained people have tried already. The point is that if mechinterp were developed to a sophisticated enough level, then in addition to all the good things listed already, it would bring a lot of conceptual clarity to many of the key notions, which we are currently stuck reasoning about on an informal level. And I think we will get there through incremental progress, without having to hope someone just figures it out by thinking really hard and having an Einstein-tier insight.)
[Question] Plausibility of Getting Early Warning Shots because AIs can’t coordinate?
Other people were commending your tabooing of words, but I feel using terms like "multi-layer parameterized graphical function approximator" fails to do that, and makes matters worse, because it leads to non-central fallacying. It would have been more appropriate to use a term like "magic" or "blipblop". Calling something a function approximator leads readers to carry a lot of associations into their interpretation that probably don't apply to deep learning, because deep learning is a very specific example of function approximation that deviates from the prototypical examples in many respects. (I think when you say "function approximator", the image that pops into most people's heads is fitting a polynomial to a set of datapoints in R^2.)
Calling something a function approximator is only meaningful if you make a strong argument for why a function approximator can't (or at least is systematically unlikely to) give rise to specific dangerous behaviors or capabilities. But I don't see you giving such arguments in this post; maybe I did not understand it. In either case, you can read posts like Gwern's "Tools want to be agents" or Yudkowsky's writings explaining why goal-directed behavior is a reasonable thing to expect to arise from current ML, replace every instance of "neural network" / "AI" with "multi-layer parameterized graphical function approximator", and I think you'll find that all the arguments make just as much sense as they did before. (Modulo some associations seeming strange, but like I said, I think that's because there is some non-central fallacying going on.)
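To make the "prototypical example" concrete: the mental image most people attach to "function approximator" is something like the following minimal sketch (the data, degree, and variable names are made up for illustration; this is just least-squares polynomial fitting, not anything specific to deep learning):

```python
import numpy as np

# The prototypical "function approximator": fit a degree-2 polynomial
# to a handful of (x, y) datapoints in R^2.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = x**2  # underlying function is f(x) = x^2

coeffs = np.polyfit(x, y, deg=2)   # least-squares polynomial fit
approx = np.polyval(coeffs, x)     # evaluate the fitted polynomial

print(np.allclose(approx, y))      # prints True: the fit recovers f here
```

The point of the parent comment is that a deep network is also "a function approximator" in this formal sense, yet it differs from this prototype in almost every respect that matters for the safety arguments (scale, training dynamics, learned representations), which is why the label alone carries little argumentative weight.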
Maybe I’m a unique example, but none of this matches my experience at all.
I was able to have lucid dreams relatively consistently just by dream journaling and doing reality checks. WILD was quite difficult to do, because you have to walk a tightrope, keeping yourself in a half-asleep state while carrying out instructions that require a fair bit of metacognitive awareness. But once you get the hang of it, you can do that pretty consistently as well, without much time commitment.
The claim that lucid dreams don't offer much more than traditional entertainment also seems (obviously?) false to me. People use VR to make traditional entertainment more immersive, and LDs are far more immersive than that, and less limited than video games.
They're also just a really interesting psychological phenomenon. The process is fun. If you find yourself in a lucid dream, it's a strange situation. Testing things out, like checking how well your internal physics simulation engine works, is really fun. Just walking around and seeing what your subconscious generates is very fun too, and very different from imagining random stuff while awake. Trying to meditate and observing how your mind works differently in a dream, compared with waking reality, is interesting. Seeing how extreme or vivid the sensations you can generate in a dream are is fun, like trying to see whether you can get yourself to feel pain, or how loud a sound you can make.
Galantamine and various supplements all did nothing for me.
The only thing I agree with is the habituation effect. But that's how many things work: you eventually get bored of stuff, or feel you've exhausted all the low-hanging fruit.