Theoretical AI alignment (and relevant upskilling) in my free time. My current view of the field is here (part 1) and here (part 2).
NicholasKross
There is something weirdly powerful about these being in well-printed form, more approachable in both form and content. They're easier to recommend to friends without having to add "By the way, this post references a thing that is no longer relevant / is poorly researched," like the Robbers Cave experiment mentioned.
Definitely think it’s a good idea to go for the two optimized versions. Kinda like many (most?) of the classic novels from history: cheap mass edition + luxurious “pro” version. (Not that the content would actually differ between them, unless there’s some good reason for that I’m not thinking of; “Hey, the leatherbound version has more original post text and tangents!”).
Indeed. Reminds me of how science communication these days has kind of expanded, partly because the tools of (good) design are more accessible. A quick look at the (good) science channels on YouTube shows kinda the opposite. Of course, this can lead to other problems: if people learn "science + bad design = crackpot", they can also perceive "BS + good design" as true. Rhyme-as-reason effect, style-as-substance, etc.
Spaced repetition is still good for knowledge you need to retrieve immediately, when a 2-second delay would make it useless.
Not sure about other people/situations, but I personally have found, in classroom settings for math and CS theory, that a 2-second delay can impede understanding. That's particularly true when a definition relies on a combination of well-chunked previous concepts, as is often the case in math.
As far as I know, being specific is like half (most? (technically all?)) of the core of rational thinking and living. Looking forward to more out of this, based on what’s posted so far!
Another example of going up/down the ladder/lattice of abstraction is given by Paul Graham. In his essay "General and Surprising", he notes that valuable insights tend to be generally applicable, which usually means abstract. However, he also notes that it's often more attainable to say something more specific about things already known to be important (as long as that more specific thing is new).
More specifically, being specific seems like it would catch more mistakes/laziness/haziness in reasoning. In contrast, being more general seems better for having new ideas, rather than getting them to specifically match reality well.
Can’t wait! I’ve convinced a friend or two of mine to come, hope to see the rest of you there!
Hey, how do I be you? The most I got was one time when I drank an energy drink, obsessed over a spreadsheet for 2 hours, and then crashed after a total of 4 hours.
I’m high enough on conscientiousness to not fail, but not high enough on conscientiousness to succeed (or catch up to my neuroticism).
Any ideas for accruing money quickly outside of a job? I don’t have much capital to invest currently.
What’s the quickest way to get up to speed and learn the relevant skills, to the level expected of someone working at OpenAI?
the general sense of “explanation” means a conceptual understanding of a phenomenon. In statistics, “explanation” implies no such understanding
I don’t understand what you’re saying here. Does statistics use “explanation” as a technical jargon term for something that’s not gearsy?
Still more coherent than most manifestos. Great job team!
This sounds about right. I would bet capital/points that some of this has to do with the amount of dopamine in the brain. The points on freedom after transcending survival, experimentation, and self-motivation... those are bought with dopamine (which, based on your other writings, can sometimes be bought with pain, which is kinda convenient in a way!)
Say I’m deciding what to do with my time (not well-planned, although maybe it should be more deliberate). What’s a quick heuristic for deciding what to do?
"Focus on timeless knowledge" is relatively simple to do, and it has started to affect how I view a lot of things.
Why dive deep into trivia that teaches me nothing? Why play the video games I used to play a lot? Why consume junk media (which lsusr wrote another post about)? With a bit of argument, this post nudged me further and faster, in the direction of actually prioritizing what I do with my time.
Stupid question: STR and CHA are given in different orders in the data vs. the above description. (And, because both values given are low enough to be the CHA stat, it's ambiguous whether the values were switched.) Does this secretly mean something, or am I just reading too much into it?
Ah, okay thanks!
STR +3, WIS +3, INT +0, CHA +2-4, evenly distribute among the rest. Pretty unsophisticated, and misses out on the larger gains from adding many points to a stat in one go.
This was cool! Definitely looking forward to the next challenge and using what I learned from this.
I'd be interested in more resources regarding the "low-hanging fruit" theory as related to the structure of ideaspace, and how/whether Kauffman's NK landscape model applies to that. Any good beginner resources on Kauffman's work?