No harm done with experimenting a bit I suppose.
Do you have examples of infographics that come close to what you have in mind?
Good day! I’ve been reading rationalist blogs for approximately two years. At this random moment I have decided to make a LessWrong account.
Like most human beings I suffer and struggle in life. As a rich human, like most LessWrong users I assume (do we have user stats?), I suffer in luxury.
The main struggle is where to spend my time and energy. The opportunity cost of life, I suppose. What I do:
Improve myself. My thinking, my energy, my health, my wealth, my career, my status.
Improve my nearest relationships.
Improve my community (a bit).
Improve the world (a tiny bit).
But alas, the difficulty: how to choose the right balance? Hopefully I am doing better as I go along. Though how do I measure that?
I have no intellectual answers for you I am afraid. I’ll let you know if I find them.
Current status: Europe, 30+ years old, 2 kids, physics PhD (a bit pointless, but fun), AI/ML-related work at a high-tech hardware company, bicycle to work, dabbled a bit in social entrepreneurship (a failure).
I inspired someone; yay!
Since I like profound discussions, I am now going to have to re-read IFS; it didn’t fully resonate with me the first time.
I cannot come up with such a cool wolverine story I am afraid.
Thanks!
I wrote with global standards in mind. My own income isn’t high compared to US technology industry standards.
In the survey I also see some (social) media links that may be interesting. I have occasionally wondered whether we should do something on LinkedIn for more career-related rationalist activities.
Gullibility bias?
Among your links I found one to Internal Double Crux. That technique I do recognize.
I recently also tried recursively observing my thoughts, which was interesting: I look at my current thought, then I look at the thought that’s looking at the first thought, and so on, until it pops. A moment of stillness follows, then a new thought arises and I start over. Any name for that?
I’ll examine the link!
When you say ‘one thought at a time’, do you mean one conscious thought? From reading all these multi-agent models I assumed the subconscious is a collection of parallel thoughts, or at least multi-threaded.
I also interpreted the Internal Double Crux as spinning up two threads and letting them battle it out.
I recall one dream where I was two individuals at the same time.
I do consider it like two parallel thoughts, though one dominates, or at least I relate my ‘self’ mostly with one of them. However, how do I evaluate my subjective experience? It’s not like I can open the task manager and monitor my mind’s processes (though I am still undecided on whether I should invest in one of those open-source EEG devices).
Edit: While reading Scott’s review, I have become more convinced it’s multi-threading, due to the observation that there may be ‘brain wave frequencies’:
This is vipassana (“insight”, “wisdom”) meditation. It’s a deep focus on the tiniest details of your mental experience, details so fleeting and subtle that without a samatha-trained mind you’ll miss them entirely. One such detail is the infamous “vibrations”, so beloved of hippies. Ingram notes that every sensation vibrates in and out of consciousness at a rate of between five and forty vibrations per second, sometimes speeding up or slowing down depending on your mental state. I’m a pathetic meditator and about as far from enlightenment as anybody in this world, but with enough focus even I have been able to confirm this to be true. And this is pretty close to the frequency of brain waves, which seems like a pretty interesting coincidence.
Under this hypothesis, I would now state that I have observed at least three states of multi-threading:
Double threading. I picked this up from a mindfulness app. You try to observe your thoughts as they appear. In essence there is one monitoring thread and one free thread.
Triple threading, i.e. Internal Double Crux. You have one moderator thread that monitors and balances two other debating threads.
Recursive threading. One thread starts another thread, which starts another, until you hit the maximum limit, which is probably related to the brainwave frequency.
I’ll continue to investigate.
I think it’s worth hammering out the definition of a thread here.
Agreed. I only want to include conscious thought processes, so I am modeling myself as having a single-core conscious processor. I assume this aligns with your statement that you only experience a single thing, where an experience is equivalent to “a thought during a specified time interval in your consciousness”? The smallest possible time interval that still constitutes a single thought I take to be the period of a conscious brainwave. This random site states a conscious brainwave frequency of 12–30 Hz, so the shortest possible thought would last just over 30 milliseconds.
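The arithmetic behind that estimate, as a quick sketch (the 12–30 Hz band is taken from that site, not an established neuroscience claim):

```python
def period_ms(freq_hz):
    # Duration of one brainwave cycle, in milliseconds
    return 1000.0 / freq_hz

# At the fast end of the stated 12-30 Hz band, one cycle -- and hence the
# shortest possible "single thought" under this model -- is roughly 33 ms.
print(period_ms(30.0))  # fast end of the band
print(period_ms(12.0))  # slow end of the band
```

So the shortest thought comes out just above 30 ms, and the longest single cycle at around 83 ms.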
I am assuming it’s temporal multithreading, with each thought lasting at least one cycle. Note that I am neither a neuroscientist nor a computer scientist, so I am probably modeling it all wrong. Nevertheless, simple toy models can often be of great help. If there’s a better analogy, I am more than willing to try it out.
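A minimal sketch of that toy model, purely as an analogy (the thread names are made up for illustration; this is not a claim about actual neural scheduling). Temporal multithreading means a single core runs exactly one thread per cycle, switching between them in turn:

```python
from collections import deque

def run_single_core(threads, cycles):
    """Round-robin scheduler: one 'thought' per cycle on a single core."""
    queue = deque(threads)
    trace = []
    for _ in range(cycles):
        current = queue.popleft()
        trace.append(current)   # this cycle's single conscious thought
        queue.append(current)   # re-queue; the thread waits for its next turn
    return trace

# "Double threading": a monitoring thread and a free thread alternate.
print(run_single_core(["monitor", "free"], 4))
# -> ['monitor', 'free', 'monitor', 'free']
```

The point of the analogy is that the trace is always serial, one thought at a time, even though multiple "threads" exist.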
People are discussing this across the internet, of course; here’s one example on Hacker News.
May I ask why you think you “passively consume” LW content? I notice the same behavior in myself, so I’m curious.
P.S. I hope it’s still better than passively consuming most other media.
Thanks for your article! Improving education is a good, yet difficult goal to pursue.
I’d like to weakly signal boost dev4x.com and the founder Bodo Hoenen, another high school drop-out who became a social entrepreneur with a focus on education. I know him and wish he was more involved with EA and rationality. Maybe a great contact for your network, Samuel?
As I grow older I spend more and more time teaching. I can concur with all points in this post. Sadly it contained no diagrams.
Diagrams are truly awesome. Great diagrams are absolutely amazing. High level summary diagrams are the best. I spend most of my time at work now drawing and explaining diagrams.
This closely relates to the concept of black swan farming.
The typical argument I’ve read is that we should take more risk, because risk taking widens the distribution and gives us more probability of ending up in the tail.
However, blind risk taking widens the distribution symmetrically, so we need ways to increase the positive tail probability while taking more risk. You propose ‘weak ties’ and ‘virtue’ as solutions.
I’m going to take the leap and assume you mean virtue signaling, or any other form of signaling that makes you look like a good ally. With such signaling, others will be more likely to become your ally and help you out when you undertake your risky venture. This would increase your probability of success. Doing the opposite would reduce your probability (decay your upside).
May I ask why you chose Rust to write math and algorithms? I would have chosen Julia :p
Julia’s IterTools has the partition function with a step argument as well.
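For readers who don’t know Julia, a rough Python equivalent of what partition with a step argument does (a sketch; the name and semantics only approximately mirror the Julia version):

```python
def partition(seq, n, step):
    # Sliding windows of length n whose start indices advance by step
    return [seq[i:i + n] for i in range(0, len(seq) - n + 1, step)]

print(partition([1, 2, 3, 4, 5], 2, 1))  # [[1, 2], [2, 3], [3, 4], [4, 5]]
print(partition([1, 2, 3, 4, 5], 2, 2))  # [[1, 2], [3, 4]]
```

With step equal to n you get non-overlapping chunks; with step smaller than n the windows overlap.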
Rust is a fascinating new language to learn, but it was not designed for scientific computing. For that, Julia is the new kid on the block, and it is quite easy to pick up if you know Python.
I also recognize this feeling of “You have not done enough” or worse “This goal was meaningless in hindsight”. It’s probably very instrumental, pushing us and our genes to ever greater heights.
So should we lean into it, accepting that happiness is forever lost behind some horizon? You would just walk around with this internal nagging feeling.
Or should we fix this bug, as you say, but risk stagnation? One way may be to become a full-time meditating monk; then you may have a chance to turn your wetware into a personal nirvana until you pop out of existence. But that feels meaningless as well.
I’m trying to find a blend; take the edge off the suffering while moving forward.
I agree, it’s important to create, or at least detect, well-aligned agents. You suggest we need an honesty API.
Assuming you don’t spend all your time in some rationalist enclave, it’s still useful to understand false beliefs and other biases. When communicating with others, it’s good to notice when someone tries to convince you with a false belief, or when they are trying to convince another person with one.
Also I admit I recently used a false belief when trying to explain how our corporate organization works to a junior colleague. It’s just complex… In my defense, we did briefly brainstorm how to find out how it works.
Another reason for Zvi to paint a bleak picture is to make sure mazedom doesn’t grow further, ever. Even if mazedom is low, it may still be beneficial to keep it that way.
I would like to encourage this!
Alternative representations for a larger audience could be:
cartoons explaining a single concept, like XKCD or Dilbert.
graphical overviews, like the cognitive bias cheatsheet.
What else would be feasible?