Great. Will look into it.
Is there a way to exchange decks between Anki and ThoughtSaver?
We seem to mutually misidentify anger and hate. That is surprising: emotions are usually assumed to be universal. It could be cultural. I looked it up and found this research question:
Human Emotions: Universal or Culture‐Specific? (PDF)
Anna Wierzbicka has made an effort to decompose this into fundamental building blocks:
Emotions Across Languages and Cultures: Diversity and Universals
I have not yet read it fully, but this seems like a program worth communicating more widely.
I like the idea of making emotions continuous, or a spectrum, but I don’t think a single linear one will do.
For example, anger can feel good, if it works (I can’t say that from personal experience so much as from observing kids).
Except that, if I am alone with the person, the target will eventually become me no matter what I do.
I don’t know what you do, but I might have an idea why you become the target (because this happens to me too): you seem to be the type of person who has their own ideas and plans and is not easily influenced or persuaded. For some, maybe many, people, ramping up emotion is one (unconscious) way to make you (or other people) understand the urgency or distress they are in. If you ignore, or otherwise fail to respond to or acknowledge, other people’s stress, you make it worse for them. You may not have an obligation to do something about it, but you also shouldn’t be surprised when it happens.
Do you think this might relate to your experience?
I think you are confusing anger with hate.
Like all emotions, anger is adaptive—though it may have been more so in the ancestral environment. Even today anger tells you something. Quick google: https://www.psychologytoday.com/us/blog/mindful-anger/201606/4-reasons-why-you-should-embrace-your-anger
Hate was presumably also adaptive, but I think its purpose is mostly lost in modern society. Here I agree that it amounts to useless destruction.
Or maybe a maximally equal outcome—ensuring prolonged war.
I recommend either adding a very short explanation of what “Bruce” is or explicitly stating that familiarity with the concept is mandatory (e.g. by reading the provided link).
Thank you. The multipole moment chart is cited frequently, and I have always wondered what it would look like in counterfactual worlds without DM. Therefore I am especially grateful for your explanations:
The positioning of the first peak tells you about the curvature of the universe. [...] Having different amounts of ordinary atomic matter vs dark matter early on in the universe produces different characteristic patterns in the spectrum, with ordinary atomic matter tending to enhance the even-numbered peaks, and dark matter tending to enhance the odd-numbered ones.
Can you provide a link to read up further on this? Preferably not a summary but the actual research article.
Insights about branding, advertising, and marketing.
It is a link that was posted internally by our brand expert, which I found full of insights into human nature and persuasion. It is a summary of the book How Not to Plan: 66 Ways to Screw It Up:
Note that you can have pain without suffering and vice versa.
The OP was talking about suffering but it wasn’t clear to me whether pain was included or not.
Well, another advantage of the BNN is of course its high parallelism. But that doesn’t change the computational cost (the number of FLOPs required); it just spreads it out in parallel.
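To make that point concrete, here is a toy back-of-the-envelope sketch (all numbers are hypothetical, chosen only for illustration): parallelism divides the wall-clock time across workers, while the total FLOP count stays fixed.

```python
# Toy illustration: parallelism reduces wall-clock time, not total work.
TOTAL_FLOPS = 1e12            # hypothetical cost of one forward pass
FLOPS_PER_UNIT_PER_SEC = 1e9  # hypothetical speed of one processing unit

def wall_clock_seconds(workers: int) -> float:
    """Time to finish, assuming the work parallelizes perfectly."""
    return TOTAL_FLOPS / (FLOPS_PER_UNIT_PER_SEC * workers)

serial = wall_clock_seconds(1)        # 1000.0 seconds
parallel = wall_clock_seconds(1000)   # 1.0 second
# In both cases the total computation performed is identical (1e12 FLOPs);
# only the elapsed time differs.
```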
Partly, yes. But partly, the computation could be the cheap part compared to what it’s trading off against (ability to grow, fault tolerance, …). It is also possible that the brain’s architecture allows it to include a wider range of inputs that back-prop might not be able to model (or not efficiently).
A while ago I made the claim that
Arguments for the singularity are also (weak) arguments for theism.
Note that I have since updated a bit on some of the points, but not enough to flip the direction.
On page 8 at the end of section 4.1:
Due to the need to iterate the vs until convergence, the predictive coding network had roughly a 100x greater computational cost than the backprop network.
This seems to imply that artificial NNs are 100x more computationally efficient (at the cost of not being able to grow, and probably lower fault tolerance, etc.). Still, I’m updating toward simulating a brain requiring much less compute than the brain’s neuron count would suggest.
I am unhappy with what I perceive as a strong position against ‘keep your identity small’, which I see as a very useful heuristic. I have re-read Paul’s (pretty short) post, and I agree that he does not discuss any downsides of over-applying the rule. I wish you had put your last paragraph first; that would have made it much more balanced.
About how I read: I have always been a fast reader, easily willing not to think too much about things that seemed unimportant, except for math, where building a working model is key.
I understand his point is not that we have enough CPU and RAM to simulate a human brain; we do not. His point seems to be that the observable memory capacity of the human brain is on the order of TB to PB. He doesn’t go too deep into the compute part, but the analogy with self-driving cars seems apt: after all, quite a big part of the brain is devoted to image processing and object detection. I think it is not inconceivable that there are better algorithms for the intelligence part than what the brain has to make do with.
That sounds quite a bit like what I do: when I encounter an insight in an article that I want to keep, I create an Anki card from it. Here is the latest one that came up in my Anki:
Q: Which people who say that they want to change actually will?
A: An observation: People who blame a part of themselves for a failure do not change. If someone says “I’ve got a terrible temper”, he will still hit. If he says “I hit my girlfriend”, he might stop. If someone says “I have shitty executive function”, he will still be late. If he says “I broke my promise”, he might change.
And for this article I will create:
Q: Goldfish Reading is
A: reading text without trying to fully understand (or memorize) all of it at once, instead focusing on its key parts (and optionally making Anki cards out of it). A bit like this Anki deck but with more cards. https://www.lesswrong.com/posts/fSos4ZwdQmRuLLnwK/goldfish-reading
Maybe my method is really plankton reading?
Thank you. I’m unlikely to generate charts via the CLI, but I’m always interested in “grammars of graphics”, and you seem to have found a good adaptation of charting to the *nix CLI.
Thank you also for the links, especially to q (https://github.com/harelba/q), which I have already installed because I will absolutely use it.