We do know Elon Musk is pushing the internet toward the other extreme: satellites encircling the Earth.
This is very interesting, thanks for posting.
The explanations of the AI’s algorithms sound pretty simplified — so simplified that I wouldn’t be surprised if these descriptions applied equally well to efforts from 10+ years ago. Why did the human-level threshold only get crossed now?
The market goes up on average, so every day their cash sits out of the market in an attempt to time it loses them money in expectation. I think that’s why timing the market fails.
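The arithmetic behind this can be sketched with made-up but plausible numbers (the +0.03% average daily return and the "invested half the time" figure are illustrative assumptions, not data):

```python
# Sketch: if the market's average daily return is positive, a timer who
# is out of the market on randomly chosen days forfeits expected return
# in proportion to the time spent in cash.
daily_return = 0.0003      # assumed average daily return (~+0.03%)
days = 252 * 10            # ten years of trading days
fraction_invested = 0.5    # assume the timer is in the market half the time

buy_and_hold = (1 + daily_return) ** days
random_timer = (1 + daily_return) ** (days * fraction_invested)

print(f"buy and hold grows to {buy_and_hold:.2f}x")
print(f"invested half the time grows to {random_timer:.2f}x")
```

Unless the timer's in/out choices are actually correlated with future returns, sitting out random days just scales down the expected growth.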
I think you guys are doing a great job rapidly hypothesizing and testing ways to enable the LW community to create more value. I’m a fan of the questions feature and predict its usage will grow steadily. I’m interested to see how the other ideas play out.
Related: A lot of people seem to think that the next recession is “coming up” more so than usual because we’ve now had a long economic boom. Isn’t this pure gambler’s fallacy?
Thanks, I think that clarifies everything I was wondering about. If we had a feature like Stack Overflow’s “accepted answer”, this would be it for me :)
Looks like there are a lot of topics in there besides the question of whether physical nondeterminism is a meaningful concept. Can you summarize or point to the relevant section?
I believe the term “chaotic” refers to those things. E.g. for an airplane’s lift, there are higher-than-particle-level physical principles you can explain it with, but for a 1-month weather forecast you have to go down to particle-level modeling, or close to it.
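The standard toy illustration of why chaotic systems resist long-range forecasting is sensitive dependence on initial conditions. A minimal sketch using the logistic map at r = 4.0 (a textbook chaotic regime — the specific starting values here are arbitrary choices):

```python
# Two trajectories of the logistic map x -> r*x*(1-x), started a
# tiny distance apart, diverge until they are macroscopically
# different -- the analogue of small measurement errors wrecking
# a weeks-out weather forecast.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10   # initial conditions differing by 1e-10
max_gap = 0.0
for _ in range(60):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

print(f"largest separation seen: {max_gap:.3f}")
```

The initial 1e-10 error roughly doubles each iteration, so within a few dozen steps the two trajectories carry essentially no information about each other.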
Nice project, thanks. It’s also consistent with my analysis that Arbital’s fancy wiki features never added much value (say, more than +30%) beyond the value the same content would have had if published in a pre-existing format and venue.
I’ve read it 3 times and think it’s the best book ever. What a coincidence that I’m someone who is currently spending time on LessWrong.com and becoming part of your sample of answers, eh?
From looking at Conway’s Game of Life, my intuition is that if a universe can support non-ontologically-fundamental Turing machines (I’m invoking anthropic reasoning), then it’s likely to have phenomena analyzable at multiple hierarchical levels (beyond the looser requirement of being simple/compressible).
Basically, if a universe allows any reductionistic understanding at all (that’s what I mean by calling the Turing machine “non-ontologically-fundamental”), then the reductionist structure is probably a multi-layered one. Either zero reduction layers or lots, but not exactly one layer.
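The Game of Life intuition above can be made concrete with a small sketch: at the bottom layer there are only cells obeying a local rule, yet a glider is a stable higher-level object with its own law of motion (it translates one cell diagonally every 4 generations). The sparse-set implementation is just one common way to code the rule:

```python
from collections import Counter

def step(cells):
    """One Game of Life generation on a sparse set of live (row, col) cells."""
    counts = Counter(
        (r + dr, c + dc)
        for r, c in cells
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Bottom layer: a cell is alive next step iff it has exactly 3 live
    # neighbors, or 2 if it is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in cells)}

# Higher layer: the glider. At the cell level it is just 5 live cells,
# but it behaves as a persistent object that moves (1, 1) every 4 steps.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
print(g == {(r + 1, c + 1) for r, c in glider})  # same shape, translated
```

Nothing in the cell-level rule mentions gliders, yet “glider moves diagonally at speed c/4” is a true, predictive law at its own level — exactly the kind of multi-layer analyzability the comment is pointing at.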
I started a company to help with these kinds of situations. We have a team of 50 full-time dating/relationship coaches available 24/7 to give you personalized advice: RelationshipHero.com
Also check out our analysis of 100+ user-submitted online dating conversations: RelationshipHero.com/conversations
Thanks for the candid write-up!
I’d make at least one Rick and Morty post a day if I could.
Given your view that Arbital over-prioritized engineering relative to acquiring users (i.e. validating demand), consider testing your own claim of demand before investing effort toward it.
If you use online dating, I just launched a site called WittyThumbs to analyze and improve your conversations, in order to get better dates. Let me know what you think!
Seems like first you objected that TST’s lesson is meaningless, and now you’re objecting that it’s meaningful but limited and wrong. Worth noting that this isn’t a back-and-forth argument about the same objection.
The rest of LW’s epistemology sequences and meta-morality sequences explain why the foundations in TST also help understand math and morals.
By “reality-controlled”, I don’t just mean “external reality”, I mean the part of external reality that your belief claims to be about.
Understanding truth in terms of “correspondence” brings me noticeably closer to coding up an intelligent reasoner from scratch than those other words do.
The simple truth is that brains are like maps, and true-ness of beliefs about reality is analogous to accuracy of maps about territory. This sounds super obvious, which is why Eliezer called it “The Simple Truth”. But it runs counter to a lot of bad philosophical thinking, which is why Eliezer bothered writing it.
Yes, apparently this is a thing! Look up professional organizers.
Can you send me an email? I’m happy to proofread some stuff for you for free in order to better understand your needs. (I’m actually good at that.)