Are supercomputers the right target to benchmark against? My naive model is that they’ll be heavily optimized for things like FLOPs and bandwidth and not be particularly concerned with power usage or weight. What about systems that are more concerned about power-efficiency or weight?
There is evidence that pescetarians have better health outcomes than vegans. These studies aren’t definitive, but it’s also worth noting that Asian diets are often high in fish, and some populations there have very good health outcomes, such as the blue-zone population in Okinawa, Japan. Dietary science is very much a mess, but I strongly believe that if vegan advocates aren’t clear about the dietary science, this issue could cause a LOT of blowback. If, e.g., in 10 years it’s definitively shown that consumption of fish adds, say, 3 years to health-span, and vegans have been misleading the public about this, I predict that it will be very bad for the social acceptance of veganism.
Beyond the considerations of being misleading about the dietary science, IF some amount of fish consumption is indeed healthy, then the moral case is far from clear. Humans are animals too. Many of us find staying healthy an intrinsic good, and having our loved ones be healthy is also an intrinsic good. This would trade off against the welfare of fish.
Personally, I would be very, very happy if fish consumption were shown to be neutral or negative for health outcomes, both for the animal welfare considerations and because of the state of the oceans, and also for aesthetic reasons: fish are incredibly beautiful and majestic animals and I find it unsettling to consume them. Currently, I eat sardines from seafoodwatch.org’s recommended list a couple of times a week—this is my main deviation from veganism. I’d be happy to return to full veganism if the evidence supported it.
Because I’m a professional DeFi thought leader, I had never actually deployed a contract to Ethereum before.
Holy shit, I found some really interesting articles on ether ’sploits by following links off of this write-up. Did you know that people sometimes write buggy “smart” contracts?
You mean it’s not a play on LSTM? Boo.
How do we get LCTM from Long-Tailed Capital Management? Is the Tail so long that the T slides right past the C?
Or does it stand for something ENTIRELY DIFFERENT??
Capable enough to find them but not to make sure everyone is home before pulling the trigger??
60% that the parents are still alive.
I can share observations/thoughts about some similar experiences.
Sometimes I lie.
But let me back up: for a long time, I’ve been aware of two different modes in which I speak—one is fluent and “real-time”; the other is slow, halting, and feels somewhat like wearing mittens, somewhat like trying to squash high-dimensional objects down into a lower-dimensional space, somewhat like… tasting alternatives for the right connotations and inflections. Sometimes this second mode is evoked when some part of me becomes concerned about doing PR with the person I’m talking to, or if it’s a topic where I care about nuance or precision. I’d estimate that ~80% of my speech is in the fluent mode.
A few years ago (a few years into my meditation practice) I realized that speech in the fluent mode is, in some way, non-conscious. Sometimes I could become aware of speech just… unspooling through me, without intervention from… “me”. The part of me that calls itself “me”. The “global workspace” of consciousness. Sometimes it was sophisticated speech! This has never happened in mittens-mode. Needing to sieve up a new/novel/unusual semantic meaning from the depths of language-space seems to require… “me”.
Once, fluent-mode upset someone and I tried to explain this to them. It… wasn’t well-received.
More than 99.99% of the time, I’m entirely happy with what fluent mode utters. Very rarely, though (probably on the order of once every year or two), it will just lie. “How do you know X?” “Oh, through my dance community” <I actually met them through OkCupid>.
This tends to happen when I sense that the other person will get a negative emotional charge out of the truth, and/or if I have some negative emotional content around it. @AnnaSalamon, you gave me the model I use for thinking about this years ago: the parent who, on picking up the phone, angrily asks “Why don’t you call more often?!” and then wonders why their child doesn’t call more often. If a person has shown me systematically in the past that they will have a strong negative emotional reaction to the truth, then I powerfully learn their revealed preference that in some situations they hate the truth. I also learn an upsetting reciprocal lesson about myself, and find myself in a situation that’s awkward to back out of.
These moments are particularly vexing because they’re extremely difficult to “train against”—I have to be caught unaware, and in the right mental state for this to happen.
(Conversely, sometimes the babble system which Scott dubs the guf isn’t generative enough in real-time. There have been various babble exercises posted here in the past year; other ideas I’ve had include learning to rap (badly! Badly! I don’t expect to learn to rap well!) and trying comedy or theatre improv (Keith Johnstone’s Impro seemed to be recommended a lot in rationalist circles a few years ago). One thing I wonder about: will training the guf to be more generative increase the risk of the guf making utterances I wouldn’t fully endorse?)
Also a scene in Bladerunner 20-blah-blah.
Wait, this AGI is just wearing a person as a sock-puppet/love-interest? I hope the dynamics of those relationships/Vi’s motivations are fleshed out more.
Standby fees: $180/year (waived if you overfund your life insurance policy by $20,000, which you should do anyway) = $0/month
Just curious what the logic of overfunding by $20k is. I’ve been getting quotes from agents that are a bit high (I’m assuming because I’m 40), plus I’m in Canada, so there’s a historical 1.3x price premium due to currency conversion. Bumping my coverage up $20k adds about $720/year. I have enough in the bank that it’s unlikely I couldn’t handle Alcor’s future price increases. Unless there’s another factor I’m not considering, it seems like it would make more sense to just eat the $180.
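To make the comparison explicit, here’s the back-of-the-envelope arithmetic I’m doing (the $720/year figure is from my own Canadian quotes, not a general rate):

```python
# Option A: pay Alcor's standby fee directly.
standby_fee = 180        # USD/year, charged when the policy is not overfunded

# Option B: overfund the life insurance policy by $20k to waive the fee.
extra_premium = 720      # USD/year for +$20k coverage, per my quotes

# Net annual cost of overfunding instead of just paying the fee:
net_extra = extra_premium - standby_fee
print(net_extra)  # 540 -> overfunding costs me ~$540/year more
```

So unless the overfunded policy buys something else (e.g. a buffer against future price increases), eating the $180 looks cheaper in my case.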
The 538 distribution currently has Biden falling between… <squints> ~255–440 electoral votes 80% of the time (47%–82%). Updating your guesstimate sheet with those ranges gives a mean proportion to Trump of .38, with a range of .17–.54.
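The percentages in parentheses are just the electoral-vote counts divided by the 538 total; a sketch of the conversion (assuming, simplistically, that every EV goes to one of the two candidates):

```python
TOTAL_EV = 538
biden_low, biden_high = 255, 440  # ~80% interval from the 538 model

# Biden's share of the electoral college at the interval endpoints:
low_frac = biden_low / TOTAL_EV    # ~0.47
high_frac = biden_high / TOTAL_EV  # ~0.82

# Complementary Trump shares at the same endpoints:
trump_high = 1 - low_frac          # ~0.53
trump_low = 1 - high_frac          # ~0.18
print(round(low_frac, 2), round(high_frac, 2), round(trump_low, 2), round(trump_high, 2))
```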
There would also almost certainly be some ongoing costs. The Lagrange points aren’t fully stable, so eventually we’d need to ship up propellant for station-keeping. [EDIT]: We could likely use solar radiation for station-keeping… On much longer timescales, the mirrors would probably need to be replaced. These costs would probably be less than 1% of the upfront investment, but if the system gradually falls apart, you then find yourself in a bad situation.
For context, the US Congress passed the CARES Act quite quickly (relative to normal timelines for passing legislation) earlier this year, and that dropped $2.2 trillion into the economy.
Pleasantly surprised to find that I predicted the first set of results on the effectiveness of mockery, though my prior confidence was only 55% for. My model going in was that people will either tend to fold quickly to get out of the mockery if they’re not adamant about the issue, OR the mockery will stiffen their resolve; thus mockery might be slightly more effective at changing behaviors than minds.
Yes, this also raises a couple other good points:
How to begin localizing knowledge if you’re *not* in the US. My sense is this is particularly gnarly because most info assumes you are, and quickly gets into weeds related to the US tax system. It seems to me that as a first approximation of understanding investing you should tend to ignore tax structure, but maybe that’s wrong?
Tools and pipelines for programmatic trade bootstrapping. It looks like Robinhood does have an API. I’m not sure what other packages/APIs/tools specifically for finance are relevant.
A curriculum for getting started, as well as some guidelines for when it’s worthwhile to begin doing so. Personally, the flow into my savings is on the order of $4k/month, plus a pool currently of $20k cash, and about $90k invested in mostly stock ETFs. I have appetite for risk and tolerance of volatility, but don’t yet know if the sums of money I have to play with make sense for strategies beyond ‘hold the market’. If so, I don’t know how to begin bootstrapping—gilch’s recent posts have been interesting, but are still a bit light on details and model-building.