ESRogs
Engineer at CoinList.co. Donor to LW 2.0.
In general, responses I’ve seen so far to this have seemed to come more from a “conflict theory” (rather than “mistake theory”) interpretation of what’s going on. And perhaps too much so.
I thought these comments by ricraz were a good contribution to the discussion:

Scott Alexander is the most politically charitable person I know. Him being driven off the internet is terrible. Separately, it is also terrible if we have totally failed to internalise his lessons, and immediately leap to the conclusion that the NYT is being evil or selfish.
Ours is a community *built around* the long-term value of telling the truth. Are we unable to imagine reasonable disagreement about when the benefits of revealing real names outweigh the harms? Yes, it goes against our norms, but different groups have different norms.
If the extended rationalist/SSC community could cancel the NYT, would we? For planning to doxx Scott? For actually doing so, as a dumb mistake? For doing so, but for principled reasons? Would we give those reasons fair hearing? From what I’ve seen so far, I suspect not.
I feel very sorry for Scott, and really hope the NYT doesn’t doxx him or anyone else. But if you claim to be charitable and openminded, except when confronted by a test that affects your own community, then you’re using those words as performative weapons, deliberately or not.
https://twitter.com/RichardMCNgo/status/1275472175806451721
I’ve been meaning for a while to be more public about my investing, in order to share ideas with others and get feedback. Ideally I’d like to write up my thinking in detail, including describing what my target portfolio would be if I was more diligent about rebalancing (or didn’t have to worry about tax planning). I haven’t done either of those things. But, in order to not let the perfect be the enemy of the good, I’ll just share very roughly what my current portfolio is.
My approximate current portfolio (note: I do not consider this to be optimal!):
40% TSLA
35% crypto—XTZ, BTC, and ETH (and small amounts of LTC, XRP, and BCH)
25% startups—Kinta AI, Coase, and General Biotics
4% diversified index funds
1% SQ (an exploratory investment—there are some indications that I’d want to bet on them, but I want to do more research. Putting in a little bit of money forces me to start paying attention.)
<1% FUV (another exploratory investment)
-5% cash
Some notes:
Once VIX comes down, I'll want to lever up a bit, likely by increasing the allocation to index funds (and going more short cash).
One major way this portfolio differs from the portfolio in my heart is that it has no exposure to Stripe. If it was easy to do, I would probably allocate something like 5-10% to Stripe.
I have a high risk tolerance. I think this is true both dispositionally and because I buy 1) the argument from Lifecycle Investing that young(ish) people should be something like 2x leveraged, and 2) the argument that some EAs have made that people who plan to donate a lot should be closer to risk neutral than they otherwise would be. (Because your donations are a small fraction of the pool going to similar causes, the utility of money donated is much closer to linear than that of money you spend on yourself, which is probably something like logarithmic. See the toy calculation at the end of these notes.)
I am not very systematic. I follow my interests and go with my gut a lot. This has worked out surprisingly well. My crypto investment started with buying BTC at 25 cents in 2010, and my Tesla investment started at $35 in 2013. I’ve also invested in some startups that didn’t work out, but my highest conviction gut bets (Tesla and bitcoin) have been the best performers, and have far more than made up for the misses.
I would like to be more systematic. I think I would have done better up to now if I had been. Especially with tax planning.
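On the risk-neutrality point above, here's a toy calculation (purely illustrative numbers of my own) showing why pooled donations make utility nearly linear:

```python
import math

# Toy assumption: a cause area already receives P = $100M from donors
# like you, and (log) utility is taken over the whole funding pool.
P = 100_000_000

def pool_utility(w):
    """Log utility of the total pool after you donate w."""
    return math.log(P + w)

# Marginal value of the first vs. the second $10k donated:
first = pool_utility(10_000) - pool_utility(0)
second = pool_utility(20_000) - pool_utility(10_000)
print(second / first)  # ~0.9999 -> nearly linear, so nearly risk neutral

# Versus personal spending, where your own wealth is the whole "pool":
W = 100_000
first_p = math.log(W + 10_000) - math.log(W)
second_p = math.log(W + 20_000) - math.log(W + 10_000)
print(second_p / first_p)  # ~0.91 -> noticeably concave, so risk averse
```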
Speaking for myself (though I think many other rationalists think similarly), I approach this question with a particular mindset that I’m not sure how to describe exactly, but I would like to gesture at with some notes (apologies if all of these are obvious, but I want to get them out there for the sake of clarity):
Abstractions tend to be leaky
As Sean Carroll would say, there are different “ways of talking” about phenomena, on different levels of abstraction. In physics, we use the lowest level (and talk about quantum fields or whatever) when we want to be maximally precise, but that doesn’t mean that higher level emergent properties don’t exist. (Just because temperature is an aggregate property of fast moving particles, doesn’t mean that heat isn’t “real”.) And it would be a total waste of time not to use the higher level concepts when discussing higher level phenomena (e.g. temperature, pressure, color, consciousness, etc.)
Various intuitive properties that we would like systems to have may turn out to be impossible, either individually, or together. Consider Arrow’s theorem for voting systems, or Gödel’s incompleteness theorems. Does the existence of these results mean that no voting system is better than any other? Or that formal systems are all useless? No, but they do mean that we may have to abandon previous ideas we had about finding the one single correct voting procedure, or axiomatic system. We shouldn’t stop talking about whether a statement is provable, but, if we want to be precise, we should clarify which formal system we’re using when we ask the question.
Phenomena that a folk or intuitive understanding sees as one thing often turn out, on careful inspection, to be two (or more) things, or to be meaningless in certain contexts. E.g. my compass points north. But if I'm in Greenland, the direction my compass points and the direction toward the place where the Earth's rotational axis meets the surface aren't the same thing anymore. And if I'm in space, there just is no north anymore (or up, for that matter).
When you go through an ontological shift, and discover that the concepts you were using to make sense of the world aren’t quite right, you don’t have to just halt, melt, and catch fire. It doesn’t mean that all of your past conclusions were wrong. As Eliezer would say, you can rescue the utility function.
This state of having leaky abstractions, and concepts that aren’t quite right, is the default. It is rare that an intuitive or folk concept survives careful analysis unmodified. Maybe whole numbers would be an example that’s unmodified. But even there, our idea of what a ‘number’ is is very different from what people thought a thousand years ago.
With all that in mind as background, when I come to the question of morality or normativity, it seems very natural to me that one might conclude that there is no single objective rule, or set of rules or whatever, that exactly matches our intuitive idea of “shouldness”.
Does that mean I can’t say which of two actions is better? I don’t think so. It means that when I do, I’m probably being a bit imprecise, and what I really mean is some combination of the emotivist statement referenced in the post, plus a claim about what consequences will follow from the action, combined with an implicit expression of belief about how my listeners will feel about those consequences, etc.
I think basically all of the examples in the post of rationalists using normative language can be seen as examples of this kind of shorthand. E.g. saying that one should update one’s credences according to Bayes’s rule is shorthand for saying that this procedure will produce the most accurate beliefs (and also that I, the speaker, believe it is in the listener’s best interest to have accurate beliefs, and etc.).
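For concreteness, the procedure in question is just this (with made-up numbers):

```python
# Bayes's rule, P(H|E) = P(E|H) * P(H) / P(E), with illustrative numbers.
prior = 0.3            # P(H): credence in hypothesis H before evidence
p_e_given_h = 0.8      # P(E|H)
p_e_given_not_h = 0.2  # P(E|~H)

p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h  # total probability
posterior = prior * p_e_given_h / p_e
print(posterior)       # ~0.63: updated credence in H after seeing E
```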
For me it seems like a totally natural and unsurprising state of affairs for someone to both believe that there is no single precise definition of normativity that perfectly matches our folk understanding of shouldness (or that otherwise is the objectively “correct” morality), and also for that person to go around saying that one should do this or that, or that something is the right thing to do.
Similarly, if your physicist friend says that two things happened at the same time, you don’t need to play gotcha and say, “Ah, but I thought you said there was no such thing as absolute simultaneity.” You just assume that they actually mean a more complex statement, like “Approximately at the same time, assuming the reference frame of someone on the surface of the Earth.”
A folk understanding of morality might take it to be defined as:
what everyone in their hearts knows is right
what will have the best outcomes for me personally in the long run
what will have the best outcomes for the people I care about
what God says to do
what makes me feel good to do after I’ve done it
what other people will approve of me having done
And then it turns out that there just isn’t any course of action, or rule for action, that satisfies all those properties.
My bet is that there just isn't any definition of normativity that satisfies all the intuitive properties we would like. But that doesn't mean that I can't go around meaningfully talking about what's right in various situations, any more than the fact that the magnetic pole isn't exactly on the axis of rotation means that I can't point in a direction if someone asks me which way is north.
After reading through some of the recent discussions on AI progress, I decided to sketch out my current take on where AI is and is going.
Hypotheses:
The core smarts in our brains come from a process that does self-supervised learning on sensory data.
We share these smarts with other animals.
What distinguishes us from other animals is some kludgy stuff on top that enables us to:
1) chain our data prediction intuitions into extended trains of thought via System 2 style reasoning
2) share our thoughts with others via language
(and probably 1 and 2 co-evolved in a way that depended on each other)
I say that the sensory data learning mechanism is the core smarts, rather than the trains-of-thought stuff, because the former is where the bulk of the computation takes place, and the latter is some relatively simple algorithms that channel that computation into useful work.
(Analogous to the kinds of algorithms Ought and others are building to coax GPT-3 into doing useful work. See the sketch at the end of this list.)
Modern ML systems are doing basically the same thing as the predictive processing / System 1 / core smarts in our brains.
The details are different, but if you zoom out a bit, it’s basically the same algorithm. And the natural and artificial systems are able to successfully model, compress, and predict data for basically the same reasons.
AI systems will get closer to being able to match the full range of abilities of humans (and then exceed them) due to progress both on:
1) improved intuition / data compression and prediction that comes from training bigger ML models for longer on more data, and
2) better algorithms for directing those smarts into useful work.
This means that basically no new fundamental insights are needed to get to AGI / TAI. It’ll just be a bunch of iterative work to scale ML models, and productively direct their outputs.
So the path from here looks pretty continuous, though there could be some jumpy parts, especially if some people are unusually clever (or brash) with the better algorithms (for making use of model outputs) part.
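To gesture at how simple the "directing" algorithms might be, here's a hypothetical sketch (ask_model is a stand-in for a call to a large self-supervised model, not any real API):

```python
def ask_model(prompt: str) -> str:
    """Stand-in for one call to a big self-supervised predictor."""
    raise NotImplementedError

def answer(question: str) -> str:
    # Chain one-shot "intuitions" into an extended train of thought:
    # decompose, answer the pieces, recombine. The smarts live inside
    # ask_model; this wrapper is almost trivially simple.
    steps = ask_model(f"List the subquestions needed to answer: {question}").splitlines()
    notes = [ask_model(f"Briefly answer: {s}") for s in steps]
    return ask_model(f"Question: {question}\nNotes: {notes}\nFinal answer:")
```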
I’m curious if others agree with these claims. And if not, which parts seem most wrong?
In Simulacra and Subjectivity, the part that reads “while you cannot acquire a physician’s privileges and social role simply by providing clear evidence of your ability to heal others” was, in an early draft, “physicians are actually nothing but a social class with specific privileges, social roles, and barriers to entry.” These are expressions of the same thought, but the draft version is a direct, simple theoretical assertion, while the published version merely provides evidence for the assertion. I had to be coy on purpose in order to distract the reader from a potential fight.
I want to quibble with this a little bit (and maybe this is that fight you were trying to avoid), but to me the draft version doesn’t seem so direct and simple.
In a sense it’s simple, but if I just read that statement in isolation, it’s less clear to me as a reader what you mean by it. Maybe largely because I’m not sure what you mean by the “nothing but”. If you took out the “nothing but”, I would agree that it’s a clear and direct (and true!) statement. But with the “nothing but” it seems obviously false on many interpretations, so I’m not quite sure how to make sense of it.
In contrast, the “while you cannot acquire...” version seems much clearer to me about what it’s claiming and complaining about.
Just bumped up my monthly Patreon pledge from $50 to $100.
This is my 1000th LessWrong comment. Hooray!
1. Basing expected returns on the US market is an egregious case of selection bias.
FYI they redo the analysis for the FTSE and the Nikkei and they come to the same conclusion. Also, the theoretical analysis comes out the same even if returns are lower in the future than they have been in the past.
Lower expected return does mean putting a lower share into the risky asset, but expected returns would have to go very low indeed (w/o a corresponding drop in expected volatility) for the analysis not to suggest that those just starting out should use leverage. (2x leverage is way undershooting the target that the math suggests, but they suggest maxing out at 2x leverage for various practical reasons. If expected returns were a bit lower, then 2x would probably still be below the theoretical target for people at the beginning of their careers.)
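To put rough numbers on "the target that the math suggests", here's a sketch of the Merton-style calculation the book builds on (all parameters are my own illustrative picks, not the book's):

```python
def merton_share(mu, r, sigma, gamma):
    """Optimal risky-asset fraction: (mu - r) / (gamma * sigma**2)."""
    return (mu - r) / (gamma * sigma ** 2)

# Illustrative inputs: 7% expected equity return, 2% risk-free rate,
# 18% volatility, relative risk aversion of 2.
share = merton_share(mu=0.07, r=0.02, sigma=0.18, gamma=2)  # ~0.77

# Lifecycle Investing's move: apply that share to total lifetime wealth
# (savings plus the present value of future income), then express it as
# leverage on current savings alone.
current_savings = 50_000
pv_future_income = 950_000  # made-up figure for someone early-career
target_exposure = share * (current_savings + pv_future_income)
print(target_exposure / current_savings)  # ~15x, far above the 2x cap
```

Lower mu pulls the share down, but for someone whose savings are small relative to future income, the implied leverage on current savings stays well above 2x.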
2. Any mention of the normal distribution...
I am curious about this. It’s my impression that assets tend to become more correlated in a downturn. I’m not sure how much this, or the presence of fat tails, affects things, but their back test on at least three different countries’ data mitigates my concern somewhat.
3. There is a more subtle problem… Books advocating leverage, stocks for the long run, index and forget etc, tend to appear after a run-up in the market
Happily, this was at least not the case here. The book was written in 2008/2009, and published in 2010, just after the financial crisis. And we’re reading this review during the coronavirus pandemic when the S&P is still down 15% from the start of the year.
4. Terrible market returns often coincide with hard times for the portfolio owner, such as unemployment, slumps in the value of other assets and other difficulties.
This is a fair point, which I think was not addressed well enough in the book. But which was addressed well in Jess’s review! (See e.g. his discussion of disability insurance.)
Anna and I personally spent hundreds of hours investigating/thinking about the Brent affair… our writeup about it...
Am I missing something here? The communication I read from CFAR seemed like it was trying to reveal as little as it could get away with...
FWIW, I think you and Adam are talking about two different pieces of communication. I think you are thinking of the communication leading up to the big community-wide discussion that happened in Sept 2018, while Adam is thinking specifically of CFAR’s follow-up communication months after that — in particular this post. (It would have been in between those two times when Adam and Anna did all that thinking that he was talking about.)
This may be a niche interest, but I would personally love to have an Investing tag (or something along those lines). Here are some topics and posts that I’d want to apply this tag to:
Standard investment advice and modifications thereto
Get Rich Slowly, Get Rich Real Slowly, and Debt is an Anti-investment by Jacobian
87,000 Hours or: Thoughts on Home Ownership by romeostevensit
Review of “Lifecycle Investing” by JessRiedel
Risk-aversion and investment (for altruists) by paulfchristiano
Cryptocurrency
Making money with Bitcoin? by Clippy
Look for the Next Tech Gold Rush? by Wei_Dai
A LessWrong Crypto Autopsy by Scott Alexander
The Efficient-market hypothesis
Markets are Anti-Inductive by Eliezer Yudkowsky
This comment by Wei Dai on the Feb 2020 open thread (if comments could have their own tags)
Refactoring EMH – Thoughts following the latest market crash by Alex_Shleizer
Taking a look at the Tag Index, I think Betting would be the closest existing tag, but it’s not a great fit. And what I’m thinking of should probably go under the Individual Optimization section of World Optimization in the hierarchy, rather than under Rationality.
(It might also make sense to create tags for Money or Career that would also fall under Individual Optimization, to catch posts like Maximizing Your Donations via a Job, but I haven’t thought as much about those as tags.)
I remember reciting “beware trivial inconveniences” to myself in my head when I went through the process of figuring out how to buy BTC in December 2010. It was good advice.
but be like, “let me think for myself whether that is correct”.
From my perspective, describing something as “honest reporting of unconsciously biased reasoning” seems much more like an invitation for me to think for myself whether it’s correct than calling it a “lie” or a “scam”.
Calling your opponent’s message a lie and a scam actually gets my defenses up that you’re the one trying to bamboozle me, since you’re using such emotionally charged language.
Maybe others react to these words differently though.
Boomshanked! (aka done)
Excited to see the results.
Also: use paragraphs.
Said is doing something similar, so I see it as a valuable contribution.
I appreciate hearing this counterpoint.
I wish there were a way to get the benefit of Said's pointed questioning w/o readers like me being so frustrated by the style. I suspect that relatively subtle tweaks to the style could make a big difference. But I'm not exactly sure how to get there from here.
For now all I can think of is to note that some users, like Wei Dai, ask lots of pointed and clarifying questions and never provoke in me the same kind of frustration that many of Said’s comments do.
why do people think consciousness has anything to do with moral weight?
Is there anything that it seems to you likely does have to do with moral weight?
I feel pretty confused about these topics, but it’s hard for me to imagine that conscious experience wouldn’t at least be an input into judgments I would endorse about what’s valuable.
Unless I missed it, neither this comment nor the main post explains why you ultimately decided in favor of karma notifications. You’ve listed a bunch of cons—I’m curious what the pros were.
Was it just an attempt to achieve this?
I want new users who show up on the site to feel rewarded when they engage with content
I enjoyed this opportunity to relive being Vassar’d.
Relatedly, I think an often underappreciated trick is just to say the same thing in a couple different ways, so that listeners can triangulate your meaning. Each sentence on its own may be subject to misinterpretation, but often (though not always) the misinterpretations will be different from each other and so “cancel out”, leaving the intended meaning as the one possible remaining interpretation.
It's a pet peeve of mine when people fail to do this. An example I've seen a number of times: two people with different accents / levels of fluency in a language are talking (e.g. an American tourist talking to hotel staff in a foreign country), and one person doesn't understand something the other said. And then the first person repeats what they said using the exact same phrasing. Sometimes even a third or more times, after the listener still doesn't understand.
Okay, sure, sometimes when I don’t understand what someone said I do want an exact repetition because I just didn’t hear a word, and I want to know what word I missed. And at those times it’s annoying if instead they launch into a long-winded re-explanation.
But! In a situation where there's potentially a fluency or understanding-of-accents issue, using the exact same words often doesn't help. Maybe they don't know one of the words you're using. Maybe it's a phrasing or idiom that's natural to you but not to them. Maybe they would know the word you said if you said it in their accent, but the way you say it, it's not registering.
All of these problems are solved if you just try saying it a different way. Just try saying the same thing three different ways, making sure to use different (simple) words for the main ideas each time. Chances are, if they do at least somewhat speak the language you're using, they'll pick up on your meaning from at least one of the phrasings!
Or you could just sit there, uncreatively and ineffectively using the same phrasing again and again, as I so often see...
I think this is actually the part that I most “disagree” with. (I put “disagree” in quotes, because there are forms of these theses that I’m persuaded by. However, I’m not so confident that they’ll be relevant for the kinds of AIs we’ll actually build.)
1. The smart part is not the agent-y part
It seems to me that what’s powerful about modern ML systems is their ability to do data compression / pattern recognition. That’s where the real cognitive power (to borrow Eliezer’s term) comes from. And I think that this is the same as what makes us smart.
GPT-3 does unsupervised learning on text data. Our brains do predictive processing on sensory inputs. My guess (which I’d love to hear arguments against!) is that there’s a true and deep analogy between the two, and that they lead to impressive abilities for fundamentally the same reason.
If so, it seems to me that that’s where all the juice is. That’s where the intelligence comes from. (In the past, I’ve called this the core smarts of our brains.)
On this view, all the agent-y, planful, System 2 stuff that we do is the analogue of prompt programming. It’s a set of not-very-deep, not-especially-complex algorithms meant to cajole the actually smart stuff into doing something useful.
When I try to extrapolate what this means for how AI systems will be built, I imagine a bunch of Drexler-style AI services.
Yes, in some cases people will want to chain services together to form something like an agent, with something like goals. However, the agent part isn’t the smart part. It’s just some simple algorithms on top of a giant pile of pattern recognition and data compression.
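As a sketch of what "simple algorithms on top" might mean, consider (hypothetical code; predict is a stand-in for the giant learned model):

```python
def predict(state, action, goal) -> float:
    """Stand-in for the smart part: a learned model scoring how well
    taking `action` in `state` serves `goal`."""
    raise NotImplementedError

def act(state, goal, candidate_actions):
    # The entire "agent": score each candidate with the predictor and
    # take the argmax. All the cognitive power is inside predict().
    return max(candidate_actions, key=lambda a: predict(state, a, goal))
```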
Why is that relevant? Isn't an algorithmically simple superintelligent agent just as scary as (if not more so than) a complex one? In a sense yes, it would still be very scary. But to me it suggests a different intervention point.
If the agency is not inextricably tied to the intelligence, then maybe a reasonable path forward is to try to wring as much productivity as we can out of the passive, superhuman, quasi-oracular just-dumb-data-predictors. And avoid as much as we can ever creating closed-loop, open-ended, free-rein agents.
Am I just recapitulating the case for Oracle-AI / Tool-AI? Maybe so.
But if agency is not a fundamental part of intelligence, and rather something that can just be added in on top, or not, and if we’re at a loss for how to either align a superintelligent agent with CEV or else make it corrigible, then why not try to avoid creating the agent part of superintelligent agent?
I think that might be easier than many think...
2. The AI does not care about your atoms either
https://intelligence.org/files/AIPosNegFactor.pdf
Suppose we have (something like) an agent, with (something like) a utility function. I think it’s important to keep in mind the domain of the utility function. (I’ll be making basically the same point repeatedly throughout the rest of this comment.)
By default, I don’t expect systems that we build, with agent-like behavior (even superintelligently smart systems!), to care about all the atoms in the future light cone.
Humans (and other animals) care about atoms. We care about (our sensory perceptions of) macroscopic events, forward in time, because we evolved to. But that is not the default domain of an agent’s utility function.
For example, I claim that while AlphaGo could be said to be agent-y, it does not care about atoms. And I think that we could make it fantastically more superhuman at Go, and it would still not care about atoms. Atoms are just not in the domain of its utility function.
In particular, I don’t think it has an incentive to break out into the real world to somehow get itself more compute, so that it can think more about its next move. It’s just not modeling the real world at all. It’s not even trying to rack up a bunch of wins over time. It’s just playing the single platonic game of Go.
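One way to make "atoms are not in the domain of its utility function" concrete is with type signatures (purely hypothetical code, not AlphaGo's actual architecture):

```python
class GoBoard:
    """A 19x19 position: the only kind of thing the system values."""

class WorldState:
    """Arrangements of atoms; never appears in the signature below."""

def value(position: GoBoard) -> float:
    """AlphaGo-style value function, defined over board positions only."""
    ...

# There is no value(world: WorldState). Outcomes like "acquire more
# compute" or "don't get switched off" aren't even expressible as
# inputs, so the system has no way to prefer them.
```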
Giant caveat (that you may already be shouting into your screen): abstractions are leaky.
The ML system is not actually trained to play the platonic game of Go. It’s trained to play the-game-of-Go-as-implemented-on-particular-hardware, or something like minimize-this-loss-function-informed-by-Go-game-results. The difference between the platonic game and the embodied game can lead to clever and unexpected behavior.
However, it seems to me that these kinds of hacks are going to look a lot more like a system short-circuiting than it out-of-nowhere building a model of, and starting to care about, the whole universe.
3. Orthogonality squared
I really liked Eliezer's Arbital article on Epistemic and instrumental efficiency. I think it very succinctly captures what would be so scary about being up against a (sufficiently) superintelligent agent whose goals conflict with yours. If you think you see a flaw in its plan, that says more about your seeing than it does about its plan. In other words, you're toast.
But as above, I think it’s important to keep in mind what an agent’s goals are actually about.
Just as the utility function of an agent is orthogonal to its intelligence, it seems to me that the domain of its utility function is another dimension of potential orthogonality.
If you’re playing chess against AlphaZero Chess, you’re going to lose. But suppose you’re secretly playing “Who has the most pawns after 10 moves?” I think you’ve got a chance to win! Even though it cares about pawns!
(Of course if you continue playing out the chess game after the 10th move, it'll win at that. But by assumption, that's fine; it's not what you cared about.)
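A toy version of the point, with invented field names and numbers:

```python
def engine_utility(game) -> float:
    """The engine's utility: only the final result is in its domain."""
    return 1.0 if game["winner"] == "engine" else 0.0

def my_utility(game) -> float:
    """My secret utility: only the pawn count after move 10 matters."""
    return game["my_pawns_at_move_10"] - game["engine_pawns_at_move_10"]

# One shared trajectory can score a win on both objectives at once:
game = {"winner": "engine",
        "my_pawns_at_move_10": 8,
        "engine_pawns_at_move_10": 6}
print(engine_utility(game), my_utility(game))  # 1.0 and +2: we both "win"
```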
If you and another agent have different goals for the same set of objects, you’re going to be in conflict. It’s going to be zero sum. But if the stuff you care about is only tangentially related to the stuff it cares about, then the results can be positive sum. You can both win!
In particular, you can both get what you want without either of you turning the other off. (And if you know that, you don’t have to preemptively try to turn each other off to prevent being turned off either.)
4. Programs, agents, and real-world agents
Agents are a tiny subset of all programs. And agents whose utility functions are defined over the real world are a tiny subset of all agents.
If we think about all the programs we could potentially write that take in inputs and produce outputs, it will make sense to talk about some of those as agents. These are the programs that seem to be optimizing something. Or seem to have goals and make plans.
But, crucially, all that optimization takes place with respect to some environment. And if the input and output of an agent-y program is hooked up to the wrong environment (or hooked up to the right environment in the wrong way), it’ll cease to be agent-y.
For example, if you hook me up to the real world by sticking me in outer space (sans suit), I will cease to be very agent-y. Or, if you hook up the inputs and outputs of AlphaGo to a chess board, it will cease to be formidable (until you retrain it). (In other words, the isAgent() predicate is not a one-place function.)
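A toy illustration of that two-place predicate (all dynamics and thresholds invented):

```python
def policy(error: float) -> float:
    return -error  # a thermostat-like policy: push the error toward zero

def steers(env_step, steps=50) -> bool:
    """Does the policy drive this environment's state toward zero?"""
    state = 10.0
    for _ in range(steps):
        state = env_step(state, policy(state))
    return abs(state) < 1.0

designed_env = lambda s, a: s + 0.5 * a  # actions have their intended effect
inverted_env = lambda s, a: s - 0.5 * a  # same policy, wrong wiring

print(steers(designed_env))  # True: agent-y with respect to this environment
print(steers(inverted_env))  # False: the same program, no longer agent-y
```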
This suggests to me that we could build agent-y, superintelligent systems that are not a threat to us. (Because they are not agent-y with respect to the real world.)
Yes, we’re likely to (drastically) oversample from the subset of agents that are agent-y w.r.t. the real world, because we’re going to want to build systems that are useful to us.
But if I’m right about the short-circuiting argument above, even our agent-y systems won’t have coherent goals defined over events far outside their original domain (e.g. the arrangement of all the atoms in the future light cone) by default.
So even if our systems are agent-y (w.r.t. some environment), and have some knowledge of and take some actions in the real world, they won’t automatically have a utility function defined over the configurations of all atoms.
On the other hand, the more we train them as open-ended agents with wide remit to act in the real world (or a simulation thereof), the more we’ll have a (potentially superintelligently lethal) problem on our hands.
To me that suggests that what we need to care about are things like: how open-ended we make our systems, whether we train them via evolution-like competition between agents in a high-def simulation of the real world, and what kind of systems are incentivized to be developed and deployed, society-wide.
5. Conclusion
If I'm right in the above thinking, then orthogonality is more relevant, and instrumental convergence less relevant, than they might otherwise appear.
Instrumental convergence would only end up being a concern for agents that care about the same objects / resources / domain that you do. If their utility function is just not about those things, IC will drive them to acquire a totally different set of resources that is not in conflict with your resources (e.g. a positional advantage in a chess game, or trading for your knight, while you try to acquire pawns).
This would mean that we need to be very worried about open-ended real-world agents. But less worried about intelligence in general, or even agents in general.
To be clear, I’m not claiming that it’s all roses from here on out. But this reasoning leads me to conclude that the key problems may not be the ones described in the post above.