That may apply to bonds (I’m not familiar with them), but I don’t think double-entry accounting is used to decide the value of stocks?
The distributivity property is closely related to multiplication being repeated addition. If you break one of the numbers apart into a sum of 1s and then distribute over the sum, you get repeated addition.
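As a concrete worked case (my own example): $3 \times 4 = (1+1+1) \times 4 = 1 \cdot 4 + 1 \cdot 4 + 1 \cdot 4 = 4 + 4 + 4$.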
When economists talk about “capital assets”, they mean things like roads, buildings and machines. When I read through a company’s annual reports, lots of their assets are instead things like stocks and bonds, short-term debt, and other “financial” assets—i.e. claims on other people’s stuff. In theory, for every financial asset, there’s a financial liability somewhere. For every bond asset, there’s some payer for whom that bond is a liability. Across the economy, they all add up to zero. What’s left is the economists’ notion of capital, the nonfinancial assets: the roads, buildings, machines and so forth.
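A toy sketch of that netting (my own illustration, with made-up balance sheets): every financial claim appears once as an asset and once as a liability, so summing across the whole economy cancels them out and leaves only the nonfinancial capital.

```python
# Hypothetical toy economy (illustrative numbers only): each agent's
# balance sheet holds nonfinancial assets plus financial claims, and
# every financial claim appears as +x for the holder and -x for the issuer.
agents = {
    "household": {"machines": 0,   "bond": +100, "deposit": +50},
    "firm":      {"machines": 300, "bond": -100, "loan": -80},
    "bank":      {"machines": 0,   "deposit": -50, "loan": +80},
}

financial = {"bond", "deposit", "loan"}
net_financial = sum(v for sheet in agents.values()
                    for k, v in sheet.items() if k in financial)
nonfinancial = sum(v for sheet in agents.values()
                   for k, v in sheet.items() if k not in financial)

print(net_financial)  # 0: financial assets and liabilities cancel
print(nonfinancial)   # 300: the economists' "capital" that's left over
```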
Can’t stocks be worth a lot due to the profitable positive interaction between different things the company owns and rents, rather than due to their individual value? I’d think companies like Microsoft are to a substantial degree valuable because they’ve hired employees who’ve learned to collaborate to manage the technology sold by Microsoft.
“Probabilities” are a mathematical construct that can be used to represent multiple things, but in Bayesianism the first option is the most common.
Which world gets to be real seems arbitrary.
It’s the one observations come from.
Most possible worlds are lifeless, so we’d have to be really lucky to be alive.
Typically probabilistic models only represent a fragment of the world, and therefore might e.g. implicitly assume that all worlds are lived-in. The real world has life so it’s ok to assume we’re not in a lifeless world.
We have no information about the process that determines which world gets to be real, so how can we decide what the probability mass function p should be?
Often you need some additional properties, e.g. ergodicity or exchangeability, which might be justified by separation of scales, symmetry, and stuff.
P represents your uncertainty over worlds, so there’s no “right” P (except the one that assigns 100% to the real world, in a sense). You just gotta do your best.
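A minimal sketch of what that looks like in practice (my own illustration, with made-up numbers): put a prior P over candidate worlds, then condition on what you actually observe; worlds that couldn’t have produced the observation lose all their mass, however much prior weight they had.

```python
# Candidate "worlds" with a prior and the probability that each world
# would produce the observation "there is life here" (illustrative numbers).
worlds = {
    "lifeless_A": {"prior": 0.45, "p_obs_life": 0.0},
    "lifeless_B": {"prior": 0.45, "p_obs_life": 0.0},
    "lived_in":   {"prior": 0.10, "p_obs_life": 1.0},
}

# Bayes update on the observation: the lifeless worlds drop to zero.
evidence = sum(w["prior"] * w["p_obs_life"] for w in worlds.values())
posterior = {name: w["prior"] * w["p_obs_life"] / evidence
             for name, w in worlds.items()}

print(posterior)  # {'lifeless_A': 0.0, 'lifeless_B': 0.0, 'lived_in': 1.0}
```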
My impression is that health problems reduce height but height also causes health problems (even in the normal range of height, e.g. higher cancer risk). I’d be surprised if height was causally healthy.
Putting it on bread and crackers seems like it dilutes it. Is it still good on its own?
By “gaygp victim”, do you mean that you are gay and AGP? Or...?
Major survey on the HS/TS spectrum and gAyGP
That’s not really possible, though as a superficial approximation you could just keep the weights secret and refuse to run the AI beyond a certain scale. Doing so would make the AI less useful, though, so the people who don’t do that would win in the marketplace.
I’m not sure I understand your question. By AI companies “making copying hard enough”, I assume you mean making AIs not leak secrets from their prompt/training (or other conditioning). It seems true to me that this will raise the relevance of AI in society. Whether this increase is hard-alignment-problem-complete seems to depend on other background assumptions not discussed here.
Non-copyability as a security feature
The neural tangent kernel[1] provides an intuitive story for how neural networks generalize: a gradient update on a datapoint will shift similar (as measured by the hidden activations of the NN) datapoints in a similar way.
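To make that story slightly more concrete, here’s a minimal sketch (my own illustration with a toy one-hidden-layer network, not taken from the footnoted resources): to first order, a gradient step on a datapoint $x'$ moves the network’s output at another point $x$ in proportion to the tangent-kernel similarity $k(x, x') = \langle \nabla_\theta f(x), \nabla_\theta f(x') \rangle$.

```python
import jax
import jax.numpy as jnp

def init_params(key, width=64):
    k1, k2 = jax.random.split(key)
    return {"w1": jax.random.normal(k1, (1, width)) / jnp.sqrt(width),
            "w2": jax.random.normal(k2, (width, 1)) / jnp.sqrt(width)}

def f(params, x):
    h = jnp.tanh(x @ params["w1"])       # hidden activations
    return (h @ params["w2"]).squeeze()  # scalar output

def ntk(params, x, xp):
    # k(x, x') = <grad_theta f(x), grad_theta f(x')>
    gx, gxp = jax.grad(f)(params, x), jax.grad(f)(params, xp)
    return sum(jnp.vdot(gx[k], gxp[k]) for k in gx)

params = init_params(jax.random.PRNGKey(0))
x, xp = jnp.array([[0.5]]), jnp.array([[0.6]])

# One gradient step on xp alone, squared error toward target y = 1.
y, lr = 1.0, 1e-2
grads = jax.grad(lambda p: (f(p, xp) - y) ** 2)(params)
new_params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

# The prediction at the *other* point x shifts roughly by
# -lr * dL/df(xp) * k(x, xp): similar points get moved in similar ways.
actual = f(new_params, x) - f(params, x)
predicted = -lr * 2 * (f(params, xp) - y) * ntk(params, x, xp)
print(actual, predicted)  # approximately equal
```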
The vast majority of LLM capabilities still arise from mimicking human choices in particular circumstances. This gives you a substantial amount of alignment “for free” (since you don’t have to worry that the LLMs will grab excess power when humans don’t), but it also limits you to ~human-level capabilities.
“Gradualism” can mean that fundamentally novel methods only make incremental progress on outcomes, but in most people’s imagination I think it rather means that people will keep the human-mimicking capabilities generator as the source of progress, mainly focusing on scaling it up instead of on deriving capabilities by other means.
- ^
Maybe I should be cautious about invoking this without linking to a comprehensible explanation of what it means, since most resources on it are kind of involved...
Once you focus on “parts” of the brain, you’re restricting consideration to mechanisms that activate at sufficient scale that they need to balloon up in size. I would expect the rarely-activating mechanisms to be much smaller in a physical sense than “parts” of the brain are.
Idk, the shift happened a while ago. Maybe mostly just reflecting on how evolution acts on a holistic scale, making it easy to incorporate “gradients” from events that occur only one or a few times in one’s lifetime, if these events have enough effect on survival/reproduction. Part of a bigger change in priors towards the relevance of long tails associated with my LDSL sequence.
I’ve switched from considering uploading to be obviously possible at sufficient technological advancement to considering it probably intractable. More specifically, I expect the mind to be importantly shaped by a lot of rarely-activating mechanisms, which are intractable to map out. You could probably eventually make a sort of “zombie upload” that ignores those mechanisms, but it would be unable to update to new extreme conditions.
Fixed
It was quite real since I wanted to negotiate about whether there was an interesting/nontrivial material project I could do as a favor for Claude.
Humans have reproductive and hunting instincts. You could call these a bag of heuristics, but they’re heuristics on a different level than an AI’s, and in particular might not be chosen to be transferred to AIs. Furthermore, humans are harder to copy or parallelize, which leads to a different privacy profile compared to AIs.
The trouble with intelligence (whether human, artificial, or evolutionary) is that it’s all about regarding the world as an assembly of the familiar. This makes data/experience a major bottleneck for intelligence.
I’m imagining a case where there’s no intelligence explosion per se, just bags-of-heuristics AIs with gradually increasing competence.
Let’s say Alice buys 100 shares of Microsoft stock for $100 total, i.e. $1 per share. Then Microsoft implements a new management style that makes it much more effective, doubling the stock price. To mark the new price, Bob then buys 1 share of Microsoft stock for $2. Alice’s shares are now worth $200, but the extra $100 doesn’t seem to have come from anyone’s transactions. This $100 would conventionally be considered capital owned by Alice, but the actual substance of this capital is purely based on the new management style of Microsoft, rather than Microsoft’s assets.
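A tiny arithmetic sketch of that point (my own, reusing the numbers above): the $100 is a revaluation of Alice’s existing shares at the new price, while the only money that actually changed hands in the second transaction is Bob’s $2.

```python
alice_shares = 100
old_price = 1.0                  # Alice paid $100 for 100 shares
new_price = 2.0                  # price doubles after the management change
bob_cash_spent = 1 * new_price   # Bob's purchase of a single share

paper_gain = alice_shares * (new_price - old_price)
print(paper_gain)      # 100.0: Alice's gain, created by revaluation
print(bob_cash_spent)  # 2.0: the only cash that moved between people
```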