Is this the first time that the word “Boltzmann” has been used to describe contemporary/near future ML? If not, how frequently has the word “boltzmann” been used in this way?
Also, I know this question might be a bit of a curve ball, but what pros and cons can you think of for using the word “boltzmann”? (feel free to DM me if there’s anything you’d rather not say publicly, which is definitely the right way to approach it imo). I’m really interested in AI safety communication, which is why I’m asking these slightly off-topic questions.
Is this the first time that the word “Boltzmann” has been used to describe contemporary/near future ML? If not, how frequently has the word “boltzmann” been used in this way?
Not sure; I haven't seen it used this way before, at least.
Also, I know this question might be a bit of a curve ball, but what pros and cons can you think of for using the word “boltzmann”?
Most lesswrong readers have probably encountered the concept of Boltzmann brains and can quickly map some of its properties over to other ideas, but I’d be surprised if “Boltzmann brain” would mean much to the median member of not-lesswrong. Having to explain both sides of the analogy, especially when both sides are complicated and weird, limits the explanatory value.
Worse, that Boltzmann fellow was known for a rather large number of things. If you called something a “Boltzmann distribution” intending this post’s usage, you’d probably get weird looks and a great deal of confusion.
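To make the overload concrete: in machine learning, "Boltzmann distribution" already refers to the standard energy-based distribution p_i ∝ exp(−E_i / T), i.e. a softmax with a temperature. A minimal sketch (the energy values here are arbitrary illustration, not from the post):

```python
import numpy as np

# The existing ML/physics meaning of "Boltzmann distribution":
# probability of a state falls off exponentially with its energy,
# p_i ∝ exp(-E_i / T). This is just softmax with a temperature T.
def boltzmann(energies, T=1.0):
    weights = np.exp(-np.asarray(energies, dtype=float) / T)
    return weights / weights.sum()

# Three states with increasing energy: lower energy gets more mass.
p = boltzmann([0.0, 1.0, 2.0], T=1.0)
```

This is the meaning a physicist or ML researcher will reach for first, which is exactly why reusing "Boltzmann distribution" for this post's sense would cause confusion.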
I also really didn't spend much time searching for the best possible fit; it was the first thing that came to mind that had the properties "conceivable," "extremely impactful if true," and "extremely improbable." There's probably some other analogy you could make with some extra property that would be even tighter.
So… probably fine if you’re talking about ideas that don’t overload existing terminology, and if whoever you’re talking to has a high probability of associating “Boltzmann” with “brain,” but otherwise iffy.
It's probably fine-ish to allocate another reference to the concept, though I personally might suggest expanding it all the way out to "Boltzmann brain mesaoptimizer".
Are you familiar with restricted Boltzmann machines? I think Hinton has described them as the other branch besides backprop that actually works, though I'm not finding the citation for that claim right now. In any case, they're a major thread in machine learning research, and are what machine learning researchers will think of first. That said, Boltzmann brains have a Wikipedia page which does not mention lesswrong; I don't think they're a lesswrong-specific concept in any way.
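For readers who haven't met them: a restricted Boltzmann machine is a two-layer energy-based model with no within-layer connections, trained and sampled via alternating Gibbs steps between the visible and hidden layers. A minimal sketch of one Gibbs step, with random untrained weights as placeholders (sizes and values here are arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny RBM: 6 binary visible units, 3 binary hidden units.
# W, a, b are random/zero placeholders, not trained parameters.
n_visible, n_hidden = 6, 3
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
a = np.zeros(n_visible)  # visible biases
b = np.zeros(n_hidden)   # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    """One alternating Gibbs step: sample h given v, then v' given h."""
    p_h = sigmoid(b + v @ W)                       # P(h_j = 1 | v)
    h = (rng.random(n_hidden) < p_h).astype(float)
    p_v = sigmoid(a + W @ h)                       # P(v_i = 1 | h)
    v_new = (rng.random(n_visible) < p_v).astype(float)
    return v_new, p_v

v0 = rng.integers(0, 2, size=n_visible).astype(float)
v1, p_v = gibbs_step(v0)
```

The "restricted" part is what makes each conditional factorize, so both sampling directions are a single matrix multiply; this is the sense of "Boltzmann" that ML researchers will reach for first.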