I volunteer myself as a test subject; dm if interested
So I’m new here and this website is great because it doesn’t have bite-sized oversimplifying propaganda. But isn’t that common everywhere else? Those posts seem very typical for reddit and at least they’re not outright misinformation.
Also I… don’t hate these memes. They strike me as decent quality. Memes aren’t supposed to make you think deeply about things.
Edit: searched Kat Woods here and now feel worse about those posts
There have been a lot of tricks I’ve used over the years, some of which I’m still using now, but many of which require some level of discipline. One requires basically none, has a huge upside (to me), and has been trivial for me to maintain for years: a “newsfeed eradicator” extension. I’ve never had the temptation to turn it off unless it really messes with the functionality of a website.
It basically turns off the “front page” of whatever website you apply it to (e.g. reddit/twitter/youtube/facebook) so that you don’t see anything when you enter the site and have to actually search for whatever you’re interested in. And for youtube, you never see suggestions to the right of or at the end of a video.
I think even the scaling thing doesn’t apply here because they’re not insuring bigger trips: they’re insuring more trips (which makes things strictly better). I’m having some trouble understanding Dennis’ point.
“I don’t know, I recall something called the Kelly criterion which says you shouldn’t scale your willingness to make risky bets proportionally with available capital—that is, you shouldn’t be just as eager to bet your capital away when you have a lot as when you have very little, or you’ll go into the red much faster.”
I think I’m misunderstanding something here. Let’s say you have $W$ dollars and are looking for the optimum number of dollars $x$ to bet on something that causes you to gain $bx$ dollars with probability $p$ and lose $ax$ dollars with probability $q = 1 - p$. The optimum number of dollars you should bet via the Kelly criterion seems to be

$$x^* = \frac{(pb - qa)\,W}{ab}$$

(assuming positive expectation; i.e. the numerator is positive), which does scale linearly with $W$. And this seems fundamental to this post.
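For what it’s worth, here’s a quick numerical sanity check of that closed form (the code, numbers, and variable names are mine):

```python
# Maximize expected log wealth over the stake x directly and compare with
# the closed-form Kelly stake x* = (pb - qa) W / (ab).
import numpy as np

W, p, a, b = 1000.0, 0.6, 1.0, 1.0   # wealth, win prob, loss/gain per dollar staked
q = 1 - p

xs = np.linspace(0.0, 0.999 * W, 100_000)                     # candidate stakes
log_growth = p * np.log(W + b * xs) + q * np.log(W - a * xs)  # E[log wealth]
x_numeric = xs[np.argmax(log_growth)]

x_closed = (p * b - q * a) * W / (a * b)
print(x_numeric, x_closed)  # both ~200.0, i.e. bet 20% of current wealth
```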
(Epistemic status: low and interested in disagreements)
My economic expectations for the next ten years are something like:
- Examples of powerful AI misanswering basic questions continue for a while. For this and other reasons, trust in humans over AI persists in many domains for a long time after ASI is achieved.
- Jobs become scarcer gradually. Humans remain at the helm for a while, but the willingness to replace one’s workers with AI slowly creeps its way up the chain. There is a general belief that Human + AI > AI + extra compute in many roles, and it is difficult to falsify this. Regulations take a long time to cut, causing some jobs to remain far beyond their usefulness. Humans continue to get very offended if they find out they are talking to an AI in business matters.
- Money remains a thing for the next decade and enough people have jobs to avoid a completely alien economy. There is time to slowly transition to UBI and distribution of prosperity, but there is no guarantee this occurs.
Ah, darn. Are there any other events/meetups you know of at Lighthaven during those weeks?
Is this going to continue in 2025? I’ll be visiting Berkeley from Jan 5th to Jan 17th and would like to come visit.
Here’s a little quick take of mine that provides a setting where centaur > AI (maybe). It’s theory of computation, which is close to complexity theory.
That’s incredible.
But how do they profit? They say they don’t profit on Middle Eastern war markets, so they must be profiting elsewhere somehow.
There are also gas fees, which amplify this effect, but this is a very important point. A prediction market price gives rise to a function from interest rates to probability ranges for which a rational investor would not bet on the market if they had a probability in that range. The larger the interest rate or the farther out the market, the bigger the range.
Probably an easy widget to make: something that takes as input the Polymarket price, gas fees, and interest rate and spits out this range of probabilities.
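A minimal sketch of what that widget could compute, modeling gas fees as a flat per-share cost and ignoring everything else (all names and numbers here are my assumptions):

```python
# Given a market price, a flat fee per side, an annual interest rate, and
# time to resolution, return the range of probabilities for which neither
# buying YES nor buying NO beats just holding cash at the risk-free rate.

def no_bet_range(price: float, fee: float, rate: float, years: float):
    # Buying YES costs (price + fee) now and pays 1 at resolution, so it only
    # beats the risk-free rate if p > (price + fee) * (1 + rate) ** years.
    # Symmetrically for NO; between the two thresholds, a rational bettor
    # stays out of the market.
    growth = (1 + rate) ** years
    lower = 1 - (1 - price + fee) * growth  # below this, NO is worth buying
    upper = (price + fee) * growth          # above this, YES is worth buying
    return max(lower, 0.0), min(upper, 1.0)

# Example: a market at 40c with 2c fees, 5% rates, resolving in one year.
print(no_bet_range(0.40, 0.02, 0.05, 1.0))  # roughly (0.35, 0.44)
```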
corank has to be more than 1, not equal to 1. I’m not sure if such a matrix exists; the reason I was able to change its mind by supplying a corank-1 matrix was that its kernel behaved in a way that significantly violated its intuition.
I similarly felt in the past that by the time computers were Pareto-better than me at math, there would already be mass layoffs. I no longer believe this to be the case at all, and have been thinking about how I should orient myself in the future. I was very fortunate to land an offer for an applied-math research job in the next few months, but my plan is to devote a lot more energy to networking + building people skills while I’m there, instead of just hyperfocusing on learning the relevant fields.
o1 (standard, not pro) is still not the best at math reasoning, though. I occasionally give it linear algebra lemmas that I suspect it might be able to help with, but it always makes major errors. Here are some examples:
- I have a finite-dimensional real vector space $V$ equipped with a symmetric bilinear form $B$ which is not necessarily non-degenerate. Let $n$ be the dimension of $V$, let $R$ be the subspace of vectors $v \in V$ with $B(v, \cdot) = 0$, and let $r$ be the dimension of $R$. Let $W_1$ and $W_2$ be $(n+r)$-dimensional real vector spaces that contain $V$ and are equipped with symmetric non-degenerate bilinear forms that extend $B$. Show that there exists an isometry from $W_1$ to $W_2$ that restricts to the identity on $V$. To its credit, it gave me some references that helped me prove this, but its argument was completely bogus.
- Let $V$ be a real finite-dimensional vector space equipped with a symmetric non-degenerate bilinear form $B$ and let $g$ be an isometry of $V$. Prove or disprove that the restriction of $B$ to the fixed-point subspace of $g$ in $V$ is non-degenerate. (Here it sort of had the right idea, but its counterexamples were never right.)
- Does there exist a symmetric irreducible square matrix with diagonal entries $2$ and non-positive integer off-diagonal entries such that the corank is more than $1$? Here it gave a completely wrong proof of “no” and kept gaslighting me into believing that the general idea must work and that it’s a standard result in the field, following from a book that I happened to have actually read. It kept insisting this, no matter how many times I corrected its errors, until I presented it with an example of a corank-1 matrix that made it clear that its idea was unfixable. (A quick corank check for matrices of this shape is sketched below.)

I have a strong suspicion that o3 will be much better than o1, though.
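For concreteness, here’s one way to check the corank of candidate matrices of this shape; the example is just the standard affine Cartan matrix, not anything o1 produced, and the code is mine:

```python
# Corank check for a symmetric irreducible matrix with 2s on the diagonal
# and non-positive integer off-diagonal entries. This particular A is the
# affine Cartan matrix of type A_2^(1), a standard corank-1 example.
import numpy as np

A = np.array([[ 2, -1, -1],
              [-1,  2, -1],
              [-1, -1,  2]])

corank = A.shape[0] - np.linalg.matrix_rank(A)
print(corank)  # 1 -- the kernel is spanned by (1, 1, 1)
```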
My decision to avoid satellite view is a relic from a time of conserving data (and even then it might have been a case of using salt to accelerate cooking time). I wonder if there’s a risk of using it in places where cellular data is spotty, though. I’d imagine that using satellite view would reduce the efficiency with which the application saves local map information that might be important if I make a wrong turn where there’s no data available.
From the original post:
The purpose of insurance is not to help us pay for things that we literally do not have enough money to pay for. It does help in that situation, but the purpose of insurance is much broader than that. What insurance does is help us avoid large drawdowns on our accumulated wealth, in order for our wealth to gather compound interest faster.
Think about that. Even though insurance is an expected loss, it helps us earn more money in the long run. This comes back to the Kelly criterion, which teaches us that the compounding effects on wealth can make it worth paying a little up front to avoid a potential large loss later.
Click the link for a more in-depth explanation.
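To make the quoted claim concrete, here’s a toy check (all numbers invented for illustration): a premium above the expected loss can still give a higher expected log growth rate, which is what drives long-run compounding.

```python
# Compare expected log growth with and without insurance against a rare,
# large fractional loss of wealth. Note the premium (2.5% of wealth) is
# larger than the expected loss (2% chance of losing 90% = 1.8%).
import math

p_loss = 0.02     # chance per period of a disaster
loss_frac = 0.90  # disaster wipes out 90% of wealth
premium = 0.025   # insurance premium as a fraction of wealth

g_uninsured = p_loss * math.log(1 - loss_frac)  # the (1 - p_loss) term is log(1) = 0
g_insured = math.log(1 - premium)

print(g_uninsured, g_insured)  # ~ -0.046 vs ~ -0.025: insured wealth compounds faster
```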
If you are making an argument about how much compute can find an intelligent mind, you have to look at how much compute was used by all of evolution.
Just to make sure I fully understand your argument, is this paraphrase correct?
“Suppose we have the compute theoretically required to simulate the human brain down to an adequate granularity for obtaining its intelligence (which might be at the level of cells instead of, say, the atomic level). Even so, one has to consider the compute required to actually build such a simulation, which could be much larger as the human brain was built by the full universe.”
(My personal view is that the opposite direction is true: it seems with recent evidence that we can Pareto-exceed human intelligence while being very far from the compute required to simulate a brain. An idea I’ve seen floating around here is that natural selection built our brain randomly with a reward function that valued producing offspring, so there is a lot of architecture that is irrelevant to intelligence.)
Spotify first recommended her to me in September 2023, and later that September I came across r/slatestarcodex, which was my first exposure to the rationalist community. That’s kind of funny.
Huh. Vienna Teng was my top artist too, and this is the only other Spotify Wrapped I’ve seen here. Is she popular in these circles?
Even a year ago, I would have bet extremely high odds that data analyst-type jobs would be replaced well before postdocs in math and theoretical physics. It’s wild that the reverse is plausible now.
I’d bet on this sort of strategy working; hard agree that the ends don’t justify the means, and I see that kind of justification for misinformation/propaganda a lot amongst highly political people. (But the above examples are pretty tame.)