If you’re going to do something that huge, why not put the cars underground? I suppose it would be more expensive, but adding any extensive tunnel system at all to an existing built-up area seems likely to be prohibitively expensive, tremendously disruptive, and, at least until the other two are fixed, politically impossible. So why not go for the more attractive impossibility?
Why so small? If you’re going to offer wall mounts and charge $1000, why not a TV-sized device that is also actually a television, or at least a full computer monitor? What makes this not want to simply be a Macintosh? I don’t fully ‘get it.’
You don’t necessarily have a TV-sized area of wall available to mount your thermostat control, near where you most often find yourself wanting to change your thermostat setting. Nor do you necessarily want giant obtrusive screens all over the place.
And you don’t often want to have to navigate a huge tree of menus on a general-purpose computer to adjust the music that’s playing.
“Aren’t we going to miss meaning?”
I’ve yet to hear anybody who brings this up explain, comprehensibly, what this “meaning” they’re worried about actually is. Honestly I’m about 95 percent convinced that nobody using the word actually has any real idea what it means to them, and more like 99 percent sure that no two of them agree.
I seem to have gotten a “Why?” on this.
The reason is that checking things yourself is a really, really basic, essential standard of discourse[1]. Errors propagate, and the only way to avoid them propagating is not to propagate them.
If this was created using some standard LLM UI, it would have come with some boilerplate “don’t use this without checking it” warning[2]. But it was used without checking it… with another “don’t use without checking” warning. By whatever logic allows that, the next person should be able to use the material, including quoting or summarizing it, without checking either, so long as they include their own warning. The warnings should be able to keep propagating forever.
… but the real consequences of that are a game of telephone:
An error can get propagated until somebody forgets the warning, or just plain doesn’t feel like including the warning, and then you have false claims of fact circulating with no warning at all. Or the warning deteriorates into “sources claim that”, or “there are rumors that”, or something equally vague that can’t be checked.
Even if the warning doesn’t get lost or removed, tracing back to sources gets harder with each step in the chain.
Many readers will end up remembering whatever they took out of the material, including that it came from a “careful” source (because, hey, they were careful to remind you to check up on them)… but forget that they were told it hadn’t been checked, or underestimate the importance of that.
If multiple people propagate an error, people start seeing it in more than one “independent” source, which really makes them start to think it must be true. It can become “common knowledge”, at least in some circles, and those circles can be surprisingly large.
That pollution of common knowledge is the big problem.
The pollution tends to be even worse because the factoid or quote will often get “simplified”, or “summarized”, or stripped of context, or “punched up” at each step. That mutation is itself exacerbated by people not checking references, because if you check references, at least you’ll often end up mutating the version from a step or two back, instead of building even higher on top of the latest round of errors.
All of this is especially likely to happen when “personalities” or politics are involved. And even more likely to happen when people feel a sense of urgency about “getting this out there as soon as possible”. Everybody in the chain is going to feel that same sense of urgency.
I have seen situations like that created very intentionally in certain political debates (on multiple different topics, all unrelated to anything Less Wrong generally cares about). You get deep chains of references that don’t quite support what they’re claimed to support, spawning “widely known facts” that eventually, if you do the work, turn out to be exaggerations of admitted wild guesses from people who really didn’t have any information at all. People will even intentionally add links to the chain to give others plausible deniability. I don’t think there’s anything intentional here, but there’s a reason that some people do it intentionally. It works. And you can get away with it if the local culture isn’t demanding rigorous care and checking up at every step.
You can also see this sort of thing as an attempt to claim social prestige for a minimal contribution. After all, it would have been possible to just post the link, or post the link and suggest that everybody get their AI to summarize it. But the main issue is that spreading unverified rumors causes widespread epistemic harm.
[1] The standard for the reader should still be “don’t be sure the references support this unless you check them”, which means that when the reader becomes a writer, that reader/writer should not only have checked their own references, but also the references of their references, before publishing anything.

[2] Perhaps excusable, since nobody actually knows how to make the LLM get it right reliably.
I used AI assistance to generate this, which might have introduced errors.
Resulting in a strong downvote and, honestly, outright anger on my part.
Check the original source to make sure it’s accurate before you quote it: https://www.courtlistener.com/docket/69013420/musk-v-altman/ [1]
If other people have to check it before they quote it, why is it OK for you not to check it before you post it?
Fortunately, Nobel Laureate Geoffrey Hinton, Turing Award winner Yoshua Bengio, and many others have provided a piece of the solution. In a policy paper published in Science earlier this year, they recommended “if-then commitments”: commitments to be activated if and when red-line capabilities are found in frontier AI systems.
So race to the brink and hope you can actually stop when you get there?
Once the most powerful nations have signed this treaty, it is in their interest to verify each other’s compliance, and to make sure uncontrollable AI is not built elsewhere, either.
How, exactly?
Non-causal decision theories are not necessary for A.G.I. design.
I’ll call that and raise you “No decision theory of any kind, causal or otherwise, will either play any important explicit role in, or have any important architectural effect over, the actual design of either the first AGI(s), or any subsequent AGI(s) that aren’t specifically intended to make the point that it’s possible to use decision theory”.
Computer security, to prevent powerful third parties from stealing model weights and using them in bad ways.
By far the most important risk isn’t that they’ll steal them. It’s that they will be fully authorized to misuse them. No security measure can prevent that.
Development and interpretation of evals is complicated
Proper elicitation is an unsolved research question
… and yet...
Closing the evals gap is possible
Why are you sure that effective “evals” can exist even in principle?
I think I’m seeing a “we really want this, therefore it must be possible” shift here.
I don’t have much trouble with you working with the US military. I’m more worried about the ties to Peter Thiel.
CAPTCHAs have “adversarial perturbations”? Is that in the sense of “things not visible to humans, but specifically adversarial to deep learning networks”? I thought they just had a bunch of random noise and weird ad hoc patterns thrown over them.
Anyway, CAPTCHAs can’t die soon enough. Although the fact that they persist in the face of multiple commercial services offering to solve 1000 of them for a dollar doesn’t give me much hope...
Using scp to stdout looks weird to me no matter what. Why not
ssh -n host cat /path/to/file | weird-aws-stuff
… but do you really want to copy everything twice? Why not run
weird-aws-stuff
on the remote host itself?
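Assuming weird-aws-stuff (the placeholder above) is actually installed on the remote host and can read the file directly, something like

ssh -n host 'weird-aws-stuff < /path/to/file'

moves the data over the network only once, and none of it has to pass through your machine at all.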
To prevent this, there must be a provision that once signed by all 4 states, the compact can’t be repealed by any state until after the next election.
It’s not obvious that state legislatures have the authority, under their own constitutions, to bind themselves that way. Especially not across their own election cycles.
Thirty million dollars is a lot of money, but there are plenty of smart rich people who don’t mind taking risks. So, once the identity and (apparent) motives of the Trump whale were revealed, why didn’t a handful of them mop up the free EV?
Well, first I think you’re right to say “a handful”. My (limited but nonzero) experience of “sufficiently rich” people who made their money in “normal” ways, as opposed to by speculating on crypto or whatever, is that they’re too busy to invest a lot of time in playing this kind of market personally, especially if they have to pay enough attention to play it intelligently. They’re not very likely to employ anybody else to play for them either. Many or most of them will see the whole thing as basically an arcane, maybe somewhat disreputable game. So the available pool is likely smaller than you might think.
That conjecture is at least to some degree supported by the fact that nobody, or not enough people, stepped in when the whole thing started. Nothing prevented the market from moving so far to begin with. It may not have been as clear what was going on then, but things looked weird enough that you’d expect a fair number of people to decide that crazy money was likely at work, and step in to try to take some of it… if enough such people were actually available.
In any case, whether when the whole thing started, after public understanding was reasonably complete, or anywhere along the way, the way I think you’d like to make your profit on the market being miscalibrated would be to buy in, wait for the correction, and then sell out… before the question resolved and before unrelated new information came in to move the price in some other way.
But it would be hard to do that. All this is happening potentially very close to resolution time, or at least to functional resolution time. The market is obviously thin enough that single traders can move it, new information is coming in all the time, and the already-priced-in old information isn’t very strong and therefore can’t be expected to “hold” the price very solidly. You also have to worry about who may be competing with you to take the same value, and you may be questioning how rational traders in general are[1].
So you can’t be sure you’ll get your correction in time to sell out; you have a really good chance of being stuck holding your position through resolution. If “markets can remain irrational longer than you can remain solvent”, then they can also stay irrational for long enough that trading becomes moot.
If you have to hold through resolution, then you do still believe you have positive expected value, but it’s really uncertain expected value. After all, you believe the underlying question is 50-50, even if one of those 50s would pay you more than the other would lose you. And you have at best a limited chance to hedge. So you need a risk tolerance high enough that, for most people, the bet would fall in the “recreational gambling” range rather than the “uncertain investment” range. The amount of money that any given (sane) person wants to put at risk definitely goes down under that much uncertainty, and probably goes down a lot. So you start to need more than a “handful” of people.
Also, don’t forget the point somebody made the other day about taxes. Unless you’re a regular who plays many different questions in such volume that you expect to offset your winnings with losses, you’re going to stand to “win” a double-digit percentage less than you stand to “lose”, whether on selling off your position or on collecting after resolution. Correcting 60-40 to 50-50 may just plain not be profitable even if you collect.
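To make that concrete with made-up numbers (a 30% tax rate on gains, no loss offset, no fees): buy at 40¢ a share you believe is really worth 50¢. Pre-tax, the expected value is 0.5 × 60¢ − 0.5 × 40¢ = +10¢ per share. After tax, the win nets only 42¢, so the expected value is 0.5 × 42¢ − 0.5 × 40¢ = +1¢, and a slightly higher rate pushes it negative.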
There are probably other sources of friction, too.
[1] I’d bet at least a recreational amount of money that players in betting markets are sharply more pro-Trump politically than, say, the general voting population, and that would be expected to skew their judgement, and therefore the market, unless almost all of them were superhuman or nearly so. And when you’re seeing the market so easily moved away from expert opinion…
Can’t this only be judged in retrospect, and over a decent sample size?
The model that makes you hope for accuracy from the market is that it aggregates the information, including non-public information, available to a large number of people who are doing their best to maximize profits in a reasonable VNM-ish rational way.
In this case, everybody seems pretty sure that the price is where it is because of the actions of a single person who’s dumped in a very large amount of money relative to the float. It seems likely that that person has done this despite having no access to any important non-public information about the actual election. For one thing, they’ve said that they’re dumping all of their liquidity into bets on Trump. Not just all the money they already have allocated to semi-recreational betting, or even all the money they have allocated to speculative long shots in general, but their entire personal liquidity. That suggests a degree of certainty that almost no plausible non-public information could actually justify.
Not only that, but apparently they’ve done it in a way calculated to maximally move the price, which is the opposite of what you’d expect a profit maximizer to want to do given their ongoing buying and their (I think) stated and (definitely at this point) evidenced intention to hold until the market resolves.
If the model that makes you expect accuracy to begin with is known to be violated, it seems reasonable to assume that the market is out of whack.
Sure, it’s possible that the market just happens to be giving an accurate probability for some reason unrelated to how it’s “supposed” to work, but that sort of speculation would take a lot of evidence to establish confidently.
I’m assuming that by “every other prediction source” you mean everything other than prediction/betting markets.
Well, yes. I would expect that if you successfully mess up Polymarket, you have actually messed up “The Betting Market” as a whole. If there’s a large spread between any two specific operators, that really is free money for somebody, especially if that person is already set up to deal on both.
Another way to look at that is that during a relatively long stretch of time when people most needed an advance prediction, the market was out of whack, it got even more out of whack as the event approached, and a couple of days before the final resolution, it partially corrected. The headline question is sitting at 59.7 to 40.5 as I write this, and that’s still way out of line with every other prediction source.
… and the significance of the bets has been to show that prediction markets, at least as presently constituted, aren’t very useful for actually predicting events of any real significance. It’s too easy for whackos to move the market to completely insane prices that don’t reflect any realistic probability assessment. Being rich is definitely not incompatible with being irrational, and being inclined to gamble is probably negatively correlated with being well calibrated.
Yeah, I got that, but it’s a very counterintuitive way to describe probability, especially the negative thing.
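For anyone else who has to stop and translate: as I understand the convention, positive American odds like +150 mean you win $150 on a $100 stake, an implied probability of 100/(100 + 150) = 40%; negative odds like −200 mean you must stake $200 to win $100, an implied probability of 200/(200 + 100) ≈ 67%. The formula flips depending on the sign, which is exactly the part that refuses to stay in my head.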
I’ll be using American odds.
Where do all these bizarre notations come from?
That one seems particularly off the wall.
Stretching your mouth wide is part of the fun!