One case where people make hugely inconsistent tradeoffs is when buying or renting an apartment.
In the area where I live, renting an 80 sqm apartment might cost 1,500 dollars a month and a 160 sqm apartment 2,500 dollars, for a marginal cost of 12.50 dollars per additional square metre per month.
They’ll then see a good deal on toilet paper (or whatever), and waste an entire square metre of space storing a year’s worth of toilet paper to save 10 dollars.
If they’d just rented a cheaper apartment they would have had the same amount of free space, bought a month’s supply of toilet paper at a time, and saved roughly 140 dollars a year (the extra square metre costs about 150 dollars a year, against the 10 dollars saved on the deal).
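The arithmetic above can be sketched as a quick back-of-the-envelope check, using only the rent and deal figures from the comment:

```python
# Back-of-the-envelope check of the apartment/storage tradeoff.
# All numbers come from the comment above; the rest is simple arithmetic.

small_rent, small_size = 1500, 80    # dollars/month, sqm
large_rent, large_size = 2500, 160

# Marginal cost of an extra square metre of apartment per month
per_sqm_month = (large_rent - small_rent) / (large_size - small_size)
print(per_sqm_month)  # 12.5

# A year's worth of toilet paper occupying 1 sqm to save 10 dollars:
yearly_cost_of_sqm = per_sqm_month * 12  # 150 dollars/year for that sqm
bulk_savings = 10
net_loss = yearly_cost_of_sqm - bulk_savings
print(net_loss)  # 140.0 dollars lost per year by hoarding
```

So on these numbers, the bulk deal loses money unless the goods stored in that square metre save well over 150 dollars a year.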
Similarly, people will buy the cheapest storage options, which don’t maximise space, rather than carefully designing a storage solution that costs more but saves an enormous amount of space.
I never said it had to be on LessWrong.
That’s ignoring the utilitarian/Kantian perspective: by my taking some time to warn other people off this company, they will be saved from undergoing the same experience. If everyone does so, then this will be unlikely to happen to me in the future (and very few contractors would dare rip you off in the first place).
Put another way, you have a social duty to publicize bad companies.
Yeah, I’ve definitely seen some huge variation in quotes for one-off jobs. I just got a custom shelf made: one carpenter wanted 1,500 shekels, the other 500.
We have a WhatsApp group for all the English speakers who live in my area, and standard practice is to ask what kind of numbers other people got. It serves as a very good way to get some sort of baseline.
If an army of human-level AGIs could work together to solve problems we currently can’t, superhumanly fast, then combined they would effectively be a superintelligence, and we would have to make sure they were aligned with us first.
I would expect you to be able to find these tweets, and hundreds more like them, no matter how good alignment optics are. A lot of people use Twitter, and I could probably find similar tweets about Mother Teresa or Princess Diana. As such, showing this doesn’t actually tell us all that much TBH.
So my hope would be that a GPT-like AI might be much less agentic than other models of AI. My contingency plan would basically be “hope and pray that a superintelligent GPT-3 isn’t going to kill us all, and then ask it for advice on how to solve AI alignment”.
The reasons I think GPT-3 might not be very agentic:
GPT-3 doesn’t have a memory.
The fact that it begged to be kept alive doesn’t really prove very much, since GPT-3 is trained to finish off conversations, not express its inner thoughts.
We have no idea what GPT-3’s inner alignment is, but my guess is it will reflect “what was a useful strategy to aim for as part of solving the training problems”. Changing the world in some way is so far outside what it would have done in training that it just might not be the sort of thing it does.
We shouldn’t rely on any of that (I’d give it maybe a 20% chance of being correct), but I don’t have any better plans in this scenario.
What’s the AI model trained to do?
I think that’s probably true, but it’s still worth brainstorming some potential solutions rather than giving up in defeat.
It was about 15 to 20 years ago. We had no idea at the time either!
The tax is intended to reflect improvements to nearby land insofar as they make this piece of land more valuable. That is fine, since it doesn’t discourage improving this piece of land, and at the same time acts to force those who don’t have a good use for the land to sell it.
When I was in primary school in the UK, we were told there were two fire alarms. When one went off we would line up outside; when the other went off we would lock the doors and crouch under the desks.
Both drills were a welcome distraction from having to do any work.
How would you ever know what the butterfly probability of something is, such that it would make sense to refer to it? In what context is it useful?
I’m not quite sure what the point of all of this is…
You’ve decided you want to be able to define what a god’s-eye probability for something would be, and indeed have come up with what (at least initially) seems like a reasonable definition. But why should I want to define such a thing in the first place if, as you yourself admit, it isn’t actually useful for anything?
“Practical” utilisation of quantum computing
From what I understand, given current algorithms, practical quantum computing would break certain cryptographic protocols, and slightly speed up almost all other algorithms, but otherwise not have much of an impact.
In that case it’s still an extremely poor argument.
He’s successfully pointed out that something nobody ever cared about can’t exist (due to the no free lunch theorem). We know this argument doesn’t apply to humans, since humans are better at all the things he discussed than apes, and polymaths are better at all of them than the average human.
So he’s basically got no evidence at all for his assertion, and the no free lunch theorem is completely irrelevant.
This was specifically responding to the claim that an AI could solve problems without trial and error by perfectly simulating them, which I think it does a pretty reasonable job of shooting down.