Interested in math, Game Theory, etc.
A chance of 50% or so here seems reasonable, with the default being ‘you can’t actually please the whole coalition at once and often there’s still a pandemic and people will blame you for it.’
Have we made more or less progress than you thought we would by now? (Or did you not take that into account?)
29.6% as a Shilling point
Is that an intentional spelling? Or is it [Schelling point]?
a major flare up here
(I didn’t finish reading because this was getting to be like reading twitter, except drier.)
I also think that this question is deeply confused
It is not. Whether or not it is well used, or applied where it is clearly important...
The critique also has a point.
Is everyone better off after understanding of nuclear physics advances? (And is that not progress?)
I bring this one up, because however ‘things have turned out today’, the concern that the world might come to an end...well, it’s not just an:
assumption of zero-sum behaviour
Aside from that:
If progress means that more people get toilets, this is good
all else being equal. If ‘progress’ means fewer people get toilets, this is bad, all else being equal.
Sure, use specific standards.
Job loss and economic upheaval. As technology wrought its “creative destruction” in a capitalist economy, entire professions from blacksmiths to longshoremen became obsolete.
Putting aside issues this quote may have (“a capitalist economy”), maybe this did make it harder for people to buy toilets. ‘Progress’ (whether it should be called that) where some people are better off and some people are worse off can exist. If you decide the question of ‘are things better’ by examining that, and tallying up whether more people are better off than worse off (ignoring magnitude for the moment), then it may no more be assumed that ‘progress is always good’ than that ‘progress is always bad’. And where ‘progress’ results in as many or more people losing, say, toilets, as gaining them, then this is, by your measure, not necessarily better, or is worse, respectively.
I think the concerns this article mentions may be worth addressing, not brushing aside.
Do I currently have to worry about nuclear apocalypse? Maybe not, or not a lot. That at some point, services I use will be disrupted by ransomware? A little bit. Hopefully that doesn’t get a lot of people in hospitals killed.
That being said, I haven’t responded to ransomware by saying “Progress has gone too far! The unification of software and cryptography has doomed us all!”
Here are some difficult questions the new progress movement needs to answer:
Try ‘What is progress?’
This was a pretty good essay.
If you thought the answers in that thread backed you up:
It’s a mixed bag. A lot of near term work is scientific, in that theories are proposed and experiments run to test them, but from what I can tell that work is also incredibly myopic and specific to the details of present day algorithms and whether any of it will generalize to systems further down the road is exceedingly unclear.
A lot of the other work is pre-paradigmatic, as others have mentioned, but that doesn’t make it pseudoscience. Falsifiability is the key to demarcation.
That summarizes a few answers.
I agree, I wouldn’t consider AI alignment to be scientific either. How is it a “problem” though?
OpenAI’s desire for everyone to have AI
I didn’t find the full joke/meme again, but, seriously, OpenAI should be renamed to ClosedAI.
Rohin’s opinion: I first want to note my violent agreement with the notion that a major scary thing is “consequentialist reasoning”, and that high-impact plans require such reasoning, and that we will end up building AI systems that produce high-impact plans.
What major scary thing will be next?
“Newton’s flaming laser sword”?
Testing theories? Before making major plans based on them?
Understanding the world?
A convergent drive or instrumental goal, not to ‘avoid dying’ but to create backups, and other copies running? Eventually running on a variety of stacks or substrates to avoid risks across types (like solar storms and EMPs)? Spreading to other planets so planetary risks aren’t existential risks?
For most games, there’s a guide that explains exactly how to complete your objective perfectly, but to read it would be cheating. Your goal is not to master the game, but to experience the process of mastering the game as laid out by the game’s designers, without outside interference. In the real world, if there’s a guide for a skill you want to learn, you read it.
This doesn’t sound like how people actually use them?
If it’s a puzzle, then sure, figuring it out yourself can be fun. But if you get stuck and want to move on...then don’t you pull out a guide?
(That’s not to say that this is optimal for learning skills or knowledge or other things.)
if there’s a guide for a skill you want to learn, you read it.
You open up a calculus textbook, read the questions, read the answers. How much do you think you’ll get out of it?
How do you stay informed*?
*Of things you care about.
Should recursive fanfics go under this tag?
There’s a link from 1 to 2, and from 2 to 1, but not from 2 to 3.
I think of the difference between these as “solipsism”—AIXI gives its own existence a distinguished role in reality.
Why wouldn’t they be the same? Are you saying AIXI doesn’t ask ‘where did I come from?’
Eliezer thinks that what is inside the black box inexorably kills you when the black box is large enough, like how humans are cognitive daemons of natural selection (the outer optimization process operating on the black box of genes accidentally constructed a (sapient) inner optimization process inside the black box) and this is chicken-and-egg unavoidable whenever the black box is powerful enough to do something like predict complicated human judgments, since in this case the outer optimization was automatically powerful enough to consider and select among multiple hypotheses the size of humans, and the inner process is automatically as powerful as human intelligence.
It might be worth pointing out that evolution seems to be doing something different from the oracle in the Original Post.
building something piece by piece, and testing those pieces (in reality), and then building things from those
Wandering the space, adrift from that connection to reality, without the checking throughout.
Not only that, most of the plans route through “acquire resources in a way that is unfriendly to human values.” Because in the space of all possible plans, while consequentialism doesn’t take that many bits to specify, human values are highly complex and take a lot of bits to specify.
1) It’s easier to build a moon base with money. And*, it’s easier to steal money than earn it.
*This is a hypothetical
2) Even replacing that plan with one that ‘human values’ says works, is tricky. What is an acceptable way to earn money?
Just listing the plans.
One does not enumerate all of possibility.
Okay, but if I imagine a researcher who is thoughtful but a bit too optimistic, what they might counterargue with is: “Sure, but I’ll just inspect the plans for whether they’re unfriendly, and not do those plans.”
And here you swap out ‘a plan’ for ‘plans’.
Me: Okay, so partly you’re pointing out that hardness of the problem isn’t just about getting the AI to do what I want, it’s that doing what I want is actually just really hard. Or rather, the part where alignment is hard is precisely when the thing I’m trying to accomplish is hard. Because then I need a powerful plan, and it’s hard to specify a search for powerful plans that don’t kill everyone.
The fact that this is being used as a metaphor disconnects it from the problem.
Suppose, tomorrow, a ‘cure for cancer’ was created. And the solution was surprisingly simple.
It seems clear that say, ‘beating you at chess’ isn’t that hard to plan. Why would ‘cure cancer’ be so very, very hard?
It seems like the tricky bit about a plan is that...maybe a plan wouldn’t work?
You might have to do experiments, and learn from them, and come up with new ideas...you are not sailing somewhere that is on a map, or doing something that has been done before.
even before the vaccine has time to do [all] its work
Anarchy in the UK! Woo-hoo!
That’s not what anarchy means. To start, it means no monarchy.
Unless you’re already using a backpack or other convenient carrying device and have one that folds up nicely, it means one of your hands is busy and you have another thing to remember all the time. It’s a non-trivial cost, which is why people often get caught without an umbrella. A universal mandatory-umbrella-carrying social norm would be rather expensive and stupid.
You explain how it wouldn’t be costly (to people who can afford backpacks), then insist it would be costly.
He also reports that people are getting rid of their pet hamsters after Hong Kong ordered its hamsters killed. Please don’t do this, especially if you have kids. The Covid-19 risk here is at most minimal.
How does quarantine and treatment work for hamsters?
Hm. Are there any good export options at the moment? (Like, things you use to download stuff from websites to read (or sort) later? Like, say, all the posts by jefftk*?)
*Companion Cubes is not a tag on LW. It will probably never be a tag on LW. But, even if it’s only one post, I can make it a sequence on my computer.
It wasn’t a question. It was a suggested change in that question, the text of the Original Post, as it were, to make it more readable.
(I also edited my comment after noticing another ‘typo’ in the section you quoted.)
By contrast, in the virtue ethics tradition I’m most familiar with, “virtues” are a variety of character traits. Those character traits that tend to help you to succeed at living an excellent human life (or that are themselves ways of living excellently) are virtues; those that interfere with this are vices; any others are just part of life’s rich variety. To consider rationality as a virtue is to consider it as one of the human excellences that individuals can strive to practice characteristically.
As opposed to two extremes of traits, where both ends are bad, and being somewhere in the middle is ideal.
For example, while I was composing this post, I saw a series of tweets from philosopher @AgnesCallard in which she contrasted rationality as a virtue with rationality as a skill.
Ah virtue, those skills which are praiseworthy; oh no! vice! those skills which are banned, condemned and never spoken of!
One problem with trying to restrict yourself to instrumental rationality is that some irrational antipatterns are hard to avoid without epistemic rationality as a back-up. For example, if you are being rational only in order to meet, and only to the extent that you meet certain instrumental goals, you may find that you can efficiently cheat by being less-than-rational in how you evaluate whether those goals have been met.
This sounds odd. If I wish to be rich, then how could I fool myself? Am I not strictly either rich or not rich? How could I pretend I have a billion dollars if I do not?
Occasionally you will see the argument that rationality itself is not a component of a flourishing human life but indeed can interfere with human flourishing.
The steadfast pursuit of truth and reason comes with no guarantee of leading to a better life unless it turns out that the steadfast pursuit of truth and reason is itself part of a better life. In other words: If rationality is not a virtue, it might turn out to be a poor use of your time.
These are different claims:
rationality in general is bad. (Drink not more poison than is necessary...when you are drunk enough, stop, lest you lose your eyes.)
rationality in general is neutral. (‘If it helps you achieve your goals, by all means. But when the pursuit of money would take more of your time, put aside riches’ pursuit and enjoy.’)
I like some jam with toast, some jam with pancakes, some pancakes with butter, some toast with cream cheese—I delight in variety, but hate monotony! Use rationality to change your life, not freeze it, to grow not wither. Preserve the pictures you wish, but do not turn the world to stone.*
*Don’t destroy things to make them understandable. (This seems like an extreme, but do people go there for ‘rationality’?)
reason is key to the careful discrimination with which we make those choices and is thereby an ingredient in most if not all virtues.
Then what is reason but decision making? No.
With knowledge you might see more options. With reckoning you might assemble out of parts before you, new possibilities. It may not be true that ‘rationality’ is ‘the ability to make a car, or turn a car into a motorcycle so you can make it out of the desert when your car breaks down’. But… something perhaps similar or related to rationality seems relevant here.
That doesn’t exactly contradict the objection that reason need be carried on only so far as it has practical results.
New possibilities: if you enjoy sudoku, then why not play?
Is there no fun in reason, no delight in solving puzzles or learning new things? If so, then turning away from it if it gets you nothing you desire may seem sensible. But if it is a source of joy—if increased literary analysis does not lessen your enjoyment, but makes things more enjoyable, brings you laughter at new humor in new events and complexity, if reason helps you figure out which films you may better enjoy, so you find them and watch them… then why not?
Because it is ‘rational’? Because someone else has called it so? (Ah, if only my favorite games were not logical, alas!)
Also, in spite of Wilkinson’s objection, people seem to be comfortable making at least some confident judgements about human flourishing. We call things like blindness, deafness, aphasia, paralysis, etc. “disabilities,” “handicaps,” “afflictions,” or what-have-you, because we have a common-sense idea of human flourishing that includes things like sight, hearing, language, locomotion, etc.
Yes. Though an article or two has pointed to Beethoven’s compositions as something which contradicts some intuitions one might have around this. I don’t know where to find it now, but one suggested that being hard of hearing leads to better composition, or to appreciating music differently (and how the author used this fact to their benefit/enjoyment).
the experience machine
If someone says “I want X” and you say “do you want the experience machine?”, they may still say “No.” (It may be a flaw in theory, but is it a flaw in practice?)
It’s easier to imagine being incorrect about your flourishing than about whether you are “suffering” or “happy”.
It also seems like it could be a harder problem to fix? Recognizing the difficulty not yet resolved seems like a good thing.
Although virtue ethics scholars love to wring their hands worriedly about objections like these, the core of virtue ethics remains mostly easy to swallow. In short, if you believe ① a human life can be a better or worse one to live, ② some significant part of what determines the quality of a human life is the choices that human makes, ③ the better choices are not wholly arbitrary, but have regularities such that choices of-certain-sorts more reliably characterize better lives, and ④ choices of-certain-sorts can become learned habits through deliberate effort, then you implicitly believe in some sort of
a ‘developing good habits can improve life’ theory
But if you want to attend one of their workshops and get personally-guided, hands-on direction… you may be out of luck.
Post idea: how to teach practical ‘rationality’/stuff online* (or via a book). Or video.
*There’s zoom, there’s Minecraft**...
**Just because you’re learning rationality doesn’t mean you can’t have fun. Or use tools that ‘don’t look serious’ to illustrate things (building the right levels might take a while, but the flexibility to build a very simple computer and show how it works might be useful for some things). Or maybe MTG, not Minecraft, is the way to go.
It is difficult to be textbook-rational in real time, about things whose domains are unclearly bounded, while using squishy hardware. Alas, this describes most of our questions for which rationality would be helpful.
Video games might be...partially bounded I suppose. Some are oriented around figuring out the rules, as well as solving problems (that require learning, trying things out, etc.).
If we were rigorously “scientific” in collecting information for our decisions, it would take us so long to collect the data that the time for action would have passed long before the work had been completed. So how do we know when we have enough?
This seems like the big advantage video games have over static problems—you don’t figure out what to do in time, you die, you start over. There’s more than one solution, but you have to find one quick sometimes.
Of course, this is a terrible password selection strategy. 19 times out of 20, your password will be the very first one the attacker tries, if they know your strategy.
The number of times doesn’t seem like it’d be relevant in practice. The amount of time does.
Things that would work:
1) Even the most basic dictionary attack (which will include the word “password”).
2) A list of simple/common passwords.
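A minimal sketch of why such a list works: an attacker who knows (or guesses) your strategy simply tries the most common passwords first, so a weak choice falls on the first few guesses. The wordlist and target password below are hypothetical examples, not from any real breach data.

```python
# Sketch of a dictionary attack: guess common passwords in order of
# popularity. The wordlist and target here are hypothetical examples.
COMMON_PASSWORDS = ["password", "123456", "qwerty", "letmein", "dragon"]

def dictionary_attack(check, wordlist=COMMON_PASSWORDS):
    """Return (guess, attempts) on success, or (None, attempts) if exhausted."""
    attempts = 0
    for guess in wordlist:
        attempts += 1
        if check(guess):
            return guess, attempts
    return None, attempts

# Someone whose "strategy" was to pick the word "password":
target = "password"
found, tries = dictionary_attack(lambda g: g == target)
# The very first guess succeeds: found == "password", tries == 1.
```

The point is that the cost to the attacker is measured in guesses (and therefore time), and a predictable strategy collapses that cost to nearly nothing.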
What? Pretty sure chess AIs aren’t that complex today. They handle a simple world.
Why not blog anonymously?