One addendum I would make: breaks also break your focus, and it takes time to get back into it. So if you can make tasks that fit neatly in between your breaks, you only need to get back up to speed the same way you would have when switching to the new task anyway.
I must admit to a greater degree of ignorance than usual for my comment here, but I have a huge problem with the longtermists [at least from the longtermist paper I read]: their position reeks of begging the question. If we suppose that an immense number of people will live in the future, that the short term is not immensely easier to knowingly influence, that improving the short term does not improve the long term, that there is no medium term worth considering, that influences are percentage-based, and that we care as much about people in the far future [including nonhuman ones] as we do about our loved ones, then and only then does their argument make sense. That said, I'm perfectly fine with pursuing long-term benefit, and think that one of their points should be pursued vigorously: research into how best to influence the future seems worthwhile.
It seems clearly made as a justification for the position they already wanted to take. There's nothing wrong with that, but I think their premises are unlikely. I think it is obvious that the near term is much easier to influence. Assumptions I find highly questionable are that the short term doesn't have significant knock-on effects [I think it clearly does], that we shouldn't consider a distinct medium term, and that we shouldn't care more about those closer to us (in time, space, likeness, and just plain affection). Percentage-based influence is also highly questionable, considering that the requirements to improve tech seem to grow exponentially, and things that are naturally easier to quantify are probably much easier to improve [so things other than tech are hard to improve; AI safety is in a philosophy stage]. They also fail to include a scenario where the number of future people doesn't explode. I also don't believe in a quick AI takeoff if AGI ever happens, and so even if I were one of them, I wouldn't focus so much on AI safety. (I am aware this community was built by people very concerned about AI safety.)
In the linked post, I think they can't tell which intermediate goals to pursue for two reasons. First, they are looking too far into the future, and second, AI simply isn't advanced enough yet to build good hypotheses about how it will really end up working [this is one reason it is philosophy]. The possibility space is immense, and we have few clues where to look. (I do think current approaches are also subtly very wrong, so they are actually looking in the wrong general areas too.)
Also, focusing on AI governance is a bit of a strange way to influence AI safety, and so it is hard to know what effect you will have based on what you do. Influencing the people who influence the laws and norms of the societies AI researchers operate in, when there are hundreds of countries and possibly thousands of cultures, is a highly difficult task. Historically, many such influences have turned out to be malignant regardless of whether the people behind them were beneficent or malevolent. There are other approaches to influence, but they are even less reliable. It seems like a genuinely very tricky problem that may be clarified later once AI is really understood, but not until then. Focusing on understanding how and why AI will do things seems likely to be much more valuable than locking in governance before we understand.
This seems like a complete misunderstanding of my point. I said that noticing where you are confused is important. That’s the stuff you know you don’t know. If you learn related things, it may make it easier to realize you were confused, but it still seems obviously necessary.
People love to throw around the word 'meaningless' when they simply don't understand. This is an attempt to short-circuit your confusion by pretending there is nothing to understand. This is especially true in epistemology.
For someone's life to be meaningful is not mere inclination (or no one would need to search for meaning, and many do). I am sometimes inclined to do things I disapprove of, and if I did them, I would not find them meaningful in this sense. Rather than meaning being mere reality, meaning is found in what you will do with it, because what it will be is intrinsically valuable.
In this sense, love is meaningful, and electromagnetism is not. A smiley face is also not meaningful. Playing with magnets and making someone you love smile is very meaningful.
It’s hardly a supernatural claim (though many religious folks do like it, and it is easy to pursue within many religious traditions. Said pursuit does not mean they successfully reach such a point though.) The meaning in your life can be completely chosen by you. ‘There is something worth doing, something worth thinking, and something worth living for’ along with ‘I am pursuing something like that’ combine for a very similar meaning as well.
The feeling is what's known as a peak experience (I think that's the term). This is not hedonism (so the wireheading bit is understandable, but completely off topic). Basically, at certain points feeling an immense joy is a part of what people seek, but that isn't why they find meaning to be so important; you have such experiences when you find something that is immensely meaningful to you. It can be anything, but it has to be that important to you. You have the experience because it is meaningful; it isn't meaningful because you experience that. The theory is that you will always seek such things, though perhaps ineffectually, and often subconsciously. Even if you were changed to not feel the joy that goes along with it, you would still seek such things.
You have a strange objection to using the word meaning there. 'My life has meaning' is exactly the same as 'my life is meaningful.' It is very similar to 'the things I do, the thoughts I think, and the life I live are not pointless; I exist for a reason.'
I don't think you would be expected to see the part of you that matches what people seem to get out of religion. It is almost always posited as being nearly completely subconscious in operation, but your description of how you reacted to Michelangelo's David is exactly it. Also, isn't that exactly what an emotionless robot would say?
While I’m not a rationalist by the standards of this community, it does have a useful exhortation about how important it is to notice when you are confused. It is easy to retreat and pretend not to be confused, or put on a show of bravado, but if you do, how will you learn to understand?
Melatonin actually causes a shift much larger than ten to twenty minutes when the timing is right. Melatonin taken in the morning delays the cycle substantially (this can cause a shift of several hours). Melatonin taken after several hours hastens the cycle, also by hours. If this weren't the case, it would be useless as I currently use it. The ten-to-twenty-minute figure is for its use as a sedative, taken twenty minutes before bedtime.
There are, of course, a number of pathways affecting sleep timing, including the uninformatively named System X that just tries to keep track of time by dead reckoning. I believe, perhaps wrongly, that the SCN's sleep-related functions are mostly directed by melatonin; melatonin reduces the firing rates of the parts of the SCN that increase in firing rate in the presence of light (according to Wikipedia). This is the core timing mechanism of how light affects the SCN, isn't it?
Edit: Looking at it again, the relevant part of the SCN article ( https://en.wikipedia.org/wiki/Suprachiasmatic_nucleus )(in the electrophysiology section) does not have direct citations, but I’ll assume it’s correct unless this activity of melatonin is directly disputed.
Edited again: An edit changed the structure of what I was saying, making for a strange sentence I don’t endorse.
It should be noted that this is not true in a large number of other cultures/languages. Because it is the standard greeting here, 'How are you?' is part of a call and response rather than a question, but in places that haven't standardized on it, it is a genuine question.
Honestly, I find it awkward to have questions that aren’t meant to be actually answered too.
I don't know the theory itself, but from your description it seems likely that it is a simple ease-of-thinking thing. 'What should I believe is the likelihood that the result of a coin flip is heads?' isn't any different in meaning from 'estimating the probability of heads from data' or 'how plausible is heads?' as far as our actions go. We have formal ways of doing the middle of the three easily, so it is easier to think of it that way, and we have built up intuitions about coin flips that require it.
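To make the equivalence concrete, here is a minimal Python sketch of the middle phrasing, 'estimating the probability of heads from data' (the fair coin and the sample size are made up for illustration):

```python
import random

def estimate_p_heads(flips):
    """Estimate the probability of heads as the observed frequency of 1s."""
    return sum(flips) / len(flips)

random.seed(0)
# Simulate 10,000 flips of a fair coin (1 = heads, 0 = tails).
flips = [random.randint(0, 1) for _ in range(10_000)]
print(estimate_p_heads(flips))  # close to 0.5 for a fair coin
```

The other two phrasings would act on this same number; the frequency estimate is just the version we have easy formal machinery for.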
Whether or not it is a physical property, it is easier to describe properties of individual things rather than of large combinations of things and actions. If his description of how the evidence should be weighed includes large parts of his theory, it could still be a valuable example.
Yes, I suppose I left out that you can determine that something can’t be computed if you couldn’t do it with a Turing machine. Proofs of impossibility are actually somewhat important.
Practical costs today are shifting sand, but worthwhile to measure for difficult and important tasks, while being useless on a number of others due to the difficulty of determining them. What algorithm, and what should be the reference computer? (Or the reference computer building/buying process: it would be silly to include a massive R&D program, but what about a small one that doesn't take very long and produces a component that vastly improves results?)
Reversible computing is a huge question mark, but so is the lower bound on the minimum energy needed to erase information. [I think the theories haven't really been tested in that area, because we have no idea how to, and we are many orders of magnitude away.] On a side note, I expect based purely on my own intuition that reversible computing will actually end up taking more energy in all practical cases than the minimum reachable efficiency of nonreversible computing.
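For scale, the theoretical bound in question (Landauer's principle, kT ln 2 per erased bit) is a one-line calculation; this sketch just evaluates it at a representative room temperature of 300 K:

```python
import math

BOLTZMANN = 1.380649e-23  # J/K (exact value under the 2019 SI definition)

def landauer_limit(temperature_kelvin):
    """Minimum energy to erase one bit of information: kT * ln(2)."""
    return BOLTZMANN * temperature_kelvin * math.log(2)

# At ~300 K this is roughly 2.9e-21 joules per bit, many orders of
# magnitude below what current hardware dissipates per operation.
print(landauer_limit(300.0))
```

The gap between this number and real hardware is the "many orders of magnitude away" mentioned above.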
Quantum computing does change the cost quite a bit for very specific algorithms, if quantum computers actually manage to scale to large enough sizes for them, but that is still many orders of magnitude away for most problems they are likely superior on. Nonetheless, I think that an algorithm that is improved sufficiently by a quantum computer should be costed using the quantum version as soon as we have good numbers for it.
Honestly, it’s a little strange that light therapy would help and melatonin not (since light therapy shifts circadian rhythms via [probably] lowering your melatonin levels in the morning). It’s good you have your sleep issues under control.
Such a table cannot really be created because it is too large, not just to compute, but even to store in memory if it were somehow given to you. It is not out of the question that computing resources continue to grow enough that it eventually becomes feasible, but we have no idea whether they will, and it would be a long time in the future.
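A quick back-of-the-envelope sketch of why storage alone rules this out (the 8 bytes per entry is an arbitrary illustrative choice):

```python
def table_size_bytes(input_bits, bytes_per_entry=8):
    """Bytes needed for a complete lookup table over every input of a given bit width."""
    return (2 ** input_bits) * bytes_per_entry

# Even a modest 64-bit input space needs about 1.5e20 bytes --
# far more than all the storage humanity has ever manufactured.
print(table_size_bytes(64))  # 147573952589676412928
```

And the input spaces in question here are vastly larger than 64 bits, so the table size grows beyond any physical bound.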
Theoretical Turing machines are very simple, but have infinite resources, and are thus a bad way of determining the difficulty of things.
Calculating at compile time is still obviously computation! If you can, it is usually better to do so, but that is irrelevant to the point. This isn't something that simply takes a long time to calculate and could be recorded if you ran it for a few hours or days when creating the program; you cannot, in fact, calculate it beforehand, because it is computationally infeasible. (In some cases, where the heuristics mentioned earlier work well enough, it can be computed, but that relies on the structure of the problem, and still requires a lot of computation.)
Obviously, we are just talking past each other, so I’ll stop responding here.
Note: I already responded to him directly about his reply to me.
The fact that the specific and general differ is unimportant to my point. You don’t have the answer to start with, and so you have to calculate it. The calculation is what computation is. You can’t just assume that you already know the answer, and claim that makes computing it trivial.
The cities being constant changes nothing in this discussion, since you still had to compute the answer before putting it in the lookup table, and knowing which cities they were is only a precondition, not an answer. Memoization (the technical term for keeping track of the intermediate results of your computation) is useful, but not a panacea.
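As a concrete illustration of memoization (a standard toy example, not the path problem itself), Python's `functools.lru_cache` caches intermediate results so each subproblem is computed exactly once:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Naive recursive Fibonacci, made fast by caching intermediate results."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(100))  # 354224848179261915075 -- instant with the cache
```

Note that the cache only helps because the same subproblems recur; it does not remove the need to compute each of them at least once, which is the point above.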
I can’t say I remember noticing either one of them being listed; perhaps they were glossed over in my remembering things as going the easy way?
I do think that learning to be more accurate through checking implications and checking alternatives is absolutely necessary for high level general intelligence unless you want to include brute force checking the entire possible state of the universe as intelligent. Bootstrapping seems very necessary for getting from where we are now.
Honestly, if it isn’t self-reflective, I view it as an ordinary algorithm.
In truth, I expect many respondents believe that the trajectory for such things is slowing down massively as we speak. [We do appear to have passed an inflection point toward slowing down.] There are good technological reasons for this belief, and we would need extremely advanced technologies we have no current concept of to get even a small fraction of that far. Now, such technologies have always arrived before, but it is quite understandable not to expect that to continue.
Believing extrapolations of trends have the status of law is very dangerous, but very tempting.
An interesting idea. It would definitely need to be explained, but it is very easy to see and understand after that [though it is also the name of a Norse god]. Perhaps it should be tried mainly in the longer form at first, for people to get used to it, replacing the phrase 'if and only if'.
I see a major problem here, which is all the worse for simplicity. It sounds like a censored swear word. The types that would actually use this in the beginning are being technical, and it isn’t very technical to be swearing about if and only if [publicly]. It sounds more appropriate for a rant. The current implementation has problems, but ‘iff’ is clearly superior to the proposed alternative.
You fail to see the issue. There are 100 cities, and an extreme number of paths between them [literally infinite if you include cycles, which you have to convince the algorithm to exclude]. You do not know the lengths of these composite paths between cities, so you have to calculate them.
Theoretically, you need to know the length of every path to be sure you have the shortest one. In practice, we can exclude cycles and use heuristics [such as map distance] to narrow down which paths to compute, but it is still an extremely difficult problem. (It is an equally difficult variant of the traveling salesman problem in computer science.) When I just googled it, a team in Japan was being lauded in 2020 for handling 16 'cities' using a different kind of processor. (I don't know how links work here. https://www.popularmechanics.com/science/a30655520/scientists-solve-traveling-salesman-problem/ )
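To see the scale of the brute-force version, counting distinct tours is a quick sketch (this assumes the standard symmetric setup, where fixing the start city and halving for direction gives (n-1)!/2 tours):

```python
import math

def tour_count(n_cities):
    """Distinct cycle-free round trips through n cities: (n-1)! / 2."""
    return math.factorial(n_cities - 1) // 2

print(tour_count(10))   # 181440 -- already a lot to check by hand
print(tour_count(100))  # on the order of 10**155; brute force is hopeless
```

The jump from 10 to 100 cities is why heuristics and cycle exclusion are necessary rather than optional.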
If that's what you meant, it is rather unclear in the initial comment. It is, in fact, very important that we do not know what the sequence is. You could see it this way: the computation is to determine which book in the Library of Babel to look at. There is only one correct book [though some are close enough], and we have to find that one [thus, it is a search problem]. How difficult this search is, is actually a well-defined problem; it simply has multiple ways of being done [for instance, by a specialist algorithm, or a general one].
Of course, I do agree that a lookup table can make some problems trivial, but that doesn't work for this sort of thing [and a lookup table of literally everything is basically what the Library of Babel would be]. Pure dumb search doesn't work that well, especially when the table is infinite.
Edit: You can consider finding it randomly the upper bound on computational difficulty, but the lower bound requires an actual algorithm [or at least a good description of the kind of thing it is], not just the fact that there is an algorithm. The Library of Babel proves very little in this regard. (Note: I had to edit my edit due to writing something incorrect.)