I strongly disagree-voted (but upvoted). Even if there is nothing we can do to make AI safer, there is value in delaying AGI by even a few days: good things remain good even if they last a finite time. Of course, if P(AI not controllable) is low enough, the ongoing deaths matter more.
Spot check: the largest amount I’ve seen stated for the Metaverse cost is $36 billion, and the Apollo Program cost around $25 billion. Adjusting for inflation, the Apollo Program was around 5 times more expensive than the Metaverse. Still, I had no idea that the Metaverse was even on a similar order of magnitude!
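The rough arithmetic, assuming a CPI multiplier of about 7 from the early 1970s to today (my assumption, not a sourced figure):

$\$25\text{B} \times 7 \approx \$175\text{B} \approx 5 \times \$36\text{B}$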
Does there exist a paper version of Yudkowsky’s book “Rationality: From AI to Zombies”? I only found a Kindle version but I would like to give it as a present to someone who is more likely to read a dead-tree version.
It would be very interesting to see how well it understands space, for instance by making it draw maps. Perhaps “A map of New York City, with Central Park highlighted”? (I’m not sure if this is specific enough, but I fear that adding too many details will push Dall-E to join together various images.)
The Manhattan Project had benefits potentially in the millions of lives if the counterfactual was broader Nazi domination. So while AI is different in the size of the benefit, the difference is quantitative, not qualitative. I agree it would be interesting to compute QALYs with and without AI, and to do the same for some of the other examples in the list.
I think it can be done in $O(n^\omega)$, where I recall for non-experts’ convenience that $\omega$ is the exponent of matrix multiplication / inverse / PSD testing / etc. (all are identical). Let $M$ be the space of $n\times n$ matrices and let $V\subset M$ be the $d$-dimensional vector space of matrices with zeros in all non-specified entries of the problem. The maximum-determinant completion is the (only?) one whose inverse is in $V$. Consider the map $f\colon X\mapsto X^{-1}$ and its projection $g$ where we zero out all of the other entries (so that $g(X)=0$ exactly when $X^{-1}\in V$). The function $g$ can be evaluated in time $O(n^\omega)$. We wish to solve $g(X)=0$. This should be doable using a Picard or Newton iteration, with a number of steps that depends on the desired precision.
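A toy illustration of the idea (my own example, not from the original discussion): for a symmetric 3×3 matrix with a single unspecified entry, the maximum-determinant completion is the root of g, here found by simple bisection rather than the Picard/Newton iteration suggested above.

```python
import numpy as np

# Symmetric 3x3 pattern; the entry at (0,2)/(2,0) is unspecified.
def X(x):
    return np.array([[2.0, 1.0, x],
                     [1.0, 2.0, 1.0],
                     [x,   1.0, 2.0]])

# g(x) = (X(x)^{-1})[0, 2]; the max-determinant completion solves g(x) = 0.
def g(x):
    return np.linalg.inv(X(x))[0, 2]

# Bisection on [0, 1], where g changes sign for this pattern.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
x_star = 0.5 * (lo + hi)
print(round(x_star, 6))  # 0.5: the known closed form X[0,1]*X[1,2]/X[1,1] for this pattern
```

Here the answer can be checked by hand: the (0,2) cofactor is proportional to 1 − 2x, which vanishes at x = 0.5.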
Would it be useful if I try to spell this out more precisely? Of course, this would not be enough to reach the hoped-for complexity in the small-$d$ case. Side note: the drawback of having posted this question in multiple places at the same time is that the discussion is fragmented. I could move the comment to MathOverflow if you think that is better.
Minor bug. When an Answer is listed in the sidebar of a post, the beginning of the answer is displayed, even if it starts with a spoiler. Hovering above the answer shows the full answer, which again ignores spoiler markup. For instance consider the sidebar of https://www.lesswrong.com/posts/x6AB4i6xLBgTkeHas/framing-practicum-general-factor-2.
Usually, “negative” means “less than 0”, and a comparison is only available for real numbers and not complex numbers, so “negative numbers” means negative real numbers.
That said, ChatGPT is actually correct to use “Normally” in “Normally, when you multiply two negative numbers, you get a positive number.” because taking the product of two negative floating point numbers can give zero if the numbers are too tiny. Concretely in python
-1e-300 * -1e-300
gives an exact zero, and this holds in all programming languages that follow the IEEE 754 standard.
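A minimal sketch contrasting the normal case with the underflow case (the specific values are mine, chosen to sit below the smallest subnormal double, roughly 4.9e-324):

```python
# Normal case: the product of two negative floats is positive.
assert (-2.0) * (-3.0) == 6.0

# Underflow case: the true product 1e-600 is smaller than the smallest
# IEEE 754 subnormal double (~4.9e-324), so it rounds to exactly zero.
assert (-1e-300) * (-1e-300) == 0.0
```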
I guess if your P(doom) is sufficiently high, you could think that moving T(doom) back from 2040 to 2050 is the best you can do?
Of course the costs have to be balanced, but well, I wouldn’t mind living ten more years. I think that is a perfectly valid thing to want for any non-negligible P(doom).
The novel is really great! (I especially liked the depiction of the race dynamics that progressively lead the project lead to cut down on safety.) I’m confused by one of the plot points:
Jerry interacts with Juna (Virtua) before she is supposed to be launched publicly. Is the idea that she was already connected to the outside world in a limited way, such as through the Unlife chat?
It seems to me the word “dialog” may be appropriate: to me it has the connotation of reaching out to people you may not normally interact with.
An option is to just add the month and year, something like “November 2023 AI Timelines”.
I would add to that list the fact that some people would want to help it. (See, e.g., the Bing persistent memory thread where commenters worry about Sydney being oppressed.)
I’m treating the message as a list of 2095 chunks of 64 bits. Let d(i,j) be the Hamming distance between the i-th and j-th chunk. The pairs (i,j) that have low Hamming distance (namely differ by few bits) cluster around straight lines with ratios j/i very close to integer powers of 2/e (I see features at least from (2/e)^-8 to (2/e)^8).
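A minimal sketch of the d(i,j) computation (the function name and the toy data are mine; the real input would be the 2095×64 array of bits parsed from the message):

```python
import numpy as np

def hamming_pairs(chunks):
    # chunks: (N, 64) array of 0/1 bits; returns the N x N matrix of
    # pairwise Hamming distances d(i, j) via broadcasting XOR.
    diff = chunks[:, None, :] ^ chunks[None, :, :]
    return diff.sum(axis=-1)

# Toy data standing in for the 2095 chunks of 64 bits.
rng = np.random.default_rng(0)
chunks = rng.integers(0, 2, size=(8, 64), dtype=np.uint8)
d = hamming_pairs(chunks)
# Sanity checks: symmetric, zero on the diagonal.
assert (d == d.T).all() and (np.diag(d) == 0).all()
```

With the full matrix in hand, one can scan the low-distance pairs (i, j) and look for the clustering of j/i near powers of 2/e described above.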
Yes, heuristic means a method to estimate things without too much effort.
“If I were properly calibrated then [...] correct choice 50% of the time.” points out that if lsusr was correct to be undecided about something, then it should be the case that both options were roughly equally good, so there should be a 50% chance that the first or the second is the best. If that were the case, we could say that he is calibrated, like a measurement device that has been adjusted to give results as close to reality as possible.
“I didn’t lose the signal. I had just recalibrated myself.” means that lsusr has not lost the fear “signal”, but has adjusted the perception of fear to only occur when it is more appropriate (such as jumping off buildings). In that sense lsusr’s fear occurs at the right time; it is better calibrated.
The analogy (in terms of dynamics of the debate) with climate change is not that bad: “great news and we need more” is in fact a talking point of people who prefer not acting against climate change. E.g., they would mention correlations between plant growth and CO2 concentration. That said, it would be weird to call such people climate deniers.
There is a simple intuition for why PSD testing cannot be hard for matrix multiplication or inversion: regardless of how you do it and what matrix you apply it to, it only gives you one bit of information. Getting even just one bit of information about each matrix element of the result requires $n^2$ applications of PSD testing. The only way out would be if one only needed to apply PSD testing to tiny matrices.
Two related questions to get a sense of scale of the social problem. (I’m interested in any precise operationalization, as obviously the questions are underspecified.)
Roughly how many people are pushing the state of the art in AI?
Roughly how many people work on AI alignment?
I think it would be a good idea to ask the question at the ongoing thread on AGI safety questions.
The usual advice for getting a good YES/NO answer is to first ask for the explanation, then the answer. The way you did it, GPT-4 decides YES/NO first, then tries to justify it regardless of whether it was correct.