Are you willing to provide a link to this GitHub repo?
There’s probably more. There should be more—please link in comments, if you know some!
Wouldn’t “outing” potential honeypots be extremely counterproductive? So yeah, if you know some—please keep it to yourself!
Oftentimes downvoting without taking the time to comment and explain one's reasons is reasonable, and I tend to strongly disagree with people who think I owe an incompetent writer an explanation when downvoting. However, just this one time I would ask: can some of the people downvoting this explain why?
It is true that our standard way of mathematically modeling things implies that any coherent set of preferences must behave like a value function. But any mathematical model of the world is necessarily incomplete. A computationally limited agent that cannot fully foresee all consequences of its choices cannot have a coherent set of preferences to begin with. Should we be trying to figure out how to model computational limitations in a way that acknowledges that some form of preserving future choice might be an optimal strategy? Including preserving some future choice on how to extend the computationally limited objective function onto uncertain future situations?
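(For concreteness, the standard result I have in mind is, I believe, the von Neumann–Morgenstern theorem: if an agent's preferences over lotteries are complete, transitive, continuous, and independent, then there exists a utility function $u$ such that

$$A \preceq B \iff \mathbb{E}_A[u] \le \mathbb{E}_B[u].$$

The worry above is precisely that a bounded agent cannot actually satisfy axioms that quantify over all the lotteries it might ever face.)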
This looks to be primarily about imports—that is, primarily taking into account Trump’s new tariffs. I am guessing that Wall Street does not quite believe that Trump actually means it...
It would seem that my predictions of how Trump would approach this were pretty spot on… @MattJ, I am curious: what is your current take on it?
Why would the value to me personally of the existence of happy people be linear in the number of them? Is creating happy person #10,000,001, [almost] identical to the previous 10,000,000, as joyous as when the first of them was created? I think value is necessarily limited. There are always diminishing returns from more of the same...
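(As a toy illustration of the shape I mean, with the functional form chosen arbitrarily: a linear value $V(n) = v \cdot n$ grows without bound in the number $n$ of identical happy people, whereas a bounded value such as

$$V(n) = V_{\max}\left(1 - e^{-n/k}\right)$$

has marginal value $V'(n) = (V_{\max}/k)\,e^{-n/k}$, which shrinks toward zero as $n$ grows.)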
> if you have a program computing a predicate P(x, y) that is only true when y = f(x), and then the program just tries all possible y—is that more like a function, or more like a lookup?
In order to test whether y=f(x), the program must have calculated f(x) and stored it somewhere. How did it calculate f(x)? Did it use a table or calculate it directly?
What I meant is that the program knows how to check the answer, but not how to compute/find one, other than by trying every answer and then checking it. (Think: you have a math equation, no idea how to solve for x, so you are just trying all possible x in a row).
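(A minimal sketch in Python of what I mean; the names and the particular equation are made up for illustration. The point is that the program only knows how to verify a candidate answer, never how to produce one directly:)

```python
def solve_by_search(check, candidates):
    """Brute-force "solver": we can verify any candidate answer,
    but have no idea how to compute the answer directly."""
    for x in candidates:
        if check(x):  # the only capability we actually have
            return x
    return None

# Example: "solve" x**3 - x == 2184 with no algebra at all,
# just by trying all possible x in a row.
print(solve_by_search(lambda x: x**3 - x == 2184, range(10**6)))  # 13
```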
Aligned with current (majority) human values, meaning any social or scientific human progress would be stifled by the AI and humanity would be doomed to stagnate.
Only true when current values are taken naively, because future progress is a part of current human values (otherwise we would not all be agreeing with you that preventing it would be a bad outcome). It is hard to coherently generalize and extrapolate human values so that future progress is included, but not necessarily impossible.
Your timelines do not add up. Individual selection works on smaller time scales than group selection, and once we get to a stage of individual selection acting in any non-trivial way on AGI agents capable of directly affecting the outcomes, we have already lost—I think at this point it is pretty much a given that humanity is doomed on a much shorter time scale than that required for any kind of group selection pressure to potentially save us...
This seems to be making a somewhat arbitrary distinction—specifically between a program that computes f(x) in some sort of direct way, and a program that computes it in some less direct way (you call it a "lookup table", but you seem to actually allow combining that with arbitrary decompression/decoding algorithms). But realistically, this is a spectrum—e.g. if you have a program computing a predicate P(x, y) that is only true when y = f(x), and the program just tries all possible y—is that more like a function, or more like a lookup? What if you first compute some simple function of the input (e.g. x mod N), and then do a lookup?
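(To make the spectrum concrete, a toy Python sketch; the particular f and N are arbitrary choices of mine. All four variants compute the same f on 0..N-1, yet each sits at a different point between "function" and "lookup":)

```python
N = 256

def f_direct(x):
    # compute f "directly"
    return (x * x + 1) % N

# the same f, precomputed as a lookup table over its domain 0..N-1
TABLE = [f_direct(x) for x in range(N)]

def f_lookup(x):
    # pure lookup
    return TABLE[x]

def f_via_predicate(x):
    # only the predicate P(x, y) is used, true iff y == f(x);
    # try all possible y until it holds
    for y in range(N):
        if (y - (x * x + 1)) % N == 0:  # P(x, y)
            return y

def f_reduce_then_lookup(x):
    # first compute a simple function of the input (x mod N), then look up
    return TABLE[x % N]
```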
Yes, and I was attempting to illustrate why this is a bad assumption. Yes, LLMs subject to unrealistic limitations are potentially easier to align, but that does not help, unfortunately.
You ask a superintelligent LLM to design a drug to cure a particular disease. It outputs just a few tokens with the drug formula. How do you use a previous-gen LLM to check whether the drug will have some nasty humanity-killing side effects years down the road?
Edited to add: the point is that even with a few tokens, you might still have a huge inferential distance that nothing with less intelligence (including humanity) could bridge.
Agreed on your second part. A part of Trump's "superpower" is to introduce a lot of confusion around the bounds, and then convince at least his supporters that he is not really stepping over them even where it should have been obvious that he is. So the category "should have been plainly illegal and would have been considered plainly illegal before, but now nobody knows anymore" is likely to be a lot better defined than "still plainly illegal". Moreover, Trump is much more likely to attempt the former than the latter—not because he actually cares about avoiding the latter, but because anything he actually does has a tendency to be reclassified from the latter to the former. Including after the fact—e.g. many of his past actions were moved from the latter category to the former one by the Supreme Court presidential immunity decision...
Yes, potentially less than ASI, and security is definitely an issue. But people breaching the security would hoard their access—there will be periodic high-profile spills (e.g. celebrities engaged in sexual activities, or politicians engaged in something inappropriate, would be obvious targets), but I'd expect most of the time people would have at least an illusion of privacy.
I found Eliezer Yudkowsky’s “blinking stars” story (That Alien Message — https://search.app/uYn3eZxMEi5FWZEw5) persuasive. That story also has a second layer of having the extra smart Earth with better functioning institutions, but at the level of intuition you are going for it is probably unnecessary and would detract from the message. I think imagining a NASA-like organisation dedicated to controlling a remote robot at say 1 cycle of control loop per month (where it is perhaps corresponding to 1/30 of a second for the aliens), showing how totally screwed up the aliens are in this scenario, then flipping it around, should be at least somewhat emotionally persuasive.
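(For a sense of the scale involved, using my assumed numbers above: one control cycle per month against 1/30 of a second of subjective time is a speed ratio of

$$\frac{30 \times 24 \times 3600\ \text{s}}{1/30\ \text{s}} \approx 7.8 \times 10^7,$$

i.e. the aliens would be running roughly eighty million times faster than the team controlling the robot.)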
For the specific example of arguing on a podcast, would you not expect people to already be aware of a substantial subset of the arguments from the other side? And so would it not be entirely expected that there would be zero update on information that is not new, and hence not as much update overall, if only a fraction of the information is actually new?
Hm, not sure about it being broadcast vs consumed by a powerful AI that somebody else has at least partial control over.
Getting to the national math Olympiad requires access to a regional Olympiad first, and then being able to travel. Smart kids from "middle of nowhere" places—exactly the kinds of kids you want to reach—are more likely to participate in the Tournament of the Towns. I wonder whether kids who were eligible for the summer camp but did not make it there are more of your target audience than those who participated in the camp.
P.S. my knowledge of this is primarily based on how things were ~35 years ago, so I could be completely off.
What about trying to use the existing infrastructure in Russia? E.g.:

- Donating to the school libraries of math magnet schools (starting with the "usual suspects": schools 57, 2, and 43 in Moscow, 239 in St Petersburg, etc., and then going down the list)?
- Contacting competition organizers (e.g. for тургор, the Турнир городов / Tournament of the Towns, which tends to have a higher diversity of participants compared to the Olympiad system) and coordinating to use the books as prizes for finalists?
Besides not having to reinvent the wheel, kids might be more open to the ideas if the book comes from a local, more readily trusted party.
I think this is also a burden of proof issue. Somebody who argues I ought to sacrifice my/my children’s future for the benefit of some extremely abstract “greater good” has IMHO an overwhelming burden of proof that they are not masking a mistake in their reasoning. And frankly I do not think the current utilitarian frameworks are precise enough / universally accepted enough to be capable of truly meeting that burden of proof in any real sense.