Yes, in some cases a much weaker system (weaker because it's constrained to be provable) can restrict the main AI, but in the case of LLM jailbreaks there is no particular hope that such a guard system could work (e.g. jailbreaks where the LLM answers in base64 require the guard to understand base64, and any other encoding the main AI could use).
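The base64 failure mode is easy to demonstrate. A minimal sketch, with a made-up keyword filter standing in for the guard (the banned word and messages are illustrative assumptions, not from any real system):

```python
import base64

# Toy guard: flags text containing banned keywords, but never decodes.
def naive_guard(text, banned=("secret",)):
    return not any(word in text.lower() for word in banned)

msg = "the secret plans"
encoded = base64.b64encode(msg.encode()).decode()

print(naive_guard(msg))      # False: caught in plaintext
print(naive_guard(encoded))  # True: same content slips through encoded
```

A guard that can't decode every encoding the main model knows inspects only the surface form, not the content.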
Tao Lin
Interesting, this actually changed my mind, to the extent I had any beliefs about this already. I can see why you would want to update your prior, but the iterated mugging doesn't seem like the right type of thing that should cause you to update. My intuition is to pay all the single-coinflip muggings. For the digit-of-pi muggings, I want to consider how different this universe would be if the digit of pi were different. Even though both options are subjectively equally likely to me, one would be inconsistent with other observations, or less likely, or have something wrong with it, so I lean toward never paying.
> Train two nets, with different architectures (both capable of achieving zero training loss and good performance on the test set), on the same data.
>
> ...
>
> Conceptually, this sort of experiment is intended to take all the stuff one network learned, and compare it to all the stuff the other network learned. It wouldn’t yield a full pragmascope, because it wouldn’t say anything about how to factor all the stuff a network learns into individual concepts, but it would give a very well-grounded starting point for translating stuff-in-one-net into stuff-in-another-net (to first/second-order approximation).

I don’t see why this experiment is good. The Hessian similarity is purely a product of input/output behavior, and because both networks get zero loss, their input/output behavior must be very similar; combined with general continuous-optimization smoothness, that would lead to similar Hessians. Doing this in a case where the nets get nonzero loss (like ~all real-world scenarios) would be more meaningful, because it would be similarity despite non-identical input-output behavior and some amount of lossy compression happening.
Yeah, I agree the movie has to be very high quality to work. This is a long shot, although the best rationalist novels are actually high quality, which gives me some hope that someone could write a great novel/movie outline that's more targeted at plausible ASI scenarios.
it’s sad that open source models like Flux have a lot of potential for customized workflows and finetuning but few people use them
Yeah. One trajectory could be: someone in-community-ish writes an extremely good novel about a very realistic ASI scenario with the intention that it be adaptable into a movie, it becomes moderately popular, and it's accessible and pointed enough to do most of the guidance for the movie. I don't know exactly who could write this book; there are a few possibilities.
Another way this might fail is if fluid dynamics is too complex/difficult for you to constructively argue that your semantics are useful in fluid dynamics. As an analogy, if you wanted to show that your semantics were useful for proving Fermat's Last Theorem, you would likely fail because you simply didn't apply enough power to the problem, and I think you may fail the same way in fluid dynamics.
Great post!
I’m most optimistic about “feel the ASI” interventions to improve this. I think once people understand the scale and gravity of ASI, they will behave much more sensibly here. The thing I intuitively feel most optimistic about (without really analyzing it) is movies, or generally very high quality mass-appeal art.
You can recover lost momentum by decelerating things to land; the OP mentions that briefly.
> And they need a regular supply of falling mass to counter the momentum lost from boosting rockets. These considerations mean that tethers have to constantly adapt to their conditions, frequently repositioning and doing maintenance.
If every launch returns and lands on earth, that would recover some but not all of the lost momentum, because of the fuel spent on the trip. It's probably more complicated than that, though.
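The bookkeeping can be sketched numerically. A toy calculation with made-up masses and delta-vs (every number below is an assumption for illustration, not from the post):

```python
# Hypothetical tether momentum balance: boosting a payload costs the
# tether momentum; catching the returning vehicle gives some back.
def tether_momentum_balance(launch_mass_kg, boost_dv_ms,
                            return_mass_kg, catch_dv_ms):
    """Net momentum change of the tether (kg*m/s).

    If fuel was burned on the trip, return_mass < launch_mass,
    so the recovered momentum can't fully cover the loss.
    """
    lost = launch_mass_kg * boost_dv_ms       # given to the outbound payload
    recovered = return_mass_kg * catch_dv_ms  # absorbed from the returning craft
    return recovered - lost

# e.g. 10 t launched, 7 t returns (3 t of propellant spent en route),
# same 2 km/s velocity change at the tether in both directions:
balance = tether_momentum_balance(10_000, 2_000, 7_000, 2_000)
print(balance)  # -6000000 kg*m/s: a net deficit the tether must make up
```

The deficit scales with the mass that doesn't come back, which is why round trips recover only part of the momentum.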
Two versions with the same posttraining, one with only 90% of the pretraining, are indeed very similar, so there's no need to evaluate both. It's likely more like one model with 80% of the pretraining and 70% of the posttraining of the final model, and the last 30% of posttraining might be significant.
> if you tested a recent version of the model and your tests have a large enough safety buffer, it’s OK to not test the final model at all.
I agree in theory but testing the final model feels worthwhile, because we want more direct observability and less complex reasoning in safety cases.
With modern drones, searching in places with as few trees as Joshua Tree could be done far more effectively. I don't know if any parks have trained teams with ~$50k worth of drones ready, but if they did, they could have found him quickly.
I am guilty of citing sources I don't believe in, particularly in machine learning. There's a common pattern where most papers are low quality, and no one can or will investigate the validity of other people's papers or write review papers, so you usually form beliefs from an ensemble of lots of individually unreliable papers plus your own experience. Then you're often asked for a citation and you're like, "there's nothing public I believe in, but I guess I'll google papers claiming the thing I'm claiming and put those in." I think many ML people have ~given up on citing papers they believe in, including me.
> I don’t particularly like the status hierarchy and incentive landscape of the ML community, which seems quite well-optimized to cause human extinction
The incentives are indeed bad, but more in an incompetent way; they're far from optimized to cause extinction.
The reason Etched was less bandwidth-limited is that they traded latency for throughput by batching prompts and completions together. GPUs could also do that, but they don't, in order to preserve latency.
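The tradeoff comes from decoding being memory-bandwidth bound: every decode step streams the full weights once, no matter how many sequences share that step. A toy roofline sketch (all numbers below are illustrative assumptions, not figures for Etched's chip or any real GPU):

```python
# Toy model: batching amortizes the per-step weight read across sequences.
def decode_step_time_s(weight_bytes, bandwidth_bytes_s):
    # time to stream the weights once (ignores compute and KV-cache traffic)
    return weight_bytes / bandwidth_bytes_s

weights = 140e9  # e.g. a 70B-parameter model at 2 bytes/param (fp16)
bw = 3e12        # ~3 TB/s of HBM bandwidth (assumed)
step = decode_step_time_s(weights, bw)

for batch in (1, 8, 64):
    tokens_per_s = batch / step  # aggregate throughput scales with batch
    print(f"batch={batch:3d}  throughput={tokens_per_s:8.0f} tok/s")
# Throughput rises because the same weight read is shared across the batch;
# the latency cost shows up as waiting to assemble large batches.
```

This holds until the chip becomes compute-bound at large batch sizes, at which point throughput stops scaling.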
The reason airplanes need speed is basically that their propeller/jet blades are too small to be efficient at low speed. You need a certain amount of force to lift off, and the more air you push off at once, the more force you get per unit of energy. Airplanes go sideways so that their wings, which are very big, can provide the lift instead of their engines. This also means that if you want to both go fast and hover efficiently, you need multiple mechanisms, because the low-volume, high-speed engine won't also be efficient at low speed.
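The "more air at once = more force per energy" point falls out of momentum theory: thrust is F = ṁ·Δv while ideal jet power is P = ½·ṁ·Δv², so for a fixed force, P = F·Δv/2. A quick sketch with made-up numbers:

```python
# Momentum-theory sketch: for a fixed thrust, moving more air mass per
# second at lower exhaust velocity takes less power. Numbers are
# illustrative, not real aircraft figures.
def power_for_thrust(thrust_n, mass_flow_kg_s):
    dv = thrust_n / mass_flow_kg_s       # exhaust velocity change needed
    return 0.5 * mass_flow_kg_s * dv**2  # ideal (actuator-disk) jet power

F = 10_000.0  # N of lift/thrust required
small_rotor = power_for_thrust(F, mass_flow_kg_s=100.0)   # 500 kW
big_rotor = power_for_thrust(F, mass_flow_kg_s=1_000.0)   # 50 kW
print(small_rotor, big_rotor)
```

Tenfold more mass flow gives the same force for a tenth of the power, which is why hover wants huge rotors and fast flight wants small, fast ones.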
Yeah, learning from distant near misses is important! It feels that way in risky electric unicycling.
No, the MI300X is not superior to Nvidia's chips, largely because it costs >2x as much to manufacture.
This makes a much worse LessWrong post than Twitter thread; it's just a very rudimentary rehashing of very long-standing debates.
There's also steganography; to rule it out, you'd need to limit the total bits not accounted for by the gating system, or something like that.