# P.

Karma: 427
• 26 Nov 2022 14:34 UTC
9 points
3 ∶ 0

Doesn’t minimizing the L1 norm correspond to performing MLE with Laplacian errors?
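For reference, it does: the negative log-likelihood of i.i.d. Laplace errors is a constant plus the sum of absolute residuals divided by the scale, so maximizing the likelihood over the fit is exactly minimizing the L1 loss. A minimal numeric check with NumPy, for the simplest case of a single location parameter (where the Laplace MLE is the sample median, which is also the L1 minimizer):

```python
import numpy as np

# With an odd number of samples, the minimizer of the L1 loss
# sum(|y - theta|) is the middle order statistic, i.e. the sample
# median -- which is exactly the Laplace MLE of the location.
rng = np.random.default_rng(0)
y = rng.laplace(loc=3.0, scale=1.0, size=1001)

# The L1 minimizer must be one of the data points, so search there.
candidates = np.sort(y)
l1_losses = np.abs(y[None, :] - candidates[:, None]).sum(axis=1)
l1_minimizer = candidates[np.argmin(l1_losses)]

assert l1_minimizer == np.median(y)
```

The same algebra goes through with a regression model in place of the constant: the scale parameter factors out, leaving the sum of absolute residuals.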

• 4 Oct 2022 16:51 UTC
2 points
0 ∶ 0

If the optimal norm is below the minimum you can achieve just by re-scaling, you are trading off training-set accuracy for weights with a smaller norm within each layer. It’s not that weird that the best known way of making this trade-off is constrained optimization.
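To illustrate the “minimum you can achieve just by re-scaling”: with ReLU activations, norm can be moved between adjacent layers without changing the network’s function at all, so part of the norm reduction is free before any accuracy trade-off enters. A toy sketch with a random two-layer ReLU net (all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(2, 8))
x = rng.normal(size=4)

def net(a, b):
    # Two-layer ReLU network applied to the fixed input x.
    return b @ np.maximum(a @ x, 0.0)

# ReLU is positively homogeneous: scaling one layer by c > 0 and
# dividing the next by c leaves the function unchanged.
c = 3.0
assert np.allclose(net(W1, W2), net(c * W1, W2 / c))

# The total squared norm is NOT invariant under this re-scaling,
# so there is a best c independent of any accuracy trade-off.
norm_before = (W1**2).sum() + (W2**2).sum()
norm_after = ((c * W1)**2).sum() + ((W2 / c)**2).sum()
assert norm_after != norm_before
```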

# Make-A-Video by Meta AI

29 Sep 2022 17:07 UTC
9 points
(makeavideo.studio)
• 25 Sep 2022 15:49 UTC
4 points
0 ∶ 0
in reply to: Martin Randall’s comment

But the outcome IS uncertain. I want to know how low the karma threshold can go before the website gets nuked. There are other fun games, but this one is unique to LW and seems like an appropriate way of celebrating Petrov Day.

• 24 Sep 2022 16:50 UTC
9 points
0 ∶ 0

I wish I had a better source, but in this video, a journalist says that a well-equipped high schooler could do it. The information needed seems to be freely available online, but I don’t know enough biology to be able to tell for sure. I think it is unknown whether it would spread to the whole population given a single release, though.

If you want it to happen and can’t do it yourself or pay someone else to do it, the best strategy might be to pay someone to translate the relevant papers into instructions that a regular smart person can follow, and then publish them online. After making sure, to the best of your abilities (e.g. by asking experts the right questions), that it actually is a good idea, that is.

• 14 Sep 2022 13:25 UTC
1 point
0 ∶ 0
AF

The simplest possible acceptable value learning benchmark would look something like this:

• Data is recorded of people playing a video game. They are told to maximize their reward (which can be exactly computed), have no previous experience playing the game, are actually trying to win and are clearly suboptimal (imitation learning would give very bad results).

• The bot is first given all their inputs and outputs, but not their rewards.

• Then it can play the game in place of the humans, but again isn’t given the rewards. Preferably, the score isn’t shown on screen.

• The goal is to maximize the true reward function.

• These rules are precisely described and are known by anyone who wants to test their algorithms.
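The rules above could be pinned down as a small interface, along these lines (purely a sketch; every class and method name here is invented for illustration, not an existing library):

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Demonstration:
    """One recorded human play-through: inputs and outputs only.
    Rewards are deliberately omitted, per the rules above."""
    observations: List[Any]
    actions: List[Any]

class ValueLearningBenchmark:
    """Hypothetical benchmark API sketching the rules above."""

    def __init__(self, demos: List[Demonstration]):
        self.demos = demos        # phase 1: the agent may study these
        self._true_return = 0.0   # phase 2 score, hidden from the agent

    def step(self, action: Any) -> Any:
        """Advance the game; return only the next observation.
        The true reward is accumulated privately for evaluation."""
        obs, reward = self._env_step(action)
        self._true_return += reward
        return obs

    def evaluate(self) -> float:
        """Read by the benchmark maintainers, never by the agent."""
        return self._true_return

    def _env_step(self, action: Any):
        # Stand-in dynamics for illustration only.
        return ("obs", 1.0)
```

The agent sees `demos` and the `step` channel; only the maintainers call `evaluate`.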

None of the environments and datasets you mention are actually like this. Some people do test their IRL algorithms in a way similar to this (the difference being that they learn from another bot), but the details aren’t standardized.

A harder and more realistic version that I have yet to see in any paper would look something like this:

• Data is recorded of people playing a game with a second player. The second player can be a human or a bot, and friendly, neutral or adversarial.

• The I/O of the two players is different, just as different people have different perspectives in real life.

• A very good imitation learner is trained to predict the first player’s output given their input. It comes with the benchmark.

• The bot to be tested (which is different from the previous ones) has the same I/O channels as the second player, but doesn’t see the rewards. It also isn’t given any of the recordings.

• Optionally, it also receives the output of a bad visual object detector meant to detect the part of the environment directly controlled by the human/imitator.

• It plays the game with the human imitator.

• The goal is to maximize the human’s reward function.
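The interaction loop for this harder version might look like the following (again a hypothetical sketch; all names are invented, and the real environments and policies would replace the stand-ins):

```python
from typing import Any, Tuple

class TwoPlayerEnv:
    """Toy stand-in environment with distinct observation
    channels for the two players."""
    def __init__(self):
        self.t = 0
    def observe(self) -> Tuple[Any, Any]:
        # Each player gets its own perspective on the state.
        return ("imitator-view", self.t), ("agent-view", self.t)
    def apply(self, imitator_action: Any, agent_action: Any) -> None:
        self.t += 1

class FrozenImitator:
    """Ships with the benchmark: a fixed model of the human,
    trained beforehand on the recordings."""
    def act(self, obs: Any) -> Any:
        return 0  # stand-in policy

def run_episode(env, imitator, agent, steps: int) -> None:
    """The tested agent sees only its own channel: never the
    imitator's inputs, the recordings, or any reward."""
    for _ in range(steps):
        imit_obs, agent_obs = env.observe()
        env.apply(imitator.act(imit_obs), agent.act(agent_obs))
```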

It’s far from perfect, but if someone could obtain good scores there, it would probably make me much more optimistic about the probability of solving alignment.

• 13 Sep 2022 19:17 UTC
1 point
0 ∶ 0

By pure RL, I mean systems whose output channel is only directly optimized to maximize some value function, even if it might be possible to create other kinds of algorithms capable of getting good scores on the benchmark.

I don’t think that the lack of pretraining is a good thing in itself, but that you are losing a lot when you move from playing video games to completing textual tasks.

If someone is told to get a high score in a video game, we have access to the exact value function they are trying to maximize. So when the AI is either trying to play the game in the human’s place or trying to help them, we can directly evaluate its performance without having to worry about deception. If it learns some proxy values and starts optimizing them to the point of goodharting, it will get a lower score. On most textual tasks that aren’t purely about information manipulation, on the other hand, the AI could be making up plausible-sounding nonsense about the consequences of its actions, and we wouldn’t have any way of knowing.
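A toy numeric illustration of why the exactly computable score matters, assuming nothing beyond random candidate policies: when an agent optimizes a noisy proxy of the true reward, the true score of its chosen policy is still directly measurable, with no way for the agent to talk its way around a bad number.

```python
import numpy as np

rng = np.random.default_rng(0)
true_reward = rng.normal(size=1000)  # 1000 candidate policies
# A proxy that correlates with the true reward but is imperfect.
proxy = true_reward + rng.normal(scale=2.0, size=1000)

picked = np.argmax(proxy)        # hard optimization of the proxy
best = np.argmax(true_reward)    # what we actually wanted

# The gap between these two true scores is goodharting we can
# now measure directly instead of having to take on trust.
assert true_reward[picked] <= true_reward[best]
```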

From the AI’s point of view, being able to see the state of the thing we care about also seems very useful; preferences are about reality, after all. It’s not obvious at all that internet text contains enough information to even learn a model of human values that is useful in the real world. Training it on other sources of information that more closely represent reality, like online videos, might, but that seems closer to my idea than to yours, since it can’t be used to perform language-model-like imitation learning.

Additionally, if by “inability to learn human values” you mean isolating them well enough that they can in principle be optimized for superhuman performance, as opposed to leaving them buried in the world model, I don’t agree that that will happen by default. Right now we don’t have any implementations of proper value learning algorithms, nor do I think that any known theoretical algorithm (like PreDCA) would work even with limitless computing power. If you can show that I’m wrong, that would surprise me a lot, and I think it could change many people’s research directions and the chances they give to alignment being solvable.

• Do you have plans to measure the alignment of pure RL agents, as opposed to repurposed language models? It surprised me a bit when I discovered that there isn’t a standard publicly available value learning benchmark, despite there being data to create one. An agent would be given first- or third-person demonstrations of people trying to maximize their score in a game, and then it would try to do the same, without ever getting to see what the true reward function is. Having something like this would probably be very useful; it would allow us to directly measure goodharting, and, being quantitative, it might help incentivize regular ML researchers to work on alignment. Will you create something like this?

• Do you mean from what already exists or from changing the direction of new research?

• What are your thoughts on having 1-on-1s with the top researchers in similar fields (like maths) instead of regular researchers and with people that are explicitly trying to build AGIs (like John Carmack)?

• 23 Aug 2022 13:26 UTC
12 points
1 ∶ 1

Positive:

People will pay way less for new pretty images than they did before.

Thanks to img2img, people who couldn’t draw well before now finally can: https://www.reddit.com/r/StableDiffusion/comments/wvcyih/definitely_my_favourite_generation_so_far/

Because of this, a lot more art will be produced, and I can’t wait to see it.

Since good drawings are now practically free, we will see them in places where we couldn’t before, like in fanfiction.

Stable Diffusion isn’t quite as good as a talented artist, but since we can request hundreds of variations and pick the best, the quality of art might increase.

Ambiguous or neutral:

It can produce realistic images, and it is easier to use and more powerful than Photoshop, so we will see a lot of misinformation online. But once most people realize how easy it is to fabricate false photographs, it will hopefully lead them to trust what they see online far less than they did before, and closer to the appropriate level.

Anyone will be able to make porn of anyone else. As long as people don’t do anything stupid after seeing the images, this seems inconsequential. As discussed on HN, it might cause people to stop worrying about others seeing them naked, even if the photos are real.

Anyway, both of these will cause a lot of drama, which I at least, perhaps selfishly, consider to be slightly positive.

Negative:

I expect a lot of people will lose their jobs. Most companies will prefer to reduce costs and hire a few non-artists to produce their art, rather than make more art.

New kinds of scams will become possible and some people will keep believing everything they see online.

Unlike DALL-E 2, anyone can access this, so it will be much more popular and will make many people realize how advanced current AI is and how consequential it will be, which will probably lead to more funding.