(I’m aware of most of these games)
I made it pretty clear in the article that it isn’t about purely cooperative games. (Though I wonder if they’d be easier to adapt. Cooperative + complications seems closer to the character of a cohabitive game than competitive + non-zero-sum score goals do...)
Gloomhaven seems to be, and describes itself as, a cooperative game. What competitive elements are you referring to?
The third tier is worth talking about. I think these sorts of games might, if you played them enough, teach the same skills, but I think you’d have to play them for a long time. My expectation is that basically all of them end with a ranking, as you said: first, second, third. The ranking isn’t scored (ie, we aren’t told that being second is half as good as being first), so there’s not much clarity about how much players should value each place, which is one obstacle to learning. Rankings also keep the game zero-sum on net, and zero-sum dynamics between first and second, or between first and the alliance, have the focus of your attention most of the time. The fewer or the more limited mutually beneficial deals are, the less social learning there will be. Zero-sum dynamics need to be discussed in cohabitive games, but the games will support more efficient learning if they’re reduced.
And there really are a lot of people who think that the game that humans are playing in the real world is zero sum, that all real games are zero sum, so I also suspect that these sorts of games might never teach the skill, because to teach the skill you have to show them a way out of that mindset, and all these games do is reinforce it.

competitive [...] not usually permanent alliances are critical to victory: Diplomacy, Twilight Imperium (all of them), Cosmic Encounter
This category is really interesting, because the alliances expire and have to be remade multiple times per game, and I’ve been meaning to play some games from this category. But they’re also a lot more foggy: the agreements are of poor quality, and they invite only limited amounts of foresight and social creativity. In contrast, writing good legislation in the real world seems to require more social creativity than we can currently produce.
Imagining a pivotal act of generating very convincing arguments for, like, voting and parliamentary systems that would turn government into 1) a working democracy 2) that’s capable of solving the problem. Citizens and congress read the arguments, get fired up, and the problem is solved through proper channels.
Yeah.
Well, that’s the usual reason to invoke it; I was more talking about the reason it lands as a believable or interesting explanation.
Notably, Terra Ignota managed to produce a McGuffin by making the Canner Device extremely illegal: even knowledge of its existence is a threat to the world’s information infrastructure. I’d guess that’s the reason, IIRC, that they only made one.
I’m guessing they mean that the performance curve seems to reach much lower loss before it begins to trail off, while MLPs lose momentum much sooner. So even if MLPs are faster per unit of performance at small parameter counts and data, there’s no way they will be at scale, to the extent that it’s almost not worth comparing in terms of compute? (which would be an inherently rough measure anyway because, as I touched on, the relative compute will change as soon as specialized spline hardware starts to be built. Due to specialization for matmul|relu the relative performance comparison today is probably absurdly unfair to any new architecture.)
Theoretically and empirically, KANs possess faster neural scaling laws than MLPs

What do they mean by this? Isn’t that contradicted by this recommendation to use an ordinary architecture if you want fast training:
It seems like they mean faster per parameter, which is an… unclear claim given that each parameter or step, here, appears to represent more computation (there’s no mention of flops) than a parameter/step in a matmul|relu would? Maybe you could buff that out with specialized hardware, but they don’t discuss hardware.
One might worry that KANs are hopelessly expensive, since each MLP’s weight parameter becomes KAN’s spline function. Fortunately, KANs usually allow much smaller computation graphs than MLPs. For example, we show that for PDE solving, a 2-Layer width-10 KAN is 100 times more accurate than a 4-Layer width-100 MLP (10^-7 vs 10^-5 MSE) and 100 times more parameter efficient (10^2 vs 10^4 parameters) [the PDF renders this as “102 vs 104”, which would only be about 1.02 times more parameter efficient, so presumably 10^2 vs 10^4 is what’s meant].

I’m not sure this answers the question. What are the parameters, anyway, are they just single floats? If they’re not, that’s pretty misleading.
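To make the per-parameter vs per-FLOP distinction concrete, here’s a rough back-of-envelope sketch. The grid size, spline order, and cost model are my assumptions, not the paper’s: each KAN edge carries several spline coefficients and costs more multiply-adds to evaluate than an MLP edge’s single multiply-add, so parameter counts alone understate KAN compute.

```python
# Back-of-envelope comparison of parameter count vs multiply-adds for one
# forward pass. Assumptions (mine, not the paper's): each KAN edge holds a
# B-spline with G grid intervals and order k, i.e. about G + k coefficients,
# and evaluating one spline costs on the order of k * (k + 1) multiply-adds.

def mlp_layer(n_in, n_out):
    params = n_in * n_out   # one weight per edge (biases ignored)
    flops = n_in * n_out    # one multiply-add per edge
    return params, flops

def kan_layer(n_in, n_out, G=5, k=3):
    params = n_in * n_out * (G + k)     # spline coefficients per edge
    flops = n_in * n_out * k * (k + 1)  # rough de Boor evaluation cost
    return params, flops

# Roughly the shapes from the quote: 2-layer width-10 KAN, 4-layer width-100 MLP.
kan = [kan_layer(10, 10), kan_layer(10, 1)]
mlp = [mlp_layer(100, 100)] * 3 + [mlp_layer(100, 1)]

for name, layers in [("KAN", kan), ("MLP", mlp)]:
    params, flops = map(sum, zip(*layers))
    print(f"{name}: {params} params, {flops} multiply-adds")
```

Under these toy assumptions the KAN spends about 1.5 multiply-adds per parameter where the MLP spends 1, which is the sense in which “faster per parameter” and “faster per unit compute” can come apart.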
often means “train the model harder and include more CoT/code in its training data” or “finetune the model to use an external reasoning aide”, and not “replace parts of the neural network with human-understandable algorithms”.
The intention of this part of the paragraph wasn’t totally clear, but you seem to be saying this wasn’t great? From what I understand, these actually did all make the model far more interpretable?
Chain of thought is a wonderful thing: it clears a space where the model will just earnestly confess its inner thoughts and plans in a way that isn’t subject to training pressure, and so, in most ways, it can’t learn to be deceptive about it.
This is good! I would recommend it to a friend!
Some feedback.
An individual human can be inhumane, but the aggregate of human values kind of visibly isn’t, and in most ways couldn’t be: human cultures are reliably getting more humane as transparency/reflection and coordination increase over time, but also, inevitably, if you aggregate a bunch of concave values, the result will be a value system that treats all of the subjects of the aggregation pretty decently.
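A toy illustration of the concavity point (the numbers and the sqrt utility are my placeholders): with diminishing returns, the aggregate is maximized by spreading resources over everyone rather than concentrating them, so every subject of the aggregation gets treated decently.

```python
# Toy illustration: aggregating concave (diminishing-returns) utilities.
# Spreading a fixed budget across everyone beats concentrating it on one person.
import math

def total_welfare(allocation):
    return sum(math.sqrt(x) for x in allocation)  # sqrt = concave utility

budget, people = 100.0, 4
equal = [budget / people] * people    # 25 each
concentrated = [97.0, 1.0, 1.0, 1.0]  # nearly everything to one person

print(total_welfare(equal))         # 20.0
print(total_welfare(concentrated))  # ~12.85
```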
A lot of the time, when people accuse us of conflating something, we equate those things because we have an argument that they’re going to turn out to be equivalent.
So emphasizing a difference between these two things could be really misleading, and possibly kinda harmful, given that it could undermine the implementation of the simplest/most arguably correct solutions to alignment (which are just aggregations of humans’ values). This could be a whole conversation, but could we just not define humane values as being necessarily distinct from human values? How about this:

People are sometimes confused by ‘Human values’, as it seems to assume that all humans value the same things, but many humans have values that conflict with the preferences of other humans. When we say ‘Humane values’, we’re defining a value system that does a decent job at balancing and reconciling the preferences of every human (Humans, Every one).
[graph point for “systems programmer with mlp shirt”] would it be funny if there were another point, “systems programmer without mlp shirt”, and it was Pareto-inferior?
“What if System 2 is System 1”. This is a great insight, I think it is, and I think the main reason nerdy types often fail to notice how permeable and continuous the boundary is, is a kind of tragic habitual cognitive autoimmune disease. I have a post brewing about this after I used a repaired relationship with the unconscious bulk to cure my astigmatism (I’m going to let it sit for a year just to confirm that the method actually worked and the myopia really was averted).
Exponential growth is usually not slow, and even if it were slow, it wouldn’t entail that “we’ll get “warning shots” & a chance to fight back”; it only takes a small sustained advantage to utterly win a war (though contemporary humans don’t like to carry wars to completion, the 20th century should have been a clear lesson that such things are within our abilities at current tech levels). Even if progress in capabilities over time continued to be linear, impact as a function of capabilities is not going to be linear; it never has been.
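For concreteness, a toy compounding calculation (the rate is an arbitrary placeholder of mine, not a forecast): even a small sustained per-round advantage snowballs quickly.

```python
# Toy numbers: how a small sustained relative advantage compounds.
advantage_per_round = 1.05  # 5% relative gain per round
ratio = 1.0
for n in range(1, 101):
    ratio *= advantage_per_round
    if n % 20 == 0:
        print(f"after {n} rounds: {ratio:.0f}x")
# after 20 rounds: ~3x; after 100 rounds: ~131x
```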
But overall I think it addresses a certain audience who I know much better than my version of this, which I hastily wrote last year when I was summoned to speak at a conference, would have (and so I never showed it to them; maybe one day I will show them yours).
Uh, I’m saying I think Henry’s is better. Except for the title, maybe.
this one is better
Possibly incidental, but if people were successfully maintaining continuous secure access to their Signal account you wouldn’t even notice, because Signal doesn’t even make an attempt to transfer encrypted data to new sessions.
I don’t think e2e encryption is warranted here for the first iteration. Generally, keypair management is too hard today; everyone I know who used encrypted Element chat has lost their keys lmao (I endorse Element chat, but I don’t endorse making every channel you use encrypted, you will lose your logs!), and keypairs alone are a terrible way of doing secure identity. Keys can be lost or stolen, and though that doesn’t happen every day, the probability is always too high to build anything serious on top of them. I’m waiting for a secure identity system with key rotation and some form of account recovery process (which can be an institutional service or a “social recovery” thing) before building anything important on top of e2e encryption.
Then, users can put in their own private key to see a post
This was probably a typo but just in case: you should never send a private key off your device. The public key is the part that you send.
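A minimal sketch of the right direction of flow, using the Python cryptography package (my example, not from the thread): the private key never leaves the device; only the public key and signatures travel.

```python
# Minimal sketch: the private key is generated and stays on-device;
# only the public key (and signatures) are ever sent anywhere.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives import serialization

private_key = ed25519.Ed25519PrivateKey.generate()  # stays local, never transmitted

# This is the part you publish or send to the server:
public_bytes = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)

# The device proves identity by signing, without revealing the private key:
signature = private_key.sign(b"I request to see this post")

# Anyone holding only the public key can verify:
verifier = ed25519.Ed25519PublicKey.from_public_bytes(public_bytes)
verifier.verify(signature, b"I request to see this post")  # raises InvalidSignature if forged
```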
So I wrote a feature recommendation: https://www.lesswrong.com/posts/55rc6LJcqRmyaEr9T/please-stop-publishing-ideas-insights-research-about-ai?commentId=6fxN9KPeQgxZY235M
On infrastructures for private sharing:
Feature recommendation: Marked Posts (name intentionally bland. Any variant of “private” (ie, secret, sensitive, classified) would attract attention and partially negate the point)
This feature prevents leaks, without sacrificing openness.
A marked post will only be seen by members in good standing. They’ll be able to see the title and abstract in their feed, but before they’re able to read it, they have to click “I declare that I’m going to read this”, and then they’ll leave a read receipt (or a “mark”) visible to the post creator, admins, and other members in good standing. (These would also just serve a useful social function, giving us more mutual knowledge of who knows what, while making it easier to coordinate to make sure every post gets read by people who’d understand it and be able to pass it along to interested parties.)
If a member “reads” an abnormally high number of these posts, the system detects that, and they may have their ability to read more posts frozen. Admins, and members who’ve read many of the same posts, are notified, and you can investigate. If other members find that this person actually is reading this many posts, and that they seem to truly understand the content, they can be given an expanded reading rate. Members in good standing should be happy to help with this: if that person is a leaker, well, that’s serious; if they’re not a leaker, then what you’re doing in the interrogation setting is essentially just getting to know a new entrant to the community who reads and understands a lot, talking about the theory with them, and that is a happy thing to do.
Members in good standing must be endorsed by another member in good standing before they will be able to see Marked posts. The endorsements are also tracked. If someone issues too many endorsements too quickly (or the people downstream of their endorsements are collectively doing so in a short time window), this sends an alert. The exact detection algorithm here is something I have funding to develop, so if you want to do this, tell me and I can expedite that project.
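A minimal sketch of the read-rate detection described above (the thresholds, window, and names are my illustrative placeholders, not the funded design): count each member’s marks in a sliding window, and freeze reads and alert admins past a limit.

```python
# Sketch of the read-rate detector. Thresholds, names, and the sliding-window
# choice are illustrative placeholders, not a spec.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 7 * 24 * 3600  # look at the last week of read receipts
DEFAULT_READ_LIMIT = 20         # marks allowed per window before freezing

class MarkedPostGuard:
    def __init__(self):
        self.marks = defaultdict(deque)  # member id -> timestamps of reads
        self.read_limits = defaultdict(lambda: DEFAULT_READ_LIMIT)
        self.frozen = set()

    def declare_read(self, member, now=None):
        """Record 'I declare that I'm going to read this'; return False if frozen."""
        now = time.time() if now is None else now
        if member in self.frozen:
            return False
        window = self.marks[member]
        window.append(now)
        while window and window[0] < now - WINDOW_SECONDS:
            window.popleft()  # drop receipts that fell out of the window
        if len(window) > self.read_limits[member]:
            self.frozen.add(member)
            self.alert_admins(member, len(window))
            return False
        return True

    def expand_limit(self, member, new_limit):
        """After members vouch that the reading is genuine, raise the rate."""
        self.read_limits[member] = new_limit
        self.frozen.discard(member)

    def alert_admins(self, member, count):
        print(f"ALERT: {member} made {count} read declarations this window; reads frozen")
```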
There never will be an infrastructure for this.
I should be less resolute about this. It would kind of be my job to look for a design that could do it.
One thing we’ve never seen is a system where read receipts are tracked and analyzed on the global level and read permissions are suspended and alerts are sent to admins if an account is doing too many unjustified reads.
This would prevent a small number of spies from extracting a large number of documents.
I suppose we could implement that today.
You think that studying agency and infrabayesianism won’t make small contributions to capabilities? Even just saying “agency” in the context of AI makes capabilities progress.
“So where do I privately share such research?” — good question! There is currently no infrastructure for this.
This is why I currently think you’re completely wrong about this. There never will be an infrastructure for this. Privacy of communities isn’t a solvable problem in general: as soon as your community is large enough to compete with the adversary, it’s large enough and conspicuous enough that the adversary will pay attention to it and send in spies and extract leaks. If you make it compartmented enough to prevent leaks/weed out the spies, it won’t have enough intellectual liveliness to solve the alignment problem.
There is nothing that makes differentially helping capabilities “fine if you’re only differentially helping them a little bit”.
If your acceptable lower limit for basically anything is zero, you won’t be allowed to do anything, really anything. You have to name some quantity of capabilities progress that’s okay to do before you’ll be allowed to talk about AI in a group setting.
It would seem to me that in this world brains would be much more expensive (or impossible) to copy. Which is worth talking about, because there are designs in our own era for very efficient, very dense neural networks that have that same quality: they can be trained, but the weights can’t be accessed.
what does it even mean?
There actually is a meaningful question there: would you enter the experience machine, or do you need it to be real? Do you just want the experience of pleasing others, or do you need those people being pleased out there to actually exist?
There are a lot of people who really think they are, and might truly be, experience-oriented. If given the ability, they would instantly self-modify into a Victory Psychopath Protecting A Dream.
:( that isn’t what cooperation would look like. The gazelles can reject a deal that would lead to their extinction (they have better alternatives) and impose a deal that would benefit both species.
Cooperation isn’t purely submissive compliance.