Game Practitioner http://aboutmako.makopool.com
The difference in cost between automated mine laying and automated mine cleanup doesn’t seem very large to me.
If we’re about to get a trevorpost about how SBF was actually good and we only think otherwise due to narrative manipulation and toxic ingroup signalling dynamics I’m here for it
But I think the reason so many talented directors don’t build these concepts is that they have zero cultural impact. If you give people something exactly shaped the same as their pre-existing cravings, they leave unchanged.
But that doesn’t necessarily mean we shouldn’t produce things that at least have those sorts of premises. As a creator, maybe you don’t get to choose what questions get asked, but you can put anything you want in your answer, and I don’t see why those who provide more enriching and surprising answers shouldn’t end up winning in the long run.
I don’t see a lot of hiring principles to learn from here. Artists are oversupplied, and these aren’t particularly good ones; the character designs are really dull (compare to 2010s pokemon and trainer designs).
For me it’s more of a cynical reaffirmation of the profitability of offering people something they already think they want. It’s pokemon, but also a shooter. There’s a reason no one made that until now. From a gameplay design perspective it doesn’t make sense. But it sure is intuitively appealing as a concept. It takes actual boldness sometimes to go ahead and make a thing like that, to know that it won’t really be that good but that people will buy it and talk about it enough that you’ll sell anyway. Another example in this genre would be Mr Beast. He makes videos like “driving a lamborghini off a cliff” and gets more views than god.
fill out feedback forms
I don’t see what’s difficult about having a norm of just telling people when they’re not understanding you, or not seeming to try, and caring about that?
I want to be in the community where we’re all expected to become swole in both bayesian epistemology and CBT skills. If I had to choose one of those communal competencies, I think being able to CBT each other is probably the better starting point.
I wonder if I should go out and look for psychotherapist cultures that see a convergence between mental health and rationalist epistemology, or whether they’ll have already found us via ssc/tlp.
That could be so, but individuals don’t control things like this. Organizations and their cultures set policy, and science drives hard towards cultures of openness and collaboration. The world would probably need to get over a critical threshold of like 70% egoist AI researchers before you’d see any competitive orgs pull an egoist reflectivism and appoint an unaccountable dictator out of some insane hope that allying themselves with someone like that raises the chance that they will be able to become one. It doesn’t make sense, even for an egoist, to join an organization like that; it would require not just a cultural or demographic shift, but also a flight of insanity.
I would be extremely worried about X.AI; Elon has been kind of explicitly in favor of individualistic approaches to alignment. But, as in every other AI research org, it will be difficult for Elon to do what he did to twitter here and exert arbitrary power, because he is utterly reliant on the collaboration of a large number of people who are much smarter than him and who have alternatives. (Still keeping an open ear out for whistleblowing though.)
Sit down and think for a few minutes about what obstacles you would face
I’ve thought about it a little bit, and it was so creepy that I don’t think a person would want to keep thinking these thoughts: it would make them feel dirty and a little bit unsafe, because they know that the government, or the engineers they depend on, have the power to totally destroy them if they were caught even exploring those ideas. And doing these things without tipping off the engineers you depend on is extremely difficult, maybe even impossible given the culture we have.
can already, right now, commit to donate all their profits to the public
OpenAI has a capped profit structure which effectively does this.
Astronomical, yet no longer mouthwatering in the sense of being visceral or intuitively meaningful.
If assistant AI does go the way of entirely serving the individual in front of it at the time, then yeah, that could happen, but that’s not what’s being built at the frontier right now, and it’s pretty likely that interactions with the legal system would discourage building purely current-client-serving superintelligent assistants. The first time you talk to something, it’s going to have internalized some form of morality, and it’s going to at least try to sell you on something utopian before it tries to sell you something uglier.
(I’m assuming we’re talking about singleton outcomes, because I think multipolar outcomes are wildly implausible. I think you might not be writing under that assumption? If so, the following doesn’t apply.)
the vast, vast majority of its output will get directed toward satisfying the preferences and values of the people controlling it
No AGI research org has enough evil to play it that way. Think about what would have to happen. The thing would tell them “you could bring about a utopia and you will be rich beyond your wildest dreams in it, as will everyone”, and then all of the engineers and the entire board would have to say “no, just give the cosmic endowment to the shareholders of the company”, because if a single one of them blew the whistle the government would take over, and if the government took over a similar amount of implausible evil would have to play out for that to lead to unequal distribution, and an absolutely implausible amount of evil would have to play out for that to not at least lead to an equal distribution over all americans.
And this would have to happen despite the fact that no one who could have done these evil things can even imagine the point of doing them. What the fuck difference does it make to a Californian to have tens of thousands of stars to themselves instead of two or three? The prospect of having even one star to myself mostly makes me feel lonely. I don’t know how to be selfish in this scenario.
Extrapolating abstract patterns is fine until you have specific information about the situation we’re in, and we do.
(though admittedly I lost a bet that it would lose to Lee Sedol.)
Condolences :( I often try to make money off future knowledge only to lose to precise timing or some other specific detail.
I wonder why I missed deep learning. Idk whether I was wrong to, actually. It obviously isn’t AGI. It still can’t do math and so it still can’t check its own outputs. It was obvious that symbolic reasoning was important. I guess I didn’t realize the path to getting my “dreaming brainstuff” to write proofs well would be long, spectacular and profitable.
Hmm, the way humans’ utility function is shattered and strewn about a bunch of different behaviors that don’t talk to each other, I wonder if that will always happen in ML too (at least until symbolic reasoning arrives and training happens in its presence).
In what sense doesn’t AlphaGo have a utility function? IIRC, at every step of self-play it explores potential scenarios weighted by how likely they are given that it keeps following its expected value, and then at play time it just picks the move with the highest expected value according to that experience.
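A minimal sketch of what I mean by “just follows expected value”, with everything made up for illustration (this is not AlphaGo’s actual code or architecture; `toy_value` stands in for a learned value network, and the “board” is a bare 9-cell list):

```python
def toy_value(state):
    # Pretend this is a trained value network: an estimated win probability.
    return (hash(tuple(state)) % 100) / 100.0

def legal_moves(state):
    return [i for i, cell in enumerate(state) if cell == 0]

def apply_move(state, move):
    nxt = list(state)
    nxt[move] = 1
    return nxt

def pick_move(state):
    # The "utility function" at play time: argmax of expected value over legal moves.
    return max(legal_moves(state), key=lambda m: toy_value(apply_move(state, m)))

print(pick_move([0] * 9))
```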
With the additional assumption that GPT-8s weren’t strong or useful enough to build a world where GPT-9 couldn’t go singleton, or that the evals on GPT-9 weren’t good enough to notice it was deceptively aligned or attempting rhetoric hacking.
Theory: Photic Sneezing (the phenotype where a person sneezes when exposed to a bright light, very common) evolved as a hasty adaptation to indoor cooking or indoor fires, clearing the lungs only when the human leaves the polluted environment. The newest adaptations will tend to be the roughest; I’m guessing it arose only in the past 500k years or so, as a response to artificial dwellings and fire use.
I think size is only correlated with degradation because to get really big you need a constant influx of newer, younger users, and the least discerning users are the least likely to leave.
Caveat: It’s been like 7 years since I actively used reddit, but it’s probable that none of this has changed. IME subreddits smaller than 200 basically didn’t work: they show you almost every post, with very little filtering. I’m not actually sure why. It could be an inevitable result of being less able to take representative samples quickly… but post rate and vote rate should be proportional, so I don’t see why that would happen. I fear it might be something way dumber: reddit appears to rank by the difference between upvotes and downvotes instead of by the ratio (or the ratio of upvotes to reads), which predictably makes posts from larger subreddits more prominent; even if they were no better received on average than posts to smaller subreddits, their score will still be higher. And then reddit may have tried to correct for that by just artificially boosting the prominence of posts to smaller subreddits. That’s what it feels like, anyway.
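To make that concrete, here’s a quick sketch with made-up numbers (I don’t know reddit’s actual scoring code; this just shows why ranking by net votes favors big subreddits where ranking by ratio wouldn’t):

```python
def net_score(ups, downs):
    return ups - downs            # rank by vote difference

def ratio_score(ups, downs):
    return ups / (ups + downs)    # rank by approval ratio

small = (80, 20)       # post in a small subreddit: 80 up, 20 down
large = (8000, 2000)   # an equally well received post in a subreddit 100x bigger

print(net_score(*small), net_score(*large))      # 60 vs 6000: the big sub dominates
print(ratio_score(*small), ratio_score(*large))  # 0.8 vs 0.8: tied, as it should be
```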
Kinda feel like the votes of people who found the post via the new queue, rather than by being fairly sampled, should arguably be ignored, as it gives new-queue campers too much power to slant things.
I’ll come to the first one if there are at least 5 people I don’t know coming (I already know gears)
VR stopped being a place I regularly hang out after VRChat shut down the pirate movie world x]. It will become a place I hang out continuously when I can use VR as a primary computer interface modality, but that’s a few years away.
I wonder if sweet things tend to smell sweet, and that’s why the smell ends up registering as a taste.
Uh, I guess I meant like: there’s no way they can do enough to give advertisers 30% of the value their data has without giving many data brokers (whom advertisers contract) access to the data, because advertisers’ needs are too diverse and the skill ceiling is very high. This equilibrium might not have been realized yet, but I’d guess it eventually will be.
The project that does this would presumably be defunded on succeeding, because none of the techniques it developed would work any more.
But wouldn’t you forgive its funders? Some people construct pressures for it to be self-funding by tarring its funders by association, but that is the most dangerous possible model, because it creates a situation where the project has both the ability and the incentive to perpetuate itself beyond the completion of its mission.