I certainly have the moral instinct to.
I don’t have a lot of experience with people within my friend group hooking up, or with hearing enough details about hookups to have been put in that situation explicitly.
I have had several personal experiences where I reciprocated advances from women and was later hit by the fallout of the lack of explicit verbal negotiation of what was going to happen. And I certainly reprimand friends (including women) for failing to communicate in their relationships at a broader level when I do know about it.
Based on my experience, I endorse affirmative consent as a *strongly* enforced social norm. Having sex with, or even kissing, someone without explicitly asking first is something I would reprimand friends for if I knew they had done it.
I am probably in some very strongly selected communities, but I like living in a world where affirmative consent is the explicit norm, and I would not want to go back outside of that.
How do you connect with tutors to do this?
I feel like I would enjoy this experience a lot and potentially learn a lot from it, but thinking about figuring out who to reach out to and how to reach out to them quickly becomes intimidating for me.
I think an important point with this system (and RE: “Not a Taxonomy”) is that it’s possible to mix and match norms.
For example, in a recreational sports team you see inclusion and membership having Civic norms (sometimes moving slightly toward Guest norms for something like pickup games) but praise and feedback being closer to Kaizen norms.
I bring up this specific example because I think it’s the default assumption I made about the sort of space LessWrong was when I discovered it. In particular because the cost of admitting additional members is very low, I expect the minimum standards for expertise in the community to be very low, but the expectations around feedback and discussion to be goal-driven. This contrasts with something like a sports team or a workplace, where there is often a limit on the number of people who can join a community or a high cost to adding more members, and each member relies directly on the work of others.
Strongly agree that I probably would have bought some crypto on LW advice had there been a nearby meetup to go through the process of doing it. Otherwise my priors against giving my credit card info (or whatever) to strange websites were too strong for me to have successfully engaged in the strategy at all.
I have an explicit but vague memory from childhood about doing exactly this, not on an essay question, but on a silly questionnaire like “What are you thankful for this Thanksgiving?”
All the other kids wrote things like “I’m thankful for the food and my family” and I had a very difficult time with it because I felt like it was not at all allowed for me to be thankful for food or for family (or a couple other things that had already been said aloud) and I had trouble thinking of other things to be thankful for.
I remember someone eventually understanding why I was having trouble, but I don’t remember how they reacted or what ended up happening.
Something I noticed in your summarization process is that you seem to have a lot of records about what you’ve done; for example, you mentioned that when coming up with a new monthly theme you went over your activities from every day of the previous month. You also mentioned that your first month of the year was structured data. Can you give some more detail about what kind of records you keep and how you use them? And in particular, what do you think is the absolute minimum amount of information necessary to implement something like this?
(I ask because I currently forget most of what I do and it seems like that would make it very difficult to take any of this advice.)
Regarding bureaucracy day:
1) Is there a list of the sorts of tasks that have been accomplished? I often find myself questioning whether there are bureaucratic things I should be engaging with but am not remembering.
2) I really want someone near me (south bay) to host a bureaucracy day :|
I don’t interpret this as an attempt to make tangible progress on a research question, since it presents an environment and not an algorithm. It’s more like an actual specification of a (very small) subset of problems that are important. Without steps like this I think it’s very clear that alignment problems will NOT get solved; steps like this are probably (~90%) necessary but definitely not (~99.99%) sufficient.
I think this is well within the domain of problems that are valuable to solve for current ML models and deployments, and not in the domain of constraining superintelligences or even AGI. Because of this I wouldn’t say that this constitutes a strong signal that DeepMind will pay more attention to AI risk in the future.
I’m also inclined to think that any successful endeavor at friendliness will need both mathematical formalisms for what friendliness is (i.e. MIRI-style work) and technical tools and subtasks for implementing those formalisms (similar to those presented in this paper). So I’d say this paper is tangibly helpful and far from complete regardless of its position within DeepMind or the surrounding research community.
I am very proud of myself for calling this one.
The obvious choice in this environment is a clique-y defensebot: send the clique signal and cooperate with whoever matches it, but otherwise be a defensebot instead of an attackbot.
Since you wouldn’t use this logic against other cliquebots, it would be hard for them to punish you without giving up the ability to cooperate with one another. If cliquebots do come to dominate, you’ll outperform the others by pulling ahead sooner and free-riding on their punishments. In the mid game, either cliquebots dominate with you slightly ahead and you win, or cliquebots die off, but you’ve survived the early game by cooperating with them, so you got the benefits of having attackers in the pool without the costs.
If enough cliquebots defect like this, then I’m not sure what would happen; it will depend a lot on the initial distribution. If there are very few cliquebots, this bot is also vulnerable to other attackbots, but enough other possibilities exist (a cliquey equitybot or a cliquey foldbot) that I strongly suspect someone will win by defecting from the clique.
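To make the strategy concrete, here’s a minimal sketch. The round-based interface, the signal sequence, and the cooperate/defend policies below are all my own placeholder assumptions, not the actual contest mechanics:

```python
# Sketch of a clique-defecting defensebot. The interface (play() called
# once per round with the opponent's previous move), the SIGNAL sequence,
# and the move values are placeholders, not the real tournament rules.

SIGNAL = [3, 1, 4, 1, 5]  # hypothetical hardcoded clique handshake


class CliqueyDefenseBot:
    def __init__(self):
        self.turn = 0
        self.opponent_moves = []

    def play(self, opponent_last_move=None):
        if opponent_last_move is not None:
            self.opponent_moves.append(opponent_last_move)
        move = self._choose()
        self.turn += 1
        return move

    def _choose(self):
        # Early rounds: emit the clique signal so real cliquebots accept us.
        if self.turn < len(SIGNAL):
            return SIGNAL[self.turn]

        # Opponent echoed the full handshake: treat them as clique, cooperate.
        if self.opponent_moves[: len(SIGNAL)] == SIGNAL:
            return self._cooperate()

        # Outsider: defend rather than attack, so we never pay the cost of
        # aggression but also can't be exploited.
        return self._defend()

    def _cooperate(self):
        return 2 if self.turn % 2 == 0 else 3  # placeholder fair split

    def _defend(self):
        return 3  # placeholder defensive move
```

The only load-bearing part is the branch order: handshake first, cooperate on a signal match, and fall back to defense (never attack) against everyone else.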
As an unrelated aside, I often rename the Hogwarts houses as the four basic D&D classes, since the mapping is obvious. I also used to attach these to directives that are somewhat practical on a daily or weekly basis (but which I almost never checked in about or followed up on):
Gryffindor—Fighter—Do something you’re afraid of
Ravenclaw—Wizard—Learn something new
Hufflepuff—Cleric—Help someone or contribute to a group
Slytherin—Thief—Benefit from work someone else has done
I definitely intended the implied context to be ‘problems people actually use deep learning for,’ which imposes constraints that I think are sufficient.
Certainly the claim I’m making isn’t true of literally all functions on high dimensional spaces. And if I actually cared about all functions, or even all continuous functions, on these spaces, then I believe there are no-free-lunch theorems that prevent machine learning from being effective at all (e.g. what about those functions that have a vast number of huge oscillations right between those two points you just measured?!).
But in practice deep learning is applied to problems that humans care about, with computer vision and robotics control as very common examples. In these problems there are some distributions of functions that empirically exist, and a simple model of those types of problems is that they can be locally approximated by a Taylor series over an area of positive size around any point of the domain that you care about, but that these local areas are stitched together essentially at random.
In that context, it makes sense that maybe the directional second derivatives of a function would be independent of one another and rarely would they all line up.
Beyond that, I’d expect that if you impose a measure on the space of such functions in some way (maybe limiting by the number of patches and the growth rate of the power series coefficients), the density of functions with even one critical point would quickly approach zero, even while infinitely many such functions exist in an absolute sense.
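To illustrate the ‘rarely all line up’ intuition numerically (this is my own toy model, not anything from the original post): treat the Hessian at a candidate critical point as a random symmetric matrix and count how often all of its eigenvalues share a sign, i.e. how often the point is a local optimum rather than a saddle.

```python
# Toy model (an assumption for illustration): the Hessian at a critical
# point is a random symmetric (GOE-style) matrix. Estimate how often all
# eigenvalues share a sign, i.e. how often the critical point is a local
# min/max rather than a saddle, as dimension grows.
import numpy as np

rng = np.random.default_rng(0)


def fraction_local_optima(dim, trials=5000):
    count = 0
    for _ in range(trials):
        a = rng.standard_normal((dim, dim))
        hessian = (a + a.T) / 2  # symmetrize to get real eigenvalues
        eigs = np.linalg.eigvalsh(hessian)
        if np.all(eigs > 0) or np.all(eigs < 0):
            count += 1
    return count / trials


for dim in (2, 4, 6, 8, 10):
    print(dim, fraction_local_optima(dim))
```

If the eigenvalue signs were fully independent, the fraction would be 2^(1-n); eigenvalues of random symmetric matrices actually repel each other, so under this toy model the measured fraction falls off even faster than that.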
I got a little defensive thinking about this, since I felt like the context of ‘deep learning as it is practiced in real life’ was clear, but looking back at the original post it maybe wasn’t outlined that way. Even so, I think your reply feels disingenuous, because you’re explicitly constructing adversarial examples, rather than sampling functions from some space, to suggest that functions with many local optima are “common.” If I start suggesting that deep learning is robust to adversarial examples, I have much deeper problems.
Why did the SoE and OoH switch spheres?
And anyway the Void Engineers are obviously there to pick up the slack of the dying Dreamspeakers and get Spirit back into the Technocratic paradigm.
Tradesies for the Order of Hermes? They can run universities or something?
Thanks! Yeah, a lot of the “content” that I have right now is on the order of “I spent all day writing a function that could have been a single line library call :(”, because writing that up makes me keep working the next day even if I have to spend all day on another function that could have been a single line library call. Hopefully I’ll “get past” some of that at some point and then be able to conduct some experiments that are interesting in and of themselves, and/or provide some notebooks alongside things, which could move in the direction of front page stuff instead of personal life blogging.