While I have no specific article, I find the German Heilpraktiker system a good example: a stable system that has existed for 70+ years and separates the two.
Working on something personal, reading some blog, general web surfing
That’s also what people do at the office.
r = 1.2–1.3 is not stable. Any reproduction number above 1 means exponential growth in case numbers.
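A quick illustration (the starting count is made up, just for scale): at R = 1.25, each generation interval multiplies case counts by 1.25, so after ten generation intervals

```latex
\[
1000 \times 1.25^{10} \approx 9{,}313
\]
```

i.e. 1,000 cases become roughly 9,300, and for COVID-19 ten generation intervals is only on the order of six to seven weeks.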
I’m happy that in Berlin the U-Bahn now tells people to open windows and puts stickers on the windows directing people to open them. The S-Bahn, however, still doesn’t, and there are unnecessarily many closed windows.
An S-Bahn with open windows has noticeable airflow, so it’s likely similar to being outdoors.
I think they do that for grunt-level positions and not management positions. If the employer’s concern is that people leave the job because it’s too boring, it’s likely not elite work.
Thinking of masking as a binary is not useful. The key question isn’t whether or not to mask but under which conditions you want to wear a mask. Either absolutist position is likely going to lead to a lot of suboptimal decisions.
When it comes to choosing universities, there’s:
One could also do academic research at any university, though it helps to be somewhere with enough people working on related issues to form a critical mass. Examples of universities with this sort of critical mass include the University of Oxford, University of Cambridge, UC Berkeley, MIT, the University of Washington, and Stanford.
While that passage isn’t directly about where to do your master’s, those are places with people who can support you in learning about AI safety research.
Human nature suggests that an all-powerful council-of-elders always becomes corrupt
Human nature is relatively irrelevant to the behavior of AIs. At the same time, that’s basically saying that alignment is a hard problem.
The alignment problem is one of the key AI safety problems.
I don’t work in AI risk myself, so I’m not the ideal person to respond, but I’ve been in the community for quite a while, so given that nobody who actually works in the field has answered, I’ll try to give my answer:
80,000 Hours has a general guide for AI risk: https://80000hours.org/articles/ai-policy-guide/. They also publish a podcast.
One of the key features is that there’s a pretty high bar to being paid to work in AI safety.
I don’t want to apply to programs that aren’t worth it (it’s possible my qualifications are sufficient for some of the ones I’ll apply to, but I have little context to tell).
The bar for a MIRI internship is not lower than the bar for getting into a top university. I would expect that applying for a master’s at the universities the 80,000 Hours article lists is one of your best bets.
While those universities do have high tuition and you will likely be in debt after leaving, a computer science degree from those universities gives access to very high-paying jobs, so the debt can be worth it even if you don’t end up going into AI risk.
After going to an actual sleep doctor, one of the surprising suggestions from the doctor was that sleeping on a flat surface can be suboptimal and that having a mattress at an angle where the head is significantly higher than the feet can be helpful. Of course, instead of changing the actual angle of the mattress, cushions can produce similar effects.
Applying the suggestion led to my body relaxing in ways I didn’t expect, but I had the impression that after 1–2 months of adaptation the effect wasn’t as strong anymore.
Changing the angle of the mattress from time to time is likely useful and underrecommended.
The CDC page and the one on noise machines seem to me to make claims about the maximum noise being the problem.
Your above post seems to additionally make the claim that there’s a recovery process that only happens when there’s very little sound, which seems to me like an interesting claim, separate from the claim that loud noise causes hearing damage.
A practical alternative to white noise machines is nature sounds. Noise patterns like rain did exist in the natural environment and are likely healthier than white noise. You likely still shouldn’t make them too loud.
I did try different doses, up to the sedative level, and it never really helped
There’s no sedative level, and most melatonin products have doses that are too high to be clinically effective. What was the lowest dose you took?
For some reason, we seem to be very sensitive to those exhaust products (tho it also seems like this might be a dimension that people vary on significantly).
It’s been a while since I took physiology 101, but I think there was a fairly straightforward explanation. My guess from memory is that it was something like CO2 having to leave your body during breathing, with that process depending on the amount of CO2 in the air.
CO2 makes up around 0.04% of the air, while oxygen makes up 21%. If the oxygen in the air goes from 21% to 20%, that’s not a significant relative change. The corresponding change of CO2, from 0.04% to something on the order of 0.4%, is however massive (the real numbers are a bit off because I don’t want to look up how to calculate it, but it goes in that direction).
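For what it’s worth, a back-of-the-envelope version, assuming a respiratory quotient of roughly 1, i.e. about one molecule of CO2 exhaled per molecule of O2 consumed (the true value is a bit lower):

```latex
\[
\text{O}_2:\quad \frac{21\% - 20\%}{21\%} \approx 4.8\%\ \text{relative drop}
\]
\[
\text{CO}_2:\quad 0.04\% + 1\% = 1.04\%,\qquad \frac{1.04\%}{0.04\%} = 26\times
\]
```

So the same ventilation deficit that barely registers as an oxygen change multiplies the CO2 concentration roughly 26-fold.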
One practical thing that helped me pay more attention to CO2 was to get a device that measures it 24/7 and creates alerts when the levels go over a maximum.
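If your monitor exposes its readings to a computer, the alert logic itself is only a few lines. A minimal sketch (read_co2_ppm is a hypothetical stand-in for whatever interface your device actually offers, and the 1000 ppm threshold is a commonly cited ventilation guideline, not a hard rule):

```python
import random
import time

CO2_ALERT_PPM = 1000  # commonly cited "ventilate now" level; outdoor air is ~420 ppm


def read_co2_ppm() -> int:
    # Hypothetical placeholder: stands in for your monitor's actual
    # interface (serial port, USB, vendor API, ...). Simulated here.
    return random.randint(400, 1500)


def monitor(poll_seconds: int = 60) -> None:
    # Poll the sensor and alert whenever CO2 exceeds the maximum.
    while True:
        ppm = read_co2_ppm()
        if ppm > CO2_ALERT_PPM:
            print(f"CO2 at {ppm} ppm - open a window")
        time.sleep(poll_seconds)


if __name__ == "__main__":
    monitor(poll_seconds=5)
```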
Keeping in mind that they are not research-backed and should rather be interpreted as experience-based heuristics at best, we can look at the recommendations made by people who sell mattresses for a living.
People who sell mattresses for a living have bad incentives when it comes to getting you to make useful purchasing decisions. For organizations such as Consumer Reports, the incentives are much better aligned. Paying Consumer Reports for their mattress ratings costs money, but information is valuable and worth paying for.
This is heavily cultural, and Elon’s proposal (let everyone grid-link themselves to their own all-powerful AI) is in line with culturally Protestant values, while the LW proposal (appoint an all-powerful council of elders who decree who is and is not worthy to use AI technology, based on their own research into the doctrine) is in line with culturally Catholic values.
Deciding between the two approaches based on which values they align with misunderstands the problem. A good strategy depends on what’s actually possible.
The idea that human/AI hybrids would be competitive at acquiring resources in an environment with strong AGIs is doubtful. That means that over time all the resources and power go to the AGIs.
Are you saying making memes about climate change is better time spent than sorting your own trash?
The EA community has put a lot of thought into how effective action works.
I don’t think separating plastic from other trash is an effective use of my time. I do think that separating out batteries or other hazardous material makes sense. Climate change isn’t the only environmental issue that matters, and I care a lot about mercury from batteries not leaking into my drinking water.
Preventing plastic from accumulating in the ocean is a worthy cause, but given the way my local trash system works, whether or not I separate my trash has little expected impact on that question.
Generally, effective action needs focus, and I don’t think that the time equivalent of sorting the trash is enough to create significant change. If I chose this as an area where my goal is to make a significant difference, I would think more deeply about it and create a theory of change.
How can you convince billionaires to buy offsetting?
One of the key features of this decade is increased space travel. Making that socially accepted is valuable to those who want to sell space tourism.
How do you effectively petition for them?
One way might be a change.org petition. But I have spent maybe 30–60 minutes thinking about the problem, and that’s what came up. If you are smart and spend more time thinking about how to deal with the problem, you might come up with better solutions.
I don’t think a maximalist strategy leads someone like Bezos to change their actions.
If you are an individual making memes, then saying “Bezos produced X tonnes of CO2 with his launch but did nothing to offset it” would be a productive position.
Offsetting is something that a billionaire can simply buy with money. If you care about moralizing, then asking for something that can simply be solved by throwing money at it is bad. If you care about reducing CO2, however, asking for actions that are relatively easy to take is the way to go.
Another thought would be to work out a possible agreement where space companies publicly accept the responsibility to offset and then push for that agreement (maybe make a petition for it).
The fair trade mentality makes people think they need to offset going to space with some carbon reducing activity when in reality the net gain would be bigger if they did both the offset and not going to space.
That depends on the value you ascribe to them going to space. I do think that being in the great stagnation is a big problem, so it’s good that new space technology gets developed, and this event is part of that development.
In general, there are always options that can be taken to optimize further in one direction. In a public context like this, calling for a maximalist position is unlikely to change actions.
Recycling as practiced was heavily shaped by the PR departments of companies that produce a lot of waste and want to shift the responsibility elsewhere. That’s how we ended up with a horrible recycling system. Even to the extent that separating waste does something, it’s unclear to me why people would hope it does something about climate change.
Memes that criticize this can be helpful. Being clear that it’s the responsibility of the companies that produce the massive amounts of plastic to reduce waste is important.
The packaging for the meat I’m buying recently changed in a way that reduces plastic waste by 70% (number from memory). This is the kind of action that manages to reduce waste effectively.
Just pointing at big players and criticizing them, however, won’t create change. You actually have to push for alternatives.
When it comes to the question of a billionaire going to space and producing a lot of CO2 in the process, the important question is whether the billionaire does something to offset that CO2.
If Branson doesn’t buy offsets, then pressure on him to buy offsets has a reasonable chance of getting him to buy offsets. On the other hand, just complaining about him being an evil billionaire doesn’t.
That does make me update towards trusting him less. The videos I previously watched were mostly good, with a few irrelevant factual errors.