I suspect, like many things in politics, that the main issue here is domestic politics more than foreign affairs.
If you’ve ever compared election results between single- and multi-member systems, you’ll have noticed a trend. Even if, by first-preference count, a minor party seems to best represent a significant chunk of the population, unless they’re geographically concentrated you can expect them to pick up on the order of ~0 seats.
Similarly, if we’re not going to abandon democratic principles, we should probably have the consent of the majority in an area before we perform an experiment on them. The problem with this is that even if, world- or country-wide, there’s a quorum of people who would consent to a given experiment, it’s highly unlikely that they all live in the same place.
While something like a Schengen area might in principle alleviate some of these concerns, it introduces two main additional ones:
1) Does your experiment actually improve society? Or does it just attract the types of people who improve society themselves?
2) Most people aren’t big fans of being told they have to move cities/countries to continue living their lifestyle. I suspect that LessWrong users as a cohort undervalue stability relative to the rest of the population.
It’s worth noting that factory farming isn’t just coincidentally out of the limelight, in some (many?) areas it’s illegal to document. https://en.m.wikipedia.org/wiki/Ag-gag
While many of these laws seem somewhat reasonable on the surface, since they’re billed as strengthening trespass law, in practice you can’t gather video evidence of a moral crime taking place on private property without at least some form of trespass.
I think a different use of MI is warranted here. While I highly doubt the ability to differentiate whether a value system is meshing well with someone for “good” or “bad” reasons, it seems more plausible to me that you could measure the reversibility of a value system.
The distinguishing feature of a trap here isn’t so much the badness, as the fact that it’s irreversible. If you used interpretability techniques to check whether someone could be reprogrammed from a belief, you’d avoid a lot of tricky situations.
Apologies for the late reply.
With a bit over 600k 0-3 year olds in swim lessons at the time of the linked report, and around 1.2 million children in that age range in Australia, I’d estimate at least half of kids below 4 have taken swim lessons. So quite common, but not to the extent that I had thought.
Notably, swim lessons for young children are highly subsidized by most states, with many offering a fixed number of free lessons.
A bit later in primary school, the majority of kids will be given free swim lessons at their local public pool though.
Are child swim lessons common in America? Over here in Australia, free swim lessons are now provided for young children, and mandatory swim lessons are part of primary school. My understanding is that this has made a relatively large dent in the rate of child drowning injury.
In particular, once your child is proficient at swimming, you can get lessons on plain-clothes swimming in case of a trip or fall, or if another kid needs rescuing.
A transplant seems unnecessary if there’s any realistic chance of probe technology advancing. Surely it’d be possible to grow the same neurones in a wet lab, use brain probes to connect them to a living person, and keep the tinkering inside someone’s head to a minimum.
(Putting aside the profound ethical issues) In that case, neuronal material could even be swapped out on the fly if one batch is proving ineffective for a given task (or, a new batch could have old signals replayed to it to get it up to speed).
Is there something I’m missing on the neuroscience end? I’m not at all familiar with the field.
I think there’s a difference between consequences and suffering (as written in the OP) though.
If a child plays too many videogames you might take away their Switch, and while that might decrease their utility, I’d hardly describe it as suffering in any meaningful sense.
Similarly, in the real world, people generally get quite low utility from physical violence. It’s either an act of impulse not particularly sensitive to the severity of punishment (as in people with anger management issues), or of very low utility. It’s therefore easy to imagine that the optimal punishment for crime might be a decrease in access to some goods, and separation from broader society to decrease the probability of future impulsive acts harming anyone.
This is the closest I got, by probing ChatGPT for details on Muhammad’s conquests while seeming very inclined towards divine inspiration.
https://pastebin.com/CUQbAew8
I probably could’ve done a better job if I were a Muslim (ex or otherwise), and I imagine it might’ve been more receptive in Arabic.
I think a big part of the problem is that people fundamentally misunderstand what the funnel is. The way to get people into a field isn’t rousing arguments; it’s cool results, accessible entry-level research, and opportunity.
As a kid, I didn’t go into pure mathematics because someone convinced me that it was a good use of my time; it was because I saw cool videos about mathematical theorems and decided that it looked fun. I didn’t move into applied maths because someone convinced me, but because there was interesting, non-trivial modelling that I could pick up and work on; and I didn’t move into the trading industry because someone convinced me that options liquidity is the primary measure of a civilization’s virtue, but because nobody else would hire me in Australia, and a trading firm offered me a shit tonne of money.
Doing interesting work is itself an important part of the recruitment funnel, keeping some easy problems on hand for grads is another important part, and (imo) diversifying the industry out of like 2 cities (London and San Francisco) would be a great way to remove a thin wedge from the top of the funnel.
Some people are going to go into whatever field they think is maximum utility, but I reckon they’re the exception. Most scientists are fundamentally amoral people who will go into whatever they find interesting, and whatever they can get work in. I’ve seen people change fields from climate research into weapons manufacturing because the opportunity wasn’t there, and ML safety is squandering most of the world’s talent.
It’s been more of a lifestage thing for me, than a day of the week thing. But Easy Days looks like a great feature, and that could very well be the solution for someone else!
I’ve heard people say that you should take the amount of money people tell you to spend on a pram/baby carrier, and swap them. I genuinely can’t remember a time in the last few months that our bub preferred the pram over the carrier.
Even in the sub-tropics, hats do come in handy, just only during winter. You’ve gotten somewhat lucky having the kid around the Northern Hemisphere summer; you’ll probably find hats handy in a few months!
It’s worth experimenting with bottles. I, and many parents, have had the most success with Pigeon bottles; a close friend of mine who is a pediatric nurse has seconded this. Even then, it can often be hard to bottle-train while the breastfeeding parent is in the room, so it could be worth asking a friend or family member to help out with this.
Good news and bad news:
Over the next month or two, you might find that your baby is becoming less time-consuming. Do not be fooled; this is a local minimum and nothing else. By the time your baby is crawling you’ll see the folly of your ways, and start pruning back the commitments you made during this relaxation period!
I’m glad you’re having a nice time with your baby! I wish you two the best.
Against “Everything on the Back”
I find it a lot easier to memorize content with an “Everything on the Back” approach, but I have encountered the problem you’re talking about. Usually, if this starts happening, I go back and merge the cards I’m having issues with. So you can kind of have both approaches at once, if you’re willing to edit your deck aggressively.
Personally, I find it more helpful to scale the review limit back and forth with my life demands, rather than having a set limit.
It kind of screws with the FSRS scheduler (which I highly recommend using instead of SM-2), but it helps keep Anki sufficiently challenging to keep me on board during relaxed periods of my life, and nice and lenient during more rushed parts.
Edit: said something very silly about weighting.
Nothing says you can’t take the geometric mean of a series that includes negative numbers; it’s just that if you have an even number of elements but an odd number of negative elements, you’ll get a complex answer.
To weight the elements of your series, note that taking the geometric mean of p(X)*X and dividing by the geometric mean of p(X) just cancels the weights back out, leaving the plain geometric mean. What you actually want is the weighted geometric mean: the product of each x_i raised to the power p_i divided by the sum of the p_i.
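A minimal sketch of that weighted form, assuming all elements are positive (the function name and example values here are just illustrative):

```python
import math

def weighted_geometric_mean(xs, weights):
    """Weighted geometric mean: prod(x_i ** (w_i / sum(w))).

    Assumes every element is positive; negatives would need their sign
    handled separately, as discussed above.
    """
    total = sum(weights)
    return math.prod(x ** (w / total) for x, w in zip(xs, weights))

# Equal weights reduce to the ordinary geometric mean.
print(weighted_geometric_mean([1, 16], [1, 1]))  # 4.0
# Up-weighting the first element pulls the mean towards it.
print(weighted_geometric_mean([1, 16], [3, 1]))  # 2.0
```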
I suspect that your post might have more upvotes if there was agreement/disagreement karma for posts, not just comments.
A couple of points I thought about while reading this.
I think it’s probably true that some valuable right-wing ideas have been overlooked due to the larger left-wing academic cultural milieu. I very regularly see people on all ends of the political spectrum reject ideas out of hand, just because they don’t fit into something they see as a socially important grouping of political ideas.
However,
I strongly suspect that the most potent wells of useful political thought won’t be overlooked ideas from the left or right wing, but non-American, and more broadly non-Western, modes of political thought. Even just as an Australian, it can be frustrating how rigidly many users here stick to the American political Overton window, assuming strong correlations between fundamentally unrelated ideas (for example, between socialism and anti-racism, or between conservatism and fossil fuel spruiking).
As for your actual points:
I don’t think leftist systems of thought have ever particularly had trouble with the notion that people can be more productive or powerful with access to tools or capital; the disagreement is over how to deal with the distributional effects of this. The mention of Nathan Cofnas and their ideas seems completely superfluous given the subject matter. Those ideas are a thinly veiled retelling of the narrative of racial differences in intellectual capability, a field full of bogus sociology and even more bogus biology (I’ve yet to have a proponent explain to me what exactly unites all black people, the skin colour group with the largest amount of genetic diversity of any on the planet, apart from skin colour, or come up with a compelling narrative for why people from privileged racial groups in the richest areas of the modern world miraculously happen to score highly on these metrics), yet they somehow continue to play well in “rationalist” spaces, I presume because it all nebulously feels like forbidden knowledge.
I actually agree with your point about protectionism (although I think protectionism is a strong impulse on all ends of politics, just one made verboten by modern liberal globalist thought). I’m not sure why you bothered mentioning competition, though; an orthogonal economy that doesn’t interact with the outside doesn’t need to compete, it just needs to produce the goods it needs internally. Given that the modern planet somehow produces everything it consumes without the use of AGI, it seems trivially true that this will continue to be possible.
I agree that the distinction between a grouping and the members of that grouping is important. I often see this distinction fall away during wars, for example, where people are slaughtered for the nebulous national interest. I can see this distinction being very important with the advent of AGI.
However, I’m very uncomfortable with National Conservatism as a whole becoming more popular in America, since I could very easily see a world in which America uses its newfound leverage as the sole operator of AGI to maximize its citizens’ utility, with apocalyptic consequences for non-Americans (read: most human beings alive). If it were politically viable, I would back some sort of legislation stating that if a country obtained AGI, all human beings would automatically become citizens of that country; but I can’t imagine anyone ever passing such a bill. To be frank, even before the advent of such ardent nationalism in the United States, most people I know were already pretty frightened by the prospect of AGI being invented there...
EDIT: Nvm, this dataset was of a niche religious group (the Seventh-day Adventists); I should’ve read more thoroughly before commenting.
Assuming no major dietary differences between vegetarian converts and lifelong vegetarians, it appears that they consume about half as much dairy: https://pmc.ncbi.nlm.nih.gov/articles/PMC4232985/#!po=39.8438
So, assuming that someone moves to the mean lacto-ovo vegetarian diet, you can assume about half a calf less over a lifetime.
Some quick-mafs.
Assuming the use of a high milk yield breed like Holstein Friesian cows.
18 kL per lactation: https://www.australiaslivestockexporters.com/holstein-fresians-dairy-cattle/
About 217 ml of milk, 23.6 g of cheese (equivalent to roughly 236 ml of milk) and 21.4 g of yoghurt (about 21.4 ml of milk) are consumed per Australian per day, approximately 474 ml ≈ 0.47 L in total: https://www.statista.com/statistics/1143391/australia-dairy-mean-daily-grams-per-capita-by-food-subgroup/
Australians live about 83 years, approx 30k days.
30k days × 0.47 L/day ≈ 14,100 L of milk over a lifetime, or a bit less than one calf.
The average Holstein cow has a parity of < 2.7 (https://pmc.ncbi.nlm.nih.gov/articles/PMC8369829/), so we can estimate something like one third of a dairy cow per person, and a little less than one calf.
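For what it’s worth, here’s a quick sketch re-running the arithmetic above; every input is just an assumption already stated in this comment (18 kL per lactation, ~474 ml/day, 83 years, parity of 2.7), not an independent figure.

```python
# Back-of-the-envelope check of the dairy estimate above.
# All inputs are the assumptions stated in this comment, not new data.

litres_per_lactation = 18_000        # high-yield Holstein Friesian
ml_per_day = 217 + 23.6 * 10 + 21.4  # milk + cheese (x10 to milk-equivalent ml) + yoghurt, ~474 ml
lifespan_days = 83 * 365             # ~30k days

lifetime_litres = lifespan_days * ml_per_day / 1000
calves = lifetime_litres / litres_per_lactation  # roughly one calf per lactation
cows = calves / 2.7                              # parity < 2.7 lactations per cow

print(f"{lifetime_litres:,.0f} L of milk over a lifetime")  # ~14,400 L
print(f"~{calves:.2f} calves and ~{cows:.2f} cows per person")
```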
The primary effect of dairy on cows’ lives wouldn’t be on the cow that’s milked; it’d be on the calves. I suspect it’d be more than half a calf per human lifetime of milk consumption.
Am I alone in not seeing any positive value whatsoever in humanity, or specific human beings, being reconstructed? If anything, it just seems to increase the S-risk of humanlike creatures being tortured by this ASI.
As for more abstract human values, I’m not remotely convinced either:
a) that we could convince such a more technologically advanced civilization to update towards our values,
or
b) that they would interpret those values in a way that’s meaningful to me, and not actively contra my interests.