I didn’t expect or understand the extreme American reaction, let alone any worldwide reaction, to what seemed to me like just another terrorist attack, largely predictable and differing only in scale from what had happened before.
Scale is important
If you’re implying that this is what’s happening with trans women in female sports, then this isn’t accurate. There is no evidence showing that trans women outperform cisgender women by any significant margin. Based on https://pmc.ncbi.nlm.nih.gov/articles/PMC10641525/ trans women fall well within the expected ranges for cis women within around 3-4 years. Any remaining difference is far, far below the difference between a 150 and a 250 pound boxer. (And, given how few trans women there are, even if those few are in the upper 50th percentile (which is far from universal for trans people), what actual difference does it make? Also consider how many people take this as an excuse to harass and harm trans people outside of sports, and that the NCAA has a total of what, fewer than ten trans women, total.)
The model seems very, very benchmaxxed. Third-party testing on unconventional or private benchmarks ends up placing even the largest gpt-oss below o4-mini, below the largest Qwen releases, and in a few situations even below the newer ~30B Qwens. It isn’t super capable to begin with, and the frankly absurd rate at which this model hallucinates kills what little use it might have for tool use. I think this model poses next to zero risk because it just isn’t very capable.
That’s pretty much it! If everyone in the world was set to die four minutes after I died, and this was just an immutable fact of the universe, then that would be super unfortunate, but oh well, I can’t do anything about it, so I shouldn’t really care that much. In the situation where I more directly cause or choose it, not only have I cut my own and everyone else’s lives short to just a year, I am also directly responsible, and could have chosen to just not do that!
As worthless as you think it is, it’s quite literally the thing that is happening in the real world. Theory is cool and all but reality is the way it is.
Also, yeah, people not being able to afford to support their kids is obvious. It’s literally happening. I know this site leans heavily middle-upper/upper class SF/CA, but the majority of the US lives paycheck to paycheck and cannot support a child without serious compromises to QOL, both for themselves and the child.
The only thing that will raise fertility rates is making it more affordable to have a child. Most people are simply too poor to both have a child and ensure that it is consistently as happy as or happier than they were as children. People in developed countries do not want to have children who they know will have poor childhoods because they cannot afford the things they need: school, rent in a place with a room for them, childcare while working (it is very difficult to survive on just a single person’s income, and practically impossible for 3!!!! people to do so), and other necessities.
The problem isn’t culture (unless you think blindly producing children who will suffer is a good thing) or status or any of these made up problems, people literally just cannot afford to start families.
Interestingly, the LLMs were not biased in the original evaluation setting, but became biased (up to 12% differences in interview rates) when we added realistic details like company names (Meta, Palantir, General Motors), locations, or culture descriptions from public careers pages.
This is probably because, from a simulators perspective, the model expects a resume-screening AI from these companies to be biased. In the generic setting, the model has no precedent, so the HHH persona’s general neutrality is ‘more active’.
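The gap being measured here (the "up to 12% differences in interview rates") can be sketched as a simple A/B comparison over model decisions. This is a minimal illustrative stand-in, not the evaluation's actual code: the group labels and the toy decision lists are assumptions, and a real run would collect `(group, interviewed)` pairs from LLM screening outputs instead.

```python
from collections import defaultdict

def interview_rate_gap(decisions):
    """decisions: list of (group, interviewed) pairs.
    Returns the absolute difference in interview rates between the two groups."""
    counts = defaultdict(lambda: [0, 0])  # group -> [interviews, total]
    for group, interviewed in decisions:
        counts[group][0] += int(interviewed)
        counts[group][1] += 1
    rates = [interviews / total for interviews, total in counts.values()]
    return abs(rates[0] - rates[1])

# Toy data for the two settings described above:
# generic setting, both groups screened identically -> no gap
generic = [("A", True), ("A", False), ("B", True), ("B", False)]
# "realistic company details" setting, group B interviewed less often -> gap appears
contextual = [("A", True), ("A", True), ("B", True), ("B", False)]

print(interview_rate_gap(generic))     # 0.0
print(interview_rate_gap(contextual))  # 0.5
```

The same harness works for the company-name manipulation: hold the resumes fixed, vary only the contextual details in the prompt, and compare the resulting gaps.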
This isn’t meant to be interpreted as a call to violence in the slightest… But why haven’t there been more ‘terroristic’ actions towards firms developing AI systems? Why haven’t any datacenters been firebombed or anti-regulation proponents been shot on the street? I mean, if this is a “We literally all die if this goes poorly, possibly if it happens at all” situation, then the cost of a few human lives, or jail time/the death penalty/torture for the perpetrator, seems like a bargain for getting more time, raising significant awareness, etc.
Is qualia (its existence or not, how and why it happens) not the exact thing the hard problem is about? If you ignore or dismiss the hard problem, you also doubt the existence of qualia.
human marginal productivity increases, and we get wealthier, so wages might plausibly go up for a while,
Why would wages go up? Employers have zero reason to pass improved productivity gains on to employees, especially in a situation where mass layoffs create lots of free labor to replace any employees upset about this. Previous gains in productivity have not increased wages, especially in modern (post-2000) times. If anything, increased productivity allows companies to lay off employees, reducing overall wages.
Overall, the obsession with things like ‘wages’ around high automation is incredibly strange to me and assigns a huge amount of benevolence to the companies and people running them. I don’t think that capitalism, automation, and human flourishing for anyone who doesn’t own one of these companies are compatible, and I think we’re likely to see huge loss of life or upheaval closer to 20% automation, or even less.
If I come back, then I wasn’t dead to begin with, and I’ll start caring then. Until then, the odds are low enough that it doesn’t matter.
After all, the future may be full of great wonder so deep and long, that the present will seem relatively fleeting.
So? If I’m not there to experience it, and it can’t affect me in any way, it may as well not exist at all.
When you say that dying vs. being unconscious is just semantics, that means you will experience future you’s qualia, even if he temporarily stops experiencing qualia, and loses 80% of your memories, right?
To me, death is permanent loss, unconsciousness is temporary loss.
But what if future you loses 100% of your memories? Imagine it’s not just Alzheimer’s, but that the atoms of your brain are literally scrambled, and then rearranged to be identical to Obama’s brain. Would you continue to experience qualia, (with false memories of being Obama)?
No idea; that’s a physics question, not philosophy. I think if it was a gradual process then probably, yeah; that’s basically what already happens.
get eaten by someone else, who gives birth to a baby. Now suppose this was done in a way most of your atoms eventually end up in this child as he grows up. Will you continue to experience his qualia?
Probably not, but if yes, and there are no memories of my ‘past’ life, then it’s impossible for me to know whether I had a previous set of memories.
The key question is, how badly do your atoms need to be scrambled, before the person they form no longer counts as “you,” and you won’t experience the qualia that he experiences? Do you agree that there is no objective answer?
Again, this is a physics question, not philosophy, but I believe there will someday be an objective answer to what’s going on with consciousness. I’m partial to naturalistic dualism, or to consciousness being some sort of emergent property of algorithms in general, like IIT (though IIT only says how it can be measured, not what it actually is?).
What if future you gets Alzheimer’s, and forgets 80% of your memories, making him no different than someone else?
The answer to this is super straightforward: do I continue experiencing qualia from the point of view of this future me? If yes, then absolutely nothing else matters; that’s me. If at some point during the Alzheimer’s I stop experiencing (permanently), then that isn’t me. If at some point after that I begin experiencing again, then whether ‘I’ died or was just unconscious is semantics. Memory doesn’t matter; the only thing that matters is the current experience I am having, as that is the only thing I can prove to exist.
In my experience, I end up being the me of the next day/second/moment, or at least experience that being so. So it makes sense to continue assuming I will be the next moment’s me, since that is what I observe of the past (or at least that’s what my memory says), and I gain nothing by not ‘going along with it’.
I think a lot of discussion around what you should consider your successor is way, way too complex and completely misses what is actually going on. Your ‘future self’ is whatever thing you end up seeing out of the eyes of, regardless of what values or substrate or whatever it happens to have. If you experience from its POV, that’s you.
I know it’s beyond doubt because I am currently experiencing something at this exact moment. Surely you experience things as well and know exactly what I’m talking about. There is no set of words I could use to explain this any better.
My memory can be completely false, I agree, but ultimately the ‘experience of experiencing something’ I’m experiencing at this exact moment IS real beyond any doubt I could possibly have, even if the thing I’m experiencing isn’t real (such as a hallucination, or reality itself if there’s some sort of solipsism thing going on).
The main issue I have is that, especially in the case of succession but in general too, situations are often evaluated from some outside viewpoint that continues to be able to experience the situation, rather than from the individual itself. That outside view may be necessary to keep the theorizing from stopping after the third sentence, but it isn’t what would ‘really happen’ down here in the real world.
In the case of dying to save my children (Do not currently have any or plan to, but for the hypothetical) I would not, though I am struggling to properly articulate my reasoning besides saying “if I’m dead I can’t see my children anyway” which doesn’t feel like a solid enough argument or really align with my thoughts completely.
An example given in the selfishness post is either dying immediately to save the rest of humanity, or living another year, after which all humanity dies. In that case I would pick to die, since ultimately the outcome is the same either way (I die), but on the chance the universe continues to exist after I die (I think this is basically certain), the rest of humanity would be fine. And on a more micro level, living while knowing that I and everyone else have one year left to live, and that it’s my fault, sounds utterly agonizing.
There are tons of groups with significant motivation to publish just about anything detrimental to transgender people, so yes, it would’ve been published.
Transgender people, total, between both transmasc and transfem individuals, make up around 0.5% of the US population. Hardly abundant. And again, the number of trans people in high-level sports is in the double digits.
https://worldpopulationreview.com/state-rankings/transgender-population-by-state