479 mJ/cm2 for skin
Source for this? I see just 23 mJ/cm^2 given in this paper
479 mJ/cm2 for skin
Source for this? I see just 23 mJ/cm^2 given in this paper
It’s false IMO, see my comment below.
Also worth pointing out that multiple companies seem to already be selling far-UVC lamps in the 50W electrical power range (though some go all the way up to 500W)
I do suspect that this is a technology we can roll out right now (with some care about safety!)
https://www.firstuvc.com/product/3.html
https://faruvc.xyz/product/222nm-far-uv-disinfection-light-krypton-36-2/
Whilst the LEDs are not around the corner, I think the Kr-Cl excimer lamps might already be good enough.
When we wrote the original post on this, it was not clear how quickly covid was spreading through the air, but I think it is now clear that covid can hang around for a long time (on the order of minutes or hours rather than seconds) and still infect people.
It seems that a power density of 0.25W/m^2 would probably be enough to sterilize air in 1-2 minutes, meaning that a 5m x 8m room would need a 10W source. Assuming 2% efficiency that 10W source needs 500W electrical, which is certainly possible and in the days of incandescent lights you would have had a few 100W bulbs anyway.
EDIT: Having looked into this a bit more, it seems that right now the low efficiency of excimer lamps is not a binding constraint because the legally allowed far-UVC exposure is so low.
“TLV exposure limit for 222 nm (23 mJ cm^−2)”
23 mJ per cm^2 per day is just 0.002 W/m^2 , so you really don’t need much power until you hit legal limitations.
While KrCl lamps are expensive, I think this post overstates how unviable they are. I think an interested organisation could afford to install & run a bunch of these in an office (within the legal limits) basically right now, and see benefits that are worth the cost.
Well it depends how much UVC power you need. If a KrCl lamp with a bandpass filter is 2% efficient you can still run it at 500 Watts electrical and you have a 10-Watt source. 500 Watts electrical is totally feasible from standard electrical outlets (240 Volts @ 2 Amps), and electricity is cheap compared to the cost of lost productivity and the economic cost of death.
I would be interested to hear from some more qualified people on what power density you really need for it to work.
put various parts of their normal life’s functioning on hold on account of AI being an “emergency.” In the interest of people doing this sanely
People generally find it hard to deal with uncertainty in important matters.
Whilst AI risk may hit very soon, it also may not. For most people, doing really drastic things now probably won’t gelp much now, but it might cause problems later (and that will matter in timelines where AI risk hits in 20-40 years rather than now)
There’s no analogous alignment well to slide into.
If one made a series of alignment-through-capabilities-shift tasks, you would get one.
I.e., you make a training set of scenarios where a system gets a lot smarter and has to preserve alignment through that capability shift.
Of course, making such a training set is not easy(!).
Incidentally, has anyone considered the possibility of raising funds for an independent organization dedicated to discovering the true origin of covid-19? I mean a serious org with dozens of qualified people for up to a decade.
It seems like an odd thing to do but the expected value of proving a lab origin for covid-19 is extremely high, perhaps half a percentage point x-risk reduction or something.
The problem with banning risky gain-of-function research seems to be that, surprisingly, most people don’t understand that it’s risky and although there’s a general feeling that COVID-19 may have originated in the lab at Wuhan, we don’t have definitive evidence.
So the people who are trusted on this are incentivized to say that it’s safe, and there also seems to be something of a coverup going on.
I suspect that if there was some breakthrough that allowed us to find strong technical evidence of a lab origin, there would be significant action on this.
But, alas, we’re trapped in a situation where lack of evidence stalls any effort to seriously look for evidence.
Well, there it is.
AI risk has gone from underhyped to overhyped.
Agreed.
The Weak AGI question on metaculus could be solved tomorrow and very little would change about your life, certainly not worth “reflecting on being human” etc.
How could you use this to align a system that you could use to shut down all the GPUs in the world?
I mean if there was a single global nuclear power rather than about 3, it wouldn’t be hard to do this. Most compute is centralized anyway at the moment, and new compute is made in extremely centralized facilities that can be shut down.
One does not need superintelligence to close off the path to superintelligence, merely a human global hegemon.
If Russia were to nuke Ukraine with a tactical nuke, they will put the US into a position of being forced to respond.
If we go all the way up the escalation ladder to a full nuclear exchange, it’s essentially impossible for Russia to win.
So they probably will need to either not escalate, or plan to deescalate at an intermediate point, e.g. if there’s an exchange of tac nukes or a tac nuke is exchanged for a nasty conventional strike, Russia may intend to stop the escalation at that point.
Russia has much more reason to bark about nukes than to bite. The bite might happen but I don’t see a strong reason for it.
No I think you misunderstood me: I do agree that things are “getting weird”—I’m just saying that this is to be expected to make the 2040 date.
Even if you did that, you might need a superhuman intelligence to generate tokens of sufficient quality to further scale the output.
Kurzweil predicted a singularity around 2040. That’s only 18 years away, so in order for us to hit that date things have to start getting weird now.
I think this post underestimates the amount of “fossilized” intelligence in the internet. The “big model” transformer craze is like humans discovering coal and having an industrial revolution. There are limits to the coal though, and I suspect the late 2020s and early 2030s might have one final AI winter as we bump into those limits and someone has to make AI that doesn’t just copy what humans already do.
But that puts us on track for 2040, and the hardware will continue to move forward meaning that if there is a final push around 2040, the progress in those last few years may eclipse everything that came before.
As for alignment/safety, I’m still not sure whether the thing ends up self-aligning or something pleasant, or perhaps alignment just becomes a necessary part of making a useful system as we move forward and lies/confabulation become more of a problem. I think 40% doom is reasonable at this stage because (1) we don’t know how likely these pleasant scenarios are and (2) we don’t know how the sociopolitical side will go; will there be funding for safety research or not? Will people care? With such huge uncertainties I struggle to deviate much from 50⁄50, though for anthropic reasons I predicted a 99% chance of success on metaculus.
as we get into more complex tasks, getting AI to do what we want becomes more difficult
I suspect that much of the probability for aligned ASI comes from this. We’re already seeing this with GPT ; it often confabulates or essentially simulates some kind of wrong but popular answer.
1.4Q tokens (ignoring where the tokens come from for the moment), am I highly confident it will remain weak and safe?
I’m pretty confident that if all those tokens relate to cooking, you will get a very good recipe predictor.
Hell, I’ll give you 10^30 tokens about cooking and enough compute and your transformer will just be very good at predicting recipes.
Next-token predictors are IMO limited to predicting what’s in the dataset.
In order to get a powerful, dangerous AI from a token-predictor, you need a dataset where people are divulging the secrets of being powerful and dangerous. And in order to scale it, you need more of that.
So we cannot “ignore where the tokens come from” IMO. It actually matters a lot; in fact it’s kind of all that matters.
I don’t know of a formal analysis, but informally it seems that it will be much easier to enforce far-UVC than ventilation. People don’t like ventilation because it makes them cold or costs money, so they will tend to shut it off when they think they can get away with it.
Far-UVC is invisible and won’t cost much compared to heating costs.
I also suspect that it’s inherently more effective at the right doses because ventilation can’t really stop transmission, only reduce the rate somewhat, so there will still be transmission and we may eventually find a pathogen that is contagious enough to still cause problems.