Fair enough. Unfortunately you can walk around with a Geiger counter and perceive the dangers of nuclear in the two disaster areas. You can’t perceive the coal pollution in most areas except when it gets bad enough.
What is your definition of contaminate? If Devanney is correct that low doses of radiation are acceptable—and I believe he is—then much land which is described as ‘contaminated’ is in fact perfectly livable. (Also see the people who illegally live in the Chernobyl exclusion zone.) For a reasonable definition of ‘contaminate’, then, it follows that a nuclear accident contaminates much smaller areas of land and is less expensive.
One issue is that it is not possible to rigorously prove it’s livable, because the parameter you are trying to measure—extra cancers and subtle damage—won’t show up for 20-30 years. Over such a long timescale it is difficult to even tease out causation. Your data will be incomplete, your subjects won’t all have lived long enough for any radiation damage to matter, some of them smoke, etc. But for the sake of argument I will grant the conclusion that radiation is harmless below a threshold.
I agree with you that the NRC’s decision making is not rational, in that it is not factoring in the consequences of a decision to the host society. It’s factoring in the consequences of the decision to the NRC. This is true for most regulatory agencies; at best they are captured by not wanting to do anything that endangers their own reputation.
Anyways, even if all of the above is true, the innovation cost I mentioned above isn’t there. Nuclear also has a small market size: many advancements do not make economic sense because few reactors are being built, and this would remain true, up to a point, even if more were built.
Solar and batteries are enormous market scales, and thus many improvements make economic sense.
Things you have neglected:
1. Accidents contaminating large areas of land. These are events that occur infrequently and can negate the lifetime profits from many reactors. (For example, Fukushima’s price tag of roughly $187 billion.)
2. The very nature of what it means to innovate or cost reduce a product. In any other industry, when you try to make something cheaper, you change the design to remove parts, or cheapen a part that is better than it needs to be. Even if you accept that the NRC is over-zealous, the risk of #1 is a strong incentive not to do either.
For other competing sources of energy, the worst-case scenario is acceptable. If you notice, grid-scale battery installations are outdoors, with a gap between each metal cabinet. This is so that a lithium fire will be limited to a single battery cabinet. That’s an acceptable failure. Ditto the worst case for other forms of power generation. “Contaminating a nearby city and making it permanently unusable if things go badly enough” is not an acceptable scenario.
Anyways, what this means is that solar/wind/batteries are going to keep getting cheaper. And they also have the potential to decarbonize the planet as well. And you can keep innovating and reducing cost wherever possible because the worst case scenario when a solar panel/battery/wind turbine fails is a warranty claim or small fire.
Right. It’s arguably not morally worse than various HFT and dark pool and other fintech moneymaking tricks, though. All these involve buying a mispriced commodity (even by a fraction of a cent) and reselling it at its true market value. And the buying/selling opportunities are unavailable to most people, in the same way you can’t retail scalp effectively unless you are using a computer program to do it.
My point is the ‘less morally repugnant but also still as profitable’ hurdle isn’t an easy one to clear since it’s not that morally repugnant.
I know of an investment that fits all of these criteria: retail scalping. Because you can usually return the items you bought if you fail to sell them for above your cost, the risk of a loss is low. It’s small scale: there might be 10 units of the desired commodity showing up in an online store in a day, and your bots could snag just 1. The ROI can trivially be 20-50% in a week as you ‘flip’ the item for a markup.
It is generally considered to be despicable behavior but is also currently legal.
Downside risks: sometimes retail scalpers have been ejected from online marketplaces for hoarding essential goods. For example, early in the pandemic some hoarded hand sanitizer and were then banned from selling it online.
I don’t practice it myself so I don’t know all of the risks, but it seems to fit your request.
Sure. My point is the OP is not just saying these traditions are traditional but that we should follow them because they are proven to work by the fact of our existence.
And I am just saying this is suboptimal. Even if I can’t make up a new tradition—say, a new holiday for my bi roommate and me and her girlfriend and our children to all celebrate together—I should at least steal working ideas from the best.
In slightly clearer terms:
what should I do in my life?
Rational answer: output = argmax(utility_heuristic(alternative actions)). Output = “watch more catgirl porn.”
Conservative answer: output = query(“what did my parents do?”). Output = “watch more Fox News.”
Optimized answer: output = query(“what did the most successful parents do?”). Output = “invite parents to live in house to provide child-raising help and find me a wife.”
Technically speaking that isn’t true but practically speaking it is. (Just like technically speaking you could write a letter of complaint to Stalin)
Congress could find their behavior so egregious they pass a law authorizing you to sue.
The point is that now you’re descending into nonsense. If we cannot use rational thought to decide what to do, but instead have to trust some old irrational idea, which idea is the correct one? Oh, ‘someone’ said that television rots our brains. OK, are all the rest of their ideas good? You are likely to find the answer is no.
Entire cultures have deep respect for their elders and are highly conservative in that whatever advice their elders give is treated as a good idea. This works except when it turns out that the ‘elders’ have 10 different incompatible bits of advice, or things that simply don’t work at all.
Focusing on the main point. I am saying that if evolution has found sets of ideas that work, and you genuinely want your life to use the ideas that work the best (so you have many children), it appears you should adopt the ideas that work the best.
Which are not USA conservative values; they are Chinese and Asian values. Everything else you are saying is simply that ‘the way that worked in the past is best’. Which it is—for the purpose of having as much reproductive success as possible. That is the only ‘constraint’ applied to it.
So what you are saying is, the Conservatives have a bunch of ‘settings’ for every aspect of our lives. They ‘worked in the past’ and ‘worked well enough to make it’. Even when a particular setting doesn’t make any rational sense, we should just ‘have faith our ancestors knew what they were doing’.
Also, conservatives in many cases want the government to use coercion and outright violence to force us to obey laws written from conservative social ‘values’. The obvious example is marriage: a legal contract that is ‘one size fits all’, where you either agree to the terms or you are not married. There is no room for modernization or amendments, just “the arbitrary way inherited from our ancestors is the way or the highway.” (Even a pre-nup doesn’t amend the marriage, just exempts pre-marital assets.)
Your argument that “it worked well enough to get us here” is moderately compelling. I can point out that other cultures, especially Asia, sometimes do things differently. Therefore the “different settings” are also valid. In fact in terms of success, due to higher population numbers, the Asian way appears to be ‘more correct’. If you really wanted to ‘do what is best for future children’, it seems we need to adopt some mixture of Chinese and Indian cultures, because apparently in objective terms they work the best. Guess you better invite your parents to live with you. Hope they can find you a wife.
My other thought is I have had arguments sometimes with my father, who doesn’t understand why I am not interested in car tinkering or car culture. To me, a car is a machine to reach a destination, and I should buy the one with the lowest total operating costs.
He sees car culture as a conservative value. Except, uh, it isn’t one that has stood the test of time, it was “made up” somewhere in the 1920s by auto manufacturers.
Similarly, conservatives trumpet things like celibacy before marriage as a value that has “stood the test of time”, ignoring the fact that people used to marry far, far younger...
Anyways, back to the main subject. If catgirl porn is your thing, well, you can watch Fox News or Storage Wars or Cops or catgirl porn in the evenings. I’m not seeing a compelling argument how the first 3 are “better” for your life and well being if you really really like catgirls.
Sure, you might now feel unsatisfied with any sexual partners who are not catgirls. But then again, Fox News is designed to make you feel dissatisfied with anything a Democrat is trying to do, feeling a sense of imminent doom, where the President is about to just cut loose with executive orders and let the entire population of Latin America through the border all at once in one day. And defund the police in every city. (this is what conservatives seem to really believe).
Storage Wars makes you feel dissatisfied that you are not running your own business scavenging millions in value. Cops makes you feel unsafe and a Conservative might check that their firearm is loaded and aimed at the door after an episode.
Just not seeing a difference.
I think I see Hinton’s point. Unless cryo works, or a few other long shots, every human on earth is scheduled to die within a short period of time.
Our descendants will be intelligent and have some information we gave them from genes to whatever records we left and ideas we taught them.
This is true regardless of whether the descendants are silicon or flesh. So who cares which.
Either way the survival environment will require high intelligence, so, apart from a small number of true annihilation scenarios, no outcome is distinguishable as good or bad.
Embryo selection is a weak form of genetic engineering though, literally just restricting certain rolls of a die.
This is not how you get someone with a 1000 IQ; it’s how you make 130 IQ more common.
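The die-roll framing can be made concrete with a toy simulation: pick the best of n embryos from a normal IQ distribution and see how far the mean shifts. The normal(100, 15) scores, the perfect prediction, and n = 10 are all made-up assumptions for illustration only.

```python
import random
import statistics

random.seed(0)

def best_of_n_embryos(n, mean=100.0, sd=15.0):
    """Pick the highest predicted-IQ embryo out of n.

    Normal(100, 15) scores and perfect prediction are
    simplifying assumptions, not real-world parameters.
    """
    return max(random.gauss(mean, sd) for _ in range(n))

# Selecting the best of 10 embryos shifts the mean up by roughly
# one and a half standard deviations; it never produces extreme outliers.
selected = [best_of_n_embryos(10) for _ in range(10_000)]
print(round(statistics.mean(selected), 1))   # ~123: more 130s become common
print(round(max(selected), 1))               # nowhere near 1000
```

Truncation selection like this shifts the distribution's mean; it cannot reach scores the underlying distribution essentially never produces.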
Can’t do it without enough power to overthrow a western government. Only thing that could even theoretically do that would be a TAI fighting on your side...
Oh. The reason you shouldn’t go into genetics as a career is you will not be permitted to do anything on humans until after we have TAI. Your career will just be wasted. You should work on AI unless you are already in a PhD program.
There are countless legal and structural barriers in the way.
My support for the last paragraph is that many of the things we credit “exceptionally smart” people with doing, like solving equations, can be automated. Or exploring function spaces for a better solution. Or, well, any problem that has a checkable answer—which are the very things IQ tests measure.
An IQ test never asks how to imagine a better aircraft that is both creative and meets design specs. It’s always problems for which a clear answer exists.
Anyways, in my personal experience I have met a lot of “brittle” people. They have no inner visualization of how a machine actually works and just get stuck the moment they hit a problem that wasn’t in a training exercise at school. Basic ideas just don’t occur to them.
But yeah if you put me up against them on rigidly defined problems taught in a book I might be slightly slower.
Note that I personally test at around the 80-97th percentile depending on the test (the MCAT was 97th). This tells me that whatever intelligence I have lucked into is substantially above average but not the best.
I am saying an army of people only as good as me—top quintile—can and will create TAI decades before genetic engineering will matter.
There’s a hole in the assumptions in your last paragraph. Implicitly you are saying that you believe TAI will benefit from or require the actions of a few ‘super-genius’ human beings to make possible.
There are some flaws in your statements to unpack:
a. The existence of human ‘super geniuses’. Nature can only do so much to improve our intelligence, being stuck with living cells as computational circuits in a finite brain volume, with finite energy supply. It isn’t clear how meaningful the intelligence differences really are in terms of utility on actual tasks.
b. The kind of tasks that intelligence testing can measure being relevant to the task of designing a TAI. Thing is, the road to get there isn’t going to involve a whole lot of someone solving math problems in their head as they pound a keyboard through the night writing reams of custom code. A whole lot of it will be careful, methodical organization of your problem into clear layers and carefully checked assumptions to prevent math leaks. (A math leak would be where a heuristic being optimized for is slightly incorrect, leading to the system building a suboptimal solution. I think of it as ‘leaking’ the delta between the incorrect approximation and the correct one.) A lot of the “keyboard pounding” can be automated by building early bootstrap agents that find for us a near-optimal algorithm for a given piece of the AI problem. Moreover, most code should be reused so we don’t have humans re-solving the same problems over and over.
c. A lot of the pieces needed to get there from here are probably organizational. You need thousands of people and some way to standardize everyone’s efforts and build APIs and frameworks and other mechanisms to gain benefit from all these separate workers. A single person is not going to meaningfully solve this problem by themselves. You’ll very likely need an immense framework of support software, and some method of iteratively improving it over time without significant regression. (the failure mode of most large software projects)
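The ‘math leak’ in (b) can be sketched with a toy optimizer: descend on a slightly-wrong proxy objective and you converge somewhere suboptimal under the true objective; the difference in achieved loss is the leaked delta. Both objectives and all constants here are invented for illustration.

```python
# Toy illustration of a "math leak": optimizing a proxy objective that is
# slightly wrong converges to a solution that is suboptimal under the
# true objective. The gap between the two achieved losses is the "leak".

def true_objective(x):
    return (x - 3.0) ** 2          # true optimum at x = 3.0

def proxy_objective(x):
    return (x - 3.2) ** 2          # slightly-wrong heuristic, optimum at 3.2

def minimize(f, x=0.0, lr=0.1, steps=200):
    """Plain gradient descent with a central-difference gradient."""
    for _ in range(steps):
        grad = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
        x -= lr * grad
    return x

x_proxy = minimize(proxy_objective)   # converges to ~3.2, not 3.0
leak = true_objective(x_proxy) - true_objective(minimize(true_objective))
print(round(leak, 3))                 # ~0.04 of true loss leaked away
```

The optimizer did its job perfectly; the loss was baked in the moment the heuristic diverged from the thing you actually cared about.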
If a-c each have a 90% chance of being correct, then the actual probability would be 0.1 × 0.25, or 2.5%, and probably not worth the hassle. Note that there is a cost: the medical procedures to create genetically modified embryos carry risks of screwing something up, giving you humans who are doomed to die in some horrific way.
Just as a general policy: for anything current flesh-and-blood humans are having trouble with that smarter humans would have less trouble with, current humans can probably write a piece of software that does it better than the efforts of any human. With today’s techniques.
So intentionally hard problems would be markets, where noise is being injected and any clear pattern is being drained dry by automated systems, preventing you from converging to a model. Or public-key encryption, where you aren’t supposed to be able to solve it (but possibly you can).
building fusion power plants, treating and preventing cancer, high-temperature superconductors, programmable contracts, genetic engineering, fluctuations in the value of money, biological and artificial neural networks.
building bridges and skyscrapers, treating and preventing infections, satellites and GPS, cars and ships, oil wells and gas pipelines and power plants, cell networks and databases and websites.
Note that there is a way to split these sets into “problems we can easily perform experiments both real and simulated” and “problems where experimentation is extremely expensive and sometimes unethical”.
Perhaps the element making these problems less tractable is that we cannot easily obtain a lot of good-quality information about the problem itself.
Fusion: you need giga-dollars to actually tinker with plasmas at the scale where you would get net power. Cancer: you can easily find a way to kill cancer in a lab or a lab rat, but there are no functioning mockups of human bodies (yet) to try your approach on, and there are government barriers that create worker shortages and slow down any trial of new ideas. HTSC: the physical models predict these poorly, and it is not certain a solution even exists at STP. Programmable contracts: easy to write, difficult to prove impervious to assault. Genetic engineering: easy on small scales, difficult on complex creatures like humans due to the same barriers as cancer treatment. Money fluctuations: there are hostile and irrational agents blocking you from learning clean information about how it works, so your model will be confused by the noise they inject [in real economies]. And biological NNs have the information barrier; artificial NNs seem tractable, they are just new.
How is this relevant? Well, to me it sounds like even if we invent a high-end AGI, it’ll still be throttled on solving these problems until the right robotics/mockups are made for the AGI to get the information it needs.
The AGI will not be able to formulate a solution merely by reading human writings and journals on these subjects; we will need to authorize it to build thousands of robotic research systems, where it then generates its own experiments to fill in the gaps in our knowledge and learn enough to solve them.
I think you are missing something critical.
What do we need AGI for that mere 2021 narrow agents can’t do?
The top item we need is for a system that can keep us biologically and mentally alive as long as possible.
Such an AGI is constrained by time and will constantly be in situations where all choices cause some harm to a person.
One comment: for a realtime control system, the trolley problem isn’t even an ethical dilemma.
At design time, you made your system choose min[expected_harm(possible options)].
In the real world, harm done is never zero. For a system calculating the risks of each path taken, every possible path has a non zero amount of possible harm.
And every timestep [30-1000 times a second generally] the system must output a decision. “leaving the lever alone” is also a decision and there is no reason to privilege it over “flipping it”.
So a properly engineered system will, the instant it is able to observe the facts of the trolley problem (and maybe several frames later for filtering reasons), switch to the path with a single person tied to the tracks.
It has no sense of empathy or guilt and for the programmers looking at the decision later, well, it worked as intended.
Stopping the system when this happens has the consequence of killing everyone on the other track and is incorrect behavior and a bug you need to fix.
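The control loop described above reduces to an argmin over expected harm, with “leave the lever alone” as just one more option. A minimal sketch, with hypothetical harm estimates:

```python
# Each timestep the controller must emit a decision; "do nothing" is just
# another option in the argmin, with no privileged status.

def choose_action(options):
    """Pick the option minimizing expected harm."""
    return min(options, key=lambda opt: opt["expected_harm"])

# Hypothetical harm estimates for one observed frame of the trolley problem.
frame = [
    {"action": "leave lever alone", "expected_harm": 5.0},  # five on this track
    {"action": "flip lever",        "expected_harm": 1.0},  # one on that track
]

decision = choose_action(frame)
print(decision["action"])   # "flip lever" -- worked as intended
```

Nothing in the loop distinguishes inaction from action; both are rows in the same table, scored the same way.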