Right. The first step to a real solution would be to sector the forest into zones separated by a firebreak. Then schedule each zone for a burn every (time interval).
However, cutting such barriers into a forest could island certain species and cause their extinction via genetic drift and diversity loss. (Essentially, each isolated population has a high probability of losing genetic diversity each generation, so eventually the whole population becomes fragile and dies out.)
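As a toy illustration of the drift argument, here is a minimal Wright-Fisher simulation; all the parameters (population sizes, generation count, trial count) are invented for illustration. A small islanded population loses its allelic variation far faster than a large connected one.

```python
import random

def fixation_rate(pop_size, generations, p0=0.5, trials=300, seed=1):
    """Wright-Fisher drift: each generation draws 2N gene copies (with
    replacement) from the parent allele pool. Returns the fraction of
    trials in which one allele was lost entirely (diversity gone)."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        p = p0
        for _ in range(generations):
            # binomial draw of allele copies for the next generation
            copies = sum(rng.random() < p for _ in range(2 * pop_size))
            p = copies / (2 * pop_size)
        if p in (0.0, 1.0):
            fixed += 1
    return fixed / trials

small = fixation_rate(pop_size=10, generations=80)   # tiny "islanded" zone
large = fixation_rate(pop_size=100, generations=80)  # connected forest
```

With these numbers the tiny population has lost one allele in nearly every trial, while the larger one usually still carries both.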
A person can write things down, so I suspect an incorrect answer on a test with unlimited time means one of two things:
The person got bored and didn't check enough to catch every error, or didn't possess a fact the test writer expected every taker to know.
The question itself is wrong. (A correct question is one where, after all constraints are applied, one and only one answer exists.)
As in: no time limit, the test taker allowed to read any reference that doesn't directly contain the answer, and unlimited lifespan and focus. Note also that the harder IQ test questions, as written today, are wrong in absolute terms: multiple valid solutions satisfy all the constraints. (With the usual cop-out of a "best" answer, without defining the algorithm used to rank answers as best.)
The MCAT and its dental equivalent are other examples of such tests. Every well-prepared student can answer every question, but there is a time limit.
While I agree these are 2 different quantities, when we say "intelligence test" we mean cognitive capacity. Every problem on an IQ test can eventually be solved by someone without gross brain deficits. They might need some weeks of training first to understand the "trick" a test maker looks for, but after that they can solve every question. So an IQ test score measures problems solved within a time limit (one that cannot give any living human enough time to solve every question, or else the test has an upper bound on what it can measure), plotted on a Gaussian.
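To make the "plotted on a Gaussian" step concrete, here is the standard percentile-to-IQ conversion (mean 100, SD 15); a sketch of the scoring convention, not anything specific to a particular test.

```python
from statistics import NormalDist

def iq_from_percentile(pct):
    """Map a population percentile (0-100) to an IQ score using the
    conventional mean-100, SD-15 Gaussian scaling."""
    z = NormalDist().inv_cdf(pct / 100)
    return 100 + 15 * z

# Median performance is IQ 100 by construction; out-scoring ~98% of
# takers lands around IQ 131.
median_iq = iq_from_percentile(50)
top_2pct = iq_from_percentile(98)
```

Note that the raw quantity being ranked is just "problems solved before time ran out"; the Gaussian rescaling is a reporting convention layered on top.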
So IQ testing an AI system will be tough, since obviously it would need about a second to run all the questions in parallel through however many stages of neural networks and other algorithms it uses. And then it will either miss a question because it doesn't have the algorithm to answer one of a particular type, or because it doesn't have information the test maker assumed all human beings would have.
This is true. Keep in mind that the AGI is trying to make money, so it has to find securities where it predicts humans will move the price in a predictable direction over a short time horizon.
Most securities will change their price purely by random chance (or in a pattern no algorithm can find) and you cannot beat the market.
Now there is another strategy, one used by highly successful hedge funds. If you are the news, you can make the market move in the direction you predict. Certain funds do their research and, from a mixture of publicly available and probably insider data, find companies in weak financial positions. They then sell them short with near-term strike prices on the options and publicly announce their findings.
This is a strategy AGI could probably do extremely well.
Error in paragraph one. Suppose the drug company stock is $10 and from your sleuthing you predict it will be $20 once the trial results release. There are a finite number of shares you can buy in the interval between $10 and $20. In the short term you will exhaust the market's order book, and longer term you will drive the price to $20. Hedge funds that can leverage trillions routinely cause exactly this.
Error in paragraph 2: the return on increasing intelligence is diminishing. You will not get double the results for double the intelligence. (Note I still think the singularity is possible, but because the intelligence increase would be on the order of a million to a billion times the combined intelligence of humanity once you build enough computers and network them with enough bandwidth.)
Yeah this is one where it seems like as long as the delegator and task engine are both rational (aka manager and worker) it works fine.
The problems show up in 2 ways: when what the organization itself is incentivized by is misaligned with the needs of the host society, or when incomplete bookkeeping at some layer, corruption, or indifference creates inefficiencies.
For example, prisons and courts are incentivized to have as many criminals needing sentencing and punishment as possible, while the host society would benefit from less actual crime and fewer members having to suffer through punishment.
But internal to itself, a court system creating lots and lots of meaningless hearings (meaningless in that they are rigged to a known outcome, or to a random outcome that doesn't depend on the inputs, and are thus a waste of everyone's time) or a prison keeping lots of people barely alive through efficient frugality is correct by these institutions' own goals.
This is correct. The reason is that the stock market has exhaustible gradients. Suppose you have an algorithm that can find market-beating investment opportunities. Due to the EMH there will be a limited number of these, and there will only be finite shares for sale at a market-beating price. Once you buy out all the underpriced shares, or sell all the overpriced shares you are holding (by "shares" I also include derivatives), the market price will trend to the efficient price as a result of your own action.
And you have a larger effect the more money you have. This is why successful hedge funds are victims of their own success.
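A toy order-book sketch of that exhaustion effect; all prices and sizes are invented. Buying an underpriced stock lifts the cheapest asks first, so your own buying moves the quoted price toward fair value.

```python
def buy_through_book(asks, cash):
    """Spend `cash` lifting asks from cheapest upward. Mutates `asks`;
    returns (shares_bought, new_best_ask). A toy model: real books
    refill, but the first liquidity taken is always the cheapest."""
    shares = 0
    for i, (price, size) in enumerate(asks):
        take = min(size, int(cash // price))
        shares += take
        cash -= take * price
        if take < size:
            asks[i] = (price, size - take)
            return shares, price
    return shares, None  # exhausted the entire book

# Stock "worth" $20 on the coming news, asks stacked from $10 to $19.
asks = [(10 + tier, 1000) for tier in range(10)]
bought, best_ask = buy_through_book(asks, cash=60_000)
# A single $60k buyer has already pushed the best ask from $10 to $15.
```

The more capital you deploy, the further up the book you trade, which is exactly the "victim of your own success" effect described above.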
To add a simple observation to the more detailed analysis: human brains have real-world noise affecting their computations. So the preference they exhibit when their internal preferences are almost the same is going to be random. This is also the optimal strategy for a game like rock paper scissors: choose randomly from the 3 classes, because any preference for a class can be exploited, as you found out.
We can certainly make AI systems that exhibit randomness whenever 2 candidate actions are close together in heuristic value.
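A minimal sketch of that tie-breaking rule; the function name and the epsilon threshold are my own invention. Pick the argmax, but randomize among actions whose values fall within epsilon of the best.

```python
import random

def choose_action(action_values, epsilon=0.05, rng=random):
    """Pick the best-valued action, but break near-ties (values within
    `epsilon` of the max) uniformly at random, mimicking the noisy
    tie-breaking a biological brain exhibits."""
    best = max(action_values.values())
    near_best = [a for a, v in action_values.items() if best - v <= epsilon]
    return rng.choice(near_best)

# Rock-paper-scissors with no read on the opponent: all values equal,
# so play collapses to the unexploitable uniform-random strategy.
values = {"rock": 0.0, "paper": 0.0, "scissors": 0.0}
seen = {choose_action(values) for _ in range(200)}
```

When one action clearly dominates, the randomness disappears and the agent just plays the argmax.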
The reason to compare it to fission is the self-amplifying gain. For a fission reaction, that quirk of physics is called criticality: the neutrons produced self-amplify, leading to exponential gain. Until sufficient fissionable material was concentrated (the Chicago pile), there was zero fission gain and you could say fission was only a theoretical possibility.
Today human beings design and participate in building AI software, AI computer chips, and robots for AI to drive. They also gather the resources to make these things.
The 'quirk' we expect to exploit here is that human minds are very limited in I/O and lifespan, and have many inefficiencies and biases. They are also millions of times slower than computer chips that already exist, at least for individual subsystems. They were designed by nature to handle far more limited domain problems than the ones we face now, and thus we are bad at them.
The 'quirk' therefore is that if you can build a mind superior to a human's, but just as robust and broad in capabilities, and the physical materials are a small amount of refined silicon or carbon with small energy requirements (say a 10 cm cube requiring 1 kW), you can order those machines to self-replicate, getting the equivalent of adding trillions of workers to our population without any of the needs or desires of trillions of people.
This will obviously cause explosive economic growth. Will it be over 30% in a single year? No idea.
One comment: nuclear fission generated explosive bursts of energy and enormous increases in the amount of energy humans could release (destructively). Very likely the "megatons per year" growth rate was 30 percent in some years of the '60s and '70s.
Yet if you moved the plot backward to 1880 and asked the most credible scientists alive if we would find a way to do this, most would be skeptical and might argue that the year-over-year increase in dynamite production didn't show 30 percent growth.
What was McAfee actually facing? Wouldn’t he have been able to plead and get minimum security (club fed) like most other wealthy defendants accused of financial crimes and tax evasion?
Reference class forecasting is k-way regression, right?
One issue is that recent events—the pandemic, cryptocurrency—seem to just be “off the graph” events. You can try to use the “Spanish flu” as a predictor for the pandemic but it was so far away in time and world structure as to be useless. Cryptocurrency can be compared to the Tulip mania and other bubbles but again it’s not the same.
We can’t predict something then with this method if we don’t have references.
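For concreteness, reference class forecasting viewed as nearest-neighbour regression might look like this; the data and the feature/outcome framing are invented for illustration. Note the failure mode discussed above: even when the query sits far from every reference, the method still averages the "nearest" cases and returns a number, and that number carries little information.

```python
def knn_forecast(history, query, k=3, distance=lambda a, b: abs(a - b)):
    """Reference-class forecast as k-nearest-neighbour regression:
    average the outcomes of the k past cases most similar to `query`.
    With no genuinely close references, the answer is near-arbitrary."""
    ranked = sorted(history, key=lambda case: distance(case[0], query))
    return sum(outcome for _, outcome in ranked[:k]) / k

# Invented (bubble_size, crash_depth) reference cases.
past = [(1.0, 0.30), (1.2, 0.35), (3.0, 0.60), (3.1, 0.55)]
estimate = knn_forecast(past, query=1.1, k=2)  # two close references exist
off_graph = knn_forecast(past, query=50.0, k=2)  # no close references at all
```

The second call succeeds mechanically but the "references" it averages are nothing like the query, which is exactly the pandemic/cryptocurrency problem.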
Well, sorta. For my entire lifespan the science press has been full of breathless optimism. A professor somewhere wrote a paper and got something to vaguely work, and thus flying cars and cyborgs or free energy are 5 minutes away!
Obviously nothing came of any of that. The things that led to progress had money—gigadollars—behind them. Like these white LEDs, and the chip in the device I use to write this message, and its OLED screen, and so on. And it took years and years and many generations of the tech past the breathless-article stage—at least 20 years for OLED—to not suck.
The view of most people—arguably one that could be considered rational—is that unless an event has a nonzero chance of being something we can personally experience, it doesn't matter.
This is likely the reason most major civilizations happened to adopt religion. Most religions contain some promised form of accounting for our actions.
Moreover, this is why, if there were no cryonics or AI—potential developments with nonzero chances of allowing at least some of us here to personally see this future—this community wouldn't exist. If there is no hope there can be no progress.
Sure. For a new build in your climate zone, probably the most efficient setup is a tanked condensing natural gas water heater, ideally sorta centrally located. Then a hydronics air handler and vents that just cover the immediate area around the installation. This gives you the cost advantage of natural gas for most of the heating but you avoid the equipment cost of a second furnace. Tankless condensing is an option but in your biome there probably isn’t a sufficient advantage.
Then mini splits around the periphery for heating/cooling during most days.
There are other metrics, such as HSPF, meant to factor in aggregate performance, since by choosing a fixed temperature you neglect all the days where the mini split has a huge efficiency advantage over combustion. You also overlook the zoning: larger houses with extra rooms that are not always in use benefit from not heating those areas. And the solar: at your high local electric rates, solar has a rapid payoff.
There is a significant flaw in your calculations. https://www.eia.gov/electricity/monthly/epm_table_grapher.php?t=epmt_5_6_a [average price of electricity].
The EIA's data says 10.9 cents per kWh is the national average price of electricity. Essentially, for whatever reason (regulatory capture, a mishap with a nuclear plant, onerous local regulations) you are paying 2.4 times what you 'should' be paying, given that your local power company should be able to buy natural gas generators and fuel for around the same price as a power company anywhere else.
Second, mini split efficiencies are significantly higher than your numbers reflect. https://www.energystar.gov/products/most_efficient/central_air_conditioners_and_air_source_heat_pumps
5.29 COP (COP is EER/3.4) is what the Fujitsu RLS3 gets, which is the bare minimum your neighbors would be installing. There are more efficient models not listed on this chart, such as this 40 SEER model.
So the ‘high end’ estimate is actually the average and not high enough.
Anyways, if you paid 2.4 times less for electricity, a heat pump would be 65 percent as expensive as your boiler. Combine that with a solar array, and remember the other advantages of mini splits: redundancy, zoning, and air conditioning. Redundancy, because a typical house will have 2-5 mini splits, so a failure of one is not a failure of climate control. Zoning, aka turning on just the units in the occupied rooms, can add another factor-of-2 energy savings on top of the above. And you get air conditioning on the days you need it.
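Rough cost-per-kWh-of-heat arithmetic behind that comparison. The gas price and boiler efficiency here are my assumptions, not figures from the discussion, so treat the exact ratios as illustrative.

```python
# All figures illustrative; gas price and boiler efficiency are assumed.
ELEC_LOCAL = 0.262      # $/kWh, ~2.4x the national average
ELEC_NATIONAL = 0.109   # $/kWh, EIA national average
GAS = 0.036             # $/kWh of fuel energy (~$1.05/therm, assumed)
COP = 5.29              # heat pump coefficient of performance (EER 18 / 3.4)
BOILER_EFF = 0.95       # condensing boiler efficiency (assumed)

heat_pump_local = ELEC_LOCAL / COP        # $/kWh of delivered heat
heat_pump_national = ELEC_NATIONAL / COP
boiler = GAS / BOILER_EFF
```

Under these assumptions the heat pump loses at the inflated local electric rate but wins clearly at the national-average rate, which is the point of the EIA comparison above.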
Do we have a quantitative measurement for "all the time"? We have, in living memory, the emergence of HIV, which presumably also came from an animal host initially. And the previous 2 variants of covid, which were not very contagious.
Please note I am not "convinced" either way. I am just noting that a gain-of-function experiment is a specific set of conditions that might take nature decades to centuries to replicate by chance. It is a plausible method for the virus evolving. The other way being that lab field workers are going to collect more exotic specimens than commercial meat sellers, going deeper into caves, etc. All it would have taken is a mistake, or counterfeit equipment such as HEPA filters, a problem that appears to be more common with current Chinese industry than with equipment from more mature name-brand western companies.
There are multiple hypotheses and insufficient evidence to settle on just one.
The 'gain of function' experimental design—where a chain of lab animals is used, with a slightly harder-to-cross barrier between each animal—would produce similar 'natural mutation' patterns. The difference is that it makes the actual creation of a novel pandemic-causing virus many, many times as likely as the same infection chain occurring by chance in nature.
What we have now is like looking at the residue of a nuclear meltdown, but we can't examine the actual reactor, and the owners of the territory where the meltdown occurred are actively suppressing evidence. Nature can produce a nuclear reactor, and has at least once; it just isn't likely.
I am not saying it is bullshit. But failing to consider information also has a cost. And for some fields, “consistently good decisions” may not even be possible.