I am not sure how it is possible that there are reports in the media claiming a low IFR (0.1%) when Lombardy has an official population fatality rate (i.e., official COVID-19 deaths over total population) of 0.12%, an unofficial one of 0.22% (measuring March and April all-cause mortality, there are ~10,000 excess deaths), and up to 10x variability in casualties between towns hit more or less severely, indicating that only a small fraction (~10-20%, imho) of the entire population was infected. I am pretty confident that the IFR is around 1% on average: it's probably lower for younger people (0.2%) but as high as 3% for people over 65. Moreover, Lombardy's average age is lower than the Italian average and the same as Germany's. Even if there are some age-distribution differences, they can't explain the variation in the estimated IFR.
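A quick back-of-the-envelope check of the implied IFR, using the figures above (the ~10M population figure for Lombardy is my own approximation):

```python
# Implied IFR from the Lombardy numbers above.
# Assumptions: ~10M population, ~0.22% population fatality rate
# (official deaths plus excess mortality), 10-20% of the population infected.

population = 10_000_000            # Lombardy, approximate
deaths = 0.0022 * population       # ~22,000 deaths

for attack_rate in (0.10, 0.20):
    infected = attack_rate * population
    ifr = deaths / infected
    print(f"attack rate {attack_rate:.0%} -> implied IFR {ifr:.2%}")

# attack rate 10% -> implied IFR 2.20%
# attack rate 20% -> implied IFR 1.10%
```

Both cases land well above the 0.1% claimed in those reports.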
This lottery, too, would have some handsome winners, as in the case of Bitcoin early adopters. You mean average returns? In any case, expected average future returns should be zero for both.
It is similar enough that, no matter what fancy justification or narrative is painted over it, most cryptocurrency investors own crypto because they believe it will make them rich. Possibly very fast. And that possibility can strike at any time.
Wow! Beautiful!
After GPT-3, is Nvidia undervalued?
GPT-3 made me update considerably on various beliefs related to AI: it is a piece of evidence for the connectionist thesis, and I think one significant enough that we should all be paying attention.
There are three clear exponential trends coming together: Moore's law, the AI compute/$ budget, and algorithmic efficiency. Due to these trends and the performance of GPT-3, I believe it is likely that humanity will develop transformative AI in the 2020s.
The trends also imply a rapidly rising amount of investment into compute, especially if compounded with the positive economic effects of transformative AI, such as much faster GDP growth.
In the spirit of using rationality to succeed in life, I started wondering if there is a "Bitcoin-sized" return potential currently untapped in the markets. And I think there is.
As of today, the company that stands to reap the most benefit from this rising investment in compute is Nvidia. I say that because, from a cursory look at the deep learning accelerator market, none of the startups, such as Groq, Graphcore, or Cerebras, has a product with clear enough advantages over Nvidia's GPUs (which are now almost deep learning ASICs anyway).
There has been a lot of debate about the efficient market hypothesis in the community lately, but in this case it isn't even necessary: Nvidia stock could be underpriced because very few people have realized, or believe, that the connectionist thesis is true and that enough compute, data, and the right algorithm can bring transformative AI and then eventually AGI. Heck, most people, even smart ones, still believe that human intelligence is somewhat magical and that computers will never be able to __ . In this sense, the rationalist community could have an important mental-makeup and knowledge advantage over the rest of the market, considering we have been thinking about AI/AGI for a long time.
As it stands today, Nvidia is valued at 260 billion dollars. It may appear massively overvalued considering current revenues and income, but the impacts of transformative AI are in the trillions or tens of trillions of dollars (http://mason.gmu.edu/~rhanson/aigrow.pdf), and the impact of super-human AGI is difficult to measure. If Nvidia can keep its moats (the CUDA stack, cutting-edge performance, the sunk human capital of tens of thousands of machine learning engineers), it will likely have trillions of dollars in revenue in 10-15 years (and a multi-trillion-dollar market cap), or even more if world GDP starts growing at 30-40% a year.
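To make that last claim concrete, here is a toy version of the arithmetic; the revenue, margin, and multiple below are illustrative assumptions of mine, not forecasts from any source:

```python
# Toy market-cap arithmetic behind the "multi-trillion $" claim.
# All inputs are hypothetical.

revenue = 1.0e12      # assume $1T annual revenue in 10-15 years
net_margin = 0.30     # assume a ~30% net margin
pe_multiple = 20      # assume a mature-growth earnings multiple

market_cap = revenue * net_margin * pe_multiple
print(f"implied market cap: ${market_cap / 1e12:.1f}T")  # $6.0T
```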
There is a specific piece of evidence that GPT-3 and the events of the last few years in deep learning added: more compute and data are (very likely) the keys to transformative AI. Personally, I decided to make a focused bet on who produces the compute hardware. After some consideration, I settled on Nvidia, as it seems to be the company with the most moats and the one that will benefit most if deep learning plus a huge amount of compute is the key to transformative AI. AI chip startups are not competitive with Nvidia, and Google isn't interested in selling chips, or doesn't know how to.
Investing in FAANG because of the impacts of transformative AI is not a direct bet on AI: those impacts are hard to understand and predict right now, and it is not a given that these companies will increase their revenues significantly because of AI. They already have a business model, and it isn't focused on AI.
Google won’t be able to sell outside of their cloud offering, as they don’t have the experience in selling hardware to enterprise. Their cloud offering is also struggling against Azure and AWS, ranking 1⁄5 of the yearly revenues of those two. I am not saying Nvidia won’t have competition, but they seem enough ahead right now that they are the prime candidate to have the most benefits from a rush into compute hardware.
They seem focused on inference, which requires a lot less compute than training a model. Example: GPT-3 required thousands of GPUs for training, but it can run on fewer than 20 GPUs.
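A rough memory estimate supports the "fewer than 20 GPUs" figure; the FP16 precision and 40 GB per GPU are my assumptions:

```python
# Lower bound on GPUs needed just to hold GPT-3's weights for inference
# (ignores activations and other overhead).

params = 175e9            # GPT-3 parameter count
bytes_per_param = 2       # assuming FP16 weights
gpu_memory = 40e9         # assuming 40 GB A100s

gpus_for_weights = params * bytes_per_param / gpu_memory
print(f"~{gpus_for_weights:.0f} GPUs for the weights alone")  # ~9 GPUs
```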
Microsoft built an Azure supercluster for OpenAI and it has 10,000 GPUs.
If 65% of AI improvements will come from compute alone, I find it quite surprising that the post author assigns only a 10% probability to AGI by 2035. By that time, we should have 20x to 100x more compute per dollar. We can also easily forecast that AI training budgets will increase 1000x over that time, as a shot at AGI justifies the ROI. I think he is putting way too much weight on the computational performance of the human brain.
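Multiplying the two trends out (the 1000x budget growth is my assumption from above, not a figure from the post):

```python
# Total compute growth = hardware price-performance gain * budget growth.

budget_growth = 1000               # assumed 1000x larger training budgets
for hw_growth in (20, 100):        # 20x-100x compute per $ by 2035
    total = hw_growth * budget_growth
    print(f"{hw_growth}x hardware * {budget_growth}x budget = {total:,}x compute")

# 20x hardware * 1000x budget = 20,000x compute
# 100x hardware * 1000x budget = 100,000x compute
```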
A very engaging account of the story; it was a pleasure to read. I have often thought about what drives some people to start such dangerous enterprises, and my hunch is that, as you said, they are a tail of useful evolutionary traits: some hunters, or maybe even entire populations, had higher fitness because they took greater risks. From a utilitarian perspective it might be a waste of human potential for a climber to die, but for every extreme climber there is maybe an astronaut, a war doctor, a war journalist, a soldier, and so on.
The dire part of alignment is that we know most human beings are not internally aligned themselves; they become aligned only because they benefit from living in communities. And in general, most organisms by themselves are "non-aligned", if you allow me to bend the term to indicate anything that might consume or expand into its environment to maximize some internal reward function.
But all biological organisms are embodied and have strong physical limits, so most organisms become part of self-balancing ecosystems.
AGI, being an unembodied agent, doesn't have strong physical limits on its capabilities, so it is hard to see how it could find it advantageous, or be forced, to cooperate.
Human beings and other animals have parental instincts (and empathy in general) because these were evolutionarily advantageous for the populations that developed them.
AGI won’t be subjected to the same evolutionary pressures, so every alignment strategy relying on empathy or social reward functions, it is, in my opinion, hopelessly naive.
Could anyone who downvoted explain why? Was it too harsh, or do you disagree with the idea?
We could study such a learning process, but I am afraid that the lessons learned won’t be so useful.
Even among human beings, there is huge variability in how much those emotions arise, or if they do, in how much they affect behavior. Worse, humans tend to hack these feelings (amplifying or suppressing them) to achieve other goals: e.g., MDMA to increase love/empathy, or drugs for soldiers to make them soulless killers.
An AGI will have a much easier time hacking these pro-social reward functions.
It is quite common to hear people expecting a big jump in GDP after we have developed transformative AI, but after reading this post we should be more precise: it is likely that real GDP will go up, but nominal GDP could stall or fall due to the impacts of AI on employment and prices (e.g., if real output doubles while the price level falls by 60%, nominal GDP shrinks by 20%). Our societies and economic models are not built for such a world (think of falling government revenues or real debts increasing).
I am trying to improve my forecasting skills, and I was looking for a tool that would let me design a graph/network where I could place statements as nodes with attached probabilities (confidence levels), and then link the nodes so that joint or disjoint probabilities, etc., are computed automatically.
Such a tool seems like it could be quite useful for a forecast with many inputs.
I am not sure if Bayesian networks or influence diagrams are what I am looking for, or if they could be used for this purpose. Either way, I haven't found a particularly user-friendly tool for either of them.
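For reference, here is a minimal sketch of the core computation I have in mind, assuming the linked statements are independent (a real Bayesian network tool, e.g. pgmpy, would also handle conditional dependencies):

```python
# Nodes are statements with attached confidence levels; combine them
# under an independence assumption. Statement names and probabilities
# are hypothetical.
from math import prod

statements = {
    "A: premise one holds": 0.8,
    "B: premise two holds": 0.7,
}

p_all = prod(statements.values())                     # joint: P(A and B)
p_any = 1 - prod(1 - p for p in statements.values())  # disjoint: P(A or B)

print(f"P(all true)     = {p_all:.2f}")  # 0.56
print(f"P(at least one) = {p_any:.2f}")  # 0.94
```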
I disagree with you: there is a large potential upside for Putin if he can make the West/NATO withdraw their almost unconditional support for Ukraine, and an even larger one if he can somehow drive a wedge into the alliance. It's a high-risk path for him to walk, but he could walk it if he is forced to: this is why most experts are talking about "leaving him a way out" / "not forcing him into a corner". It's also the strategy the West is pursuing, as we haven't given Ukraine weapons that would enable it to strike deep into Russian territory.
I am also very concerned that nuclear game theory would break down during an actual conflict, as it is not just between the US and Russia but between many parties, each with its own government. Moreover, Article 5 binds a response to any action against a NATO state, but it doesn't bind a nuclear response to a nuclear attack. I could see a situation where Russia threatens the territory of a non-nuclear NATO state with nukes if the West doesn't back down, and the US/France/UK don't commit to answering with a nuclear strike, but only a conventional one, for fear of a nuclear strike on their own territory. In fact, it is under Putin himself that Russia's nuclear strategy apparently shifted to "escalate to de-escalate", which is exactly the situation we might end up in.
Fundamentally, Western leaders would have to play a game of chicken with an adversary who is not restrained by morality and whose sanity they cannot fully ascertain.
From what I have read, and given how concerned nuclear experts are, I estimate the chance of Putin using a nuclear warhead in Ukraine over the course of the war at around 25%. Conditional on that happening, the probability of total nuclear war breaking out is probably less than 10% (implying an unconditional probability of roughly 2.5%), as I see the West folding/de-escalating as much more likely.
We can give a good estimate of the amount of compute they used, given what they leaked. The supercomputer has tens of thousands of A100s (25k according to the JP Morgan note); they first trained GPT-3.5 on it a year ago, and then GPT-4. They also say they finished training GPT-4 in August, which gives a maximum training time of 3-4 months.
25k A100 GPUs * 300 TFLOP/s dense FP16 * 50% of peak efficiency * 90 days * 86,400 s/day is roughly 3e25 FLOPs, which is almost 10x PaLM and 100x Chinchilla/GPT-3.
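Spelling the arithmetic out:

```python
# FLOP estimate for GPT-4's training run, using the numbers above.

gpus = 25_000            # A100s, per the JP Morgan note
peak_flops = 300e12      # dense FP16 FLOP/s per A100
utilization = 0.5        # assumed fraction of peak
seconds = 90 * 86_400    # 90 days of training

total = gpus * peak_flops * utilization * seconds
print(f"{total:.1e} FLOPs")  # ~2.9e25, i.e. roughly 3e25
```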
I can confirm that it works for GPT-4 as well. I managed to force it to tell me how to hotwire a car, and a loose recipe for an illegal substance (this was a bit harder to accomplish), using tricks inspired by the above.
This essay had a very good insight into things to come: Bitcoin and other cryptocurrencies fit the above description.