It’s always an emergency, lives are always at stake. That’s just the nature of the pharmaceutical business.
It’s the perception that matters.
I think it’s mostly the setting of a precedent of stripping away intellectual property rights for political expediency that is worrisome. It’s a small step in undermining the rule of law, but a step nonetheless. The symbolic gesture is the problem; it signals to the public that such moves are now not only acceptable, but applaudable.
The stock market disagrees.
I wasn’t trying to argue anything in particular, I’m just using comments as a notebook to keep track of my own thoughts. I’m sorry if it sounded like I was trying to start an argument.
The term “unavoidable innovation” really irks me. It has become this teacher’s password for all the world’s uncomfortable questions. Why was Malthus wrong? Innovation! How do we prevent civilizational collapse? Innovation! How do we solve competition and conflicts for limited resources? Innovation! How can we raise the standard of living without compromising the environment? Innovation!
As if life were fair and nature’s challenges were all calibrated to our abilities, such that every time we run into population limits, the innovation fairy appears and offers us a way out of the crisis. As if real disaster could only ever result from corruption, greed, power struggles and, y’know, things that generally fit our moral aesthetics about how things ought to go wrong; things that would make a good Game of Thrones episode.
Certainly not mundane causes like mere exponential population increase. Because that would imply that Malthus was (at least sometimes) right, that life was a ruthless war of all against all, a rapacious hardscrapple frontier. An implication too horrible to ever be true.
I’m not arguing that the Malthusian trap explains all the civilizational collapses in history, or even Rome in particular. But it is the default failure mode because exponential growth is fast and unbounded, so to avoid it your civilization has to A) prevent population growth altogether, B) outpace population growth with innovation consistently, or C) collapse way before population pressure becomes a problem.
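To make the “exponential growth is fast” point concrete, here is a minimal sketch (all numbers are mine, chosen purely for illustration): a population compounding at a fixed rate eventually overshoots any carrying capacity that innovation raises only linearly, which is why option B really demands that innovation itself keep an exponential pace.

```python
# Illustrative sketch with made-up numbers: exponential population growth
# eventually overtakes any linearly growing carrying capacity.
def years_until_overshoot(pop, capacity, pop_growth_rate,
                          capacity_gain_per_year, max_years=10_000):
    """Return the first year population exceeds carrying capacity, or None."""
    for year in range(max_years):
        if pop > capacity:
            return year
        pop *= 1 + pop_growth_rate          # compounding (exponential) growth
        capacity += capacity_gain_per_year  # linear "innovation" gains
    return None

# Start a thousand-fold below capacity, with generous linear innovation:
# the exponential side still wins within a few centuries.
print(years_until_overshoot(pop=1e6, capacity=1e9,
                            pop_growth_rate=0.02, capacity_gain_per_year=1e7))
```

The only inputs that avoid overshoot are a growth rate of zero (option A) or a capacity term that compounds at least as fast as the population (option B).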
Biotech startups are an extreme example of indefinite thinking. Researchers experiment with things that just might work instead of refining definite theories about how the body’s systems operate.
I find Thiel’s writings too narrative-driven. Persuasive, but hardly precise. Somehow, geographical discoveries, scientific progress and ideas of social justice all fit under the umbrella term “secrets” and… there is some common pattern underlying our failure in each of these respects? Or is one the cause of the other? What am I supposed to learn from these paragraphs? Thiel himself seems very “indefinite” with his critique. Incrementalism is bad, but biotech start-ups should nonetheless “refine definite theories” instead of experimenting at random? Isn’t “refining definite theories” a prime example of incrementalism, and a strategy you would expect more from established institutions anyway? It seems like biotech companies can only do wrong. You could just as easily argue that “refining definite theories” is an example of indefinite thinking: instead of focusing on developing a concrete product, you’re keeping your options open by doing general theory that might come in handy.
In general this writing feels more like a literary critique than a concrete thesis. I can agree with the underlying sentiment but I don’t feel like I’m walking away with a clearer understanding of the problem after reading.
Our careers span decades. Maybe being sleep deprived for a few years can work out, but this is unsustainable in the long run. Steve Jobs died young. Nikola Tesla wrote love letters to his pigeon. Elon Musk’s tweets suggest that he may not be thinking clearly. Meanwhile, Jeff Bezos gets a full 8 hours.
This is motivated reasoning. Taking Elon Musk vs. Jeff Bezos as an example: if their sleep patterns were reversed, you could just as easily have argued “See, that’s why Bezos’s rocket company isn’t as successful as Musk’s”.
The irony is strong with this one.
This is the 3D printing hype all over again. Remember how every object in sight was going to be made in a 3D printer? How we won’t ever need to go to a store again because we’ll be able to just download the blueprint for every product from the internet and make it ourselves? How we’re going to print our clothes, furniture, toys and appliances at home and it’s only going to cost pennies of raw materials and electricity? Yeah right.
So let me throw down the exact opposite predictions for social implications, as if there were absolutely zero innovation in AI:
AI continues to try to shoehorn itself into every product imaginable and mostly fails, because it’s a solution looking desperately for a problem
Almost no labor (big exception: self-driving) has been replaced by robots. The robots that do exist are not ML-based
Universal Basic Income doesn’t see widespread adoption and it has nothing to do with AI, one way or another
<1% of YouTube views comes from AI-generated content
Space is literally the worst place to apply AI—the stakes couldn’t be higher, the training data couldn’t be sparser, and the tasks are so varied and complex that they stretch even the generalization capability of human intelligence; it’s the pinnacle of AI hubris to think AI will “revolutionize” every single field
(I use ML and AI interchangeably because AI in the broad sense just means software at this point)
In fact, since I don’t believe in slow take-off, I’ll do one better: these are my predictions for what will actually happen right up until FOOM.
It’s time for a reality check, not only for AI but for digital technologies in general (AR/MR, folding phones, 5G, IoT). We wanted flying cars, instead we got AI-recommended 140 characters.
If you swapped out “AGI” for “Whole Brain Emulation” then Tim Dettmers’ analysis becomes a lot more reasonable.
And with enough epicycles you can fit the motion of the planets with geocentrism. If MOND supporters can dismiss the Bullet Cluster, they’ll dismiss any future evidence, too.
The note about incentives being larger in North Korea also applies, to a lesser degree, to much of Eastern Europe, where qualifying for the IMO is seemingly enough to get into any university.
I think that’s the case anywhere; qualifying for IMO is a pretty big deal.
According to this post, computers today are only 3 orders of magnitude away from the Landauer limit. So it ought to be literally impossible for the human brain to be six orders of magnitude more efficient. Also, how the hell is the brain supposed to carry out 20 petaflops with only 100 billion neurons and a firing rate of a few dozen hertz? The estimate seems way off to me.
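A rough back-of-envelope using commonly cited textbook figures (my assumed numbers, not taken from the post): counting spikes alone gets nowhere near petaflop scale; the 20-petaflop-class figure only appears if every synaptic event is counted as a floating-point operation.

```python
# Back-of-envelope check of the brain-compute estimate, with rough
# assumed figures (neuron count, firing rate, synapses per neuron).
neurons = 1e11          # ~100 billion neurons
avg_firing_rate = 30.0  # Hz, "a few dozen"

spikes_per_second = neurons * avg_firing_rate
print(f"{spikes_per_second:.0e} spikes/s")       # only ~trillions of events/s

# Petaflop-scale numbers require counting every synaptic event as one op:
synapses_per_neuron = 1e4
synaptic_events = spikes_per_second * synapses_per_neuron
print(f"{synaptic_events:.0e} synaptic events/s")  # ~1e16, i.e. tens of "petaflops"
```

So whether the estimate is “way off” hinges entirely on whether a synaptic event is a fair stand-in for a floating-point operation.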
See, that’s why I asked what the incentive to switch to proof of stake is, not why it’s better. As with climate change, this is a coordination problem.
Sorry, that’s what I meant to ask.
At this point good faith has broken down in this argument; we should stop.
You’re just delegating the problem to an observer reputation system that has the same problem one level deeper. Who actually has an incentive to align the observers’ reputations with what actually happened?