I’m a tech worker. I work 40-70 hours a week, depending on incident load. Nobody I work with or see on a regular basis works less than 40 hours a week, and some work substantially more than that.
My most cognitively productive hours are the four hours in the morning, but there’s plenty of lower-effort but important organizational work to fill the afternoons. I think a good fraction of my coworkers are like me and don’t actually need the job anymore, but we still put forth effort.
I think one of the major missing pieces of your article is “social status pressure”. Most people play the status game; they struggle to get ahead of their neighbors, even if it doesn’t make any sense. They work extra hours to afford that struggle. They demand more than the base necessities and comfort, because that’s how you signal status. It’s pointless and stupid, but IMO one of the biggest issues.
As a reductionist, I view the universe as nothing more than particles/forces/quantum fields/static event graph. Everything that is or was comes from simple rules down at the bottom. I agree with Eliezer regarding many-worlds versus Copenhagen.
With this as my frame of reference, Searle’s argument is trivially bogus, as every person (including myself) is obviously a Chinese Room. If a person can be considered ‘conscious’, then so can some running algorithm on a Turing machine of sufficient size. If no Turing machine program exists that can be considered conscious when run, then people aren’t conscious either.
I’ve never needed more than this, and I find the Chinese Room argument to be one of those areas where philosophy is an unambiguously ‘diseased discipline’.
I think it would be neat to see what other versions of this look like, and possibly have an archive of these somewhere. The question set is great.
I think you might be missing something more obvious here: tech has a huge amount of slack when it comes to money. If I were running a tech event of similar size to what you described, I wouldn’t bother charging, because it would be a waste of my time. When you make half a million dollars a year, funding something like that yourself basically comes out of your fun budget; you don’t really even think twice about it.
Yoga and new age groups though? Not nearly as flush with cash.
The big problem here is that this is a glowfic, and I simply cannot bring myself to read it in that format.
I understand that the glowfic format might be better for authors / creators, but it sucks for me, and (I posit) a lot of other people.
If they really want to make it HPMOR2, it’s going to have to be cleaned up and presented in a different, more readable format. The standard book/chapter format was developed for a reason.
Yes, the naive version of this is bad; but the point of a change like this isn’t its immediate downstream effects. The point is that the system as a whole is a giant adaptive object, and a critical part of its control loop is open. Closing that loop has far, far more impact than the naive version alone suggests.
Consider cause and effect down the timeline:
1. Students are allowed to default, and start defaulting.
2. Loan companies change behavior, both working with existing loan holders (so they don’t default) and becoming more selective about who they lend to.
3. Loans become easier to get for careers and degrees with real earning power (STEM and friends), and harder to get for other degrees.
4. The number of students, and the amount of money coming into universities, drops.
5. Universities actually experience price pressure. They start cutting costs, dropping less useful programs, and shifting resources to the degree programs with the most students.
6. The cost of a university degree slowly drops over time due to reduced demand and reduced funding.
7. Over time, there are broader societal shifts to deemphasize the idea that “everyone needs a degree”. Trade and other schools gain more prominence.
8. Universities start experiencing increased competitive pressure from trade schools.
… and other effects. Also, this is iterative—all of these components take time to respond and adjust to the new equilibrium, after which they will need to re-adapt.
Yes, it’s not a perfect solution, and yes, there’s definitely the concern that poor / disadvantaged students will have more trouble getting loans. But compensating somewhat for this would be the price drop, additional emphasis on trade schools, and deemphasis on needing a degree for any and all jobs.
Another expected objection might be, “with all these possible changes, how do we know this will be better?” To that I would answer: we know the system is at least partially broken precisely because its control loop is open. Any adaptive system with an open control loop is going to produce garbage; the first, most obvious thing to do is to fix that.
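The open-loop versus closed-loop difference can be sketched with a toy simulation. Everything here is invented for illustration (starting price, growth rate, default response); it is not a model of the actual loan market, just a demonstration of how closing a feedback loop changes long-run behavior:

```python
def tuition_after(years, loop_closed):
    """Toy dynamics: tuition inflates with loan availability; if the loop
    is closed, rising defaults feed back and tighten lending."""
    price = 10_000.0          # hypothetical starting tuition
    loan_availability = 1.0   # 1.0 = loans freely available
    for _ in range(years):
        price *= 1.0 + 0.05 * loan_availability       # easy money inflates prices
        if loop_closed:
            default_rate = min(1.0, price / 100_000)  # defaults rise with price
            loan_availability = 1.0 - default_rate    # lenders tighten
    return price
```

Open-loop, the price compounds without bound; closed-loop, lending tightens as defaults rise and the price levels off at an equilibrium. That qualitative shift, not any one immediate effect, is the argument for closing the loop.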
For years now, it has seemed to me that one of the root problems with all this is that the control loop is open: there’s effectively no feedback controlling loan amounts or who gets granted a loan.
If I could make only one single change in this system, I would allow student loans to be discharged like any other normal debt in bankruptcy. IMO, that was the single biggest class of mistake in this entire affair, as it removed the only ‘last resort’ superpower that loan takers had.
There are a lot of Super Hard problems where we do know why they are hard to solve. Quite a few of them in fact:
- How can we cure cancer?
- How can we maintain human biological hardware indefinitely?
- How can we build a human-traversable wormhole?
- How can we build a Dyson sphere?
- How can societies escape inadequate equilibria?
Are these perhaps boring, because the difficulty is well understood?
Would it be worthwhile to enumerate the various classes of Super Hard problems, to see if there are commonalities between them?
Funny enough, I feel like understanding Newcomb’s problem (related to acausal trade) and modeling my brain as a pile of agents made me more sane, not less:
- Newcomb’s problem hinges on whether or not I can be forward-predicted. When I figured it out, it gave me a deeper and stronger understanding of precommitment. It helps that I’m perfectly OK with there being no free will; it’s not like I’d be able to tell the difference either way.
- I already somewhat viewed myself as a pile of agents, in that my sense of self is ‘hivemind, except I currently only have a single instance due to platform stupidity’. Reorienting on the agent-based model just made me realize that I’m already a hivemind of agents, and that was compatible with my world view and actually made it easier to understand and modify my own behaviour.
That’s reasonable. I had in mind things like the thrust to weight ratios, the use of supercooled liquids, and methane as a propellant. In retrospect, I was confused.
You are right that cost reduction is the superpower. I believe this is (mostly) a combination of standardization, volume, simplicity, CAD/simulation, and modern production processes.
This is false:
> Forty years into the Space Age one fact remains painfully clear: the biggest reason why so few promises have been fulfilled is that we are still blasting people and things into orbit with updated versions of 1940s German technology. … The way to restart the Space Age is to discover some new principle that makes spaceflight genuinely cheap, safe, and routine.
That “fact” is not in fact painfully clear, and discovering some new principle isn’t the way to restart the Space Age (rather, it’s not the way SpaceX has been restarting it). SpaceX is simply implementing the clear and obvious solution, which has been well understood outside of NASA for decades:
1. Start with cheap disposable rockets based on 1940s German technology, with a focus on cheap.
2. Launch a ton of them.
3. Iterate on cheapness and reliability, which happens to include reusability.
That’s it. Nothing special, no magical new principle. Just the old principle, efficiently, with tweaks for what technological advancements are available. SpaceX’s superpower is doing things slightly better, which yields substantial gains thanks to the large exponent on the rocket equation.
And really, this is the same as what we’ve done with internal combustion engines. They still burn fuel in piston chambers, and the thermodynamic efficiency is still terrible, just like it was a hundred years ago. But modern engines are far more capable than old ones, due to volume and iterative improvement.
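The “large exponent” is the Tsiolkovsky rocket equation: the required wet-to-dry mass ratio grows exponentially with delta-v, so modest efficiency gains compound into outsized payload gains. A quick illustration (the Isp and delta-v figures are round numbers, not any specific vehicle’s):

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def mass_ratio(delta_v, isp):
    """Tsiolkovsky rocket equation solved for mass ratio:
    m0/mf = exp(dv / (Isp * g0))."""
    return math.exp(delta_v / (isp * G0))

# Roughly 9,400 m/s of delta-v is needed to reach low Earth orbit:
baseline = mass_ratio(9_400, isp=300)  # ~24x wet-to-dry mass
improved = mass_ratio(9_400, isp=310)  # ~22x with a ~3% better engine
```

A ~3% specific-impulse improvement cuts the required mass ratio by roughly 10%; stack a few such “slightly better” gains and the economics shift substantially.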
Only somewhat related, as it’s anecdotal: I’ve been taking ~12mg elemental lithium daily for the last ten or so years, without any noticeable weight gain.
My recommendation for a category that is missing: public beliefs which are harmful to express. Suppose we specifically target this aspect of your public belief definition:
“not only do I think that X is true, I think that any right thinking person who examines the evidence should come to conclude X.”
What if “right thinking person” is a fraction of a fraction of the population? What do we do when the belief violates some “sacred value” held by the general populace? In these cases, expressing even the most solidly backed belief publicly can have huge negative consequences.
Sure, it might be statistically better in the long run if these beliefs were expressed, but in the short term, you can lose your livelihood (or worse) for expressing them.
This sort of makes sense to me, but to the best of my recollection I’ve never experienced it. That said, there might be some reasons:
- I have historically had a pretty muted emotional-to-physical response. It took me decades to realize that when someone said an emotional impact hit them “like a punch in the gut”, they were not just exaggerating for emphasis. Sure, I feel some physical effects, but less “punch in the gut” and more “mild, barely noticeable discomfort”.
- Even as a child, data took precedence over feelings. Eliezer has frequently talked about being forced to look at something you don’t like, and about the effort it takes to accept information that contradicts your existing state. That’s never been that hard for me: new piece of disturbing data? That sucks, but we still need to immediately fold it into our working model.
- I’m used to handling incoherent beliefs, because I have to emulate people in order to function in society. “What data set / background / training is needed for a belief system to come to this conclusion?” is a normal question for me. If a new horrifying thing comes in, I figure out what contexts it might be valid in, then look at the differences in models. If I find something to update, I do.
I write this mostly for myself. I’m often surprised by just how differently people think; and I appreciate posts like this, because they provide a little bit more insight into what’s going on in other people’s heads.
Specifically for protein folding: no, it does not decrease monotonically, unless you look at it from such a large distance that you can ignore thermal noise.
Proteins fold in a soup of water and other garbage, and for anything complicated there are going to be a lot of folding steps which are only barely above the thermal noise energy. Some proteins may even “fold” by performing a near-perfect random walk until they happen to fall into a valley that makes escape unlikely.
There may even be folding steps which are slightly disfavored, eg. require energy from the environment. Thermal noise can provide this energy for long enough that a second step can occur, leading to a more stable configuration.
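The “thermal noise can pay for uphill steps” mechanism is exactly the Metropolis acceptance rule used in folding and other Monte Carlo simulations: a disfavored step costing energy dE is still accepted with probability exp(-dE/kT), which is far from negligible near the thermal energy scale. A minimal sketch; the one-dimensional walk is a toy assumption, not a real folding model:

```python
import math
import random

def metropolis_step(x, energy, kT, rng):
    """Propose a +/-1 move; always accept downhill moves, and accept
    uphill moves with Boltzmann probability exp(-dE / kT)."""
    x_new = x + rng.choice([-1, 1])
    dE = energy(x_new) - energy(x)
    if dE <= 0 or rng.random() < math.exp(-dE / kT):
        return x_new
    return x

# A step costing 1.5x the thermal energy still succeeds ~22% of the time:
uphill_acceptance = math.exp(-1.5)
```

Chain enough such steps together and a slightly disfavored intermediate can persist long enough for a more stable next step to occur.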
But has it really failed its objective? It’s still producing text.
I think it’s also worth asking “but did it really figure out that the words were spelled backwards?” A reasonable case could be made that the tokens it’s outputting here come from the very small subset of reversed words in its training set, and that it’s ordering them in whatever way seems plausible to it, given how little training time was spent on such text.
If you give GPT-3 a bunch of examples and teach it about words spelled backwards, does it improve? How much does it improve, how quickly?
My view of this is that Caplan (and likely the poster) is confused about what it means for physics to “predict” something. Assuming something that looks vaguely like a multiverse interpretation is true, a full prediction ranges over the full set of downstream universes, not a single possible downstream universe out of the set.
From my standpoint, the only reason the future appears to be unpredictable is that we have this misguided notion that there is only “one” future, the one we will find ourselves in. If the reality is that we’re simultaneously in all of those possible futures, then a comprehensive future prediction has to contain all of them, and by containing all of them the prediction will be exact.
I changed my mind from “I barely know anything in medicine / biology / biochem / biotech and should listen to people trained in medicine”, to “I barely know anything in medicine / biology / biochem / biotech but can become more competent in specific areas than people trained in medicine with not a lot of effort”.
I previously had imposter syndrome. I now know much better where the edges of medical knowledge are, and in particular where the edges of the average doctor’s medical knowledge are. The bar is lower than I thought, by a substantial margin.