90% certainty that this is bs because I’m waiting for a flight and I’m sleep-deprived, but:
For most people there’s not a very clear way or incentive to have a meta model of themselves in a certain situation.
By meta model, I mean one that is modeling “high level generators of action”.
So, say that I know Dave:
Likes peanut-butter-jelly on thin crackers
Dislikes peanut-butter-jelly in sandwiches
Likes Butterfinger candy
A completely non-meta model of Dave would be:
If I give Dave a box of Butterfinger candy as a gift, he will enjoy it
Another non-meta model of Dave would be:
If I give Dave a box of Reese’s as a gift, he will enjoy it, since I think they are kind of a combination of peanut-butter-jelly and Butterfingers
A meta model of Dave would be:
Based on the 3 items above, I can deduce that Dave likes things which are sweet, fatty, smooth with a touch of bitter (let’s assume peanut butter has some bitterness to it) and crunchy, but he doesn’t like them being too starchy (hence his dislike of sandwiches).
So, if I give Dave a cup of sweet milk ice cream with bits of crunchy dark chocolate on top as a gift, he will love it.
Now, I’m not saying this meta-model is a good one (and Dave is imaginary, so we’ll never know). But my point is, it seems highly useful for us to have very good meta-models of other people, since that’s how we can predict their actions in extreme situations, surprise them, impress them, make them laugh… etc
On the other hand, we don’t need to construct meta-models of ourselves, because we can just query our “high level generators of action” directly. We can think “Does a cup of milk ice cream with crunchy dark chocolate on top sound tasty?” and our high level generators of action will strive to give us an estimate which will usually seem “good enough” to us.
So in some way, it’s easier for us to get meta-models of other people, out of simple necessity, and we might have better meta-models of other people than we have of ourselves… not because we couldn’t construct a better one, but because there’s no need for it. Or at least, going by the fallacy of knowing your own mind, there’s no need for it.
Physical performance is one thing that isn’t really “needed” in any sense of the word for most people.
For most people, the need for physical activity seems to boil down to the fact that you just feel better, live longer and overall get less health related issues if you do it.
But on the whole, I’ve seen very little proof that excelling in physical activity can help you with anything (other than being a professional athlete or trainer, that is). Indeed, the whole relation to mortality basically breaks down if you look at top performers: going from things like strongman competitions and American football, where life expectancy is lower, to things like running and cycling, where some would argue it’s lower but evidence is lacking, to football and tennis, where it’s a bit above average.
If the subject interests you, I’ve personally looked into it a lot, and I think this is the definitive review: https://yorkspace.library.yorku.ca/xmlui/bitstream/handle/10315/32723/Lemez_Srdjan_2016_PhD.pdf
But it’s basically a bloody book, I personally haven’t read all of it, but I often go back to it for references.
Also, there’s the much more obvious problem with pushing yourself to the limits: injury. I think this is hard to quantify and there are few studies looking at it. In my experience I know a surprising number of “active” people that got injured in life-altering ways from things like skating, skiing, snowboarding and even football (not in the paraplegic sense, more in the “I have a bar of titanium going through my spine and I can’t lift more than 15kg safely” sort of way). Conversely, none of my couch-dwelling buddies in average physical shape seem to suffer from any chronic pain.
To some extent, this annoys me, though I wonder if poor studies and anecdotal evidence are enough to warrant that annoyance.
For example, I frequent a climbing gym. Now, if you look at climbing, it’s relatively safe; the two things people complain about most are sciatica and “climber’s back” (basically a very weird-looking but not that harmful form of kyphosis).
I honestly found the idea rather weird… since one of the main reasons I climb (besides the fact that it’s fun) is that it helps and has helped me correct my kyphosis and basically got rid of any back/neck discomfort I felt from sitting too much at a computer.
I think this boils down to how people climb, especially how they do bouldering.
A reference for what the extreme kind of bouldering looks like: https://www.youtube.com/watch?v=7brSdnHWBko
The two issues I see here are:
Hurling limbs at tremendous speeds to try and grab onto something tiny.
Falling on the mat, often and from large heights. Climbing goes two ways, up and down; most people doing bouldering only care about up.
Indeed, a typical bouldering run might look something like: “Climb carefully and skillfully as much as possible, hurl yourself with the last bit of effort you have hoping you reach the top, fall on the mat, rinse and repeat”.
This is probably one of the stupidest things I’ve seen from a health perspective. You’re essentially praying for joint damage, a dislocated shoulder/knee, a torn muscle (doesn’t look pretty, I assume doesn’t feel nice, recovery times are long and sometimes fully recovering is a matter of years) and spine damage (orthopedists don’t agree on much, but I think all would agree the worst thing you can do for your spine is fall from a considerable height… repeatedly, like, dozens of times every day).
But the thing is, you can pretty much do bouldering without this; as in, you can be “decent” at it without doing any of this. Personally I approach bouldering as slowly and steadily climbing… to the top, with enough energy to also climb down, plus climbing down whenever I feel that I’m too exhausted to continue. Somehow, this approach to the sport is the one that gets you strange looks. The people pushing themselves above their limits, risking injury and getting persistent spine damage from falling… are the standard.
Another thing I enjoy is weight lifting; I especially enjoy weighted squats. Weighted squats are fun, they wake you up in the morning, and they are a lazy person’s exercise for when you’ve got nothing else planned that day.
I’ve heard people claim you can get lower back pain and injury from weighted squats; again, this seems confusing to me. I actually used to have minor lower back pain on occasion (again, from sitting), and the one exercise that seems to have permanently fixed that is the squat. A squat is what I do when I feel that my back is a bit stiff and I need some help.
But I think, again, this is because I am “getting squats wrong”; my approach to a squat is “Let me load a 5kg ergonomic bar with 25kg, do a squat like 8 times, check my posture on the last 2, if I’m able to hold it and don’t feel tired, do 5-10 more, and if I still feel nice and energetic after a 1-minute break, rinse and repeat”.
But the correct squat, I believe, looks something like this: https://www.youtube.com/watch?v=nLVJTBZtiuw
Loading a bar with a few hundred kg, at least 2.5x your body weight, putting on a belt so that your intestines don’t fall out, and lowering it “ONCE”, because fuck me, you’re not going to be able to do that twice in a day. You should at least get a nosebleed every 2 or 3 tries if you’re doing this stuff correctly.
I’ve seen this in gyms, I’ve seen this in what people recommend, if I google “how much weight should I squat”, the first thing I get is: https://www.livestrong.com/article/286849-normal-squat-weight/
If you weigh 165 pounds and have one of the following fitness levels, the standard for your squat one-rep max is:
Untrained: 110 pounds
Novice: 205 pounds
To say this seems insane is an understatement; basically, the advice around the internet seems to be “If you’ve never done this before, aim for 40-60kg; if you’ve been to the gym a few times, go for 100+”.
Again, it’s hard to find data on this, but as someone that’s pretty bloody tall and has been using weights to train for years, the idea of an average person starting with 50kg for a squat seems insane. I do 45kg from time to time to change things up; I’d never squat anything over 70kg even if you paid me… I can feel my body during the move, I can feel the tentative pressure on my lower back if my posture slips for a bit… that’s fine if you’re lifting 30kg, but it seems dangerous as heck if you’re lifting more than your body weight; it even feels dangerous at 60kg.
But again, I’m not doing squats correctly, I am in the wrong here as far as people doing weight training are concerned.
I’m also wrong when it comes to every sport. I’m a bad runner because I give up once my lungs have been burning for 5 minutes straight. I’m a horrible swimmer because I alternate styles and stick with low-speed ones that are overall better for toning all muscles and have less risk of injury… etc
Granted, I don’t think that people are too pushy about going to extremes. The few times people tell me some version of “try harder” phrased as a friendly encouragement, I finish what I’m doing, say thanks, and lie to them that I have a slight injury and I’d rather not push it.
But deep inside I have a very strong suspicion that I’m not wrong on this thing. That somehow we’ve got ourselves into a very unhealthy memetic loop around sports, where pushing yourself is seen as the natural thing to do, as the thing you should be doing every day.
A very dangerous memetic loop, dangerous to some extent in that it causes injury, but much more dangerous because it might be discouraging people from sports. Both in that they try once, get an injury and quit. Or in that they see it, they think it’s too hard (and, I think it is, the way most people do it) and they never really bother.
I’m honestly not sure why it might have started…
The obvious reason is that it physically feels good to do it; lifting a lot or running more than your body tells you that you should is “nice”. But it’s nice in the same way that smoking a tiny bit of heroin before going about your day is nice (as in, quite literally, it seems to me the feelings are related and I think there’s some pharmacological evidence to back that up). It’s nice to do it once to see how it is, and maybe I’d do it every few months if I got the occasion and felt I needed a mental boost… but I wouldn’t necessarily advise it or structure my life around it.
The other obvious reason is that it’s a status thing, the whole “I can do this thing better than you, thus my rank in the hierarchy is higher”. But then… why is it so common with both genders? I’d see some reason for men to do this, because historically we’ve been doing it, but women competing in sports is a recent thing, hardly “built into our nature”, and most of the women I know that practice things like climbing are among the most chilled-out people I’ve ever met.
The last reason might be that it’s about breaking a psychological barrier, the “Oh, I totally thought I couldn’t do that, but apparently I can”. But this seems to me like a very, very bad way of doing that. I can think of many safer, better ways, from solving a hard calculus problem to learning a foreign language in a month to forcing yourself to write an article every day… you know, things that have zero risk of paralysis and long-term damage involved.
But I think at this point imitation alone is enough to keep it going.
The “real” reason if I take the outside view is probably that that’s how sports are supposed to be done and I just got stuck with a weird perspective because “I play things safe”.
To the extent that you’re pursuing topics that EA organizations are also pursuing, you should probably donate to their recommended charities rather than trying to do it yourself or going through less-measured charities.
Well yes, this is basically the crux of my question.
As in, I obviously agree with the E and I tend to agree with the A, but my issue is with how A seems to be defined in EA (as in, mainly around improving the lives of people that you will never interact with or ‘care’ about on a personal level).
So I agree with: I should donate to some of my favorite writers/video-makers that are less popular and thus might be kept in business by $20 monthly on Patreon if another hundred people think like me (efficient as opposed to, say, donating to an org that helps all artists or donating to well-off creators).
I also agree with: It’s efficient to save a life halfway across the globe for x,000$ as opposed to one in the EU where it would cost x00,000$ to achieve a similar addition in healthy life years.
Where I don’t understand how the intuition really works is “Why is it better to save the life of a person you will never know/meet than to help 20 artists that you love” (or some such equivalence).
As in, I get that there’s some intuition about it being “better”, and I agree it might be strong enough in some people that it’s just “obvious”, but my thinking was that there might be some sort of better ethics-rooted argument for it.
No worries, I wasn’t assuming you were a speaker for the EA community here; I just wanted to better understand possible motivations for donating to EA given my current perspective on ethics. I think the answer you gave outlines one such line of reasoning quite well.
Utilitarianism is not the only system that becomes problematic if you try to formalize it enough; the problem is that there is no comprehensive moral system that wouldn’t either run into paradoxical answers, or be so vague that you’d need to fill in the missing gaps with intuition anyway.
Agreed, I wasn’t trying to imply otherwise.
Any decision that you make, ultimately comes down to your intuition (that is: decision-weighting systems that make use of information in your consciousness but which are not themselves consciously accessible) favoring one decision or the other. You can try to formulate explicit principles (such as utilitarianism) which explain the principles behind those intuitions, but those explicit principles are always going to only capture a part of the story, because the full decision criteria are too complex to describe.
Also agree, as in, this is how I usually formulate my moral decision and it’s basically a pragmatic view on ethics, which is one I generally agree with.
is just “the kinds where donating to EA charities makes more intuitive sense than not donating”; often people describe these kinds of moral intuitions as “utilitarian”, but few people would actually endorse all of the conclusions of purely utilitarian reasoning.
So basically, the idea here is that it actually makes intuitive moral sense for most EA donors to donate to EA causes? As in, it might be that they partially justify it with one moral system or another, but at the end of the day it seems “intuitively right” to them to do so.
This fear of continuity breaks is also why I would probably stay clear of any teleporters and the like in the future.
In case you haven’t read it: https://existentialcomics.com/comic/1
But overall I agree; this “feeling” is partially the reason why I’m a fan of the insert-slightly-invasive-mechanical-components + outsource-to-external-device strategy. As in, I do believe it’s the most practical, since it seems to be roughly doable with non-singularity levels of technology, but it’s also the one where no continuity errors can easily happen.
What exactly do you mean by “the factors I listed” though?
As in, I think that my basic argument goes:
“There’s reason to think most kids would feel unsafe in a college environment and desire a social circle and job security, not the kind of transcendent self-actualization style goals that fuel research.” I think this generally holds for anyone at the age of 18-22 outside of outliers, hence why I cited the pyramid of needs, because the research behind that basically points to us needing different things in an age-correlated way (few teenagers feel like they need self-actualization). I think this is somewhat exaggerated in the US because of debt & distance, but it should be noticeable everywhere.
Next, there’s reason to believe research inside universities is slowing down in certain areas. I have no reason to believe the lack of people desiring self-actualization is the cause for this though, except for a gut feeling that self-actualization is a better motivation to research nature than, say, wanting your paycheck at the end of the day. Most famous researchers seem to have been slightly crazy and driven not by societal goals but rather by an inner wish to “set things right” in one way or another, or to leave a mark on the world.
So basically, the best I can do to “prove” any of this would be something like:
Take some sort of comparative research output metric. These are hard to find, and are going to be heavily confounded with country wealth (an example: https://www.natureindex.com/country-outputs/generate/Nature%20&%20Science/global/All/score)… “small socialist countries” produce a surprising amount of research per capita, but maybe that’s something inherent to being a small rich country, not to having stronger communities and social support.
See if this correlates with the % of the population working, quality of social security, some index measuring security, and some index measuring happiness. Assume more research will come out of countries that perform well on these.
This will generally be true in terms of research, publications, books… etc (see Switzerland, the Netherlands, Sweden, Norway, Iceland… which seem to have a disproportionate number of e.g. Nature publications relative to their populations), but you will also get outliers (see Israel, which produced a lot of research even decades back, when professors & students would be called on a yearly basis to fight to the death against an outnumbering enemy that wanted to murder them).
However, you can’t really draw conclusions from numbers of publications, and things such as a “security index” and “happiness index” and even “quality of social security” are very hard to measure. Plus, they are confounded by the wealth of the country.
On the other hand, there’s good data on the idea that research is slowing down overall, and that is much easier to place on “universities as a whole”, since by all metrics it seems that research is heavily correlated with academia (see where most researchers work, where the people that get Nobel Prizes work… etc).
So making the general assumption “research is slowing down” is much easier than doing the correlation on a per-country basis.
If you can point to a valid way to measure basic needs that has a per-country statistic, and some way to measure “research output” on a per-country basis… then I’d be very curious to see that; I could even run an analysis based on various standard methods to see if there’s a correlation.
So the generic claim “kids are not researchers and don’t want to be researchers; universities can’t do multiple things at once better than doing one thing; thus if universities have to take care of kids they will have less time to focus on actual research” is easy to look at holistically, but harder to look at on a per-country basis.
Impossible? I don’t think so.
Worthwhile? I don’t know. As in, this whole article is closer to “here’s an interesting perspective, one that might warrant thinking about when doing research” rather than “here’s a factual claim about how stuff works”. To make it any better, it would have to be elevated to a factual claim, but then I would basically have to trust the kind of analysis mentioned above (which, again, I think would be impossible to run and get significant results from, since all the metrics I can think of are very leaky).
Honestly, that might have been a better perspective from which to approach this topic; I might even try to see if there’s relevant data on the subject and update the article if there is. Barring that, I literally don’t see how this hunch + basic evidence about generic human psychology + trend-observation opinion piece differs from anything here. Maybe I’ve been misjudging the epistemic strength of the claims seen in articles around here… in which case, ahm… “sorry?”, but also, I don’t really see your argument here.
Yes, assuming magical data fell out of the sky or our time to gather data was infinite, every single piece of human thought could be improved, but I’m not sure why the stopping condition for this article would be “analysis comparing countries”… as opposed to any other random goalpost.
To the extent that you are interested in knowing whether your thesis is true, it would make sense to check.
How would I specifically go about checking this though? As in, I do have data and knowledge on US and UK universities; I don’t have data on German universities.
If you have data on German university research output, then I think it’s worth looking at; if not, I feel like what you’re basically doing is saying: “Hey, you don’t have data on this specific thing, it might go either way, your hypothesis is null and void”.
Provided data on German universities existed, why not ask for data about every single country with universities?
You could argue “Well, you should become an expert in the field and have all possible data handy before making any claims”, but then that claim would invalidate literally every single original thought on LessWrong that uses facts and even most academic papers.
Also, German universities constitute a pretty bad example in my opinion, as in:
a) Murdering, exiling or rooting out your highest-IQ demographic and most public intellectuals
b) Having the rest taken away by the US, Russia and UK
c) Living for decades in a country that’s been morally, geographically and culturally divided, ravaged by WW2 (plus 1⁄3 of it living under a brutal~ish communist dictatorship)
Would make for a pretty weird outlier in all of this no matter what.
As in, if we were to compare other rich academic systems, I’d rather look at Japan, Italy, France, Spain or Switzerland.
It seems that your comment tries to take it apart by looking at whether you like the way the system is designed and not by looking at effects of it. That means instead of trying to see whether what you are seeing is true, you expand on your ideas of how things should be.
What exactly should my reply contain?
As in, my argument in the original post is basically:
a) Universities evolved to provide for primary needs (safety and a social circle) instead of the more niche need for self-actualization
b) Research is slowing down overall; it could partially be because universities no longer focus on self-actualization and instead focus on providing safety and a social circle.
What I was basically saying is that I’m not sure if (a) applies to German universities; as in, I agree that they are probably less incentivized to focus on providing safety and a social circle.
I have no idea if (b) applies or not; as in, I’m not sure how well German universities have been doing, and it’s hard to measure their progress, since the 30s and 40s obviously had a pretty huge negative effect on the whole higher education system.
I do overall think the example of German universities specifically (and Austrian ones, to some extent) is a good counter to my ideas here, because there are so many of them and many are specifically vocation-focused, giving a place to go for people that just want security rather than a place in academia.
But also, my knowledge of the German education system is so poor overall, that I can’t really make very specific claims here.
I think German higher education is the hardest to pick on, partially because:
a) A small % of the population attends higher education relative to other countries at its income level: https://en.wikipedia.org/wiki/List_of_countries_by_tertiary_education_attainment
b) From my knowledge, a lot of what is called “tertiary education” in Germany is basically just a practical 1-2 year professional course that people can take before they’re even through grade 12
c) Anyone living in a big city does indeed experience less environmental change when attending university, though I wouldn’t call it close to zero, unless you happen to live next to the university and a lot of your high school friends attend the same one (though again, it could be argued that you get to keep your friend group, since they also live in <insert big city>)
d) https://www.bbx.de/grossnet-wage-calculator-germany/ -- There’s no student debt attached to it, but as I mentioned for European institutions in general, debt is economically analogous to the higher taxes one has to pay. Though indeed the “mental” effects of having that debt are non-existent (maybe partially analogous in that it makes “blue collar” professions seem less appealing? Since people end up paying > 50% of their paycheck and thus might value comfort over money more, and university is basically 4 years of comfort that promises future comfortable jobs, whereas in the American model one could work a trade job starting at 16-18 and easily retire at 40… but I think that’s stretching it, and I doubt most students are even aware that taxes are a thing)
The point of my question at the end there is that I would expect any New Improved University Replacement to suffer the same process.
That seems reasonable, I’d assume the same.
As in, if I could think of an implementable solution I’d have tried implementing it.
My point here as to describe the problem from a certain angle, which is easy, I lay no claim on the harder task of prescribing a solution.
I mean, I think the basic argument I would have here is:
If universities are optimizing for 5, and we can agree that 5 leads to research and that universities are one of the leaders in anything scientific-research related, why is research slowing down? And, respectively, why is so little of the interesting research coming out of universities?
See points 1-2 and arguably 3 and 4 in the article.
I think there’s also some evidence universities didn’t optimize for 2&3 until recently, because until recently their appeal was much narrower and focused on the very intelligent and/or very well-off (i.e. people that usually want or even need self-actualization).
I was alluding to that.
But at the same time, I’m pretty sure the simpler explanation might apply and people just don’t understand why this study would be valuable + IQ is a sensitive topic, thus the material is hard to find.
Hence why I said I will post any studies anyone finds, I have a pretty high prior that a few exist and I’m just not seeing them.
I have a low prior they will show anything other than “university is indeed confounded by IQ and/or IQ + income in money-earning potential”, but alas, I base that on small-sample empirical evidence… so, eh.
Maybe just getting a job will (on average) actually result in learning more valuable things, but frankly I don’t see any reason to believe that. (More things valuable for becoming a cog in someone else’s industrial machine, maybe, though even that isn’t obvious.)
Ok, well, I certainly wouldn’t argue that a generic alternative exists; I mean, that’s my original point, that universities are wasteful in that they steal signal-strength from any alternative that would crop up.
In my personal experience, getting a job is on average better for learning, if you look for jobs that can provide de-facto mentors/teachers, but that might be because so few young people get a job. Or maybe me and the people I know that took my advice and quit university are just very good at learning from other practitioners rather than professors.
Maybe we need different ways of optimizing 18-20-year-olds’ lives for learning new and valuable things. I’d be interested to see concrete proposals. An obvious question I hope they’d address: why expect that in practice this will end up better than universities?
Well, my proposal in the article is basically that we had such a system, it was called a university, but it got slowly eroded as it went the way of a safety/community-provision institution (or at least one provisioning an illusion of those two).
My arguments for why it worked better in the past are points 1-2 and arguably 3 and 4.
Students’ youth? Even supposing that time spent at university is worthless, it’s only a few years per person.
Is that period not important though?
As in, even assuming universities are not “magical towers that remove 4 years of life to validate IQ > 100 and conscientiousness in the 80th percentile”, you could hardly argue what they teach is perfect.
But those “few years” are basically the most critical years of development we have, as in, the brain is developed enough to actually do stuff yet still plastic.
I won’t go into myelination, because I’m lazy and finding good references is hard; as far as I know, Giedd has a few studies on grey matter changes that everyone cites, but maybe there are better references.
The gist of it is: we lose neuronal bodies as we age, starting around the age of 5. The loss doesn’t happen in the prefrontal cortex until we enter our teens and seems to keep happening until we reach 20.
I don’t know of any good studies going after 20; there are a lot of meh studies, and if you aggregate them you get this: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3004040/ (see fig2 and fig3). Though note that many of these use secondary markers or rather outdated imaging methods, and basically nobody is doing brain biopsies on living humans… the best you can get is DW-MRI and fMRI and postmortem biopsies (which are probably very biased, because only the very poor or the very educated will be fine with their recently dead child’s brain being quickly removed and analyzed for the sake of neuroscience… come to think of it, the other 2 probably are too, either in the same way or by selecting for people with mental disorders)
This process is roughly associated with pruning, essentially making networks more efficient and/or optimizing for resource consumption. This goes in tandem with myelination: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3982854/
I.e. if a neuron is not pruned, the likelihood of its various axons being heavily myelinated is increased, and vice versa.
So now, assume that the frontal cortex is indeed “what makes us different from animals”, what gives us most of our ability to be intelligent in the do-math-and-write-and-gather-evidence sense.
Assume that people with pruned cortexes are indeed noticeably smarter in everything related to engineering/science (think people over the age of 10 vs people under the age of 10).
Assume that the studies above are indeed true, and take into account the fact that we have empirical cultural evidence (see stereotypes about learning new things as people age, underpayment of older workers in non-tenure thinking-related jobs like programming and accounting, “can’t teach an old dog new tricks”… etc).
I think these are all pretty safe assumptions; not true in the sense of scientific truth in physics, but true in the sense of “safe to operate using them as rough guidelines”. Or at least, if they don’t fit your model, then I also invite you to throw out all of psychology with them.
Youth is indeed very important; the 15-25 (+/- 3) year age range is critical for the development required to be a scientist, engineer, doctor or any other profession where unusual intelligence is required.
So university time might be “only 3 or 4 years per person”, though let’s be honest, things like med school take 6 to 10 depending on location, and an alarming number of people are putting in an extra 1-3 years getting a master’s. But those are 3-10 years of a person’s most valuable time in life as far as brain plasticity goes.
<And yes, one could make the same argument about high school, but that would basically be arguing that high schools serve the triple role of counter-biasing aggressive tendencies in people that would otherwise basically be criminals, cultural indoctrination and learning… and that’s a much more taboo argument to make so I’m not making it>
That’s just answering your question though; it’s not the point I’m making in the article. The point of the article is that universities basically have a lot of signaling power for “if you are smart and want to self-actualize, this is the place”. So if you want to think in terms of a scarce resource being wasted, that’s the way I’d have put it there: universities are wasting critical signaling mechanisms.
I always thought there ought to be a good way to explain neural networks starting at backprop.
Since to some extent the criterion for architecture selection always seems to be whether or not gradients will explode too easily.
As in, I feel like the correct definition of neural networks in the current climate is closer to: “A computation graph structured in such a way that some optimizer/loss function combination will make it adjust its equation towards a seemingly reasonable minimum”.
Because in practice that’s what seems to define a good architecture design.
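To illustrate what “gradients exploding” means in the simplest possible case (a toy sketch, not any real framework): for a chain of scalar linear layers y = w·x repeated `depth` times, the chain rule makes the end-to-end gradient w^depth, so a weight even slightly above 1 blows up with depth and one slightly below 1 vanishes.

```python
# Minimal scalar illustration of gradient explosion/vanishing: stacking
# `depth` linear layers y = w * x makes the end-to-end derivative dy/dx
# equal to w ** depth, which is what architecture tricks (normalization,
# residual connections, gating) are trying to keep near 1.
def end_to_end_gradient(w: float, depth: int) -> float:
    grad = 1.0
    for _ in range(depth):  # chain rule: multiply one layer's Jacobian per step
        grad *= w
    return grad

for w in (0.9, 1.0, 1.1):
    print(f"w={w}: gradient after 100 layers = {end_to_end_gradient(w, 100):.3g}")
```

Real architectures are of course not scalar chains, but the same multiplicative effect along the computation graph is roughly what the “will gradients explode?” selection criterion is about.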
Actually, this is backwards; by investing in companies that are worth more in worlds you like and worth less in worlds you don’t, you’re increasing variance
If you treat the “world you dislike” as one where you can still get about the same bang for your buck, yes.
But I think this wouldn’t be the case with a lot of good/bad visions of the future pairs.
BELIEF: You believe healthcare will advance past treating symptoms and move into epigenetically correcting the mechanisms that induce tissue degeneration.
a) You invest in this vision, it doesn’t come to pass. You die poor~ish and in horrible suffering at 70.
b) You invest in a company that would make money on the downside of this vision (e.g. palliative care focused company). The vision doesn’t come to pass. You die rich but still in less horrible but more prolonged suffering at 76 (since you can afford more vacations, better food and better doctors).
c) You invest in this vision, and it does come to pass. You have the money to afford the new treatments as soon as they are out on the market; now at 70 you regain most of the functionality you had at 20 and can expect another 30-40 years of healthy life, and you hope that future developments will extend this.
d) You invest in a company that would make money on the downside of this vision, it does come to pass. You die poor~ish and in horrible suffering at 80 (because you couldn’t afford the best treatment), with the added spite for the fact that other people get to live for much longer.
To put it more simply, money has more utility-buying power in “good” worlds than in “bad” worlds, assuming the “good” is created by the market (and thus purchasable).
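As a toy illustration of that asymmetry, here is a sketch with entirely made-up utility numbers for the four scenarios above (utility meaning quality of outcome, not dollars): because the winning-vision world is where money buys by far the most utility, hedging loses in expectation even at 50/50 odds.

```python
# Made-up utilities (NOT a real model) for the four scenarios above; the key
# assumption is that money buys far more utility in the world where the
# vision comes to pass (scenario c) than anywhere else.
utilities = {
    ("bet", "fails"): 1.0,    # a) die poor-ish, vision failed
    ("hedge", "fails"): 3.0,  # b) die rich, vision failed
    ("bet", "wins"): 20.0,    # c) rich AND the treatment exists
    ("hedge", "wins"): 2.0,   # d) poor-ish while others get the treatment
}

p_wins = 0.5  # assumed probability the vision comes to pass

def expected_utility(strategy: str) -> float:
    return (1 - p_wins) * utilities[(strategy, "fails")] \
        + p_wins * utilities[(strategy, "wins")]

print("EU(bet)   =", expected_utility("bet"))
print("EU(hedge) =", expected_utility("hedge"))
```

The exact numbers don’t matter; the point is only that if the (bet, wins) cell dominates, a variance-reducing hedge can still be the worse expected choice.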